Can We “Cheat” in the Creation of and Marking of Academic Assessment Material By Getting a Machine to Do It?: a naive attempt at using ChatGPT to generate an assessment question to test understanding of a particular concept, generate a marking guide, generate example solutions as if from students of different abilities, grade those solutions according to the marking guide, and then provide different sorts of tutor feedback.
Another Chat With ChatGPT…: another attempt at generating a simple assessment activity, prompting responses to it as if generated by students of different abilities.
And Another ChatGPT Example…: to what extent can we generate different quality answers to a generated question, along with different sorts of feedback?
ChatGPT Can’t Execute the Code It Generates (Yet) To Test It, But It Can Suggest How To: can we generate code tests for generated code?
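By way of illustration of the sort of thing at stake here, a generated function plus the kind of self-check a model might suggest alongside it (the function and tests below are an invented example, not output from the post):

```python
# A hypothetical ChatGPT-generated function: greatest common divisor
# via Euclid's algorithm.
def gcd(a, b):
    """Return the greatest common divisor of two integers."""
    while b:
        a, b = b, a % b
    return abs(a)

# The sort of test cases a model might propose to check its own code,
# covering a typical case, coprime inputs, and a zero argument.
assert gcd(12, 18) == 6
assert gcd(7, 5) == 1
assert gcd(0, 4) == 4
```

The point of the post is that the model can only *suggest* such tests; a human (or a separate execution environment) still has to run them.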
Generating (But Not Previewing) Diagrams Using ChatGPT: there are plenty of packages that generate diagrams from text descriptions written using simple formalisms, so can we generate those formal descriptions from natural language text? For example, can we generate a flow chart diagram for a particular algorithm?
Can We Use ChatGPT to Render Diagrams From Accessible Diagram Descriptions?: given a long description of a diagram, such as a flow chart, can we generate the corresponding diagram from the text? Could this be used as part of a quality process for checking both descriptions and diagrams?
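A minimal sketch of the underlying idea, going from a simple structured description of an algorithm's steps to a textual diagram formalism (here, Mermaid flowchart syntax). The helper function and step list are invented for illustration; in the posts, ChatGPT does this translation from free-form natural language, not from hand-written code:

```python
# Render an ordered list of step labels as Mermaid flowchart text,
# which a Mermaid-aware renderer can then turn into a diagram.
def steps_to_mermaid(steps):
    """Return Mermaid flowchart source for a linear sequence of steps."""
    lines = ["flowchart TD"]
    # Declare one node per step, labelled S0, S1, ...
    for i, label in enumerate(steps):
        lines.append(f"    S{i}[{label}]")
    # Link each step to the next one in sequence.
    for i in range(len(steps) - 1):
        lines.append(f"    S{i} --> S{i + 1}")
    return "\n".join(lines)

chart = steps_to_mermaid(["Start", "Read input", "Process", "Stop"])
print(chart)
```

Because the output is plain text, it can be diffed against (or regenerated from) an accessible long description, which is where the quality-process question comes in.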
Feedback From an Unreliable ChatGPT Tutor With a Regional Idiom: can we generate tutor feedback with a regional accent? How about feedback in different tones of voice?
See related tag: https://blog.ouseful.info/tag/chatgpt/