Pondering the various ways in which tools can be used to help students improve the quality of their submitted assessment materials: spell-checking, grammar checking, automated code formatting tools, code quality tools, code suggestion/completion tools (from simple name completion to GitHub Copilot code generation), maths engines (Mathematica, Wolfram Alpha, etc.), getting someone else to read over your submission to make sure it makes sense, working independently but conversationally with a study buddy or tutor, conversing with AI models, generating complete draft texts with AI models (ChatGPT etc.), pasting questions on cheat sites and copying the suggested answers, and paying someone to do an assessment for you.
If I were a “qualified” educator, I would probably be able to reel off various taxonomies that distinguish between different forms of “cheating”, but as I don’t recall any offhand, I’ll “cheat”:
More “cheating”…
And even more “cheating”…
If I were formally reporting this, I’d probably use a table. Time to “cheat” again…
Using such taxonomies, we can start to talk about these approaches separately and preclude students from using particular ones in particular contexts, or, conversely, allow, encourage or require their use, perhaps in an assessment that also requires the student to demonstrate process.
My late New Year resolution will be to refine how I talk about things like ChatGPT in an education context and to regard them primarily as machine-assist tools. I can then separately consider how and why they might appropriately be used in teaching, learning and assessment, and how we might then try to justify excluding their use for particular assessments. How we might detect such use, if we have tried to exclude it, is another matter.
Thought provoking. There should be serious discussion about AI in the classroom, and indeed in many fields and industries. With AI quickly improving, these conversations need to happen sooner rather than later.