The following screenshot beautifully captures one of the things that I have been arguing is wrong with OUr learning material production process.
The screenshot is from an interaction with the ChatGPT large language model (LLM), one of those computer services that generally gets called “an AI”.
As you skim the computer-generated code and the output, you think, “yeah, that looks fine”. But the model is unreliable: its responses are not necessarily true, correct, or even internally consistent. And the supposed code output is what the language model thinks the code (that it generated) should do if the code were executed; that output, too, is generated by a statistical language model.
When you run code, it runs as written (unless it’s HTML in a browser, where the browser is very forgiving…), and it produces the output it actually produces.
When we produce educational materials, we often write code that works, then copy and paste it into crappy authoring environments that don’t really like code, don’t really know how to render it, and certainly can’t execute it. And then from somewhere, possibly somewhere else altogether, we copy and paste something that claims to represent the output of the code into the same crappy authoring tool, which also doesn’t really know how to render code outputs and doesn’t really like them. And then maybe someone edits the outputs so they look a bit nicer, and now they don’t match what the actual and exact output of executing the code would have been. And then maybe something in the code is changed: at best, a piece of punctuation in an output statement, something “minor”; or, slightly worse, a single-character change that breaks the code, and now nothing is correct any more.

That horrible mess of a production process generates a text in which one thing apparently generates another, but none of that is true any more. The thing presented as generating the output is not the thing that generated the output; the output that claims to have been generated has actually been edited; and nothing is what the reader is presumably being led to believe it is. It is inherently unreliable. And the same thing is being played out in the ChatGPT output, although the ChatGPT example is perhaps more explicit in its unreliable claim: “The output will be:”, not “The output from running the code is:”. Which is exactly the sort of mistruth we put into our course materials.
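By way of contrast, a generative document workflow executes the code and captures whatever it actually prints, so the output shown is, by construction, the output produced. A minimal sketch of the idea in Python (the snippet and variable names are purely illustrative, not any particular authoring tool’s API):

```python
import io
import contextlib

# A hypothetical code snippet destined for some course materials.
code = 'print("Total:", 2 + 3)'

# Execute the snippet and capture what it ACTUALLY prints,
# rather than pasting in output we merely claim it produces.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(code)

actual_output = buffer.getvalue()
print(actual_output)  # → Total: 5
```

If the snippet is later edited, even by a single character, re-running the document regenerates the output to match; the code shown and the output shown cannot drift apart.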
Which might more truthfully be written as: “The output we have copied and pasted, possibly edited, and probably reformatted may well have reached this document by a very different pathway to the one that was used to create and test the original code that we claim generated it; the code that was, possibly, actually used to generate the original output, that is, the code we claimed earlier in this document as the code that generated this output, is quite possibly not actually the code that was executed in order to generate the output that this output is claimed to be; furthermore, the claimed code and the output claimed to be generated by that code may well have followed different physical production pathways (different computer files handled by different people and subject to different processes), so there is a potential that mismatched versions of the claimed code and claimed output are being used within this document, even prior to any edits, modifications, or reformatting, substantive or not, that would mean the claimed code is not actually the code that was actually executed to generate the actual output.”
Context: Single Piece Generative Document Workflows
PS Potentially useful phrases for my Unreliable Education manifesto: unreliable vs. reliable production processes.
PPS Legitimising unreliability by couching things in terms of doubt: if you were to run something like the previous code, you might expect to get something like the following output…