Fragmentary Notes On Producing A New AI Qualification…

A process is in play internally for creating a new AI qualification, which will probably require the creation of some new modules.

When I joined the OU as an academic — in the Technology Faculty as it then was, in the “Telematics” Department (not cars… I don’t know what it was supposed to mean either; I’m not sure anyone did…) — the culture in that part of the campus very much had the feel of trying to use the technologies we were teaching students about as part of the course delivery. We’d teach students about various online communications tools as a topic of study, but we also expected students to use them in the course of their studies. And when I’ve written on new modules, I’ve often tried to be reflexive in using the topic of study to help teach through, or learn about, that topic, as well as exploring the subject matter from a “so what?” perspective: how might I be able to use this (and how have I used this in producing this material?); why is it useful to know this thing I am being told (what can I now do or understand that I couldn’t before? what can I now stop doing, or stop from happening?)? I’ve also been happiest producing materials on subjects that I am (or have become) curious about, but that aren’t strictly my area. This leads me to producing materials that are essentially write-ups and reflections on my own learning journey through the topic (notes made by an expert, and reflective, learner rather than a subject matter expert). This can also help provide a strong sense of narrative through the material.

(Again, I’m of the OU school of “materials with personality” — when I started in the OU, each unit or block was “Produced by X on behalf of the course team”, rather than being a bland, generic, anonymous OU-voiced text, each one the same. You could read many of the units and would know exactly who had written them from the writing style. Talking to students at residential school (which was an important part of the process then, for students and academics alike), you’d often find students keen to put a face to a name, to find out what the person they’d imagined in their mind’s eye, conjured up from the style of their writing, or from hearing their voice from audio support materials (delivered by cassette!), was actually like in real life.)

One of the many things I learned from working in the Technology Faculty was that we should always try to consider the impact of any technology. By producing teaching materials that relate to a technology, we are bringing it to the attention of students as a thing that can be used or deployed. That is, we are “promoting” it, at least in the sense of raising awareness of it. We might also assess the technology according to its impact in various ways (SPEL – social, political, ethical, legal). We might even review “value judgements” or personal preferences that might form around the technology, although we would probably stop short of promoting a particular view.

(I note that when we had materials with personality, there may have been a mechanism for expressing personal views. For example, a breakout box where one academic author argues for a particular technology on personal grounds — the adoption of nuclear power, for example — and another argues against it. That a fair and balanced consideration of the technology can be presented in the materials in academic terms, while opposing personal preferences that weight the various considerations in different ways can be expressed in personal terms, demonstrates three things: 1) you can provide a neutral academic view (or a “balanced” view: pros and cons); 2) opposing personal positions are possible; 3) opposing personal positions may be differently legitimate.)

So when it comes to possibly engaging with a new qualification, or new modules in support of such a qualification, particularly one in an emerging area where it’s not clear what the actual scope is, what the benefits or implications might be, what the applications are, what the best-practice techniques are, how the things actually work, or whether they actually work, then we should: a) be reflexive in using the things in the way we claim they could/should/might be used; b) admit we aren’t experts in the area, but claim instead to be: i) expert learners who can demonstrate a sound learning path through the topic; ii) good communicators / teachers / learning facilitators; iii) reasonably expert in related matters (e.g. as “computing” academics).

In terms of being reflexive and dog-fooding what we are describing (“you could do this, so we have…”), if part of the offer of the course is that it will “help you use AI in practice”:

  • if that practice includes generating reports for use in business or government or education (where it’s important the documents are in some sense correct), then we should produce at least some of the study materials in that way (and also explain why we haven’t produced all the study materials in that way);
  • if that practice includes generating images, then we should produce at least some of the image assets in that way (and also explain why we haven’t produced all the materials in that way);
  • if that practice includes generating image descriptions (an accessibility requirement), then we should produce at least some of the image descriptions in that way (and also explain why we haven’t produced all of them in that way);
  • if that practice includes generating videos or animations, then we should produce at least some of any video or animation assets used in the module in that way (and also explain why we haven’t produced all the materials in that way);
  • if that practice includes generating audio commentary, then we should produce at least some audio assets in that way (and also explain why we haven’t produced all the materials in that way);
  • if that practice includes generating summaries of text documents, then we should produce at least some summaries of module materials in that way (and also explain why we haven’t produced all the materials in that way);
  • if that practice includes generating questions that can be answered by a particular set of text documents, then we should generate at least some questions based on the module materials in that way and use them as part of our assessment process, as sketched after this list (and also explain why we haven’t produced all the materials in that way);
  • if that practice includes evaluating documents against a set of criteria, then we should assess some or all student assessment submissions in that way (and also explain why we don’t assess all their submissions in that way).
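
To make the question-generation case concrete, here is a minimal sketch, assuming access to a hosted model via the openai Python client and an API key in the environment; the model name, prompts, and file name are illustrative placeholders rather than recommendations, and anything generated this way would still need checking by the module team before it went anywhere near an assessment:

    # Minimal sketch: ask a hosted LLM for questions answerable solely from
    # a supplied extract of module text. The model name and prompts are
    # placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_questions(module_text: str, n: int = 5) -> str:
        """Return n draft questions answerable only from module_text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": ("You write assessment questions that can be "
                             "answered using only the supplied text.")},
                {"role": "user",
                 "content": (f"Write {n} questions answerable from this "
                             f"module text:\n\n{module_text}")},
            ],
        )
        return response.choices[0].message.content

    with open("block1_extract.txt") as f:  # hypothetical module extract
        print(generate_questions(f.read()))

Even a toy example like this surfaces the questions the module would need to address: which drafts were kept, which were rejected, and why.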

If this qualification is intended to promote an AI-powered future, we should use the qualification and any new modules developed for it as a prototype example of how that future might look, at least as we understand it today. And then suffer the consequences. If the qualification is intended to argue the risks of an AI-powered future, it might be interesting to use the qualification and any new modules developed for it as a prototype example of how that future might look, at least as we understand it today. Alternatively, we might attempt to show how, for each claimed use of large generative AI models, other, less harmful alternatives exist. Or, if we do not feel comfortable using the techniques we are “promoting”, then we should consider ways in which we might demonstrate to learners that there are legitimate reasons for not using those technologies, by describing our own reasons for not using them.

Taking the “reflective and reflexive” idea a step further, I note that as an organisation, the OU has a sustainability policy. This presumably reflects the values of the organisation, and as such we should stand by them. If we claim in our study materials that organisations might reasonably use AI models as part of their everyday business, we should review (reflexively and reflectively), and report on as part of the materials, how our use of AI techniques in producing, presenting, and studying the module ranks against our sustainability goals. (Once you start being reflective and reflexive over topics covered by the course and used to produce it, study it, or deliver it, lots of material falls out for free…) Hmm… I wonder: should each module have a sustainability/impact report that: a) reviews the impact of producing the module; b) reviews the impact of presenting the module; c) reviews the impact of using the techniques that are effectively “promoted” by the module, or at least suggests ways of assessing those techniques under a sustainability policy view?
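
By way of illustration only, here is a toy sketch of what module-level accounting for generative AI usage might look like; the per-token energy figure is a made-up placeholder, since any credible number would depend on the model, the hardware, and the data centre, and would need sourcing from the provider or the literature:

    # Toy sketch: per-module accounting of generative AI usage.
    # The energy figure is an ILLUSTRATIVE PLACEHOLDER, not a measured value.
    ASSUMED_KWH_PER_1K_TOKENS = 0.001  # placeholder assumption

    usage_log = []  # one entry per generative AI call made in production

    def record_usage(purpose: str, tokens: int) -> None:
        """Record a generative AI call against the module's account."""
        usage_log.append({"purpose": purpose, "tokens": tokens})

    def sustainability_summary() -> dict:
        """Total usage plus a (placeholder) energy estimate for the module."""
        total_tokens = sum(entry["tokens"] for entry in usage_log)
        return {
            "calls": len(usage_log),
            "total_tokens": total_tokens,
            "estimated_kwh": total_tokens / 1000 * ASSUMED_KWH_PER_1K_TOKENS,
        }

    record_usage("image description for Figure 2.1", tokens=350)
    record_usage("summary of Block 1", tokens=4200)
    print(sustainability_summary())

Even a crude log like this would at least make the scale of use visible, which is a precondition for reporting against a sustainability policy at all.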

On the OU home webpage, we also find a link to a statement on eradicating slavery in the supply chain, as well as an official copyright statement. (Also note that the OU has an active rights clearance unit that goes to considerable lengths to ensure any third party materials we use in study materials are properly rights cleared.) If we were to use third party AI models to generate study materials, in whole or in part, it would seem appropriate to reflexively and reflectively demonstrate how the organisation satisfied itself that, as part of the supply chain, appropriate labour was used to produce the models (and their original datasets); and, as an organisation that is heavily invested in publishing materials within a legally defined copyright and copyright licensing framework, that rights were appropriately cleared in producing the models (including the sourcing of the training datasets) that could be said to be part of the materials production supply chain.

