Grant Potter picked up on a line in yesterday’s post on Building Your Own Learning Tools:
including such tools as helpers in an arts history course would help introduce students to the wider question of how well such algorithms work in general, and the extent to which tools or applications that use them can be trusted
The case in point related to the use of a k-means algorithm to detect common colour values in images of paintings in order to produce a “dominant colour” palette for each image. The random initialisation of the algorithm means that if you run it on the same image several times, you may get a different palette each time. Which makes a point about algorithms, if nothing else, and encourages you to treat them just like any other (un)reliable witness. In short, the tool demoed was a bit flakey, but no less useful (or instructive) for that…
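By way of illustration, here’s a minimal sketch of the sort of thing that tool does — a naive pure-Python k-means over a made-up set of RGB pixels rather than a real painting (the function and variable names are mine, not the tool’s). Run it with two different seeds and you may well get two different “dominant colour” palettes from the very same pixels:

```python
import random

def dominant_colours(pixels, k=3, seed=None, iters=20):
    """Naive k-means over (r, g, b) tuples; returns k centre colours."""
    rng = random.Random(seed)
    # Random initialisation -- the reason repeated runs can disagree.
    centres = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to its nearest centre (squared distance).
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        # Move each centre to the mean of its cluster (keep old centre if empty).
        centres = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centres[j2]
                   for j2, cl in enumerate(clusters)]
    return [tuple(round(ch) for ch in centre) for centre in centres]

# Stand-in "image": random RGB pixels (a real notebook would read an image file).
rng = random.Random(0)
pixels = [tuple(rng.randrange(256) for _ in range(3)) for _ in range(300)]
palette_a = dominant_colours(pixels, k=3, seed=1)
palette_b = dominant_colours(pixels, k=3, seed=2)
```

Same pixels in, potentially different “dominant colours” out — which is exactly the unreliable-witness behaviour worth putting in front of students.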
Grant’s pick-up got me thinking about another advantage of bringing interactive digital (algorithmic) tools into the wider curriculum via things like responsive Jupyter notebooks. By trying to apply algorithms to the analysis of things people care about (images of paintings in art history; texts in literature, the classics, or history; maps in geography and politics; and so on), and showing how they can be a bit rubbish, we help students see the limits of the power of similar algorithms applied elsewhere, and help them develop a healthy, and slightly better informed, scepticism about algorithms by employing them in a context they (academically) care about.
Using machine “intelligence” for analysis or critique in the arts or humanities also shows how many perspectives those subjects admit, and how biases can creep in. (We can also use algorithmic exploration to dig one level deeper in the sciences. In this respect, whenever anyone quotes an average at me I think of Anscombe’s Quartet, or for a population trend, Simpson’s Paradox.)
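The Anscombe point fits in a couple of lines of stdlib Python. Below are the first two of the four classic datasets: the summary statistics agree almost exactly, even though plotted out one is a noisy straight line and the other a smooth curve:

```python
from statistics import mean, variance

# First two of Anscombe's four datasets: (near-)identical summary statistics,
# very different shapes when plotted.
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(round(mean(y1), 2), round(mean(y2), 2))          # same mean...
print(round(variance(y1), 2), round(variance(y2), 2))  # ...and same variance
```

Which is the one-cell argument for always looking at the data, not just the averages someone quotes at you.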
Introducing computational tools as part of the wider curriculum, even in passing, also helps reveal what’s possible with a simple (perhaps naive) application of technology, and the possible benefits, limits and risks associated with such applications.
(Academic algorithm researchers often tout things like a huge increase in performance on some test problem, from, say, 71.2% success to 71.3%, albeit at the expense of using a sizeable fraction of the world’s computers to run the computation. Which means roughly 3 times in 10 it’s still wrong, and the answers still need checking all the time to catch those errors. If I can do the same calculation with 60% success on my Casio calculator watch, I still get a benefit 60% of the time, and the risk of failure isn’t that much different. You pays your money and you takes your choice.)
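The back-of-envelope sum, using the made-up success rates above, makes the point starkly: either way you’re still catching hundreds of errors per thousand answers.

```python
# Back-of-envelope: with the (made-up) success rates from the aside above,
# how many wrong answers would still need catching per 1,000?
errors = {name: round((1 - accuracy) * 1000)
          for name, accuracy in [("big model", 0.713), ("calculator watch", 0.60)]}
print(errors)
```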
This also relates to the “everyone should learn to code” dogma. Putting barebones computational tools inline in course materials shows how you can often achieve some sort of result with just a line or two of code; more importantly, it shows what sorts of thing can be done with a line or two of code, and what sorts of thing can’t be done that easily…
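A hypothetical two-liner of exactly that sort (the example text is my own, not from any course): it can count the words in a passage instantly, but it has no idea what any of them mean — which is both the power and the limit on display at once.

```python
from collections import Counter

# Two lines of "analysis": the most frequent words in a passage.
text = "the cat sat on the mat and the dog sat on the cat"
top = Counter(text.split()).most_common(3)
print(top)
```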
In putting together educational materials, we often focus on teaching people the right way to do something. But when it comes to code, rather than “everyone should learn to code”, maybe we need “everyone should experience the limitations of simple algorithms”? The solutionist reason for using computational methods in the curriculum would be to show how useful computers can be (for some things) for automating, measuring, sorting and filtering. The resistance reason is that we can start to show people what the limits of the algorithms are, and how the computer’s answer isn’t always (isn’t often?) the right one.
2 thoughts on “Algorithms That Work, A Bit… On Teaching With Things That Don’t Quite Work”
Thanks for these thoughts Tony. I am v uncertain about the ‘learn to code’ mantra – there’s value in demystifying, but coding at a fairly basic level seems increasingly oversold as a silver bullet to permanent employment, which doesn’t feel right… I like the idea of showing how things can be a bit flaky, to avoid the perception of software as magic and perfect too.
A useful set of ideas for leaders and others learning about digital, not just humanities etc…
Thanks Laura… the demo notebooks I’ve got on Azure Notebooks include a range of things I’d class as coding but that my computing colleagues presumably wouldn’t want in any of their courses, such as writing LaTeX or TikZ, or defining musical scores as computational objects using tinyNotation or ABC notation and then rendering or playing them. I had been treating ‘learn to code’ as ‘being able to use text recipes to script computers to do things’ but am refining that to include seeing how scripted text recipes can do things and appreciating the limits of what they do.
Comments are closed.