Build Your Own Learning Tools (BYOT)

A long time ago, I realised that one of the benefits of using simple desktop (Lego) robots to teach programming was that they let you try – and test – code out in a very tangible way. (Programming a Logo turtle provides a similar, though less visceral, form of direct feedback*.)

One of the things I’ve started exploring recently is the extent to which we can create “reproducible” (open) educational resources using Jupyter notebooks. Part of the rationale for this is that if the means of producing a particular asset are provided, then it becomes much easier to reuse the assets with modification. However, I’ve also come to appreciate that having a computational environment to hand also means we can explore the taught subject matter in a wider variety of ways.

In that context, one of the units I am looking at is an art history course on OpenLearn (Making sense of art history). One of the activities asks learners “Are the colours largely bright or dull?” about a selection of paintings, although I struggled to find a definition of what “bright” and “dull” might mean. This got me thinking about how images can be represented, and the extent to which we could create simple tools, using powerful libraries, to support student exploration of a particular topic – for example, helping them “test” particular images for different attributes both in a mechanical way (based on physical measurements) and against personal experience.

As another example, the unit introduces the notion of a colour wheel, which made me wonder if I could find a way of filtering the “blueish” colours by doing some image processing on the colour wheel image provided in the materials:

The original image is the one on the left; the “blueish” values filtered image is the one on the right.

(I couldn’t find a general colour filter function – just an example on Stack Overflow for filtering blues… What I did wonder though was about a control that would let you select an area of the colour wheel and then apply that as a filter to another image.)
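A filter along those lines is straightforward to sketch. The version below works in HSV space using numpy and matplotlib’s colour-space helper (rather than the OpenCV recipe from Stack Overflow); the hue thresholds are illustrative guesses at what counts as “blueish”, not a canonical definition, and `keep_blues` is my own naming:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def keep_blues(rgb, hue_lo=0.5, hue_hi=0.75):
    """Keep only 'blueish' pixels of an (H, W, 3) uint8 RGB image;
    everything else is blanked to white. hue_lo/hue_hi bracket the
    cyan-to-violet part of the hue circle (0..1 scale) - tweak to
    taste, or to match a region selected on a colour wheel."""
    hsv = rgb_to_hsv(rgb / 255.0)            # hue, sat, value all in [0, 1]
    mask = ((hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi)
            & (hsv[..., 1] > 0.2))           # ignore near-greys
    out = rgb.copy()
    out[~mask] = 255                         # blank the non-blue pixels
    return out, mask
```

Generalising the hue band to a pair of sliders – or to a region picked off the colour wheel image itself – would give the selectable-area control mentioned above.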

With such a filter in place, when the course materials suggest an image is “predominantly blue” I can check it to see exactly where it is predominantly blue…

(This also raises questions about our perception of colour, which is another important aspect of art appreciation and perhaps demonstrates certain limitations with computational analysis; which is a Good Thing to demonstrate, right? And it also makes us think about things like cognitive psychology and the art of the artist…)

Another question in the OpenLearn unit asked students to compare the colour palette used in two different images. I tried to make sense of that in terms of trying to build some sort of instrumentation that would identify the dominant palette / colours in an image, which is – and isn’t – as simple as it sounds. From a quick search, it seems that the best approach is to use cluster analysis to try to identify the dominant colour values. Several online recipes demonstrated how to use k-means clustering to achieve this, whilst the color-thief-py package uses a median cut algorithm:
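The k-means route can be sketched in a few lines with scikit-learn. This is my own minimal version (not color-thief’s median cut): cluster the pixel values, then order the cluster centres by how many pixels fall into each cluster, as a rough proxy for “dominance”:

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_palette(rgb, n_colours=5, seed=None):
    """Return n_colours RGB values for an (H, W, 3) uint8 image,
    most dominant first, by k-means clustering the pixels."""
    pixels = rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colours, n_init=10, random_state=seed).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_colours)
    order = np.argsort(counts)[::-1]         # biggest cluster first
    return km.cluster_centers_[order].astype(np.uint8)
```

Leaving `seed` unset reproduces the run-to-run variability discussed further down; pinning it makes the palette repeatable.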

(Another advantage of analysing images in this way is that it may provide us with things we can describe (automatically, as text) when trying to make our materials accessible. For example, Automatically Generating Accessible Text Descriptions of ggplot Charts in R.)

Basic code for generating the palette was quite easy to find, and the required packages (PIL, OpenCV, scikit-image) are all preinstalled on Azure Notebooks (my demos are here – check the image processing notebook). This meant I could get started relatively quickly with a crappy tool for exploring the images – but a tool that provided immediate feedback on the questions I could ask of arbitrary images, and one that could be iterated on (for example, improving the palette display by ordering the palette relative to a colour wheel).

One of the tricks I saw in the various palette-cluster demos was to add the palette to the side of an image, which is a neat way of displaying it. This also put me in mind of a two-axis display in which we might display the dominant colour in particular horizontal and vertical bands of an image as a sidebar/bottom bar. That’s on my to do list.
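The sidebar trick itself is just an `hstack`. A minimal sketch, assuming the image is an (H, W, 3) uint8 numpy array and the palette is a list of RGB values (`add_palette_strip` is my own name for it):

```python
import numpy as np

def add_palette_strip(rgb, palette, strip_width=40):
    """Append a vertical strip to the right of an (H, W, 3) uint8
    image, with one equal-height block per palette colour (any
    leftover rows go to the last block)."""
    h = rgb.shape[0]
    strip = np.zeros((h, strip_width, 3), dtype=np.uint8)
    block = h // len(palette)
    for i, colour in enumerate(palette):
        top = i * block
        bottom = h if i == len(palette) - 1 else top + block
        strip[top:bottom] = colour
    return np.hstack([rgb, strip])
```

The two-axis version would replace each block with the dominant colour of the corresponding horizontal band of the image, and add a matching bottom bar for the vertical bands.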

Using techniques such as k-means clustering for the palette analysis also made me think that including such tools as helpers in an arts history course would help introduce students to the wider question of how well such algorithms work in general, and the extent to which tools or applications that use them can be trusted. k-means algorithms typically have a random seed, so each time you run them you may get a different answer (a different palette, in the above case, even when the same algorithm is applied to the same image). This encourages learners to see the computer as providing an opinion about an image rather than a truth. The computer’s palette analysis can thus be seen by the learner as another perspective on how to read an image, say, but not the only way of reading it, nor even a necessarily reliable way of reading it, although one that is “informed” in a particular sense. (The values used to seed the k-means clusterer can be viewed as biases of the clusterer that change each time it’s run; often, these differing biases may not result in significantly different palettes – but sometimes they may…)

Anyway… the point of this post was supposed to be: the computational engine that is available to us when we present educational materials as live Jupyter notebooks means that we can build quite simple computational tools to extend the environment and allow students to interact with, and ask a range of questions of, the subject matter we are trying to engage them with. Because after all, everyone should learn to code / programme, right? Which presumably includes educators…? Which means we can all be ed-techies now…

See also: Jupyter Notebooks, Cognitive Tools and Philosophical Instruments.

* I’ve recently started to learn to play a musical instrument, as well as read music, for the first time, and this also provides a very powerful form of direct feedback. In case you’re wondering: a Derwent Adventurer 20 harp from the Devon Harp Center in Totnes. By the by, there is also an Isle of Wight harp festival in Ryde each year.

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...
