Reproducible Modifiable Inset Maps

Over the weekend, I started having a look at generating static maps (rather than interactive web maps) using matplotlib/Basemap, with one eye on the reproducible educational materials production idea, and the ways in which Basemap might be useful to authors creating new materials that are capable of being reused and/or maintained, with modification.

One of the things that struck me was how authors may want to produce different sorts of map. Basemap has importers for several different flavours of map tile that can essentially be treated as map styles, so once you have defined your map, you should be able to tile it in different ways without having to redefine the map.

It’s also worth noting how easy it is to change the projection…

Another thing that struck me was how maps-on-maps (inset maps?) can often help situate a region that is being presented in some detail within its wider geographical context.

I couldn’t offhand find an off-the-shelf function to create inset maps, so I hacked my own together:

There are quite a few hardwired defaults baked in, and the generator could be parameterised in all sorts of ways (eg changing plot size, colour themes, etc.).

Also on my to do list for the basic maps is a simple way of adding things like great circle connectors between two points, adding clear named location points, etc etc.

You can find my work in progress/gettingstarted demos on this theme on Azure Notebooks here.

If you’re interested, here’s what I came up with for the inset maps… It’s longer than it needs to be because it incorporates various bits and pieces for rendering default / demo views if no actual regions are specified.

from operator import itemgetter

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
import numpy as np

def createInsetMap(base=None, inset=None,
                   zoom=2, loc=1, baseresolution=None,
                   insetresolution=None, connector=True):

    ''' Create an inset map for a particular region. '''

    fig = plt.figure()
    ax = fig.add_subplot(111)

    #Create a basemap

    #Add some default logic to provide some demos
    #If no args, use a World map
    if base is None or (isinstance(base, str) and base.lower() == 'world'):
        baseargs = {'projection': 'cyl', 'lat_0': 0, 'lon_0': 0}
    #If set UK base, then we can add a more detailed inset...
    #(The bounding box values here are illustrative defaults)
    elif isinstance(base, str) and base.lower() == 'uk':
        baseargs = {'projection': 'merc',
                    'llcrnrlat': 49.5, 'urcrnrlat': 61.0,
                    'llcrnrlon': -11.0, 'urcrnrlon': 2.5}
    else:
        #Should really check base is a dict of Basemap kwargs...
        baseargs = dict(base)

    if baseresolution is not None:
        baseargs['resolution'] = baseresolution

    map1 = Basemap(ax=ax, **baseargs)
    map1.drawcoastlines()
    map1.fillcontinents(color='#ddaabb', lake_color='#7777ff')

    #Now define the inset map

    #This default is to make for some nice default demos
    #With no explicit settings, inset UK on World map
    if base is None and inset is None:
        insetargs = {'projection': 'merc',
                     'llcrnrlat': 49.5, 'urcrnrlat': 61.0,
                     'llcrnrlon': -11.0, 'urcrnrlon': 2.5}
    #If the base is UK, and no inset is selected, demo an IW inset
    #(Isle of Wight bounding box - again, illustrative values)
    elif (isinstance(base, str) and base.lower() == 'uk') and inset is None:
        insetargs = {'projection': 'merc',
                     'llcrnrlat': 50.5, 'urcrnrlat': 50.8,
                     'llcrnrlon': -1.6, 'urcrnrlon': -1.0}
    else:
        #Should really check inset is a dict...
        insetargs = dict(inset)

    axins = zoomed_inset_axes(ax, zoom, loc=loc)

    #The following seem to be set automatically?
    #axins.set_xlim(llcrnrlon, urcrnrlon)
    #axins.set_ylim(llcrnrlat, urcrnrlat)

    if insetresolution is not None:
        insetargs['resolution'] = insetresolution

    map2 = Basemap(ax=axins, **insetargs)

    map2.fillcontinents(color='#ddaabb', lake_color='#7777ff', zorder=0)

    #Add connector lines from marked area on large map to zoomed area
    if connector:
        mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5")

So, what other fragments of workflow, or use case, might be handy when creating (reusable / modifiable) maps in an educational context?

PS inset diagrams/maps in ggplot2 / ggmaps. See also Inset maps with ggplot2.

Build More of Your Own Learning Tools

In Build Your Own Learning Tools I described how I was inspired to try to build a simple colour palette explorer to complement an OpenLearn unit on art history that asked learners various questions about the colour themes used in particular paintings.

This is in part driven by the Jupyter notebook solutionism obsession that’s currently consuming me, where I’m trying to put together various demos that show how Jupyter notebooks might be used as an authoring tool to support the creation – and enhancement – of reproducible (which is also to say, easily modifiable) (open) educational resources.

Over the last couple of days, I’ve started looking at cltk, a classics fork of the nltk Python natural language toolkit. This provides access to a wide range of classical texts in a variety of languages, and text analysis tools to work with them. I’m struggling to find easy ways in to working with this package, so progress is a bit slower than I’d like in just familiarising myself with it, and I’ve yet to look at it in the context of some actual Latin or Greek open teaching material (eg Discovering Ancient Greek and Latin or Getting started on classical Latin).

As I’ve been trying to familiarise myself with the package, I’ve been reflecting on things that may be helpful to me as a learner if I was trying to get started with reading Latin or Greek texts.

One thing would be accessing texts: cltk does have a wide range of corpuses available, but I’ve struggled to find any index files or metadata files to help me know what’s in them and how to retrieve them.

Apols for screenshots rather than code, and incomplete code screenshots at that. A link to the notebook is provided at the end of the post if you want the src.

Once you have found and loaded a text, it’s easy to search:

You can also find concordances:
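(The concordance example above was a screenshot; nltk and cltk provide their own concordance methods, but the core idea is simple enough to sketch in a few lines of plain Python, using a made-up text fragment for the demo:)

```python
def concordance(text, keyword, width=30):
    """Return keyword-in-context lines for each occurrence of keyword:
    a few words of left context, the keyword, a few words of right context."""
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower().strip('.,;:') == keyword.lower():
            left = ' '.join(tokens[max(0, i - 5):i])
            right = ' '.join(tokens[i + 1:i + 6])
            lines.append('{:>{w}} {} {}'.format(left, tok, right, w=width))
    return lines

text = ("Gallia est omnis divisa in partes tres quarum unam incolunt Belgae "
        "aliam Aquitani tertiam qui ipsorum lingua Celtae nostra Galli appellantur")
for line in concordance(text, 'partes'):
    print(line)
```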

(Related: n-gram / Multi-Word / Phrase Based Concordances in NLTK.)

There is a trained named entity tagger, although it seems to be a bit basic/ ropey. It’s something to work with, though:

On my to do list generally is learn How to Train your Own Model with NLTK and Stanford NER Tagger? (for English, French, German…).

Latin declensions are something I remember from my schooldays. Enumerating them may be handy when creating resources, and might also be useful to students. One thing I did find, though, was a lack of documentation about how to decode the person/tense information. Anyone, anyone? Really… Anyone? (I imagine v1spia--- is verb, 1st person singular, present, maybe, but then what? Also, isn’t it conjugation for verbs, rather than declension?)

From a declension/conjugation/whatever it is, we can do a lookup, but to do this you need to know the root(?), and for it to be useful, you need to know how to read the grammar(?) code string:

So how do I properly decode those strings? Docs anywhere? Maybe even a simple py function that turns a string like v2pfia--- into words?
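For what it’s worth, working only from the two examples in this post (v1spia--- and v2pfia---), my guess at the positional encoding is part-of-speech, person, number, tense, mood, voice. The mapping tables below are assumptions reverse-engineered from those two codes, not documented cltk behaviour, so treat this as a sketch of the sort of helper function I’d like to exist:

```python
# Hypothetical decoder for codes like 'v1spia---'; the position/value
# mappings below are guesses from the two examples in the post, NOT
# documented cltk behaviour.
POS    = {'v': 'verb', 'n': 'noun'}
NUMBER = {'s': 'singular', 'p': 'plural'}
TENSE  = {'p': 'present', 'i': 'imperfect', 'f': 'future',
          'r': 'perfect', 'l': 'pluperfect'}
MOOD   = {'i': 'indicative', 's': 'subjunctive', 'm': 'imperative'}
VOICE  = {'a': 'active', 'p': 'passive'}

def decode(code):
    """Turn a code like 'v1spia---' into a human-readable description,
    skipping any '-' (unused) positions."""
    parts = [POS.get(code[0]),
             (code[1] + {'1': 'st', '2': 'nd', '3': 'rd'}.get(code[1], '')
              + ' person') if code[1] != '-' else None,
             NUMBER.get(code[2]),
             TENSE.get(code[3]),
             MOOD.get(code[4]),
             VOICE.get(code[5])]
    return ', '.join(p for p in parts if p)

print(decode('v1spia---'))  # verb, 1st person, singular, present, indicative, active
print(decode('v2pfia---'))  # verb, 2nd person, plural, future, indicative, active
```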

Another issue I’m guessing students face when reading classical texts, and that educators must grapple with when teaching people how to read the texts in a prosodically meaningful way, is how to split the words into syllables and how to sound them out.

Splitting texts into syllables may help with this, and is also likely to be useful when working out the meter of a verse, for example?
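(cltk ships its own Latin syllabifier; purely to illustrate the core idea, here’s a deliberately naive pure-Python sketch that starts a new syllable at each vowel and attaches preceding consonants to it, ignoring diphthongs, “qu” and consonant clusters that a real syllabifier handles properly:)

```python
VOWELS = set('aeiouy')

def naive_syllabify(word):
    """Very naive Latin syllabifier: a vowel ends a syllable, and any
    consonants that follow attach to the next one. Ignores diphthongs,
    'qu' and consonant clusters - cltk's syllabifier handles those."""
    syllables, current = [], ''
    prev_was_vowel = False
    for ch in word.lower():
        if prev_was_vowel:
            # previous char was a vowel: close that syllable, start anew
            syllables.append(current)
            current = ch
        else:
            current += ch
        prev_was_vowel = ch in VOWELS
    # a trailing consonant run attaches to the final syllable
    if current and not any(c in VOWELS for c in current) and syllables:
        syllables[-1] += current
    elif current:
        syllables.append(current)
    return syllables

print(naive_syllabify('dominus'))  # ['do', 'mi', 'nus']
```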

Transliteration into a phonetic alphabet may also be useful, although this requires that the learner also knows how to read and sound out characters in that alphabet. (The one used in cltk (the IPA phonetic transliteration alphabet) differs from the one used in the OpenLearn texts I skimmed – any idea what that one is called? Anyone, anyone? I’m not sure if cltk offers other alphabets, or how you’d train a system to use one.)

As I mentioned, I’ve been struggling to find many useful docs/tutorials for working with cltk, and haven’t managed to find any meaningful corpus metadata (or a recipe for building it in a standard way). If you can point me to anything useful, please do so via the comments…

You can find my current work in progress notebook on Azure notebooks: Getting Started With Notebooks/4.2.0 Classics.ipynb.

Build Your Own Learning Tools (BYOT)

A long time ago, I realised that one of the benefits of using simple desktop (Lego) robots to teach programming was that it allows you to try – and test – code out in a very tangible way. (Programming a Logo turtle provides a similar, though less visceral, form of direct feedback*.)

One of the things I’ve started exploring recently is the extent to which we can create “reproducible” (open) educational resources using Jupyter notebooks. Part of the rationale for this is that if the means of producing a particular asset are provided, then it becomes much easier to reuse the assets with modification. However, I’ve also come to appreciate that having a computational environment to hand also means we can explore the taught subject matter in a wider variety of ways.

In that context, one of the units I am looking at is an art history course on OpenLearn (Making sense of art history). One of the activities asks learners Are the colours largely bright or dull? in a selection of paintings, although I struggled to find a definition of what “bright” and “dull” may be. This got me thinking about how images can be represented, and the extent to which we could create simple tools using powerful libraries to support student exploration of a particular topic. For example, helping them “test” particular images for different attributes in both a mechanical way (based on physical measurements) as well as personal experience.

As another example, the unit introduces the notion of a colour wheel, which made me wonder if I could find a way of filtering the “blueish” colours by doing some image processing on the colour wheel image provided in the materials:

The original image is the one on the left; the “blueish” values filtered image is the one on the right.

(I couldn’t find a general colour filter function – just an example on Stack Overflow for filtering blues… What I did wonder though was about a control that would let you select an area of the colour wheel and then apply that as a filter to another image.)
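The Stack Overflow recipe I found worked with OpenCV HSV thresholds; the same idea can be sketched in pure Python with the stdlib colorsys module, keeping a pixel only if its hue falls within a “blue” band. The band edges here are my own judgment call, not any standard definition of “blueish”:

```python
import colorsys

def is_blueish(r, g, b, hue_band=(0.5, 0.75)):
    """Return True if an RGB colour (0-255 channels) has a hue in the
    given band. Hue 0.5-0.75 spans roughly cyan through blue to purple;
    the band edges are a judgment call, not a standard definition.
    Greys are excluded via a minimum saturation."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_band[0] <= h <= hue_band[1] and s > 0.2

def filter_blues(pixels):
    """Keep blueish pixels; replace everything else with white."""
    return [(r, g, b) if is_blueish(r, g, b) else (255, 255, 255)
            for (r, g, b) in pixels]

# the red pixel is blanked to white; blue and cyan survive
print(filter_blues([(0, 0, 255), (255, 0, 0), (0, 255, 255)]))
```

Applying the same test per-pixel over a PIL image array gives the wheel-filtering effect shown above.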

With such a filter in place, when the course materials suggest an image is “predominantly blue” I can check it to see exactly where it is predominantly blue…

(This also raises questions about our perception of colour, which is another important aspect of art appreciation and perhaps demonstrates certain limitations with computational analysis; which is a Good Thing to demonstrate, right? And it also makes us think about things like cognitive psychology and the art of the artist…)

Another question in the OpenLearn unit asked students to compare the colour palette used in two different images. I tried to make sense of that in terms of trying to build some sort of instrumentation that would identify the dominant palette / colours in an image, which is – and isn’t – as simple as it sounds. From a quick search, it seems that the best approach is to use cluster analysis to try to identify the dominant colour values. Several online recipes demonstrated how to use k-means clustering to achieve this, whilst the color-thief-py package uses a median cut algorithm:
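The online recipes use scikit-learn’s KMeans over the image’s pixel array; a minimal pure-Python sketch of the same idea (crude, and over a toy “image” of RGB tuples rather than a real file) looks something like this:

```python
import random

def kmeans_palette(pixels, k=2, iters=10, seed=0):
    """Crude k-means over RGB tuples, returning k centroid colours,
    i.e. a dominant-colour palette. The recipes in the wild use
    sklearn's KMeans; this just shows the idea."""
    rng = random.Random(seed)  # the random seed discussed below
    centroids = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest centroid (squared distance)
            nearest = min(range(k),
                          key=lambda j: sum((p[c] - centroids[j][c]) ** 2
                                            for c in range(3)))
            clusters[nearest].append(p)
        for j, cluster in enumerate(clusters):
            if cluster:  # move each centroid to its cluster's mean colour
                centroids[j] = tuple(sum(p[c] for p in cluster) // len(cluster)
                                     for c in range(3))
    return centroids

# a mostly-red "image" with a few blue pixels
pixels = [(250, 10, 10)] * 20 + [(10, 10, 250)] * 5
print(sorted(kmeans_palette(pixels, k=2)))
```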

(Another advantage of analysing images in this way is that it may provide us with things we can describe (automatically, as text) when trying to make our materials accessible. For example, Automatically Generating Accessible Text Descriptions of ggplot Charts in R.)

Basic code for generating the palette was quite easy to find, and the required packages (PIL, opencv2, scikit-image) are all preinstalled on Azure notebooks (my demos are here – check the image processing notebook). This meant I was able to get started relatively quickly with a crappy tool for exploring the images; but a tool that provided immediate feedback relating to questions I could ask of arbitrary images, and one that could be iterated on (for example, improving the palette display by ordering the palette relative to a colour wheel).

One of the tricks I saw in the various palette-cluster demos was to add the palette to the side of an image, which is a neat way of displaying it. This also put me in mind of a two-axis display in which we might display the dominant colour in particular horizontal and vertical bands of an image as a sidebar/bottom bar. That’s on my to do list.

Using techniques such as k-means clustering for the palette analysis also made me think that including such tools as helpers in an arts history course would help introduce students to the wider question of how well such algorithms work in general, and the extent to which tools or applications that use them can be trusted. k-means algorithms typically have a random seed, so each time you run them you may get a different answer (a different palette, in the above case, even when the same algorithm is applied to the same image). This encourages learners to see the computer as providing an opinion about an image rather than a truth. The computer’s palette analysis can thus be seen by the learner as another perspective on how to read an image, say, but not the only way of reading it, nor even a necessarily reliable way of reading it, although one that is “informed” in a particular sense. (The values used to seed the k-means clusterer can be viewed as biases of the clusterer that change each time it’s run; many times, these differing biases may not result in significantly different palettes – but sometimes they may…)

Anyway… the point of this post was supposed to be: the computational engine that is available to us when we present educational materials as live Jupyter notebooks means that we can build quite simple computational tools to extend the environment and allow students to interact with, and ask a range of questions of, the subject matter we are trying to engage them with. Because after all, everyone should learn to code / programme, right? Which presumably includes educators…? Which means we can all be ed-techies now…

See also: Jupyter Notebooks, Cognitive Tools and Philosophical Instruments.

* I’ve recently started to learn to play a musical instrument, as well as read music, for the first time, and this also provides a very powerful form of direct feedback. In case you’re wondering: a Derwent Adventurer 20 harp from the Devon Harp Center in Totnes. By the by, there is also an Isle of Wight harp festival in Ryde each year.

Fragment: On Reproducible Open Educational Resources

Via O’Reilly’s daily Four Short Links feed, I notice the Open Logic Project, “a collection of teaching materials on mathematical logic aimed at a non-mathematical audience, intended for use in advanced logic courses as taught in many philosophy departments”. In particular, “it is open-source: you can download the LaTeX code [from Github]”.


Of course, sharing the TeX source does mean you need a (La)TeX environment to run it (and the project does bundle some of the custom .sty style files you need in the repo, which is handy).

Compare this with the Simple Maths Equations and Notation notebook I’ve started sketching as part of a self-started, informal “reproducible OERs with Jupyter notebooks” project I’m dabbling with:

Here, a Jupyter notebook contains LaTeX code that can then be rendered (in part?) through the notebook previewer – at least in so far as expressions are written in MathJax parseable code – and also within a live / running Jupyter notebook. Not only do I share the reproducible source code (as a notebook), I also share a link to at least one environment capable of running it, and that allows it to be reused with modification. (Okay, in this case, not openly so, because you have to have an Azure Notebooks account. But the notebook could equally run on Binderhub or a local install, perhaps with one or two additional requirements if you don’t already run a scientific Python environment.)

In short, for a reproducible OER that supports reuse with modification, sharing the means of production also means sharing the machinery of production.

To simplify the user experience, the notebook environment can be preinstalled with packages needed to render a wider range of TeX code, such as drawings rendered using TikZ. Alternatively, code cells can be populated with package installation commands to customise a more vanilla environment, as I do in several demo notebooks:

What the Open Logic Project highlights is that reproducible OERs not only provide ready access to the “source code” of a resource so that it can be easily reused with modification, but that access to an open environment capable of processing that source code and rendering the output document also needs to be provided. (Open / reproducible science researchers have known this for some time…)

Getting a TeX/LaTeX environment up and running can be a faff – and can also take up a lot of disk space – so the runtime environment requirements are not negligible.

In the case of Jupyter notebooks, LaTeX support is available, and container images capable of running on Binderhub, for example, are relatively easily defined (see for example the Binder LaTeX example). (I’m not sure how rich Stencila’s support for LaTeX is, either, and/or whether it requires an external LaTeX environment when running the Stencila desktop app?)

It also strikes me that another thing we should be doing is export a copy of the finished work, eg as a PDF or a complete, self-standing HTML archive, in case the machinery does break. This is also important where third party services are called. It may actually make sense to use something like requests for all third party URL requests, and save a cached version of all requests (using requests-cache) to provide a local copy of whatever it was that was called when the document was originally rendered.
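(requests-cache does this transparently, by monkey-patching requests via its install_cache() call. Purely to illustrate the underlying idea with stdlib code only, here’s a sketch of a file-backed cached fetch, using a stub fetcher in place of a live HTTP call:)

```python
import json
import pathlib
import tempfile

def cached_fetch(url, fetcher, cache_file):
    """Fetch url via fetcher(url), but replay a previously saved response
    if one exists - so a document can still be re-rendered after a third
    party service breaks. requests-cache automates this for requests."""
    cache_path = pathlib.Path(cache_file)
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    if url not in cache:
        cache[url] = fetcher(url)
        cache_path.write_text(json.dumps(cache))
    return cache[url]

calls = []
def fake_fetcher(url):  # stands in for a live HTTP request
    calls.append(url)
    return 'response for ' + url

cache_file = pathlib.Path(tempfile.mkdtemp()) / 'cache.json'
first = cached_fetch('http://example.com/data', fake_fetcher, cache_file)
second = cached_fetch('http://example.com/data', fake_fetcher, cache_file)
print(first == second, len(calls))  # same response, only one real call
```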

See also: OER Methods – Generative Designs for Reuse-With-Modification

Jupyter Notebooks, Cognitive Tools and Philosophical Instruments

A placeholder post, as much as anything, to mark the AJET Call for Papers for a Special Issue on Re-Examining Cognitive Tools: New Developments, New Perspectives, and New Opportunities for Educational Technology Research as a foil for thinking about what Jupyter notebooks might be good for.

According to the EduTech Wiki, “[c]ognitive tools refer to learning with technology (as opposed to learning through technology)”, which doesn’t really make sense as a sentence and puts me off the idea of ed-tech academe straight away.

In the sense that cognitive tools support a learning process, I think they can do so in several ways. For example, in Programming in Jupyter Notebooks, via the Heavy Metal Umlaut I remarked on several different ways in which the same programme could be constructed within a notebook, each offering a different history and each representing a differently active approach to code creation and programming problem solving.

One of the things I try to do is reflect on my own practice, as I have been doing recently whilst trying to rework fragments of some OpenLearn materials as reproducible educational resources (which is to say, materials that generate their own resources and as such support reuse with modification more generally than many educational resources).

For example, consider the notebook at

You can also run the notebook interactively; sign in to Azure notebooks (if you’re OU staff, you can use your staff OUCU/OU password credentials) and clone my Getting Started library into your workspace. If notebooks are new to you, check out the 1.0 Using Jupyter Notebooks in Teaching and Learning - READ ME FIRST.ipynb notebook.

In creating the electronics notebook, I had to learn a chunk of stuff (the lcapy package is new to me and I had to get my head round circuitikz) but I found trying to figure out how to make examples related to the course materials provide a really useful context for giving me things to try to do with the package. In that the sense, the notebook was a cognitive tool (I guess) that supported my learning about lcapy.

For the notebook, I had to start getting my head round sympy and, along the way, cobble together bits and pieces of code that might be useful when trying to produce maths related materials in a reproducible way. (For example, creating equations in sympy that can then be rendered, manipulated and solved throughout the materials in a way that’s appropriate for a set of educational (that is, teaching and/or learning) resources.)
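(A minimal sketch of what I mean by a single equation object being both renderable and solvable, using a toy quadratic rather than anything from the actual course materials:)

```python
import sympy

# define the equation once, as an object...
x = sympy.symbols('x')
eq = sympy.Eq(x**2 - 5*x + 6, 0)

# ...then render it as LaTeX for the teaching materials...
print(sympy.latex(eq))

# ...and solve the very same object elsewhere in the notebook
print(sympy.solveset(eq, x))
```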

Something else that came to mind is that the notebook medium, as both an authoring medium and a delivery medium (we can use it just to create assets; or we can also use it to deliver content to students), changes the sorts of things you might want to do in the teaching. For example, I had the opportunity to create self-test functions, and there is the potential for interactives that let students explore the effect of changing component values in a circuit. (We could also plot responses over a range of variable values, but I haven’t demoed that yet.) In a sense, the interactive affordances of the medium encouraged me to think of opportunities to create philosophical instruments that allow authors – as well as students – to explore the phenomena being described by the materials. Although I am not a chemistry educator, putting together a reworking of some OpenLearn chemistry materials – – gave me some ideas about the different ways in which the materials could be worked up to support interactive / self-checking / constructive learning use. (That is, ways in which we could present the notebooks as interactive cognitive tools to support the learning process on the one hand, or as philosophical instruments that would allow the learner to explore the subject matter in an investigative and experimental way.)

I like to think the way I’m using Jupyter notebooks as part of an informal “reproducible-OER” exploration is in keeping with some of the promise of live authoring using the OU’s much vaunted, though still to be released, OpenCreate authoring environment (at least, as I understand the sorts of thing it is supposed to be able to support), with the advantage of being available now.

It’s important to recognise that Jupyter notebooks can be thought of as a medium that behaves in several ways. In the first case, it’s a rich authoring medium to work with – you can create things in it and for it, for example in the form of interactive widgets or reusable components such as IPython magics (for example, this interactive mapping magic: ). Secondly, it’s a medium qua environment that can itself be extended and customised through the enabling and disabling of notebook extensions, such as ones that support WYSIWYG markdown editing, or hidden, frozen and read-only executable cells, which can be used to constrain the ways in which learners use some of the materials, perhaps as a counterpoint to getting them to engage more actively in editing other cells. Thirdly, it acts as a delivery medium, presenting content to readers who can engage with the content in an interactive way.

I’m not sure if there are any good checklists of what makes a “cognitive tool” or a “philosophical instrument”, but if there are it’d be interesting to try to check Jupyter notebooks off against them…

OER Methods – Generative Designs for Reuse-With-Modification

Via my feeds (The stuff ain’t enough), I notice Martin pointing to some UNESCO draft OER Recommendations.

Martin writes:

… the resources are a necessary starting point, but they are not an end point. Particularly if your goal is to “ensure inclusive and equitable quality education and promote lifelong opportunities for all”, then it is the learner support that goes around the content that is vital.

And on this, the recommendations are largely silent. There is a recommendation to develop “supportive policy” but this is focused on supporting the creation of OER, not the learners. Similarly the “Sustainability models for OER” are aimed at finding ways to fund the creation of OER. I think we need to move beyond this now. Obviously having the resources is important, and I’d rather have OER than nothing, but unless we start recognising, and promoting, the need for models that will support learners, then there is a danger of perpetuating a false narrative around OER – that content is all you need to ensure equity. It’s not, because people are starting from different places.

I’ve always thought that too much focus has been placed on “the resources”, but I’ve never really got to grips with how the resources are supposed to be (re)used, either by educators or learners.

For educators, reuse can often come in the form of “assign that thing someone else wrote, and wrap it with your own teaching context”, or “pinch that idea and modify it for your own use”. So if I see a good diagram, I might “reuse” it by inserting it in my own materials or I might redraw it with some tweaks.

Assessment reuse (“open assessment resources”?) can be handy too: a question form that someone else has worked up that I can make use of. In some cases, the question may include either exact, or ‘not drawn to scale’ media assets. But in many cases, I would still need to do work to generalise or customise the answer, and work out my own correct answer or marking guide.

(See for example Generative Assessment Creation.)

If an asset is not being reused directly, but the idea is, with some customisation, or change in parameter values, then creating the new asset may require significant effort, as well as access to, and skills in using, particular drawing packages. In some cases the liquid paper method works: Tipp-Ex out the original numbers, write in your own, photocopy to produce the new asset. Digital cut or crop alternatives are available.

Another post in my feeds today – Enterprise Dashboards with R Markdown, via Rbloggers – described a rationale for using reproducible methods to generate dashboards:

We have been living with spreadsheets for so long that most office workers think it is obvious that spreadsheets generated with programs like Microsoft Excel make it easy to understand data and communicate insights. Everyone in a business, from the newest intern to the CEO, has had some experience with spreadsheets. But using Excel as the de facto analytic standard is problematic. Relying exclusively on Excel produces environments where it is almost impossible to organize and maintain efficient operational workflows. …

[A particular] Excel dashboard attempts to function as a real application by allowing its users to filter and visualize key metrics about customers. It took dozens of hours to build. The intent was to hand off maintenance to someone else, but the dashboard was so complex that the author was forced to maintain it. Every week, the author copied data from an ETL tool and pasted it into the workbook, spot checked a few cells, and then emailed the entire workbook to a distribution list. Everyone on the distribution list got a new copy in their inbox every week. There were no security controls around data management or data access. Anyone with the report could modify its contents. The update process often broke the brittle cell dependencies; or worse, discrepancies between weeks passed unnoticed. It was almost impossible to guarantee the integrity of each weekly report.

Why coding is important

Excel workbooks are hard to maintain, collaborate on, and debug because they are not reproducible. The content of every cell and the design of every chart is set without ever recording the author’s actions. There is no simple way to recreate an Excel workbook because there is no recipe (i.e., set of instructions) that describes how it was made. Because Excel workbooks lack a recipe, they tend to be hard to maintain and prone to errors. It takes care, vigilance, and subject-matter knowledge to maintain a complex Excel workbook. Even then, human errors abound and changes require a lot of effort.

A better approach is to write code. … When you create a recipe with code, anyone can reproduce your work (including your future self). The act of coding implicitly invites others to collaborate with you. You can systematically validate and debug your code. All of these things lead to better code over time.

Many of the issues described there are to do with maintenance. Many of the issues associated with “reusing OERs with modification” are akin to maintenance issues. (When an educator updates their materials year on year – maintenance – they are reusing materials they have permission to use, with modification.)

In both the maintenance and the wider reuse-with-modification activity, it can really help if you have access to the recipe that created the thing you are trying to maintain. Year on year reuse is not buying 10 exact clone pizzas in the first year, freezing 9, taking one out each year, picking off the original topping and adding this year’s topping du jour for the current course presentation. It’s about saving and/or sharing the recipe and generating a fresh version of the asset each year, perhaps with some modification to the recipe.

In other words, the asset created under the reuse-with-modification licence is not subtractive/additive to the original asset, it is (re)generative from the original recipe.

This is where things like Jupyter notebooks or Rmd documents come in – they can be used to deliver educational resources that are in principle reusable-with-modification because they are generative of the final asset: the asset is produced from a modifiable recipe contained within the asset.

I’ve started trying to put together some simple examples of topic based recipes as Jupyter notebooks that can run on Microsoft’s (free) Azure Notebooks service: Getting Started With OER notebooks.

To run the notebooks, you need to create a Microsoft Live account, log in, and then clone the above linked repository.

OU staff and ALs should be able to log in using their credentials. If you work for a company that uses Office 365 / Live online applications, ask them to enable notebooks too…

Once you have cloned the notebooks, you should be able to run them…

PS if you have examples of other things I should include in the demos, please let me know via the comments. I’m also happy to do demos, etc.

OERs in Practice: Re-use With Modification

Over the years, I’ve never really got my head round what other people mean by OERs (Open Educational Resources) in terms of how they might be used.

From my own perspective, wholesale reuse (“macro reuse”) of a course isn’t relevant to me. When tasked with writing an OU unit, if I just point to a CC licensed course somewhere else and say “use that”, I suspect it won’t go down well.

I may want to quote a chunk of material, but I can do that with books anyway. Or I may want to reuse an activity, and then, depending on how much rework or modification is applied, I may reference the original or not.

Software reuse is another possibility, linking out to or embedding a third party application, but that tends to fall under the banner of openly licensed software reuse as much as OER reuse. Sometimes the embed may be branded; sometimes it may be possible to remove the branding (depending on how the asset is created, and the licence terms); sometimes the resource might be a purely white label resource that can be rebranded.

Videos and audio clips are another class of resource that I have reused, partly because they are harder to produce. Video clips tend to come in various forms: on the one hand, things like lectures retain an association with the originator (a lecture given by Professor X of university Y is very obviously associated with Professor X and university Y); on the other hand, animations, like software embeds, might come in a branded form, white labelled, or branded as distributed but white label licensed so you can remove/rebrand if you want to put the effort in.

Images are also handy things to be able to reuse, again because they can be hard to produce in at least two senses: firstly, coming up with the visual or graphical idea, i.e. how to depict something in a way that supports teaching or learning; secondly, actually producing the finished artwork. One widely used form of image reuse in the OU is the “redrawing” of an image originally produced elsewhere. This represents a reuse, or re-presentation, of an idea. In a sense, the image is treated as a sketch that is then redrawn.

This level of “micro reuse” of a resource, rather than the “macro reuse” of a course, is not something that was invented by OERs – academics have always incorporated and referenced words and pictures created by others – but it can make reuse easier by simplifying the permissions pathway (i.e. simplifying what otherwise might be a laborious copyright clearance process).

One of the other ways of making use of “micro” resources is to reuse them with modification.

If I share a text with you as a JPG or a PDF document, it can be quite hard for you to grab the text and elide a chunk of it (i.e. remove a chunk of it and replace it with … ). If I share the actual text as text, for example, in a Word document, you can edit it as you will.

Reuse with modification is also a fruitful way of reusing diagrams. But it can be harder to achieve in practical terms. For example, in a physics or electronics course, or a geometry course, there are likely to be standard mechanical principle diagrams, electrical circuits or geometrical proofs that you are likely to want to refer to. These diagrams may exist as openly licensed resources, but… The numbers or letters you want to label the diagram with may not be the same as in the original. So what do you do? Redraw the diagram? Or edit the original, which may reduce the quality of the original or introduce some visual artefact that reveals the edit (“photocopy lines”!).

But what if the “source code”, or means of producing the diagram, is also shared? For example, if the diagram is created in Adobe Illustrator or CorelDRAW and made available as an Adobe Artwork .ai file or a CorelDRAW .cdr file, and you have an editor (such as the original, or an alternative such as Inkscape) that imports those file formats, you can edit and regenerate a modified version of the diagram at the same level of quality as the original. You could also more easily restyle the diagram, even if you don’t change any of the content. For example, you could change line thickness, fonts or font sizes, positioning, and so on.
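As a minimal sketch of what reuse-with-modification looks like when the source format is editable, here’s a relabelling of a toy SVG diagram in Python. (SVG is used here as a stand-in for the .ai/.cdr formats mentioned above, and the diagram and labels are invented for illustration.)

```python
# Relabel a diagram losslessly by editing its structured source,
# rather than its rendered pixels. SVG stands in for .ai/.cdr here.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# A toy "original" diagram: a circle with a single text label.
original = (
    f'<svg xmlns="{SVG_NS}" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="none" stroke="black"/>'
    '<text x="50" y="55" text-anchor="middle">A</text>'
    '</svg>'
)

def relabel(svg_text, old, new):
    """Replace the content of any <text> element whose text matches `old`."""
    root = ET.fromstring(svg_text)
    for el in root.iter(f"{{{SVG_NS}}}text"):
        if el.text == old:
            el.text = new
    return ET.tostring(root, encoding="unicode")

modified = relabel(original, "A", "P")
```

Because the edit happens in the source markup rather than the rendered image, the regenerated diagram is exactly as sharp as the original: no photocopy lines.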

One of the problems with sharing image project files for particular applications is that the editing and rendering environment for working with the project file is likely to be separate from your authoring environment. If, while writing the text, you change an item in the text and want to change the same item as referenced in the image, you need to go to the image editor, make the change, export the image, and copy it back into your document. This makes document maintenance hard and subject to error. It’s easy for the values of the same item as referenced in the text and the diagram to drift. (In databases, this is why you should only ever store the value of something once and then refer to its value by reference. If I have your address stored in two places, and you change address, I have to remember to change both of them; it’s also quite possible that the address I have for you will drift between the two copies I have of it…)
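The “store the value once” idea is easy to sketch in a notebook setting: a single parameter feeds both the narrative text and the figure label, so the two can’t drift apart. (The variable names and values here are purely illustrative.)

```python
# Single source of truth: one parameter feeds both the text and
# the diagram label, so they cannot drift apart between edits.
resistance_ohms = 220  # the one place this value is stored

# The value as referenced in the body text...
sentence = f"The circuit uses a {resistance_ohms} ohm resistor."

# ...and the same value as it would appear in a diagram label.
diagram_label = f"R = {resistance_ohms} ohms"

# Changing resistance_ohms once updates both on the next render.
```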

One way round this is to include the means for creating and editing the image within your text document. This is like editing a Microsoft Word document and including a diagram by using Microsoft drawing tools within the document. If you share the complete document with someone else, they can modify the diagram quite easily. If you share a PDF of the document, they’ll find it harder to modify the diagram.

Another way of generating a diagram is to “write” it, creating a “program” that defines how to draw the diagram and that can be run in a particular environment to actually produce it. By changing the “source code” for the diagram, and rerunning it, you can generate a modified version of the diagram in whatever format you choose.

This is what packages like TikZ support [docs].
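The same “write the diagram, then rerun to regenerate it” idea works outside the LaTeX world too. Here’s a sketch in Python using matplotlib; the triangle and its vertex labels are invented for illustration, and relabelling the figure is just a matter of editing the `labels` list and rerunning.

```python
# A "written" geometry diagram: edit `labels` (or `vertices`) and
# rerun to regenerate a modified figure at full quality.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

vertices = [(0, 0), (4, 0), (1, 3)]
labels = ["A", "B", "C"]  # change these and rerun to relabel

fig, ax = plt.subplots()
xs, ys = zip(*(vertices + vertices[:1]))  # repeat first point to close the triangle
ax.plot(xs, ys, "k-")
for (x, y), label in zip(vertices, labels):
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("triangle.png")
```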

And this is what I’ve been exploring in Jupyter notebooks and Binderhub: the Jupyter notebook contains all the content of the output document, including the instructions for creating image assets or interactives, and the Binder container provides all the software libraries and tools required to generate and embed those assets and interactives from the instructions the document contains.

That’s what I was trying to say in Maybe Programming Isn’t What You Think It Is? Creating Repurposable OERs (which also contains a link to a runnable example).

PS by the by, I also stumbled across this old post, an unpursued bid, today, that I have no recollection of at all: OERs: Public Service Education and Open Production. Makes me wonder how many other unfinished bids I started…

Maybe Programming Isn’t What You Think It Is? Creating Repurposable & Modifiable OERs

With all the “everyone needs to learn programming” hype around, I am trying to be charitable when it comes to what I think folk might mean by this.

For example, whilst trying to get some IPython magic working, I started having a look at TikZ, a LaTeX extension that supports the generation of scientific and mathematical diagrams (and which has been around for decades…).

Getting LaTeX environments up and running can be a bit of a pain, but several of the Binderhub builds I’ve been putting together include LaTeX, and TikZ, which means I have an install-free route to trying out snippets of TikZ code.

As an example, my showntell/maths demo includes an OpenLearn_Geometry.ipynb notebook that includes a few worked examples of how to “write” some of the figures that appear in an OpenLearn module on geometry.

From the notebook:

The notebook includes several hidden code cells that generate a range of geometric figures. To render the images, go to the Cell menu and select Run All.

To view/hide the code used to generate the figures, click on the Hide/Reveal Code Cell Inputs button in the notebook toolbar.

To make changes to the diagrams, click in the appropriate code input cell, make your change, and then run the cell using the Run Cell (“Play”) button in the toolbar or via the keyboard shortcut SHIFT-ENTER.

Entering Ctrl-Z (or CMD-Z) in the code cell will undo your edits…

Launch the demo notebook server on Binder here.

Here’s an example of one of the written diagrams (there may be better ways; I only started learning how to write this stuff a couple of days ago!)

Whilst tinkering with this, a couple of things came to mind.

Firstly, this is programming, but perhaps not as you might have thought of it. If we taught adult novices some of the basic programming and coding skills using TikZ rather than turtle, they’d at least be able to create professional looking diagrams. (Okay, so the syntax is admittedly probably a bit scary and confusing to start with… But it could be simplified with some higher level, more abstracted, custom defined macros that learners could then peek inside.)

So when folk talk about teaching programming, maybe we need to think about this sort of thing as well as enterprise Java. (I spent plenty of time last night on the Stack Exchange TeX site!)

Secondly, the availability of things like Binderhub makes it easier to build preloaded distributions that can be run by anyone, from anywhere (or at least, for as long as public Binderhub services exist). Simply by sharing a link, I can point you to a runnable notebook, in this case, the OpenLearn geometry demo notebook mentioned above.

One of the things that excites me, but I can’t seem to convince others about, is the desirability of constructing documents in the way the OpenLearn geometry demo notebook is constructed: all the assets displayed in the document are generated by the document. What this means is that if I want to tweak an image asset, I can do. The means of production – in the example, the TikZ code – is provided; it’s also editable and executable within the Binder Jupyter environment.

When HTML first appeared, web pages were shonky as anything, but there were a couple of buts…: the HTML parsers were forgiving, and would do their best with whatever corruption of HTML was thrown at them; and the browsers supported the ability to View Source (which still exists today; for example, in Chrome, go to the View menu then select Developer -> View Source).

Taken together, this meant that: a) folk could copy and paste other people’s HTML and try out tweaks to “cool stuff” they’d seen on other pages; b) if you got it wrong, the browser would have a go at rendering it anyway; you also wouldn’t feel as if you’d break anything serious by trying things out yourself.

So with things like Binder, where we can build disposable “computerless computing environments” (which is to say, pre-configured computing environments that you can run from anywhere, with just a browser to hand), there are now lots of opportunities to do powerful computer-ingy things (technical term…) from a simple, line at a time notebook interface, where you (or others) can blend notes and/or instruction text along with code – and code outputs.

For things like the OpenLearn demo notebook, we can see how the notebook environment provides a means by which educators can produce repurposeable documents, sharing not only educational materials for use by learners, or appropriation and reuse by other educators, but also the raw ingredients for producing customised forms of the sorts of diagrams contained in the materials: if the figure doesn’t have the labels you want, you can change them and re-render the diagram.

In a sense, sharing repurposeable, “reproducible” documents that contain the means to generate their own media assets (at least, when run in an appropriate environment: which is why Binderhub is such a big thing…) is a way of sharing your working. That is, it encourages open practice, and the sharing of how you’ve created something (perhaps even with comments in the “code” explaining why you’ve done something in a particular way, or where the inspiration/prior art came from), as well as the what of the things you have produced.

That’s it, for now… I’m pretty much burned out on trying to persuade folk of the benefits of any of this any more…

PS TikZ and PGF: TeX packages for creating graphics programmatically. Far more useful than turtle and Scratch?