One of the books I’m reading at the moment is Michael Hiltzik’s Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age (my copy is second hand, ex-library stock…), the story of the lab that was birthplace to ethernet and the laser printer, as well as many of the computer user interactions we take for granted today. One thing I hadn’t fully appreciated was Xerox’s interest in publishing systems, which is in part what put it in mind for this post. The chapter I just finished reading tells of their invention of a modeless, WYSIWYG word processor, something that would be less hostile than the mode based editors of the time (I like the joke about accidentally entering command mode and typing edit – e: select entire document, d: delete selection, i: insert, t: the letter inserted. Oops – you just replaced your document with the letter t).
It must have been a tremendously exciting time there, having to invent the tools you wanted to use because they didn’t exist yet (some may say that’s still the case, but in a different way now, I think: we have many more building blocks at our disposal). But it’s still an exciting time, because while a lot of stuff has been invented, whether or not there is more to come, there are still ways of figuring out how to make it easier to work with, still ways of figuring out how to work the technology into our workflows in more sensible ways, still many, many ways of trying to figure out how to use different bits of tech in combination with each other in order to get what feels like much more than we might reasonably expect from considering them as a set of separate parts, piled together.
One of the places this exploration could – should – take place is in education. Whilst in HE we often talk down tools in favour of concepts, introducing new tools to students provides one way of exporting ideas, embodied as tools, into wider society. Tools like Jupyter notebooks, for example.
The more I use Jupyter notebooks, the more I see their potential as a powerful general purpose tool, not just for reproducible research, but also as a general purpose computational workbench and a powerful authoring medium.
Enlightened publishers such as O’Reilly seem to have got on board with using interactive notebooks in a publishing context (for example, Embracing Jupyter Notebooks at O’Reilly) and colleges such as Bryn Mawr in the US keep coming up with all manner of interesting ways of using notebooks in a course context – if you know of other great (or even not so great) use case examples in publishing or education, please let me know via the comments to this post – but I still get the feeling that many other people don’t get it.
“Initially the reaction to the concept [of Gypsy, the GUI-powered word processor that was to become part of the Ginn publishing system] was ‘You’re going to have to drag me kicking and screaming,'” Mott recalled. “But everyone who sat in front of that system and used it, to a person, was a convert within an hour.”
Michael Hiltzik, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age, p210
For example, in writing computing related documents, the ability to show a line of code together with its output – automatically generated by executing the code, and automatically inserted into the document – means that “helpful corrections” to code examples by an over-zealous editor go out of the window. The human hand should go nowhere near the output text.
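As a minimal sketch of what that looks like in a notebook cell (the values are made up for illustration):

```python
# In a notebook, the printed output is produced by executing the cell,
# so it can never drift out of step with the code that generates it.
values = [2, 3, 5, 7]
print(sum(values))  # the output cell shows 17, regenerated on every run
```

If an editor “corrects” the code, re-running the cell regenerates the output to match; if they “correct” the output, the next run silently puts it right again.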
Similarly, when creating charts from data, or plotting equations: the charts should be created by running a script over a source dataset, or by plotting the equation directly.
Again, the editor, or artist, should have no hand in “tweaking” the output to make it look better.
If the chart needs restyling, the artist needs to learn how to use a theme (like this?!) or theme generator rather than messing around with a graphics package (wrong sort of graphic). To add annotations, again, use code because it makes the graphic more maintainable.
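By way of a sketch of that chart-from-code approach using matplotlib (the data, theme choice and annotation text here are just for illustration): the styling comes from a named theme and the annotation is added programmatically, so regenerating the chart from updated data preserves both.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Apply a named theme rather than hand-tweaking the rendered output
plt.style.use("ggplot")

years = [2012, 2013, 2014, 2015]
values = [3, 7, 6, 11]

fig, ax = plt.subplots()
ax.plot(years, values, marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Count")

# Annotations live in the code too, so they survive regeneration
ax.annotate("step change", xy=(2015, 11), xytext=(2013.2, 9),
            arrowprops=dict(arrowstyle="->"))

# The image artefact is always rebuilt from the data, never edited by hand
fig.savefig("chart.png")
```

Change the source data and re-run, and the styled, annotated chart is rebuilt with no graphics package in sight.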
There are also several toolkits around for creating other sorts of diagram from code, as I’ve written about previously, such as the tools provided on blockdiag.com:
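By way of a sketch of the sort of notation involved (this is blockdiag’s own text format rather than Python, and the node names are made up for illustration):

```
blockdiag {
  authoring -> markdown -> publish;
  authoring -> notebook -> publish;
}
```

The diagram is regenerated from that text description, so revising it is a matter of editing a few lines rather than redrawing boxes and arrows.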
Aside from making diagrams more easily maintainable, rendering them inline within a Jupyter notebook that also contains the programmatic “source code” for the diagram means that written diagrams also provide a way in to the automatic generation of figure long description text.
Electrical circuit schematics can also be written and embedded in a Jupyter notebook, as this Schemdraw example shows:
So far, I haven’t found an example of a schematic plotting library that also allows you to simulate the behaviour of the circuit from the same definition though (eg I can’t simulate(d, …) in the above example, though I could presumably parameterise a circuit definition for a simulation package and use the same parameter values to label a corresponding Schemdraw circuit).
There are some notations that are “executable”, though. For example, the sympy (symbolic Python) package lets you write expressions using Python variables that can be rendered either symbolically, using mathematical notation, or by their value.
(There’s a rendering bug in the generated Mathjax in the notebook I was using – I think this has been corrected in more recent versions.)
We can also use interactive widgets to help us identify and set parameter values to generate the sort of example we want:
Sympy also provides support for a wide range of calculations. For example, we can “write” a formula, render it using mathematical notation, and then evaluate it. A Jupyter notebook plugin (not shown) allows Python statements to be included and executed inline, which means that expressions and calculations can be included – and evaluated – inline. Changing the parameters in an example is then easy to achieve, with the added benefit that the guaranteed correct result of automatically evaluating the modified expression can also be inlined.
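A minimal sketch of that write-render-evaluate pattern, assuming the sympy package (the particular integral is just an example; the inline plugin mentioned above isn’t shown here):

```python
import sympy as sp

x = sp.symbols("x")

# The same expression object can be rendered as mathematical notation...
integral = sp.Integral(x**2, (x, 0, 3))
print(sp.latex(integral))  # LaTeX source for the notation form

# ...or evaluated, with the result guaranteed to match the expression
result = integral.doit()
print(result)
```

Change the limits or the integrand and re-run: both the displayed notation and the evaluated result update together, which is exactly the property hand-maintained worked examples lack.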
(For interactive examples, see the notebooks in the sympy folder here; the notebooks are also runnable by launching a mybinder container – click on the launch:binder button to fire one up.)
It looks like there are also tools out there for converting from LaTeX math expressions to sympy equivalents.
As well as writing mathematical expressions that can be both expressed using mathematical notation, and evaluated as a mathematical expression, we can also write music, expressing a score in notational form or creating an admittedly beepy audio file corresponding to it.
(For an interactive example, run the midiMusic.ipynb notebook by clicking through on the launch:binder button from here.)
We can also generate audio files from formulae (I haven’t tried this in a sympy context yet, though) and then visualise them as data.
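As a stdlib-only sketch of generating an audio file from a formula (a pure sine tone; the frequency, sample rate and filename are just for illustration):

```python
import math
import struct
import wave

RATE = 8000      # samples per second
FREQ = 440.0     # concert A, defined by a formula rather than a recording
SECONDS = 1

# Evaluate the formula sample by sample, scaled to 16-bit amplitude
samples = [int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
           for n in range(RATE * SECONDS)]

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```

The same list of samples that becomes the audio file can also be plotted directly, which is the “visualise them as data” step.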
Packages such as librosa also seem to provide all sorts of tools for analysing and visualising audio files.
When we put together the Learn to Code MOOC for FutureLearn, which uses Jupyter notebooks as an interactive exercise environment for learners, we started writing the materials (web pages for the FutureLearn teaching text, notebooks for the interactive exercises) in Jupyter notebooks. The notebooks can be exported as markdown, and the FutureLearn publishing system is based around content entered as markdown, so we should have been able to publish direct from the notebooks to FutureLearn, right? Wrong. The workflow doesn’t support it: the editor takes content in Microsoft Word, passes it back to authors for correction, then someone does something to turn it into markdown for FutureLearn. Or at least, that’s the OU’s publishing route (which has plenty of other quirks too…).
Or perhaps that was the OU’s publishing route, because there’s a project on internally (the workshops around which I haven’t been able to make, unfortunately) to look at new authoring environments for producing OU content, though I’m not sure if this is intended to feed into the backend of the current route – Microsoft Word, Oxygen XML editor, OU-XML, HTML/PDF etc output – or envisages a different pathway to final output. I started to explore using Google Docs as an OU-XML exporter, but that raised little interest – it’ll be interesting to see what sort of authoring environment(s) the current project delivers.
(By the by, I remember being really excited about the OU-XML publishing system route when it was being developed, not least because I could imagine its potential for feeding other use cases, some of which I started to explore a few years later; I was less enthused by its actual execution and the lack of imagination around putting it to work though… I also thought we might be able to use FutureLearn as a route to exploring how we might not just experiment with workflows and publishing systems, but also the tech – and business models around the same – for supporting stateful and stateless interactive, online student activities. Like hosting a mybinder style service, for example, or embedded interactions like the O’Reilly Thebe demo, or even delivering a course as a set of linked Jupyter notebooks. You can probably guess how successful that’s been…)
So could Jupyter notebooks have a role to play in producing semi-automated content (automated, for example, in the production of graphical objects and the embedding of automatically evaluated expressions)? Markdown export is already supported, and it shouldn’t take someone too long (should it?!) to put together an nbconvert exporter that could generate OU-XML (if that is still the route we’re going down?). It’d be interesting to hear how O’Reilly are getting on…
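To give a sense of how little machinery is involved, here is a sketch of the existing markdown export route using nbformat and nbconvert (a custom OU-XML exporter would plug into the same exporter machinery; the cell contents are made up for illustration):

```python
import nbformat
from nbconvert import MarkdownExporter

# Build a minimal notebook in memory; normally you'd read a .ipynb file
nb = nbformat.v4.new_notebook(cells=[
    nbformat.v4.new_markdown_cell("# A heading\n\nSome teaching text."),
    nbformat.v4.new_code_cell("print('hello')"),
])

# Export the notebook to markdown; resources carries any extracted assets
body, resources = MarkdownExporter().from_notebook_node(nb)
print(body)
```

An OU-XML route would presumably mean writing a templated exporter along these lines that emits the OU-XML element vocabulary instead of markdown.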