Potential Issues With Institutionally Mediated Reproducible Research Environments

One of the advantages, for me, of the Jupyter Binderhub environment is that it provides me with a large amount of freedom to create my own computational environment in the context of a potentially managed institutional service.

At the moment, I’m lobbying for an OU hosted version of Binderhub, probably hosted via Azure Kubernetes, for internal use in the first instance. (It would be nice if we could also be part of an open and federated MyBinder provisioning service, but I’m not in control of any budgets.) But in the meantime, I’m using the open MyBinder service (and very appreciative of it, too).

To test the binder builds locally, I use repo2docker, which is also used as part of the Binderhub build process.
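For example (a sketch, with placeholder paths and repo names), a local build-and-run looks something like this:

```shell
# install repo2docker (the same build tool Binderhub uses)
pip install jupyter-repo2docker

# build an image from a local checkout and launch a notebook server in it
jupyter-repo2docker /path/to/my/repo

# or build directly from a (hypothetical) GitHub repository
jupyter-repo2docker https://github.com/someuser/somerepo
```

If the local build runs the notebooks happily, the Binderhub build of the same repo should, in principle, behave the same way.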

What this all means is that I should be able to write – and test – notebooks locally, and know that I’ll be able to run them “institutionally” (eg on Binderhub).

However, one thing I noticed today was that notebooks in a binder container that was running okay, and that still builds and runs okay locally, have broken when run through Binderhub.

I think the error is a permissions error when creating temporary directories or writing temporary image files, arising either in the xelatex command-line command used to generate a PDF from the LaTeX script, or in the ImageMagick convert command used to produce an image from the PDF. Both commands are used as part of some IPython magic that renders LaTeX TikZ diagram-generating scripts, and the error certainly affects a couple of my magics. (It might be an issue with the way the magics are defined, too. But whatever the case, it works for me locally but not “institutionally”.)

Broken notebook: https://mybinder.org/v2/gh/psychemedia/showntell/maths?filepath=Mechanics.ipynb
Magic code: https://github.com/psychemedia/showntell/tree/maths/magics/tikz_magic
The error is something to do with the ImageMagick convert command not converting the .pdf to an image. At least one of the issues seems to be that ghostscript is lost somewhere?

So here’s the issue. Whilst the notebooks were running fine in a container generated from an image that was itself presumably created before a Binderhub update, rebuilding the image (potentially without making any changes to the source Github repository) can cause notebooks that were previously running fine to break.

Which is to say, the way a repository defines an environment may carry a hidden dependency on some of the packages installed by the repo2docker build process. (I don’t know if we can fully isolate these dependencies by using a Dockerfile to define the environment, rather than apt.txt and requirements.txt?)
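Pinning versions in the config files at least fixes the user-specified part of the environment, even if the repo2docker base layers can still change underneath. A sketch (the package names and version numbers here are purely illustrative, not my actual build’s):

```
# requirements.txt - pin Python packages to known-good versions
ipython==7.2.0

# apt.txt - system packages for the PDF/image pipeline
# (apt.txt takes bare package names only, with no version pinning,
# which is one reason a Dockerfile can give more control)
texlive-xetex
imagemagick
ghostscript
```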

This raises a couple of questions for me about dependencies:

  • what sort of dependency issues might there be in components or settings introduced by the repo2docker build process, and how might we mitigate them?
  • are there other aspects of the Binderhub process that can produce breaking changes that impact notebooks running in a repository that specifies a computational environment run via Binderhub?

Institutionally, it also means that updates to an institutionally supported Binderhub environment could break the downstream environments (that is, the ones run via that Binderhub) that depend on it.

This is a really good time for this to happen to me, I think, because it gives me more things to think about when considering the case for providing a Binderhub service institutionally.

On the other hand, it means I can’t update any of the other repos that use the tikz or asymptote magic until I find the fix because otherwise they will break too…

Should users of the institutional service, for example, be invited to define test areas in their Binder repositories (for example, using nbval) that the institution can use as test cases when making updates to the institutional service? If running these user-contributed tests detects errors, the institutional service provider could explore whether the issue can be addressed by their update strategy, or alert the Binderhub user that there may be breaking changes, along with guidance on how to explore or mitigate them. (That is, perhaps it falls to the institutional provider to centrally explore the likely common repercussions of a particular update and identify fixes to address them?)
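With nbval, such a test run is cheap to automate (a sketch; the notebooks/ layout is hypothetical):

```shell
pip install pytest nbval

# re-run every notebook and compare cell outputs against the saved ones
pytest --nbval notebooks/

# or just check that the notebooks execute without error, ignoring outputs
pytest --nbval-lax notebooks/
```

An institutional provider could run exactly this against each user repository before and after a Binderhub update, and diff the results.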

For example, there might be dependencies on particular package version numbers. In this case, the user might then either want to update their own code, or add in a build requirement that regresses the package to the desired version. (Institutional providers might have something to say about that if the upgrade was for valid security reasons, though running things in isolation in containers should reduce that risk?) Lists of affected packages could also be circulated to other users using the same packages, along with mitigation strategies for coping with updates to the institutionally provided service.

There are also updating issues associated with a workflow strategy I am exploring around Binderhub which relates to using “base containers” to seed Binderhub builds (Note On My Emerging Workflow for Working With Binderhub). For example, if a build uses a “latest” tagged base image, any updates to that base image may break things built on top of it. In this case, mitigating against update risk to the base container is achieved by building from a specifically tagged version of the container. However, if an update to the Binderhub environment can break notebooks running on top of a particularly labelled base container, the fix for the notebooks may reside in making a fix to the environment in the base container (for example, which specifically acts to enforce a package version). This suggests that the base container might need doubly tagging – one tag paying heed to the downstream end users (“buildForExptXYZ”) – and the other that captures the upstream Binderhub environment (“BinderhubBuildABC”).
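In Docker terms (the image names here are purely illustrative), the double tagging is cheap, since tags are just labels on the same image:

```shell
# tag the base image once for the downstream experiment it supports...
docker tag myorg/base-container:latest myorg/base-container:buildForExptXYZ

# ...and once to record the upstream Binderhub build it is known to work with
docker tag myorg/base-container:latest myorg/base-container:BinderhubBuildABC

docker push myorg/base-container:buildForExptXYZ
docker push myorg/base-container:BinderhubBuildABC
```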

I’m also wondering now about where responsibility arises for maintaining the integrity of the user computing environment (that is, the local computational environment within which code in notebooks should continue to operate once the user has defined their environment). Which is to say, if there are changes to the wider environment that somehow break that local user environment, who should help fix it? If the changes are likely to impact widely, it makes sense to try to fix it once and then share the change, rather than expecting every user suffering from the break to have to find the fix independently?

Also, I’m wondering about classes of error that might arise. For example, ones that can be fixed purely by changing the environment definition (baking package versions into config files, for example, which is probably best practice anyway) and ones that require changes to code in notebooks?

PS Hmm.. noting… are whitelists and blacklists also specifiable in Binderhub config? eg https://github.com/jupyterhub/mybinder.org-deploy/pull/239/files

        c.GitHubRepoProvider.banned_specs = [
            # hypothetical pattern: ban every repo belonging to a particular user
            '^baduser/.*',
        ]

Fragment – Virtues of a Programmer, With a Note On Web References and Broken URLs

Ish-via @opencorporates, I came across the “Virtues of a Programmer”, referenced from a Wikipedia page, in a Nieman Lab post by Brian Boyer on Hacker Journalism 101, and stated as follows:

  • Laziness: I will do anything to work less.
  • Impatience: The waiting, it makes me crazy.
  • Hubris: I can make this computer do anything.

I can buy into those… Whilst also knowing (from experience) that any of the above can lead to a lot of, erm, learning.

For example, whilst you might think that something is definitely worth automating:

the practical reality may turn out rather differently:

The reference has (currently) disappeared from the Wikipedia page, but we can find it in the Wikipedia page history:


The date of the NiemanLab article tells us which revision of the Wikipedia page would have been current at the time.


So here’s one example of a linked reference to a web resource that we know is subject to change and that has a mechanism for linking to a particular instance of the page.

Academic citation guides tend to suggest that URLs are referenced along with the date that the reference was (last?) accessed by the person citing the reference, but I’m not sure that guidance is given that relates to securing the retrievability of that resource, as it was accessed, at a later date. (I used to bait librarians a lot for not getting digital in general and the web in particular. I think they still don’t…;-)

This is an issue that also hits us with course materials, when links are made to third party references by URI, rather than more indirectly via a DOI.

I’m not sure to what extent the VLE has tools for detecting link rot (certainly, they used to; now it’s more likely that we get broken link reports from students failing to access a particular resource…) or mitigating against broken links.

One of the things I’ve noticed from Wikipedia is that it has a couple of bots for helping maintain link integrity: InternetArchiveBot and Wayback Medic.

Bots help preserve link availability in several ways:

  • if a link is part of a page, that link can be submitted to an archiving site such as the Wayback machine (or if it’s a UK resource, the UK National Web Archive);
  • if a link is spotted to be broken (an HTTP 404 error code), it can be redirected to the archived link.
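The Wayback Machine’s URL scheme makes the redirection step straightforward to script. A minimal sketch (the helper name is mine; a real bot would also query the Wayback availability API to find the closest snapshot rather than just constructing a link):

```python
def wayback_url(url, timestamp="2017"):
    """Build a Wayback Machine link for a (possibly dead) URL.

    timestamp is a YYYY[MMDDhhmmss] prefix; the archive redirects
    the request to the closest snapshot it holds.
    """
    return "https://web.archive.org/web/{}/{}".format(timestamp, url)

# a broken course link could be rewritten to its archived form
print(wayback_url("http://example.com/resource", "20171101"))
```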

One of the things I think we could do in the OU is add an attribute to the OU-XML template that points to an “archive-URL”, and tie this in with a service that automatically makes sure that linked pages are archived somewhere.

If a course link rots in presentation, students could be redirected to the archived link, perhaps via a splash screen (“The original resource appears to have disappeared – using the archived link”) as well as informing the course team that the original link is down.

Having access to the original copy can be really helpful when it comes to trying to find out:

  • whether a simple update to the original URL is required (for example, the page still exists in its original form, just at a new location, perhaps because of a site redesign); or,
  • whether a replacement resource needs to be found, in which case, being able to see the content of the original resource can help identify what sort of replacement resource is required.

Does that count as “digital first”, I wonder???

Scratch Materials – Using Blockly Style Resources in Jupyter Notebooks

One of the practical issues associated with using the Scratch desktop application (or its OU fork, OUBuild) for teaching programming is that it runs on the desktop (or perhaps a tablet? It’s an Adobe Air app which I think runs on iOS?). This means that the instructional material is likely to be separated from the application, either as print or as screen based instructional material.


If delivered via the same screen as the application, there can be a screen real estate problem when trying to display both the instructional material and the application.

In OU Build, there can also be issues if you want to have two projects open at the same time, for example to compare a provided solution with your own solution, or to look at an earlier project as you create a new one. The solution is to provide two copies of the application, each running its own project.

Creating instructional materials can also be tricky: it requires capturing screenshots from the application and inserting them into the materials, with the attendant risk that, when the materials are updated, the screenshots captured in the course materials may drift from the actuality of the views in the application.

So here are a couple of ways that we might be able to integrate Scratch like activities and guidance into instructional materials.

Calysto/Metakernel Jigsaw Extension for Jupyter Notebooks

The Calysto/Metakernel Jigsaw extension for Jupyter notebooks wraps the Google Blockly package for use in a Jupyter notebook.

Program code is saved as an XML file, which means you can save and embed multiple copies of the editor within the same Jupyter notebook. This means an example programme can be provided in one embed, and the learner can build up the programme themselves in another, all in the same page.

The code cell input (the bit that contains the %jigsaw line) can be hidden using the notebook Hide Input Cell extension so only the widget is displayed.
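From what I remember of the magic (so treat the exact invocation as an assumption on my part), a code cell along these lines is enough to create a widget, with the workspace name distinguishing multiple embedded editors in the same notebook:

```
%jigsaw Python --workspace workspace1
```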

The use of the editor is a bit tricky – it’s easy to accidentally zoom in and out, and I’m guessing not very accessible, but it’s great as a scratchpad, and perhaps as an instructional material authoring environment?

Live example on Binderhub

For more examples, see the original Jigsaw demo video playlist.

For creating instructional materials, we should be able to embed multiple steps of a programme in separate cells, hiding the code input cell (that is, the %jigsaw line) and then export or print off the notebook view.

LaTeX Scratch Package

The LaTeX Scratch package provides a way of embedding Blockly style blocks in a document through simple LaTeX script.

Using a suitable magic we can easily add scripts to the document (the code itself could be hidden using the notebook Hide Code Cell Input extension).

(Once again, the code cell input (the cell that contains the lines of LaTeX code) can be hidden using the notebook Hide Input Cell extension so only the rendered blocks are displayed.)

We can also create scripts in strings and then render those using line magic.

Live example on Binderhub

One thing that might be quite interesting is a parser that can take the XML generated from the Jigsaw extension and generate LaTeX script from it, as well as generating a Jigsaw XML file from the LaTeX script?

Historical Context

The Scratch rebuild – OU Build – used in the OU’s new level 1 introductory computing course is a cross-platform Adobe Air application. I’d originally argued that, if the earlier decision to use a blocks-style environment was irreversible, the browser-based BlockPy (review and code) application might be a more interesting choice: the application was browser based; it allowed users to toggle between blocks and Python code views; it displayed Python error messages in a simplified form; and it used a data analysis, rather than animation, context, which meant we could also start to develop data handling skills.


One argument levelled against adopting BlockPy was that it looked to be a one man band in terms of support, rather than having the established Scratch community behind it. I’m not sure how much we benefit from, or are of benefit to, the Scratch community though? If OU Build is a fork, we may or may not be able to benefit from any future support updates to the Scratch codebase directly. I don’t think we commit back?

If the inability to render animations had also been a blocker, adding an animation canvas as well as the charting canvas would have been a possibility? (My actual preference was that we should do a bigger project and look to turn BlockPy into a Jupyter client.)

Another approach that is perhaps more interesting from a “killing two birds with one stone” perspective is to teach elementary programming and machine learning principles at the same time. For example, using something like Dale Lane’s excellent Scratch driven Machine Learning for Kids resources.

PS the context coda is not intended to upset, besmirch or provoke anyone involved with OUBuild. It’s self-critical, directed at myself for not managing to engage with, or advocate, my position/vision in a more articulate or compelling way.

PPS new JupyterLab blockly extension with blocks to code and back again support: https://olney.ai/category/2020/01/20/intelliblocks.html Repo: aolney/fable-jupyterlab-blockly-extension

Maybe Programming Isn’t What You Think It Is? Creating Repurposable & Modifiable OERs

With all the “everyone needs to learn programming” hype around, I am trying to be charitable when it comes to what I think folk might mean by this.

For example, whilst trying to get some IPython magic working, I started having a look at TikZ, a LaTeX extension that supports the generation of scientific and mathematical diagrams (and which has been around for decades…).

Getting LaTeX environments up and running can be a bit of a pain, but several of the Binderhub builds I’ve been putting together include LaTeX and TikZ, which means I have an install-free route to trying out snippets of TikZ code.

As an example, my showntell/maths demo includes an OpenLearn_Geometry.ipynb notebook containing a few worked examples of how to “write” some of the figures that appear in an OpenLearn module on geometry.

From the notebook:

The notebook includes several hidden code cells that generate a range of geometric figures. To render the images, go to the Cell menu and select Run All.

To view/hide the code used to generate the figures, click on the Hide/Reveal Code Cell Inputs button in the notebook toolbar.

To make changes to the diagrams, click in the appropriate code input cell, make your change, and then run the cell using the Run Cell (“Play”) button in the toolbar or via the keyboard shortcut SHIFT-ENTER.

Entering Ctrl-Z (or CMD-Z) in the code cell will undo your edits…

Launch the demo notebook server on Binder here.

Here’s an example of one of the written diagrams (there may be better ways; I only started learning how to write this stuff a couple of days ago!)
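By way of flavour (this is a generic sketch of my own, not one of the actual course figures), a few lines of TikZ are enough for a labelled triangle:

```latex
\begin{tikzpicture}
  % a right-angled triangle with labelled vertices
  \draw (0,0) node[below left] {$A$}
     -- (4,0) node[below right] {$B$}
     -- (4,3) node[above right] {$C$}
     -- cycle;
  % mark the right angle at B
  \draw (3.6,0) -- (3.6,0.4) -- (4,0.4);
\end{tikzpicture}
```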

Whilst tinkering with this, a couple of things came to mind.

Firstly, this is programming, but perhaps not as you might have thought of it. If we taught adult novices some of the basic programming and coding skills using TikZ rather than turtle, they’d at least be able to create professional looking diagrams. (Okay, so the syntax is admittedly probably a bit scary and confusing to start with… But it could be simplified with some higher level, more abstracted, custom defined macros that learners could then peek inside.)

So when folk talk about teaching programming, maybe we need to think about this sort of thing as well as enterprise Java. (I spent plenty of time last night on the Stack Exchange TEX site!)

Secondly, the availability of things like Binderhub make it easier to build preloaded distributions that can be run by anyone, from anywhere (or at least, for as long as public Binderhub services exist). Simply by sharing a link, I can point you to a runnable notebook, in this case, the OpenLearn geometry demo notebook mentioned above.

One of the things that excites me, but I can’t seem to convince others about, is the desirability of constructing documents in the way the OpenLearn geometry demo notebook is constructed: all the assets displayed in the document are generated by the document. What this means is that if I want to tweak an image asset, I can do. The means of production – in the example, the TikZ code – is provided; it’s also editable and executable within the Binder Jupyter environment.

When HTML first appeared, web pages were shonky as anything, but there were a couple of buts…: the HTML parsers were forgiving, and would do their best to render whatever corruption of HTML was thrown at them; and the browsers supported the ability to View Source (which still exists today; for example, in Chrome, go to the View menu then select Developer -> View Source).

Taken together, this meant that: a) folk could copy and paste other people’s HTML and try out tweaks to “cool stuff” they’d seen on other pages; b) if you got it wrong, the browser would have a go at rendering it anyway; you also wouldn’t feel as if you’d break anything serious by trying things out yourself.

So with things like Binder, where we can build disposable “computerless computing environments” (which is to say, pre-configured computing environments that you can run from anywhere, with just a browser to hand), there are now lots of opportunities to do powerful computer-ingy things (technical term…) from a simple, line at a time notebook interface, where you (or others) can blend notes and/or instruction text along with code – and code outputs.

For things like the OpenLearn demo notebook, we can see how the notebook environment provides a means by which educators can produce repurposeable documents, sharing not only educational materials for use by learners, or appropriation and reuse by other educators, but also the raw ingredients for producing customised forms of the sorts of diagrams contained in the materials: if the figure doesn’t have the labels you want, you can change them and re-render the diagram.

In a sense, sharing repurposeable, “reproducible” documents that contain the means to generate their own media assets (at least, when run in an appropriate environment: which is why Binderhub is such a big thing…) is a way of sharing your working. That is, it encourages open practice, and the sharing of how you’ve created something (perhaps even with comments in the “code” explaining why you’ve done something in a particular way, or where the inspiration/prior art came from), as well as the what of the things you have produced.

That’s it, for now… I’m pretty much burned out on trying to persuade folk of the benefits of any of this any more…

PS TikZ and PGF: TeX packages for creating graphics programmatically. Far more useful than turtle and Scratch?

Open Education Versions of Open Source Software: Adding Lightness and Accessibility to User Interfaces?

In a meeting a couple of days ago discussing some of the issues around what sort of resources we might want to provide students to support GIS (geographical information system) related activities, I started chasing the following idea…

The OU has, for a long time, developed software applications in-house that are provided to students to support one or more courses. More often than not, the code is developed and maintained in-house, and not released/published as open source software.

There are a couple of reasons for this. Firstly, the applications typically offer a clean, custom UI that minimises clutter and is designed to support usability for learners learning about a particular topic. Secondly, we require software provided to students to be accessible.

For example, the RobotLab software, originally developed, and still maintained, by my colleague Jon Rosewell, was created to support a first year undergrad short course, T184 Robotics and the Meaning of Life, elements of which are still used in one of our level 1 courses today. The simulator was also used for many years to support first year undergrad residential schools, as well as a short “build a robot fairground” activity in the masters level team engineering course.

As well as the clean design, and features that support learning (such as a code stepper button in RobotLab that lets students step through code a line at a time), the interfaces also pay great attention to accessibility requirements. Whilst these features are essential for students with particular accessibility needs, they also benefit all our students by improving the usability of the software as a whole.

So those are two very good reasons for developing software in-house. But as a downside, it means that we limit the exposure of students to “real” software.

That’s not to say all our courses use in-house software: many courses also provide industry standard software as part of the course offering. But this can present problems too: third party software may come with complex user interfaces, or interfaces that suffer from accessibility issues. And software versions used in the course may drift from the latest releases if the software version is fixed for the life of the course. (In fact, the software version may be adopted a year before the start of the course and then be expected to last for five years of course presentation.) Or if the software is updated, this may require significant updates to be made to the course material wrapping the software.

Another issue with professional software is that much of it is mature, and has added features over its life. This is fine for early adopters: the initial versions of the software are probably feature light, and add features slowly over time, allowing the user to grow with them. Indeed, many latterly added features may have been introduced to address issues surrounding a lack of functionality, power or “expressiveness” in use identified by, and frustrating to, the early users, particularly as they became more expert in using the application.

For a novice coming to the fully featured application, however, the wide range of features of varying levels of sophistication, from elementary, to super-power user, can be bewildering.

So what can be done about this, particularly if we want to avail ourselves of some of the powerful (and perhaps, hard to develop) features of a third party application?

To steal from a motorsport engineering design principle, maybe we can add lightness?

For example, QGIS is a powerful, cross-platform GIS application. (We have a requirement for platform neutrality; some of us also think we should be browser first, but let’s for now accept the use of an application that needs to run on a computer with a “desktop” operating system (Windows, OS/X, Linux), rather than one running a mobile operating system (iOS, Android) or developed for use on a netbook (Chrome OS).)

The interface is quite busy, and arguably hard to quickly teach around from a standing start:

However, as well as being cross-platform, QGIS also happens to be open source.

That is, the source code is available [github: qgis/QGIS].


Which means that as well as the code that does all the clever geo-number crunching stuff, we have access to the code that defines the user interface.

*[UPDATE: in this case, we don’t need to customise the UI by forking the code and changing the UI definition files – QGIS provides a user interface configuration / customisation tool.]

For example, if we look for some menu labels in the UI:

we can then search the source code to find the files that contribute to building the UI:

In turn, this means we can take that code, strip out all the menu options and buttons we don’t need for a particular course, and rebuild QGIS with the simplified UI. Simples. (Or maybe not that simples when you actually start getting into the detail, depending on how the software is designed!)

And if the user interface isn’t as accessible as we’d like, we can try to improve it, and contribute the improvements back to the parent project. The advantage there is that if students go on to use the full QGIS application outside of the course, they can continue to benefit from the accessibility improvements. As can every other user, whether they have accessibility needs or not.

So here’s what I’m wondering: if we’re faced with the decision between wanting to use an open source, third party “real” application with usability and access issues, why build the custom learning app, especially if we’re going to keep the code closed and have to maintain it ourselves? Why not join the developer community and produce a simplified, accessible skin for the “real” application, and feed accessibility improvements at least back to the core?

On reflection, I realised we do, of course, do the first part of this already (forking and customising), but we’re perhaps not so good at the latter (contributing accessibility or alt-UI patterns back to the community).

For operational systems, OU developers have worked extensively on Moodle, for example (and I think, committed to the parent project)… And in courses, the recent level 1 computing course uses an OU fork of Scratch called OUBuild, a cross-platform Adobe Air application (as is the original), to teach basic programming, but I’m not sure if any of the code changes have been openly published anywhere, or design notes on why the original was not appropriate as a direct/redistributed download?

Looking at the Scratch open source repos, Scratch looks to be licensed under the BSD 3-clause “New” or “Revised” License (“a permissive license similar to the BSD 2-Clause License, but with a 3rd clause that prohibits others from using the name of the project or its contributors to promote derived products without written consent”). Although it doesn’t have to be, I’m not sure the OUBuild source code has been released anywhere, or whether commits were made back to the original project? (If you know differently, please let me know:-)) At the very least, it’d be really handy if there was a public document somewhere that identifies the changes that were made to the original and why, which could be useful from a “design learning” perspective. (Maybe there is a paper being worked up somewhere about the software development for the course?) By sharing this information, we could perhaps influence future software design, for example by encouraging developers to produce UIs that are defined from configuration files that can be easily customised and selected from, in the way that users can often select language packs.

I can think of a handful of flippant, really negative reasons why we might not want to release code, but they’re rather churlish… So they’re hopefully not the reasons…

But there are good reasons too (for some definition of “good”..): getting code into a state that is of “public release quality”; the overheads of having to support an open code repository (though there are benefits: other people adding suggestions, finding bugs, maybe even suggesting fixes). And legal copyright and licensing issues. Plus the ever present: if we give X away, we’re giving part of the value of doing our courses away.

At the end of the day, seeing open education in part as open and shared practice, I wonder what the real challenges are to working on custom educational software in a more open and collaborative way?

Keeping Up With What’s Possible – Daily Satellite Imagery from AWS

Via @simonw’s rebooted blog, I spotted this – Landsat on AWS: “Landsat 8 data is available for anyone to use via Amazon S3. All Landsat 8 scenes are available from the start of imagery capture. All new Landsat 8 scenes are made available each day, often within hours of production.”

What do things like this mean for research, and teaching?

For research, I’m guessing we’ve gone from a state 20 years ago – no data [widely] available – to 10 years ago – available under license, with a delay and perhaps as periodic snapshots – to now – daily availability. How does this impact on research, and what sorts of research are possible? And how well suited are legacy workflows and tools to supporting work that can make use of daily updated datasets?

For teaching, the potential is there to do activities around a particular dataset that is current, but this introduces all sorts of issues when trying to write and support the activity (eg we don’t know what specific features the data will turn up in the future). We struggle with this anyway trying to write activities that give students an element of free choice or open-ended exploration where we don’t specifically constrain what they do. Which is perhaps why we tend to be so controlling – there is little opportunity for us to respond to something a student discovers for themselves.

The realtime-ish ness of data means we could engage students with contemporary issues, and perhaps enthuse them about the potential of working with datasets that we can only hint at or provide a grounding for in the course materials. There are also opportunities for introducing students to datasets and workflows that they might be able to use in their workplace, and as such act as a vector for getting new ways of working out of the Academy and out of the tech hinterland that the Academy may be aware of, and into more SMEs (helping SMEs avail themselves of emerging capabilities via OUr students).

At a more practical level, I wonder, if OU academics (research or teaching related) wanted to explore the LandSat 8 data on AWS, would they know how to get started?

What sort of infrastructure, training or support do we need to make this sort of stuff accessible to folk who are interested in exploring it for the first time (other than Jupyter notebooks, RStudio, and Docker of course!;-) ?
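As a gesture towards “getting started”, here’s a minimal sketch in Python of turning a Landsat 8 scene ID into a downloadable band image URL. It assumes the public landsat-pds S3 bucket and the Collection 1 key layout described in the AWS Landsat docs at the time of writing – both of which may well change:

```python
# Sketch: build an HTTP URL for a single band GeoTIFF of a Landsat 8
# Collection 1 scene on the public AWS landsat-pds bucket. The bucket
# name and key layout (c1/L8/{path}/{row}/{scene_id}/...) are assumed
# from the AWS open data docs and are not guaranteed to be stable.

def landsat_band_url(scene_id, band, bucket="landsat-pds"):
    """Return the URL for one band of a Collection 1 scene.

    Scene IDs look like LC08_L1TP_139045_20170304_20170316_01_T1;
    the third underscore-separated field holds the WRS path and row
    (here, path 139, row 045).
    """
    parts = scene_id.split("_")
    path_row = parts[2]                    # eg "139045"
    path, row = path_row[:3], path_row[3:]
    return (f"https://{bucket}.s3.amazonaws.com/"
            f"c1/L8/{path}/{row}/{scene_id}/{scene_id}_B{band}.TIF")

# Band 4 (red) of an example scene:
print(landsat_band_url("LC08_L1TP_139045_20170304_20170316_01_T1", 4))
```

From there, a library like rasterio can open the GeoTIFF directly over HTTP – which is itself part of the answer to the “what infrastructure/training” question.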

PS Alan Levine /@cogdog picks up on the question of what’s possible now vs. then: http://cogdogblog.com/2017/11/landsat-imagery-30-years-later/. I might also note: this is how the blogosphere used to work on a daily basis 10-15 years ago…

From the University of the Air to the University of the Cloud…

Skimming over a recent speech given to the European Association of Distance Teaching Universities conference by the OU VC (The future for Open and Distance Universities. Discussing the move from the University of the Air to the University of the Cloud), the following quotes look like they may be handy at some point…

We were disruptive and revolutionary in our use of technology back then [1969], and as we approach our 50th year, we intend to be disruptive and revolutionary again, to transform the life chances of tens of thousands of future students. When we are thinking of change, it is important that our own enthusiasm for it should not run away with itself. It should be for the sake of our students and for our mission.

Disruptive and revolutionary… I wonder what either of those means in practical terms? Or is that still to be defined… In which case… ;-)

At a time of unprecedented change and recognising future economic challenges, we have a crucial role to play in helping employers and employees respond to the rapid rise in automation which is expected to sweep away millions of existing jobs.

The ability for people to upskill and reskill will become crucial in ways we can’t yet predict, and where students will need to be equipped to thrive as digitally-enabled citizens – people who are not just victims of digital change, but people who can take advantage of it.

“[D]igitally-enabled citizens” – defined how?

We can and should help tackle this economic inequality from this employment disruption, and the resulting social inequality, by creating a positive digital learning experience and building essential digital skills – truly modernising our missions for this Century.

How so?

Reflecting on changes to BBC newsroom:

But more significantly using the capabilities of digital media to their full – by which I mean interactivity, direct contribution from the audience, collaborative newsgathering and a levelling of the relationship between institution and audience/consumer.

BBC Me?! ;-)

I recall the BBC’s then UK political editor, Nick Robinson, starting to blog (this was pre-Twitter). He would post updates after he had picked up initial political intelligence in the morning. He found that political insiders would contact him either privately or online, adding information or possibly contradicting the initial account he had published.

By making his journalism more open and more contingent he gathered more information and tested his thinking, so that by the end of the day when he came to broadcast on the “conventional” broadcast bulletin he would not only have provided a better and faster news service during the day but his final polished TV output would have benefitted by that open testing and development.

T151 was blogged in its production. The content is still there (content from several years ago on http://digitalworlds.wordpress.com). I wish I’d added notes to some related presentations from the time…

[W]e don’t need to invent some radical vision of the future in order to think how we should be changing. Rather we need to look around us carefully now and observe what is interesting and extrapolate from there.

There’s a lot of current world out there that I don’t think we’ve been watching… And a lot of recent past/passed blogged here on OUseful.info over the last 10 years…

So, I suggest, looking at trends in knowledge sectors – publications, books, music – that have changed earlier and faster, such as the news media, can provide lessons for universities. I realise that it can be sacrilegious in some academic circles to draw comparisons with media, content and indeed the news.

Yep. I’d also be looking at things like reproducible research workflows…

News of course is ephemeral and inevitably less perfect or polished than carefully crafted academic content. But there are at least some lessons.

Firstly, the cultural ones. In parts of academia, although thankfully less so in distance and online universities, there is still a patrician culture, de haut en bas, in terms of professional practice. That we are the intellectual priesthood, dispensing tablets of knowledge. Of course we need to treasure our expertise and our standards. But when we are teaching people who are often mature, who have their own experience of life and work, we have to be more modest. And the internet and interactivity keeps us honest and modest.

And we could maybe be more transparent in our working, as per Nick Robinson…

And we need to be aware that we are competing with news media, and other content, for the attention of students, either in the initial choice of whether they sign up for our courses or for their attention when attractive content is drawing them away from their studies once they are taking a course.

Competition in a couple of ways: attention and economic (eg pounds per hour of attention as well as number of hours).

So why don’t we care even more about how readable, how visual, how stimulating and grabby, how entertaining or provocative our courses are?

Or whether anyone even looks at them?

And do our materials always have to be absolutely perfect, especially if perfection is costly and slow, unresponsive and non-topical? Good enough content, I’m afraid to say, has a huge following. Just look at YouTube. And when it is online if it needs improving, it can be done easily.

I think if we are responsive in posting corrections, we can be much quicker in production, and also benefit from “production in presentation” in first run (at least) of courses. Or uncourse them in their production.

I always told BBC journalists and producers that making content attractive was not a contradiction with quality, it is not selling out or dumbing down, it is an essential accompaniment. If you don’t make academic content and the learning experience as stimulating and modern as the other content choices in the lives of students, don’t be surprised if students lose attention or drop out.

Repeated rinse-and-repeat cycles of drafting and editing take all the character out of our content… And it still goes to students littered with errors and untested by “users” in the first presentation at least…

Of course the immediacy of the feedback of on-line helps enormously as we can know at once what is working for students.

But then, when we get feedback about eg errors in material, it can take until the next presentation of the course a year later for them to be properly addressed. (I don’t know why we can’t A/B test stuff, either? Clinical trials seem to get away with it…)
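For what it’s worth, the mechanics of A/B testing course materials needn’t be complicated. A hypothetical sketch (all names invented for illustration): hash a stable student identifier together with an experiment name, so each student consistently sees the same variant of a piece of material without any per-student state being stored:

```python
# Sketch: deterministic A/B variant assignment for a course material
# experiment. Hashing a stable student ID with the experiment name
# means the same student always sees the same variant, with no
# assignment table to maintain. Names here are hypothetical.

import hashlib

def assign_variant(student_id, experiment, variants=("A", "B")):
    """Deterministically assign a student to one variant of an experiment."""
    digest = hashlib.sha256(f"{experiment}:{student_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("student123", "unit3-rewrite"))
```

Comparing completion or error-report rates across the two buckets would then be a straightforward analytics job – which is to say, the obstacles are institutional and ethical, not technical.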

I hope you can see how many of those cultural and professional practice issues in other content fields have a direct application to universities and distance learning. Too many of us are still working in a mindset where we see digital as a cost effective alternative to the traditional pedagogy of distance learning books and materials.

What’s that saying? Digital isn’t cost effective? Erm…

At the centre of the UK Open University’s changes in the months and years ahead will be to exploit fully the affordances of digital to the learning needs of future society and future students. Of course, we will take into account concerns about delivering for our existing students and make sure that the transition to that more fully digitally designed world is carried out carefully, carrying them with us.

So what are the “affordances of the digital”? I can think of a few, but they are predicated on changed production and presentation models together.

[I]t is not the radical, niche technologies that should interest us, but rather those that have the possibility to become, as Shirky has it, ‘boring’. The basic attributes of digital that can reform learning have not changed significantly since the beginning of social media about ten years ago. It is just that they are not fully adopted in our learning practices.

Still not sure what the point is here? Such as…?

With this in mind I will also add the usual caveat that attempting to predict the future is nearly always foolhardy, and so I will limit my conjectures to thinking about two aspects: the main areas that we might suggest will drive change within open and distance universities; and the context within which those universities are operating.

Best way to predict is invent; next best way is to explore the stuff other folk are inventing. That’s partly what OUseful.info is about…

To look at the first of these, what are the current trends, developments or technologies that might represent what William Gibson described as the future that is already here.

There are three broad elements of particular interest to open and distance universities that I will highlight, although there are undoubtedly more we could address. These are Data, Openness and Flexibility.

To take the first of these, data, it is a commonplace to observe that the generation, analysis and interpretation of data is now a significant factor in society in a manner it was not just ten years ago. There is talk of data capitalism, data surveillance and data as the new oil. But what does this mean for universities, and in particular ones operating at a distance?

There are undoubted benefits we can give to our students in a data rich world, via learning analytics. At the Open University we are aligning analytics with learning design to help us inform which designs are more effective in retaining students and meeting their needs.

We can tell which elements of a course are aligned with effective performance and which ones are less well correlated. This is the type of feedback we have never managed before when we were sending out boxes of printed materials. The critical thing is to show students that their experience with something that for some of them is less familiar is going to create benefits for them.

I still don’t know if anyone ever reads a particular page, clicks on a particular link, etc etc…

And this type of feedback changes the definitions of our engagement with students and our ability to be able to respond to their needs. Our previous techniques for capturing student feedback would involve them completing a written, then later online, survey after taking a module, quite often a long time after their learning experience in question. Those feedback methods inevitably require some effort on the part of the student and the face to face focus group necessarily involves a behaviour – travelling to a physical point – that inevitably excludes certain categories of students.

We are now introducing much more immediate forms of response (I’m not sure that feedback is an accurate term any more, as this is now a less deliberate process for students). We are capturing immediate response data. For instance on our Student Home help page students are asked to click a simple green thumbs-up or red thumbs-down to indicate whether their query has been answered effectively.

Our teams monitor those “thumbs” in real time and refine responses in turn and feedback issues immediately to the learning/module teams. We intend to roll out this approach from our student experience site to all of the virtual learning environment next year, in time for our main autumn presentation, so that we can be responding to students and improving their learning experience in real time.

We are also able to use data to help inform our tutors, our Associate Lecturers, about their students. Of course, Associate Lecturers have their own direct relationships with students who are studying most intensively or enthusiastically – but it is the students who are not engaging and the data that is not being created on our system that can help tutors intervene positively.

And we should also be generous and non-proprietary with the data we give to students to help them monitor and shape their own learning.

We should also be more thoughtful about who we divulge student data to, eg through the use of third party tracking services where we reveal student behaviours to third parties, who then sell the data back to us. (And if they don’t sell it to us, how are they generating revenue from it?)

To now consider Openness. Openness now comes in many different forms, it is not just about the open access to higher education it was when the OU was founded. Now it covers open educational resources, MOOCs, open access publications, open textbooks and open educational practice.

In this, open universities need to continue to adapt and be involved in the changing nature of openness in higher education. The adoption of elements of openness across the higher education sphere really hints at a much bigger shift, which is the blurring of boundaries.

This brings me onto the third element, that of flexibility. This can come in many different forms. The open model of education has always been about flexibility – allowing students to choose from a range of courses, to take a break in their study, to combine different size courses.

However, we need to challenge ourselves. When we have asked our students and our potential students about flexibility they have told us that the flexibility is often only a flexibility that is on the university’s terms, not on theirs. Some students want to speed up their study, others want to be able to slow it down. Some want the option to be able to do both, according to the circumstances of their lives. And this is where digital’s infinite flexibility will be the servant of the student’s demand for flexibility.

This challenges the traditional assumptions of the academic year that are still built into the mindset of many academics. And it challenges us to offer a varied and flexible experience that might make us have to be more flexible than we have been used to.

I fancy the idea of alumni as lifelong learners, paying a subscription to access all our content (think: Netflix), perhaps including course materials that are also currently in production (if we can’t be so open as to draft out materials, and try them out, in public), chunked in tiny chunks (say, 30 mins of “attention time”, or so). We could track the popular pathways – there may be new courses or market intelligence in them…

I come from a digital news media environment where the expectation of immediate high quality content on the terms of the audience were gradually adopted by the organisation – an organisation that had been used to serving the news at a time when the BBC was ready to give it to people. That revolution happened in news at least 15 years ago. Universities are just about catching up.

But we will in the future push this flexibility further as students and employers demand it. For instance we are, as many of you are I expect, exploring flexible forms of Assessment. Can we accredit much more learning from elsewhere? Can we assess and offer credit for practical learning from the workplace on a much more systematic and responsive basis? Can we give the student a more flexible choice of assessment? Are we prepared to move from assessment “of learning” to assessment “for learning”?

Just to note, BXM871 – Managing in the digital economy: “This module offers a process to gain academic credit for your study of The Open University MOOCs that comprise the FutureLearn Digital Economy program. Your knowledge, understanding and skills from the MOOCs will be supplemented by learning materials supporting critical thinking, reflection and study skills appropriate to masters level assessment. You will have access to ‘light touch’ advice from a learning advisor, but please be aware that (as with the MOOCs you bring to the module as prior learning) you need to be a proactive learner to benefit from the materials and activities supplied (peer-review, case studies, readings and online discussion). Activities and assessment address your own professional situation, culminating in an extended written assignment integrating your prior MOOC learning in the context of challenges posed by technological change.”

The use of data, open resources and artificial intelligence has the potential to offer students different types of content within an overall course structure, better personalised to their interests and needs.

Oh, God, no, please not AI Snake Oil…

On the changing economics and business models, if we were following tech, we’d be looking for two-sided market opportunities. But do we really want to do that..?

We need to consider these three elements in relation to a final aspect – the context within which universities operate, and the changing nature of society.

We live in a world where fake news and the negative role of social media sometimes determine public policy. I suspect that quite a large number of us in this room were naturally early techno-optimists. But as the polarising, degrading and demeaning aspects of extreme opinions and abusive content online undermine the cohesion of societies I believe that there is a natural swing towards techno-pessimism.

But the overwhelming shift towards a digital world cannot be held back just because we have some reservations and we should not despair. We need to be as committed to creating a constructive information society in the digital world as we have been over centuries IRL. And we will succeed in our civilising role.

All universities, but particularly I believe, open and distance ones who have a purpose in educating the wider population, have a particular role in helping to produce graduates who understand how to make effective use of these tools in their education, but also in being good networked citizens.

I always liked the strapline of the Technology Short Course Programme – “Relevant Knowledge”. I also think folk should leave our courses knowing how to do things, or seeing how some “big ideas” could help them in the workplace. In short, we should be equipping people to engage critically, as well as productively, with technology.  As it is, I’m not convinced we always deliver on that…:-(

Here at The Open University we are trying to respond to these challenges while retaining our core mission of offering higher education to all, regardless of background or previous qualifications.

We want to transform the University of the Air envisaged by Harold Wilson in the 1960s to a University of the Cloud – a world-leading institution which is digital by design and has a unique ability to teach and support our students in a way that is responsive both to their needs and those of the economy and society.

Open and Distance education universities face an exciting and challenging time. Exciting in that they hold much of the expertise and practice needed to address many of the challenges facing higher education and society in general. Challenging in that they no longer hold a monopoly on much of this and must adapt to new market forces and pressures.

I like a lot of those words. But I’ve no idea (really; really no idea, at all) what anyone else thinks they might mean. (I’m guessing it’s not what I think they mean! ;-)