A few days ago I came across a project that has been looking at digital preservation, and in particular the long-term archiving of “functional” digital objects such as software applications: bwFLA — Emulation as a Service [EaaS]. (I wonder how long that site will remain there…?!)
The Emulation-as-a-Service architecture simplifies access to preserved digital assets, allowing end users to interact with the original environments running on different emulators.
I’d come across the project in part via a search for examples of Docker containers and other sorts of VM being used via portable “compute sticks”. It seems that the bwFLA folk have been exploring two ways of making emulated services available: EaaS using Docker, and a boot-to-emulation route from machine images on bootable USBs, although they don’t seem (yet) to have described a delivery system that includes a compute stick. (See a presentation on their work here.)
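To make the Docker route a little more concrete, here’s a minimal sketch, using the Docker SDK for Python, of what running a preserved environment as a containerised service might look like; the image name and port are placeholders I’ve made up, not anything the bwFLA project actually publishes.

```python
# A minimal sketch of "emulation as a containerised service".
# NOTE: "example/eaas-emulator" is a made-up image name, used purely for illustration.
import docker

client = docker.from_env()

# Pull and start a (hypothetical) emulator image as a detached service,
# exposing its web UI on port 8080 of the host.
client.containers.run(
    "example/eaas-emulator:latest",
    detach=True,
    ports={"8080/tcp": 8080},
)

print("Emulator running; point a browser at http://localhost:8080")
```

The appeal from a preservation point of view is that the whole environment is pinned inside the image; all the host needs to be able to do is run containers.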
One of the things that struck me about the digital preservation process was the way in which things need to be preserved so that they can be run in an arbitrary future, or at least in an arbitrary computing environment. In the OU context, where we have just started a course that shipped a set of interlinked applications to students via a virtual machine that could be run across different platforms, we’re already finding issues arising from flaky combinations of VirtualBox and Windows; what we really need to do is ship something that is completely self-bootable (but then, that may in turn turn up its own problems?). So this got me thinking that when we design, and distribute, software to students it might make sense to think of the distribution process as an exercise in distributing preserved digital objects? (This actually has implications in a couple of senses: firstly, in terms of simply running the software – how can we distribute it so that students can run it?; secondly, in terms of how we contextualise the versioning of the software – OU courses can be a couple of years in planning and five years in presentation, which means that the software we ship may be several versions behind the latest release, if the software has continued to be updated).
So if students have problems running software in virtual machines because of problems running the virtual machine container, what other solutions are there?
One way is to host the software and make it available as a service accessed via a web browser or other universal client, although that introduces two complications: firstly, the need for network access; secondly, ensuring that the browser (which is to say, browser-O/S combination?) or universal client can properly service the service…
A second way is to ship the students something bootable. This could be something like a live USB, or it could be a compute stick that comes with the software preinstalled. In essence, we distribute the service rather than the software. (On this note, things like unikernels look interesting: just enough O/S to run the service or application you’re interested in.) There are cost implications here, of course, although the costs might scale differently depending on who pays: does the OU cover the cost of distribution (“free” to students); does the student pay at-cost and buy from the OU; does the student pay a commercial rate (e.g. covering their own hosting fees on a cloud service); and so on?
The means students have at their disposal for running software is also an issue. The OU used to publish different computing specification guidelines for individual courses, but now I think a universal policy applies. From discussions I’ve had with various folk, I seem to be in a minority of one in suggesting that students may in the future not have access to general purpose computers onto which they can install software applications, but may instead be using netbooks or even tablet computers to do their studies. (I increasingly use a cloud host to run services I want to make use of…)
I can see there is an argument for students needing access to a keyboard to make life easier when it comes to typing up assessment returns or hacking code, and also for access to screen real estate to make life easier when reading course materials, but I also note that increasing numbers of students seem to have access to Kindles, which provide a handy second-screen way of accessing materials.
(The debate about whether we issue print materials or not continues… Some courses are delivered wholly online, others still use print materials. When discussions are held about how we deliver materials, the salient points for me are: 1) where the display surface is (print is a “second screen” display surface that can be mimicked by a Kindle – and hence an electronically distributed text; separate windows/tabs on a computer screen are display surfaces within a display surface); 2) whether the display surface supports annotations (print does, beautifully); 3) the search, navigation and memory affordances of the display surface (books open to where you were last reading them, page corners can be folded, you have a sense of place/where you are in the text, and (spatial) memory of where you read things (in the book as well as on the page)); 4) where you can access the display surface (e.g. in the bath?); 5) whether you can arrange the spatial location of the display surface to place it in proximity to another display surface).
Print material doesn’t come without its own support issues though…
“But (computing) students need a proper computer”, goes the cry, although the claim is never really unpacked…
From my netbook browser (keyboard, touchpad, screen, internet connection, but not the ability to install and run “traditional” applications), I can, with a network connection, fire up an arbitrary number of servers in London, or Amsterdam, or Dublin, or the US, and run a wide variety of services. (We require students to have access to the internet so they can access the VLE…)
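By way of illustration, here’s the sort of thing I mean, sketched with boto3 against Amazon EC2; the machine image id is a placeholder, and any other cloud provider’s API would do just as well.

```python
# Sketch: fire up a throwaway server in Dublin (eu-west-1) from any machine
# with Python and a network connection. The AMI id is a placeholder; swap in
# a region and provider of choice.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # Dublin

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image
    InstanceType="t2.micro",  # small, cheap instance
    MinCount=1,
    MaxCount=1,
)

print("Started instance:", response["Instances"][0]["InstanceId"])
```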
From my browser, I could connect to a Raspberry Pi, or presumably a compute stick (retailing at about £100) that could be running EaaS applications for me.
So I can easily imagine an “OU Compute Stick” – or “FutureLearn Compute Stick” – that I can connect to over wifi and that runs a Kitematic-like UI able to install applications from an OU/FutureLearn container/image repository or from an inserted (micro)SD card. (Students with a “proper” computer would be able to grab the containers off the card and run them on their own machine.)
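Under the hood, the “install an app” step of such a Kitematic-like UI might boil down to something like this; the registry address, image name and port are invented for the sake of the example.

```python
# Sketch: install and run a course application from a (hypothetical) OU/FutureLearn registry.
# The registry address and image name are invented for illustration.
import docker

client = docker.from_env()

# Pull the course application image from the (hypothetical) OU hub...
client.images.pull("registry.example.org/ou-course/notebook-app", tag="latest")

# ...and run it so it can be reached over wifi from a browser on the student's own device.
client.containers.run(
    "registry.example.org/ou-course/notebook-app:latest",
    detach=True,
    ports={"8888/tcp": 8888},  # app reachable at http://<stick-ip>:8888
)
```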
At the start of their degree, students would get the compute stick; when they need to run OU/FutureLearn provided apps, they grab them from the OU-hub, or receive them in the post on an SD card (in the new course, we’ve noticed problems, in some situations, with downloading large files reliably). The compute stick would have enough computational power to run the applications, which could be accessed over wifi via a browser on a “real” computer, or a netbook (which has a keyboard), or a tablet computer, or even a mobile device. The compute stick would essentially be a completely OU-managed environment, bootable, and with its own compute power. The installation problems would be reduced to finding a way for the stick to connect to the internet (and even that may not be necessary), and for the student to connect to the stick.
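And the SD card route might look something like this under the hood, assuming the apps are shipped as saved Docker image tarballs; the file path is an assumption for the purposes of illustration.

```python
# Sketch: load a course application shipped on an SD card as a saved image tarball.
# The file path is an assumption; once loaded, the image is run exactly as in the previous sketch.
import docker

client = docker.from_env()

with open("/media/sdcard/ou-course-app.tar", "rb") as tarball:
    images = client.images.load(tarball.read())

print("Loaded images:", [image.tags for image in images])
```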
Developing such a solution might also be of interest to the digital preservation folk… even better if the compute stick had a small screen, so you could get at least a glimpse of what the application looked like. Hmm… thinks… rather than a compute stick, would shipping students a smartphone rooted to run EaaS work?! Or do smartphones have the wrong sort of processor?