Rethinking: Distance Education === Bring Your Own Device?

In passing, an observation…

Many OU modules require students to provide their own computer, subject to a university-wide “minimum computer specification” policy. The policy is a cross-platform one (students can run Windows, Mac or Linux machines) but does allow students to run quite old versions of those operating systems. Because some courses require students to install desktop software applications, this also means that tablets and netbook-class devices (eg Chromebooks) do not pass muster.

On the module I work primarily on, we supply students with a virtual machine preconfigured to meet the needs of the course. The virtual machine runs on a cross-platform application (VirtualBox) and will run on a min-spec machine, although there is a hefty disk space requirement: 15GB of free space is needed to install and run the VM (on top of the 15-20GB you should always keep free anyway if you want your computer to keep running properly, be able to install updates, etc.)

Part of the disk overhead comes from another application we require students to use called Vagrant. This is a “provisioner” application that manages the operation of the VirtualBox virtual machine from a script we provide to students. The Vagrant application caches the raw image of the VM we distribute so that fresh new instances of it can be created. (This means students can throw away the working copy of their VM and create a fresh one if they break things; trust me, in distance edu, this is often the best fix.)
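By way of illustration, the throw-away-and-rebuild workflow is just a couple of commands (a minimal sketch, assuming they are run from the directory containing the course Vagrantfile):

vagrant halt        # stop the running VM, if it is running
vagrant destroy -f  # throw away the (possibly broken) working copy
vagrant up          # build a fresh VM from the cached base box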

One of the reasons why we (that is, I…) took the Vagrant route for managing the VM was that it provided a route to ship VM updates to students, if required: just provide them with a new Vagrantfile (a simple text file) that is used to manage the VM and add an update routine to it. (In four years of running the course, we haven’t actually done this…)
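If we ever did ship an updated Vagrantfile, I think a student could pick up the changes simply by dropping the new file over the old one and re-running the provisioning step, something like:

vagrant reload --provision  # restart the VM and re-run the provisioners defined in the (new) Vagrantfile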

Another reason for using Vagrant was that it provides an abstraction layer between starting and stopping the virtual machine (via a simple command line command such as vagrant up, or a desktop shortcut that runs a similar command) and the virtual machine application that actually runs the virtual machine. In our case, Vagrant instructs the VirtualBox application running on the student’s own computer, but we can also create Vagrantfiles that allow students to launch the VM on a remote host if they have credentials (and credit…) for that remote host. For example, the VM could be run on Amazon Web Services/AWS, Microsoft Azure, Google Cloud, Linode, or Digital Ocean. Or on an OU host, if we had one.
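The abstraction means the student-facing commands barely change between the local and remote cases. A sketch (the AWS case assumes the third-party vagrant-aws provider plugin is installed and appropriate credentials are configured, which is not something we ever actually rolled out):

vagrant up                        # launch the VM locally using the default VirtualBox provider
vagrant up --provider=virtualbox  # the same thing, explicitly
vagrant up --provider=aws         # launch the same environment on AWS instead (requires the vagrant-aws plugin)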

For the next presentation of the module, I am looking to move away from the VirtualBox VM and move the VM into a Docker container†. Docker offers an abstraction layer in much the same way that Vagrant does, but using a different virtualisation model. Specifically, a simple Docker command can be used to launch a Dockerised VM on a student’s own computer, or on a remote host (AWS, Azure, Google Cloud, Digital Ocean, etc.)
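For example (a sketch only: the ou/tm351-vce image name is a placeholder, and docker-machine is just one way of pointing the local Docker client at a remote engine):

# run the course container locally
docker run -p 8888:8888 ou/tm351-vce   # placeholder image name

# or provision a remote Docker host, then run exactly the same command against it
docker-machine create --driver digitalocean --digitalocean-access-token $DO_TOKEN coursehost   # $DO_TOKEN: your Digital Ocean API token
eval $(docker-machine env coursehost)
docker run -p 8888:8888 ou/tm351-vce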

We could use separate linked Docker containers for each service used in the course — Jupyter notebooks, PostgreSQL, MongoDB, OpenRefine — or we could use a monolithic container that includes all the services. There are advantages and disadvantages to each that I really do need to set down on paper/in a blog post at some point…
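To make the distinction concrete (the ou/tm351-monolith image name is hypothetical; the others are stock images), the monolithic option is a single container exposing all the services, whereas the service-per-container option joins separate containers over a user-defined Docker network:

# monolithic: all the services baked into one (large) image
docker run -d -p 8888:8888 ou/tm351-monolith   # hypothetical image name

# service-per-container: separate images linked over a user-defined network
docker network create tm351
docker run -d --name postgres --network tm351 postgres
docker run -d --name mongodb --network tm351 mongo
docker run -d --name notebooks --network tm351 -p 8888:8888 jupyter/scipy-notebook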

So how does this help in distance education?

I’ve already mentioned that we require students to provide a certain minimum specification computer, but for some courses, this hampers the activities we can engage students in. For example, in our databases course, giving students access to a large database running on their own computer may not be possible; for an upcoming machine learning course, access to a GPU is highly desirable for anything other than really simple training examples; in an updated introductory robotics module, using realistic 3D robot simulators for even simple demos requires access to a gamer level (GPU supported) computer.

In a traditional university, students who can’t run a particular application on their own machine may have physical access to computers and computer labs running pre-installed, university-licensed software packages on machines capable of supporting them.

In my (distance learning) institution, access to university hosted software is not the norm: students are expected to provide their own computer hardware (at least to minimum spec level) and install the software on it themselves (albeit software we provide, and software that we often build installers for, at least for users of Windows machines).

What we don’t do, however, is either train students in how to provision their own remote servers, or provide software to them that can easily be provisioned on remote servers. (I noted above that our vagrant manager could be used to deploy VMs to remote servers, and I did produce demo Vagrantfiles to support this, but it went no further than that.)

This has made me realise that we make OUr distance learning students pretty much wholly responsible for meeting any computational needs we require of them, whilst at the same time not helping them develop the skills that would allow them to avail themselves of self-service, affordable, metered, remote computation-on-tap (albeit with the constraint of requiring a network connection to access the remote service).

So what I’m thinking is that now really is the time to start upskilling OUr distance learners, at least in computationally related disciplines, early on and in the following ways:

  1. a nice to have — provide some academic background: teach students about what virtualisation is;

  2. an essential skill, but with a really low floor — technical skills training: show students how to launch virtual servers of their own.

We should also make software available that is packaged in a way that the same environment can be run locally or remotely.

Another nice to have might be helping students reason about the personal economic consequences, such as the affordability of the different approaches in their own situation: buying a computer and running things locally vs. buying something that can run a browser and running things remotely over a network connection.

As much as anything, this is about real platform independence, being open as to, and agnostic of, what physical compute device a student has available at home (whether it’s a gamer spec desktop computer or a bottom of the range Chromebook) and providing them with both software packages that really can run anywhere and the tools and skills to help students run them anywhere.

In many respects, with abstraction layer provisioning tools like Vagrant and Docker, the skills needed to run software remotely are the same as those needed to run it locally, with the additional one-off overhead of signing up to a remote host and setting up credentials that allow the provisioner running on the student’s local machine to access the remote service.

Accessing a Legacy Windows Application Running Under Wine On A Containerised, RDP Enabled Desktop In a Browser Via A Guacamole Server Running in a Docker Container

I finally got round to finding, and fiddling with, an Apache Guacamole container that I could actually make sense of and it seems to work, with audio, when connecting to my demo RobotLab/Wine RDP desktop.

The container I tried is based on the Github repo oznu/docker-guacamole.

The container is started with:

# bind mount the local guac_config directory so the Guacamole config persists
mkdir -p guac_config
docker run -p 8080:8080 -v "$PWD/guac_config":/config oznu/guacamole

Log in with the user name and password guacadmin.

I then launched a RobotLab container that is running an RDP server:

docker run --name tm129 --hostname tm129demo --shm-size 1g -p 3391:3389 -d ousefulcoursecontainers/tm129rdp

Inside Guacamole, we need to create a new connection profile. From the admin drop down menu, select Settings, click on the Connections tab and create a New Connection:

Give the connection a name and specify the protocol as RDP:

The connection settings require the IP address and port number that the connection is to be made on. The port mapping was specified when we started the RobotLab container (3391), but what’s the network address? If we try to use “localhost” in the Guacamole container, that refers to the container’s own localhost, not localhost on the host machine. On a Mac, we can pick up the host IP address from the Network panel in the System Preferences:
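(For what it’s worth, the host address can also be grabbed from the command line; and I believe recent Docker for Mac releases provide a special DNS name that resolves to the host from inside a container:)

ipconfig getifaddr en0   # the Mac's IP address on the en0 interface
# alternatively, on recent Docker for Mac releases, the hostname host.docker.internal
# should resolve to the host machine from inside the Guacamole container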

Enter the appropriate connection parameters and save them:

From the admin menu, select Home. From the home screen you should be able to select the new connection…

When the connection is opened, I was presented with a warning dialogue:

but clicking OK cleared it okay…

Then I could enter the RobotLab RDP connection details (username and password are both ubuntu):

and I was in to the desktop.

The application files can be found within the File System in the /opt directory.

As mentioned previously, the base container needs some fettling… When you first run the RobotLab or Neural applications, Wine wants to do some updates (which requires a network connection). If I could figure out how to create users in the base image, rather than having user creation occur as part of the entrypoint, I could probably bake those updates into the image, following the recipe here.

Although it’s a little bit ropey, the Guacamole desktop does play out audio.

RobotLab has three instructions for playing audio: sound, send and tone. The sound and send commands play an audio file, and this works, sort of (the spoken words played using the send command are, erm, very robotic!). The tone command doesn’t work, but I’ve seen in the docs that this was an outstanding issue for some versions of Windows, so maybe it doesn’t work properly under Wine anyway…

Finally, I note that if you leave the remote desktop running, a screensaver kicks in…

Although the audio support isn’t brilliant (maybe there are “settings” in the container config that can improve it?) the support is more or less good enough, as is, for audio feedback / beeps etc. And just about good enough for the RobotLab activities.

What this means is that now I do have a route for running RobotLab, via a browser, with sort of desktop support.

One other thing to note relates to network addressing. If I start the Guacamole and RobotLab containers together via a docker-compose.yml file, I’m guessing I should be able to define a Docker Compose network to connect them and then use the service name as the network address/alias in the Guacamole connection settings?
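(For the record, I’m fairly sure the answer is yes: Compose puts the services it starts onto a shared network on which each container is reachable by its service name. A manual sketch of the same idea, using the containers above:)

docker network create guacnet
docker run -d --name guacamole --network guacnet -p 8080:8080 -v "$PWD/guac_config":/config oznu/guacamole
docker run -d --name tm129 --network guacnet --shm-size 1g ousefulcoursecontainers/tm129rdp
# the Guacamole connection hostname can then simply be "tm129", on port 3389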

But I’m creatively drained atm and can’t face trying to get anything else working today…

PS another remote desktop protocol, SPICE, which I found via mentions in OpenStack docs…: [t]he SPICE project aims to provide a complete open source solution for remote access to virtual machines in a seamless way so you can play videos, record audio, share usb devices and share folders without complications [docs and a simple looking howto]. Not sure how deprecated / live this is?

Browser Based Virtualised Environments for Cybersecurity Education – Labtainers and noVNC

Whilst my virtualisation ramblings may seem to be taking a scattergun approach, I’m actually trying to explore the space in a way that generalises meaningfully in the context of open and distance education.

The motivating ideas essentially boil down to these two questions / constraints:

  • can we package a software application once, such that we can then run it cross-platform, anywhere, both locally and remotely?
  • can we package the same software application so that it is available via a universal client? I tend to favour the browser as a universal client, but until I can figure out how to do audio from remote desktops via a browser, I also appreciate there may be a need for something like an RDP client too.

I’m also motivated by “open” on the one hand (can we share the means of production, as well as the result?) and by factory working on the other: will the approach used to deliver one application scale to other applications in different subject areas, or to the same application over time, as it goes through various versions?

My main focus has been on environments for running our TM351 applications (Jupyter notebooks, various databases, OpenRefine), on keeping legacy applications running (RobotLab, Genie, Daisyworld), and on exploring other virtualised desktops (eg for the VREP simulator), but there is also quite a lot of discussion internally around using virtualised environments to support our cybersecurity courses.

I suspect this is both a mature and an evolving space:

  • mature, in that folk have been using virtual machines to support this sort of course for some time; for example, this Offline Capture The Flag-Style Virtual Machine for Cybersecurity Education from University of Birmingham that dates back to 2015, or this SEED Labs — Hands-on Labs for Security Education from Syracuse University that looks like it dates back to 2002. There is also the well-known Kali Linux distribution that is widely used for digital forensics, penetration testing, ethical hacking training, and so on. (The OU also has a long standing Masters level course that has been using a VM for years…)
  • evolving, in that the technology for packaging (eg Docker) and running (eg the growth in cloud services) is evolving quickly, as are the increasing opportunities for creating things like structured notebook scripts around cybersecurity activities.

Recently, I also came across Labtainers, a set of virtual machines produced by the US Naval Postgraduate School’s Center for Cybersecurity and Cyber Operations billed as “fully packaged Linux-based computer science lab exercises with an initial emphasis on cybersecurity. Labtainers include more than 40 cyber lab exercises and tools to build your own.”

Individual activities are packaged in individual Docker containers, and a complete distribution is available bundled into a VirtualBox virtual machine (there’s also a Labtainer design guide). There’s also a paper here: Individualizing Cybersecurity Lab Exercises with Labtainers, Michael F. Thompson & Cynthia E. Irvine, IEEE Security & Privacy, Vol 16(2), March/April 2018, pp. 91-95, DOI: 10.1109/MSP.2018.1870862.

I actually spotted Labtainers via a demo by Olivier Berger / @olberger that was in part demonstrating a noVNC bridge container he’s been working on. I first posted about an X11 / XPRA bridge container I’d come across here; that post describes the JAremko/docker-x11-bridge container, which I can run to provide a noVNC desktop through my browser; we can then run separate application containers and mount the bridge container as a device, exposing the containerised applications on the noVNC desktop. Olivier’s patched version of the fcwu/docker-ubuntu-vnc-desktop container (which offers access to “an Ubuntu LXDE and LXQT desktop environment”) can be used in a similar way.

You can see it in action with the labtainers here:

A supporting blog post can be found here: Labtainers in a Web desktop through noVNC X11 proxy, full docker containers; there’s also an associated repo.

From the looks of it, Olivier has been on a similar journey to myself. Another post, this time from last year, describes a Demo of displaying labtainers labs in a Web browser through Guacamole (repo). Guacamole is an Apache project that provides a browser based remote desktop that can act as a noVNC or RDP client (I think…?!).

One thing I’m wondering now is whether this sort of thing can be packaged using the “new” (to my recollection, third(?) time of launching?!) Docker Application CNAB packaging format?

(For all their attempts to appeal to a wider audience, I think Docker keep missing a trick by not putting the Kitematic crew back together…)

Virtualisation and the Chances of a Google Chrome (Virtual) App(liance) Store

If nothing else, 2010 should see the launch of Google Chrome OS, a PC operating system to rival Linux and, if the Googlers have their way, Microsoft Windows and Mac OS X.

Part of the unique proposition of Chrome OS is the notion that applications will run on the web, rather than on the desktop. This doesn’t mean that you won’t be able to run them when you’re offline, though – several of Google’s current “web” applications, such as Google Docs and Gmail, already support an offline mode using a browser extension called Google Gears. (Note that Gears looks set to be deprecated in favour of native HTML 5.)

Google is also gearing up (doh!) to offer cloud based storage through Google Docs (upload any file to Google Docs), so you’ll be able to use that as a backup for your files (letting Picasa take care of the photos, and YouTube the videos, if you want to let Google play the “all your files are belong to us” game). NB it occurs to me that Google doesn’t yet have a movie or audio editing product…? (The YouTube Remixer that appeared in 2007 was quickly dropped.) One to watch there on the acquisition trail, methinks…? (Why didn’t they take Jumpcut off Yahoo’s hands, I wonder?)

One thing that I don’t understand is the implication that, because Chrome OS won’t run desktop apps, its appeal will be limited. As ZDNet put it: Google’s Chrome OS: Will you give up desktop apps?

I have to admit that when Chrome OS was originally announced, one of my first thoughts was that they would offer an in-built virtualisation manager. (Virtualisation allows you to create a sandbox, isolated from your current operating system, into which you can drop another operating system and its attendant applications.)

VMware, Parallels and VirtualBox, for example, all offer the ability to install one or more isolated containers on your own desktop within which you can install and run additional operating systems at the same time: I could run Windows and Linux within separate containers on my Mac desktop, for instance.

If Chrome OS had in-built virtualisation support, users could download and install their own virtual appliances (preconfigured operating system+application stacks bundles) in order to run desktop applications.

Although there are a few virtual appliance download sites already out there, it seems to me as if they’d have a natural, if heavyweight, opportunity to provide an app store (i.e. a virtual appliance store)?

But Google doesn’t seem to be doing that. That said, Chrome OS will apparently support the ability to write applications in native code (i.e. programmes that can be compiled to run against the computer’s processor rather than on top of JavaScript or Flash virtual machines) – Google Chrome OS goes native (code). This is apparently being done for performance reasons; but I can’t quite get my head round the extent to which this differs from a traditional desktop app model? Maybe the idea is that web applications can actually download and install native code plugins that run in a tiny sandbox (“virtual plugins/libraries” as opposed to virtual appliances?)

(If truth be told, I’m getting a little out of my depth here… my relevant knowledge is about 20 years out of date;-)

PS In a move I don’t understand, and that prompted this post, virtualisation company VMware today announced it had bought Zimbra, providers of online email and collaboration apps (In Acquiring Zimbra, VMware Moves Squarely Toward Apps and Collaboration). WTF is going on?