Running lOCL

So I think I have the bare bones of a lOCL (local Open Computing Lab) thing’n’workflow running…

I’m also changing the name… to VOCL — Virtual Open Computing Lab … which is an example of a VCL, Virtual Computing Lab, that runs VCEs, Virtual Computing Environments. I think…

If you are on Windows, Linux, a Mac or a 32 bit Raspberry Pi, you should be able to do the following:

Next, we will install a universal browser based management tool, portainer:

  • install portainer:
    • on Mac/Linux/RPi, run: docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce
    • on Windows, the start up screen suggests docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v \\.\pipe\docker_engine:\\.\pipe\docker_engine portainer/portainer-ce may be the way to go?

On my to do list is to customise portainer a bit and give it some lOCL branding.

On first run, portainer will prompt you for an admin password (at least 8 characters).

You’ll then have to connect to a Docker Engine. Let’s use the local one we’re actually running the application with…

When you’re connected, select to use that local Docker Engine:

Once you’re in, grab the feed of lOCL containers (I’ll be changing that URL sometime soon… NOW IN the OpenComputingLab/locl-templates Github repo) and use it to feed the portainer templates listing:

From the App Templates, you should now be able to see a feed of example containers:

The [desktop only] containers can only be run on desktop (amd64) processors, but the others should run on a desktop computer or on a Raspberry Pi using docker on a 32 bit Raspberry Pi operating system.

>I should be able to generate Docker images for the 64 bit RPi O/S too, but need to get a new SD card… Feel free to chip in to help pay for bits and bobs — SD cards, cables, server hosting, an RPi 8GB and case, etc — or a quick virtual coffee along the way…

The magic that allows containers to be downloaded to Raspberry Pi devices or desktop machines is based on:

  • Docker cross-builds (buildx), which allow you to build containers targeted to different processors;
  • Docker manifest lists that let you create an index of images targeted to different processors and associate them with a single "virtual" image. You can then docker pull X and depending on the hardware you’re running on, the appropriate image will be pulled down.
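The selection step can be sketched in a few lines. This is a toy illustration of the lookup Docker performs when you pull a multi-arch image, not its actual implementation, and the digests are made up:

```python
# Toy sketch of how a manifest list maps platforms to images. The structure
# mirrors the "manifests" array you see from `docker manifest inspect`,
# but the digests here are invented for illustration.
manifest_list = [
    {"digest": "sha256:aaa...", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:bbb...", "platform": {"os": "linux", "architecture": "arm", "variant": "v7"}},
]

def select_image(manifests, os, arch, variant=None):
    """Return the digest whose platform matches the requesting host."""
    for m in manifests:
        p = m["platform"]
        if p["os"] == os and p["architecture"] == arch and p.get("variant") == variant:
            return m["digest"]
    return None

# An RPi running a 32 bit O/S reports itself as linux/arm/v7...
print(select_image(manifest_list, "linux", "arm", "v7"))  # sha256:bbb...
```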

For more on cross built containers and multiple architecture support, see Multi-Platform Docker Builds. This describes the use of manifest lists which let us pull down architecture appropriate images from the same Docker image name. See also Docker Multi-Architecture Images: Let docker figure the correct image to pull for you.

To cross-build the images, and automate the push to Docker Hub, along with an appropriate manifest list, I used a Github Action workflow using the recipe described here: Shipping containers to any platforms: multi-architectures Docker builds.

Here’s a quick summary of the images so far; generally, they either run just on desktop machines (specifically, these are amd64 images, but I think that’s the default for Docker images anyway? At least until folk start buying the new M1 Macs…) or on both desktop machines and the RPi:

  • Jupyter notebook (oulocl/vce-jupyter): a notebook server based on andresvidal/jupyter-armv7l because it worked on RPi; this image runs on desktop and RPi computers. I guess I can now start iterating on it to make a solid base Jupyter server image. The image also bundles pandas, matplotlib, numpy, scipy and sklearn. These seem to take forever to build using buildx so I built wheels natively on an RPi and added them to the repo so the packages can be installed directly from the wheels. Python wheels are named according to a convention which bakes in things like the Python version and processor architecture that the wheel is compiled for.

  • the OpenRefine container should run absolutely everywhere: it was built using support for a wide range of processor architectures;

  • the TM351 VCE image is the one we shipped to TM351 students in October; desktop machines only at the moment…

  • the TM129 Robotics image is the one we are starting to ship to TM129 students right now; it needs a rebuild because it’s a bit bloated, but I’m wary of doing that with students about to start; hopefully I’ll have a cleaner build for the February start;

  • the TM129 POC image is a test image to try to get the TM129 stuff running on an RPi; it seems to, but the container is full of all sorts of crap as I tried to get it to build the first time. I should now try to build a cleaner image, but I should really refactor the packages that bundle the TM129 software first because they distribute the installation weight and difficulty in the wrong way.

  • the Jupyter Postgres stack is a simple Docker Compose proof of concept that runs a Jupyter server in one container and a PostgreSQL server in a second, linked container. This is perhaps the best way to actually distribute the TM351 environment, rather than the monolithic bundle. At the moment, the Jupyter environment is way short of the TM351 environment in terms of installed Python packages etc., and the Postgres database is unseeded.

  • TM351 also runs a Mongo database, but there are no recent or supported 32 bit Mongo databases any more, so that will have to wait till I get a 64 bit O/S running on my RPi. A test demo with an old/legacy 32 bit Mongo image did work okay in a docker-compose portainer stack, and I could talk to it from the Jupyter notebook. It’s a bit of a pain because it means we won’t be able to have the same image running on 32 and 64 bit RPis. And TM351 requires a relatively recent version of Mongo (old versions lack some essential functionality…).
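The wheel naming convention mentioned in the Jupyter image bullet above bakes the Python version and processor architecture into the filename. A minimal sketch of unpicking it (assuming no optional build tag in the name):

```python
def parse_wheel_name(filename):
    """Split a wheel filename into the tags baked into its name
    (PEP 427: distribution-version-pythontag-abitag-platformtag.whl)."""
    stem = filename[:-len(".whl")]
    # Assumes no optional build tag, so exactly five dash-separated fields.
    name, version, py_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": platform_tag}

# The kiwisolver wheel from the Dockerfile crib later in this post:
info = parse_wheel_name("kiwisolver-1.1.0-cp38-cp38-linux_armv7l.whl")
print(info["python"], info["platform"])  # cp38 linux_armv7l
```

So a cp38 / linux_armv7l wheel only installs on Python 3.8 on a 32 bit ARM O/S, which is exactly why the natively built wheels matter.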

Imagining a Local Open Computing Lab Server (lOCL)

In the liminal space between sleep and wakefulness of earlier this morning, several pieces of things I’ve been pondering for years seemed to come together:

  • from years ago, digital application library shelves;
  • from months ago, a containerised Open Computing Lab (now very deprecated; things have moved on…);
  • from the weekend, trying to get TM129 and TM351 software running in containers on a Raspberry Pi 400;
  • from yesterday, a quick sketch with Portainer.

And today, the jigsaw assembled itself in the form of a local Open Computing Lab environment.

This combines a centrally provided, consistently packaged approach to the delivery of self-contained virtualised computational environments (VCEs), in the form of Docker containerised services and applications accessed over http, with a self-hosted (locally or remotely) environment that provides discovery, retrieval and deployment of these VCEs.

At the moment, for TM351 (live in October 2020), we have a model where students on the local install route (we also have a hosted offering) do the following:

  • download and install Docker
  • from the command line, pull the TM351 VCE
  • from the command line, start up the VCE and then access it from the browser.

In TM129, there is a slightly simpler route available:

  • download and install Docker
  • from the command line, pull the TM129 VCE
  • using the graphical Docker Dashboard (Windows and Mac), launch and manage the container.

The difference arises from the way a default Jupyter login token is set in the TM129 container, but a random token is generated in the TM351 VCE, which makes logging in trickier. At the moment, we can’t set the token (as an environment variable) via the Docker Dashboard, so getting into the TM351 container is fiddly. (Easily done, but another step that requires fiddly instructions.)

Tinkering with the Raspberry Pi over the last few days, and thinking about the easiest route to setting it up as a Docker engine server that can be connected to over a network, I was reminded by @kleinee in a Twitter exchange of the open-source portainer application [repo], a browser based UI for managing Docker environments.

Portainer offers several things:

  • the ability to connect to local or remote Docker Engines
  • the ability to manage Docker images, including pulling them from a specified repo (DockerHub, or a private repo)
  • the ability to manage containers (start, stop, inspect, view logs, etc); this includes the ability to set environment variables at start-up
  • the ability to run compositions
  • a JSON feed powered menu listing a curated set of images / compositions.
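For reference, that template feed is just a JSON document. This is my guess at the minimal shape of a v2 feed entry; the field values are made up for illustration, not taken from the real locl-templates feed:

```python
import json

# A guess at the shape of a Portainer v2 app template feed; the values
# here are invented, not the actual locl-templates entries.
feed = {
    "version": "2",
    "templates": [
        {
            "type": 1,  # 1 = single container; stacks use a different type
            "title": "Jupyter notebook (lOCL demo)",
            "description": "Minimal Jupyter notebook server",
            "image": "oulocl/vce-jupyter",
            "ports": ["8888/tcp"],
            "env": [{"name": "JUPYTER_TOKEN", "label": "Jupyter login token"}],
        }
    ],
}

print(json.dumps(feed, indent=2))
```

Point portainer at a URL serving a document like this and each entry appears in the App Templates listing.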

So how might this work?

  • download and install Docker
  • pull a lOCL portainer Docker image and set it running
  • login
  • connect to a Docker Engine; in the screenshot below, the Portainer application is running on my RPi
  • view the list of course VCEs
  • select one of the listed VCEs to run it using its predefined settings
  • customise container settings via advanced options
  • we can also create a container from scratch, including setting environment variables, start directories etc
  • we can specify volume mounts etc

All the above uses the default UI, with custom settings via a control panel to set the logo and specify the application template feed (the one I was using is here).

I’m also using the old portainer UI (I think) and need to try out the new one (v2.0).

So… next steps?

  • fork the portainer repo and do a simplified version of it (or at least, perhaps use CSS display:none to hide some of the more confusing elements, perhaps toggled with a ‘simple/full UI’ button somewhere)
  • cross build the image for desktop machines (Win, Mac etc) and RPi
  • cross build TM351 and TM129 VCE images for desktop machines and RPi, and perhaps also some other demo containers, such as a minimal OU branded Jupyter notebook server and perhaps an edge demo OU branded Jupyter server with lots of my extensions pre-installed. Maybe package up an environment and teaching materials for the OpenLearn Learn to Code for Data Analysis course as a demo for how OpenLearn might be able to make use of this approach
  • instructions for set up on:
    • desktop computer (Mac, Win, Linux)
    • RPi
    • remote host (Digital Ocean is perhaps simplest; portainer does have a tab for setting up against Azure, but it seems to require finding all sorts of fiddly tokens)

Two days, I reckon, to pull bits together (so four or five, when it doesn’t all "just" work ;-)

But is it worth it?

Keeping Track of TM129 Robotics Block Practical Activity Updates…

Not being the sort of person to bid for projects that require project plans and progress reports and end of project reports, and administration, and funding that you have to spend on things that don’t actually help, I tend to use this blog to keep track of things I’ve done in the form of blog posts, as well as links to summary presentations of work in progress (there is nothing ever other than work in progress…).

But I’ve not really been blogging as much as I should, so here’s a couple of links to presentations that I gave last week relating to the TM129 update:

  • Introducing RoboLab: an integrated robot simulator and Jupyter notebook environment for teaching and learning basic robot programming: this presentation relates to the RoboLab environment I’ve been working on, which integrates a Javascript robot simulator based on ev3devsim, wrapped in a jp_proxy_widget, with Jupyter notebook based instructional material. RoboLab makes heavy use of Jupyter magics to control the simulator, download programs to it, and retrieve logged sensor data from it. I think it’s interesting but no-one else seems to. I had to learn a load of stuff along the way: Javascript, HTML and CSS not the least among them.
  • Using Docker to deliver virtual computing environments (VCEs) to distance education students: this represents some sort of summary of my thinking around delivering virtualised software to students in Docker containers. We’ve actually been serving containers via Kubernetes on Azure, using LTI-authed links from the Moodle VLE to launch temporary Jupyter notebook servers via an OU hosted JupyterHub server, since Spring 2018, and shipped the TM351 VCE (virtual computing environment) to TM351 students this October, with TM129 students now starting to access their Dockerised VCE, which also bundles all the practical activity instructional notebooks. I believe that the institution is looking to run a "pathfinder" project (?) regarding making containerised environments available to students in October, 2021. #ffs Nice to know when your work is appreciated, not… At least someone will make some internal capital in promotion and bonus rounds from that groundbreaking pathfinder work via OU internal achievement award recognition in 2022.

The TM129 materials also bundle some neural network / MLP / CNN activities that I intend to write up in a similar way at some point (next week maybe; but the notes take f****g hours to write…). I think some of the twists in the way the material is presented are quite novel, but then, wtf do I know.

There’s also bits and bobs I explored relating to embedding audio feedback into RoboLab, which I thought might aid accessibility as well as providing a richer experience for all users. You’d have thought I might be able to find someone, anyone, in the org who might be interested in bouncing more ideas around that (we talk up our accessibilitiness(?!)), or maybe putting me straight about why it’s a really crappy and stupid thing to do, but could I find a single person willing to engage on that? Could I f**k…

In passing, I note I ranted about TM129 last year (Feb 2019) in I Just Try to Keep On Keeping On Looking at This Virtual(isation) Stuff…. Some things have moved on, some haven’t. I should probably do a reflective thing comparing that post with the things I ended up tinkering with as part of the TM129 Robotics block practical activity update, but then again, maybe I should go read a book or listen to some rally podcasts instead…

Quick Tinker With Raspberry Pi 400

Some quick notes on a quick play with my Raspberry Pi 400 keyboard thing…

Plugging it in to my Mac (having found the USB2ethernet dongle because Macs are too "thin" to have proper network sockets), and having realised the first USB socket I tried on my Mac doesn’t seem to work (no idea if this is broken generally, or just with the dongle), I plugged an ethernet cable between the Mac and the RPi 400 and tried a ping, which seemed to work:

ping raspberrypi.local

then tried to SSH in:

ssh pi@raspberrypi.local

No dice… seems that SSH is not enabled by default, so I had to find the mouse and HDMI cable, rewire the telly, go into the Raspberry Pi Configuration tool, Interfaces tab, and check the ssh option, unwire everything, reset the telly, redo the ethernet cable between Mac and RPi 400 and try again:

ssh pi@raspberrypi.local

and with the default raspberry password (unchanged, of course, or I might never get back in again!), I’m in. Yeah:-)

> I think the set-up just requires a mouse, but not a keyboard. If you buy a bare bones RPi, I think this means to get running you need: RPi+PSU+ethernet cable, then for the initial set-up: mouse + micro-HDMI cable + access to screen with HDMI input.

> You should also be able to just plug your RPi 400 into your home wifi router using an ethernet cable, and the device should appear (mine did…) at the address raspberrypi.local.

> Security may be an issue, so we need to tell the user to change the pi password when they have keyboard access. During setup, users could unplug the incoming broadband cable to their home router until they have a chance to reset the password, or switch off wifi on their laptop etc if they set up via an ethernet cable connection to the laptop etc.

Update everything (I’d set up the Raspberry Pi’s connection settings to our home wifi network when I first got it, though with a direct ethernet cable connection, you shouldn’t need to do that?):

sudo apt update && sudo apt upgrade -y

and we’re ready go…

Being of a trusting nature, I’m lazy enough to use the Docker convenience installation script:

curl -sSL | sh

then add the pi user to the docker group:

sudo usermod -aG docker pi

and: logout

then ssh back in again…

I’d found an RPi Jupyter container previously at andresvidal/jupyter-armv7l (Github repo: andresvidal/jupyter-armv7l), so does it work?

docker run -p 8877:8888 -e JUPYTER_TOKEN="letmein" andresvidal/jupyter-armv7l

It certainly does… the notebook server is there and running on http://raspberrypi.local:8877 and the token letmein does what it says on the tin…

> For a more general solution, just install portainer (docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce) and then go to http://raspberrypi.local via a browser and you should be able to install / manage Docker images and containers via that UI.

Grab a bloated TM129 style container (no content):

docker pull outm351dev/nbev3devsimruns

(note that you may need to free space on SD cards; suggested deletions somewhere further down this post).

To autostart the container, edit /etc/rc.local (sudo nano /etc/rc.local) and before the exit 0 line add:

docker run -d -p 80:8888 --name tm129vce  -e JUPYTER_TOKEN="letmein" outm351dev/nbev3devsimruns

Switch off / unplug the RPi and switch it on again; the server should be viewable at http://raspberrypi.local with token letmein. Note that files are not mounted onto the desktop. They could be, but I think I heard somewhere that repeated backup writes every few seconds may degrade the SD card over time?

How about if we try docker-compose?

This isn’t part of the docker package, so we need to install it separately:

pip3 install docker-compose

(I think that pip may be set up to implicitly use --extra-index-url= which seems to try to download prebuilt RPi wheels from…?)

The following docker-compose.yaml file should load a notebook container wired to a PostgreSQL container.

version: "3.5"

services:
  jupyter:
    image: andresvidal/jupyter-armv7l
    environment:
      JUPYTER_TOKEN: "letmein"
    volumes:
      - "$PWD/TM351VCE/notebooks:/home/jovyan/notebooks"
      - "$PWD/TM351VCE/openrefine_projects:/home/jovyan/openrefine"
    networks:
      - tm351
    ports:
      - 8866:8888

  postgres:
    image: arm32v7/postgres
    ports:
      - 5432:5432
    networks:
      - tm351

  mongo:
    image: apcheamitru/arm32v7-mongo
    ports:
      - 27017:27017
    networks:
      - tm351

networks:
  tm351:


Does it work?

docker-compose up

It does, I can see the notebook server on http://raspberrypi.local:8866/.

Can we get a connection to the database server? Try the following in a notebook code cell:

# Let's install a host of possibly useful helpers...
%pip install psycopg2-binary sqlalchemy ipython-sql

# Load in the magic...
%load_ext sql

# Set up a connection string

# Connect the magic...
%sql {PGCONN}
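The PGCONN setting itself seems to have gone missing above; something of this shape should work, though note that the hostname, user, password and database here are hypothetical defaults (the hostname matching a compose service name), not values recovered from the original post:

```python
from urllib.parse import urlparse

# HYPOTHETICAL values: "postgres" as the docker-compose service name plus
# the postgres image's default user/database; adjust to match your stack.
PGCONN = "postgresql://postgres:postgres@postgres:5432/postgres"

# Sanity check the string parses as expected
parts = urlparse(PGCONN)
print(parts.hostname, parts.port)  # postgres 5432
```

Inside the compose network, the service name resolves as a hostname, which is why the host is "postgres" rather than localhost.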

Then in a new notebook code cell:

%%sql
CREATE TABLE quickdemo(id INT, name VARCHAR(20), value INT);
INSERT INTO quickdemo VALUES(1,'This',12);
INSERT INTO quickdemo VALUES(2,'That',345);

SELECT * FROM quickdemo;

And that seems to work too:-)

How about the Mongo stuff?

%pip install pymongo
from pymongo import MongoClient
# Monolithic VM addressing - 'localhost', 27351
# docker-compose connection - 'mongo', 27017
c = MongoClient('mongo', 27017)

# And test
db = c.get_database('test-database')
collection = db.test_collection
post_id = collection.insert_one({'test': 'test record'}).inserted_id

A quick try at installing the ou-tm129-py package seemed to get stuck on the Installing build dependencies ... step, though I could install most packages separately, even if the builds were a bit slow (scikit-learn seemed to cause the grief?).

Running pip3 wheel PACKAGENAME seems to build .whl files into the local directory, so it might be worth creating some wheels and popping them on Github… The Dockerfile for the Jupyter container I’m using gives a crib:

# Copyright (c) Andres Vidal.
# Distributed under the terms of the MIT License.
FROM arm32v7/python:3.8

LABEL created_by=
ARG wheelhouse=


RUN pip install \
    $wheelhouse/kiwisolver-1.1.0-cp38-cp38-linux_armv7l.whl # etc

Trying to run the nbev3devsim package to load the nbev3devsimwidget, jp_proxy_widget threw an error, so I raised an issue and it’s already been fixed… (thanks, Aaron:-)

Trying to install jp_proxy_widget from the repo threw an error — npm was missing — but the following seemed to fix that:



tar xvzf node-v15.2.0-linux-armv7l.tar.gz 

mkdir -p /opt/node
cp -r node-v15.2.0-linux-armv7l/* /opt/node

#Add node to your path so you can call it with just "node"
PROFILE_TEXT='export PATH=$PATH:/opt/node/bin'
echo "$PROFILE_TEXT" >> ~/.bash_profile
source ~/.bash_profile

# linking for sudo node (TO FIX THIS - NODE DOES NOT NEED SUDO!!)
ln -s /opt/node/bin/node /usr/bin/node
ln -s /opt/node/lib/node /usr/lib/node
ln -s /opt/node/bin/npm /usr/bin/npm
ln -s /opt/node/bin/node-waf /usr/bin/node-waf

From the notebook code cell, the nbev3devsim install requires way too much (there’s a lot of crap for the NN packages which I need to separate out… crap, crap, crap:-( Everything just hangs on sklearn AND I DON'T NEED IT.

%pip install nest-asyncio seaborn tqdm  nb-extension-empinken Pillow
%pip install sklearn
%pip install --no-deps nbev3devsim

So I have to stop now – way past my Friday night curfew… why the f**k didn’t I do the packaging more (c)leanly?! :-(

Memory Issues

There’s a lot of clutter on the memory card supplied with the Raspberry Pi 400, but we can free up some space quite easily:

sudo apt-get purge wolfram-engine libreoffice* scratch -y
sudo apt-get clean
sudo apt-get autoremove -y

# Check free space
df -h

Check O/S: cat /etc/os-release

Installing scikit-learn is an issue. Try adding more build support inside the container:

! apt-get update && apt-get install gfortran libatlas-base-dev libopenblas-dev liblapack-dev -y
%pip install scikit-learn

There is no Py3.8 wheel for sklearn on piwheels at the moment (only 3.7)?
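Which is the nub of the problem: pip will only use a wheel whose tags match the running interpreter, and otherwise falls back to a source build. A toy version of that check:

```python
# Toy version of pip's wheel compatibility check: the python and platform
# tags baked into the wheel name must match the running interpreter.
def wheel_usable(py_tag, platform_tag, our_py="cp38", our_platform="linux_armv7l"):
    return py_tag == our_py and platform_tag == our_platform

# piwheels' scikit-learn wheel is tagged cp37, so Python 3.8 rejects it
# and pip starts the painfully slow source build instead:
print(wheel_usable("cp37", "linux_armv7l"))  # False
print(wheel_usable("cp38", "linux_armv7l"))  # True
```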

More TM351 Components

Look up the processor:

cat /proc/cpuinfo

Returns: ARMv7 Processor rev 3 (v7l)


uname -a


Linux raspberrypi 5.4.72-v7l+ #1356 SMP Thu Oct 22 13:57:51 BST 2020 armv7l GNU/Linux

Better, get the family as:


I do think there are 64 bit RPis out there though, using ARMv8? And the RPi 400 advertises itself as "Featuring a quad-core 64-bit processor"? So what am I not understanding? Ah…
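For reference, my working summary of how the machine string reported by the O/S maps onto Docker platform names (my own table, so treat with caution):

```python
# `uname -m` output -> Docker --platform string. The gotcha: a 64 bit CPU
# running the 32 bit Raspberry Pi OS still reports armv7l, so it pulls
# 32 bit images regardless of the silicon.
UNAME_TO_DOCKER_PLATFORM = {
    "armv7l": "linux/arm/v7",   # 32 bit O/S, even on 64 bit silicon
    "aarch64": "linux/arm64",   # a 64 bit O/S on the same board
    "x86_64": "linux/amd64",    # typical desktop
}

print(UNAME_TO_DOCKER_PLATFORM["armv7l"])  # linux/arm/v7
```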

So presumably, with the simple 32 bit O/S we can’t use arm64v8/mongo, and instead we need a 32 bit Mongo, support for which was deprecated in Mongo 3.2? Old versions are still findable, though.

But TM351 has a requirement on a much more recent MongoDB… So maybe we do need a new SD card image? That could also be built as a much lighter custom image, perhaps with an OU customised desktop…

In the meantime, maybe worth moving straight to Ubuntu 64 bit server?

There’s an Ubuntu installation guide for the RPi, and there also looks to be a 64 bit RPi / Ubuntu image with Docker already baked in.

Building an OpenRefine Docker container for Raspberry Pi

I’ve previously posted a cribbed Dockerfile for building an Alpine container that runs OpenRefine ( How to Create a Simple Dockerfile for Building an OpenRefine Docker Image), so let’s have a go at one for building an image that can run on an RPi:

FROM arm32v7/alpine

#We need to install git so we can clone the OpenRefine repo
RUN apk update && apk upgrade && apk add --no-cache git bash openjdk8

#Download a couple of required packages
RUN apk update && apk add --no-cache wget bash
#We can pass variables into the build process via --build-arg variables
#We name them inside the Dockerfile using ARG, optionally setting a default value
ARG RELEASE

#ENV vars are environment variables that get baked into the image
#We can pass an ARG value into a final image by assigning it to an ENV variable
#There's a handy discussion of ARG versus ENV here:
ENV RELEASE=$RELEASE
#Download a distribution archive file
RUN wget --no-check-certificate$RELEASE/openrefine-linux-$RELEASE.tar.gz
#Unpack the archive file and clear away the original download file
RUN tar -xzf openrefine-linux-$RELEASE.tar.gz && rm openrefine-linux-$RELEASE.tar.gz
#Create an OpenRefine project directory
RUN mkdir /mnt/refine
#Mount a Docker volume against the project directory
VOLUME /mnt/refine
#Expose the server port
EXPOSE 3333
#Create the start command.
#Note that the application is in a directory named after the release
#We use the environment variable to set the path correctly
CMD openrefine-$RELEASE/refine -i -d /mnt/refine

Following the recipe here — Building Multi-Arch Images for Arm and x86 with Docker Desktop — we can build an arm32v7 image as follows:

# See what's available...
docker buildx ls

# Create a new build context (what advantage does this offer?)
docker buildx create --name rpibuilder

# Select the build context
docker buildx use rpibuilder

# And cross build the image for the 32 bit RPi o/s:
docker buildx build --platform linux/arm/v7 -t outm351dev/openrefinetest:latest --push .

For more on cross built containers and multiple architecture support, see Multi-Platform Docker Builds. This describes the use of manifest lists which let us pull down architecture appropriate images from the same Docker image name. For more on this, see Docker Multi-Architecture Images: Let docker figure the correct image to pull for you. For an example Github Action workflow, see Shipping containers to any platforms: multi-architectures Docker builds. For issues around new Mac Arm processors, see eg Apple Silicon M1 Chips and Docker.

With the image built and pushed, we can add the following to the docker-compose.yaml file to launch the container via port 3333:

  openrefine:
    image: outm351dev/openrefinetest
    ports:
      - 3333:3333

which seems to run okay:-)

Installing the ou-tm129-py package

Trying to build the ou-tm129-py package into an image is taking forever on the sklearn build step. I wonder about setting up a buildx process to use something like Docker custom build outputs to generate wheels. I wonder if this could be done via a Github Action with the result pushed to a Github repo?

Hmmm.. maybe this will help for now? oneoffcoder/rpi-scikit (and Github repo). There is also a cross-building Github Action demonstrated here: Shipping containers to any platforms: multi-architectures Docker builds, as well as an official Docker Github Action.

Then install node in a child container for the patched jp_proxy_widget build (for some reason, pip doesn’t run in the Dockerfile: need to find the correct py / pip path):

FROM oneoffcoder/rpi-scikit

RUN wget 

RUN tar xvzf node-v15.2.0-linux-armv7l.tar.gz 

RUN mkdir -p /opt/node
RUN cp -r node-v15.2.0-linux-armv7l/* /opt/node

RUN ln -s /opt/node/bin/node /usr/bin/node
RUN ln -s /opt/node/lib/node /usr/lib/node
RUN ln -s /opt/node/bin/npm /usr/bin/npm
RUN ln -s /opt/node/bin/node-waf /usr/bin/node-waf

and in a notebook cell try:

!pip install --upgrade
!pip install --upgrade tqdm
!apt-get update && apt-get install -y libjpeg-dev zlib1g-dev
!pip install --extra-index-url=  Pillow #-8.0.1-cp37-cp37m-linux_armv7l.whl 
!pip install nbev3devsim
from nbev3devsim.load_nbev3devwidget import roboSim, eds

%load_ext nbev3devsim

Bah.. the Py 3.low’ness of it is throwing an error in nbev3devsim around character encoding of loaded in files. #FFS

There is actually a whole stack of related containers available from the same source.

Should I fork this and start to build my own, more recent versions? They seem to use conda, which may simplify the sklearn installation? But it looks like recent Py supporting packages aren’t there? ARRGGHHHH.

Even the "more recent" is now deprecated.

Bits and pieces

Where do the packages used for your current Python environment live when using a Jupyter notebook?

from distutils.sysconfig import get_python_lib
get_python_lib()
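(The distutils route is deprecated in more recent Pythons; sysconfig answers the same question:)

```python
# Where packages for the current Python environment get installed
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```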

So.. I got scikit to pip install after who knows how long by installing from a Jupyter notebook code cell into a container that I think was based on the following:

FROM andresvidal/jupyter-armv7l

RUN pip3 install --extra-index-url= myst-nb numpy pandas matplotlib jupytext plotly
RUN wget  &&  tar xvzf node-v15.2.0-linux-armv7l.tar.gz  &&  mkdir -p /op$

# Pillow support?
RUN apt-get install -y libjpeg-dev zlib1g-dev libfreetype6-dev  libopenjp2-7 libtiff5
RUN mkdir -p wheelhouse && pip3 wheel --wheel-dir=./wheelhouse Pillow && pip3 install --no-index --find-links=./wheelhouse Pillow
RUN pip3 install --extra-index-url= blockdiag blockdiagMagic

#RUN apt-get install -y gfortran libatlas-base-dev libopenblas-dev liblapack-dev
#RUN pip3 install --extra-index-url= scipy

RUN pip3 wheel --wheel-dir=./wheelhouse durable-rules && pip3 install --no-index --find-links=./wheelhouse durable-rules

#RUN pip3 wheel --wheel-dir=./wheelhouse scikit-learn &amp;&amp; pip3 install --no-index --find-links=./wheelhouse scikit-learn
RUN pip3 install
RUN pip3 install --upgrade tqdm && pip3 install --no-deps nbev3devsim

In the notebook, I tried to generate wheels along the way:

!apt-get install -y  libopenblas-dev gfortran libatlas-base-dev liblapack-dev libblis-dev
%pip install --no-index --find-links=./wheelhouse scikit-learn
%pip wheel  --log skbuild.log  --wheel-dir=./wheelhouse scikit-learn

I downloaded the wheels from the notebook home page (select the files, click Download) so at the next attempt I’ll see if I can copy the wheels in via the Dockerfile and install sklearn from the wheel.

The image is way too heavy – and there is a lot of production crap in the ou-tm129-py image that could be removed. But I got the simulator to run :-)

So now’s the decision as to whether to try to pull together as lite a container as possible. Is it worth the effort?

Mounting but not COPYing wheels into a container

The Docker build secret Dockerfile feature looks like it will mount a file into the container and let you use it but not actually leave the mounted file in a layer. So could we mount a wheel into the container and install from it, essentially giving us a COPY … RUN … && rm *.whl statement without the wheel ever landing in a layer?
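A sketch of what that might look like with BuildKit’s RUN --mount (untested here; the wheelhouse directory, wheel name and base image are placeholders):

```dockerfile
# syntax=docker/dockerfile:1.2
FROM arm32v7/python:3.8

# Bind-mount the wheelhouse from the build context for this step only;
# the wheels are readable during the RUN but never stored in a layer.
RUN --mount=type=bind,source=wheelhouse,target=/wheelhouse \
    pip install /wheelhouse/*.whl
```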

A recipe from @kleinee for building wheels (I think):

  • run pipdeptree | grep -P '^\w+' > requirements.txt in an installation that works (python 3.7.3)
  • shift requirements.txt into your 64 bit container with Python x.x and build-deps
  • in the dir containing requirements.txt, run pip3 wheel --no-binary :all: -w . -r ./requirements.txt

In passing, wheels are named using a convention of tags; there are notes describing those tags, going as far as linux-armv7l (what about arm8???).

Pondering "can we just plug an RPi into an iPad / Chromebook via an ethernet cable?": via @kleinee again, it seems like the answer is yes, for the iPad at least, or at least using a USB-C cable…

For Chromebook, there are lots of USB2Ethernet adapters (which is what I am using with my Mac).. And there are also USB-C to ethernet dongles?

Via @kleinee: an example RPi menu driven tool for creating docker-compose scripts, along with a walkthrough video.

Portainer overview: Codeopolis: Huge Guide to Portainer for Beginners; there’s also a video walkthrough and an RPi example (not sure if you must set up the volume?).

Barebones RPi?

So to get this running on a home network, you also need to add an ethernet cable (to connect to the home router) and a mouse (so you can point and click to set ssh during setup), and have a micro-HDMI-to-HDMI cable and access to a tv/monitor w/ HDMI input during setup, and then you’d be good to go?

The Pi 4 may run hot, so maybe replace the case with a passive heatsink one (h/t @kleinee again)? Via @svenlatham, “[m]ight be sensible to include an SD reader/writer in requirements? Not only does it “solve” the SSH issue from earlier, Pis are a pain for SD card corruption if (for instance) the power is pulled. Giving students the ability to quickly recover in case of failure.”

Idly Wondering – RPi 400 Keyboard

A couple of days ago, I noticed a new release from the Raspberry Pi folks, a "$70 computer" bundling a Raspberry Pi 4 inside a keyboard, with a mouse, power supply and HDMI cable all as part of the under a hundred quid "personal computer kit". Just add screen and network connection (ethernet cable, as well as screen, NOT provided).

Mine arrived today:

Raspberry Pi 400 Personal Computer Kit

So… over the last few months, I’ve been working on some revised material for a level 1 course. The practical computing environment, which is to be made available to students any time now for scheduled use from mid-December, is being shipped via a Docker image. Unlike the TM351 Docker environment (repo), which is just the computing environment, the TM129 Docker container (repo) also contains the instructional activity notebooks.

One of the issues with providing materials this way is that students need a computer that can run Docker. Laptops and desktop computers running Windows or MacOS are fine, but if you have a tablet, cheap Chromebook, or just a phone, you’re stuck. Whilst the software shipped in the Docker image is all accessed through a browser, you still need a "proper" computer to run the server…

Challenges using Raspberry Pis as home computing infrastructure

For a long time, several of us have muttered about the possibility of distributing software to students that can run on a Raspberry Pi, shipping the software on a custom programmed SD card. This is fine, but there are several hurdles to overcome, and the target user (eg someone whose only computer is a phone or a tablet) is likely to be the least confident sort of computer user at a "system" level. For example, to use the Raspberry Pi from a standing start, you really need access to:

  • a screen with an HDMI input (many TVs offer this);
  • a USB keyboard (yeah, right: not in my house for years, now…)
  • a USB mouse (likewise);
  • an ethernet cable (because that’s the easiest way to connect to your home broadband router, at least to start with).

You might also try to get your Raspberry Pi to speak to you if you can connect it to your local network, for example to tell you what IP address it’s on, but then you need an audio output device (the HDMI’d screen may do it, or you need a headset/speaker/set of earphones with an appropriate connector).

Considering these hurdles, the new RPi-containing keyboard makes it easier to "just get started", particularly if purchased as part of the Personal Computer Kit: all you need is a screen, if the SD card has all the software you need.

So I’m wondering again if this is a bit closer to the sort of thing we might give to students when they sign up with the OU: a branded RPi keyboard, with a custom SD card for each module or perhaps a single custom SD card and then a branded USB memory stick with additional software applications required for each module.

All the student needs to provide is a screen. And if we can ship a screensharing or collaborative editing environment (think: Google docs collaborative editing) as part of the software environment, then if a student can connect their phone or tablet to a server running from their keyboard, we could even get away without a separate screen.

The main challenges are still setting up a connection to a screen or setting up a network connection to a device with a screen.

> Notes from setting up my RPi 400: don’t admit to having a black border on your screen: I did, and lost the desktop’s upper toolbar off the top of my TV and had to dive into a config file to reset it, eg as per instructions here: run sudo nano /boot/config.txt then comment out the line: #disable_overscan=1

Building software distributions

As far as getting software environments to work with the RPi goes, I had a quick poke around and there are several possible tools that could help.

One approach I am keen on is using Docker containers to distribute images, particularly if we can build images for different platforms from the same build scripts.

Installing Docker on the RPi looks simple enough, eg this post suggests the following is enough:

sudo apt update -y
curl -fsSL -o && sh

although if that doesn’t work, a more complete recipe can be found here: Installing Docker on the Raspberry Pi.
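Whichever recipe you use, a quick sanity check that the docker client actually made it onto the PATH can be scripted; this is purely illustrative (the helper name is mine):

```python
import shutil

def docker_available():
    """True if a `docker` client executable can be found on the PATH."""
    return shutil.which("docker") is not None

print("docker found:", docker_available())
```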

A quick look for arm32 images on DockerHub turns up a few handy looking images, particularly if you work with a docker-compose architecture to wire different containers together to provide the student computing environment, and there are examples out there of building Jupyter server RPi Docker images from these base containers (that repo includes some pre-built arm python package wheels; other pre-built packages can be found elsewhere). See also: jupyter-lab-docker-rpi (h/t @dpmcdade) for a Docker container route and kleinee/jns for a desktop install route.

The Docker buildx cross-builder offers another route. This is available via Docker Desktop if you enable experimental CLI features and should support cross-building of images targeted to the RPi Arm processor, as described here.
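A buildx cross-build is just a `docker buildx build` invocation with a `--platform` list; a sketch of assembling one (the image tag here is a made-up example):

```python
def buildx_command(tag, platforms=("linux/amd64", "linux/arm/v7"), push=False):
    """Assemble a `docker buildx build` invocation targeting several platforms."""
    cmd = ["docker", "buildx", "build", "--platform", ",".join(platforms), "-t", tag]
    if push:
        cmd.append("--push")  # push the multi-arch manifest to the registry
    return cmd + ["."]
```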

Alternatively, it looks like there is at least one possible Github Action out there that will build an Arm7 targeted image and push it to a Docker image hub? (I’m not sure if the official Docker Github Action supports cross-builds?)

There’s also a Github Action for building pip wheels, but I’m not convinced this will build for RPi/Arm too? This repo — Ben-Faessler/Python3-Wheels — may be more informative?

For building full SD card images, this Dockerised pi-builder should do the trick?

So, for my to do list, I think I’ll have a go at seeing whether I can build an RPi-runnable docker-compose version of the TM351 environment (partitioning the services into separate containers that can then be composed back together has been on my to do list for some time, so this way I can explore two things at once…) and also have a go at building an RPi Docker image for the TM129 software. The TM129 release might also be interesting to try in the context of an SD card image with the software installed on the desktop and, following the suggestion of a couple of colleagues, accessed via an xfce4 desktop in kiosk mode, perhaps via a minimal o/s such as DietPi.

PS also wonders – anyone know of a jupyter-rpi-repodocker that will do the repo2docker thing on Raspberry Pi platforms…?!

Rally Review Charts Recap

At the start of the year, I was planning on spending free time tinkering with rally data visualisations and trying to pitch some notebook-originated articles to Racecar Engineering. The start of lockdown brought some balance, and time away from screen and keyboard in the garden, but then pointless made-up organisational deadlines kicked in and my 12-hour-plus days in front of the screen kicked back in. As autumn sets in, I’m going to try to cut back working hours, if not screen hours, by spending my mornings playing with rally data.

The code I have in place at the moment is shonky as anything and in desperate need of restarting from scratch. Most of the charts are derived from multiple separate steps, so I’m thinking about pipeline approaches, perhaps based around simple web services (so a pipeline step is actually a service call). This will give me an opportunity to spend some time seeing how production systems actually build pipeline and microservice architectures, to see if there is anything I can pinch.
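As a toy sketch of the pipeline-of-steps idea, each chart step can just be a function from data to data, chained in order (the step names here are made up):

```python
from functools import reduce

def pipeline(*steps):
    """Chain chart-generation steps left to right; each step maps data to data."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Hypothetical steps: convert minutes to seconds, then rebase to the first entry.
to_seconds = lambda times: [t * 60 for t in times]
rebase_first = lambda times: [t - times[0] for t in times]
process = pipeline(to_seconds, rebase_first)
```

In a service-based version, each step would be a web-service call rather than a local function, but the composition pattern stays the same.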

One class of charts, shamelessly stolen from @WRCStan of @PushingPace, are pace maps that use distance along the x-axis and a function of time on the y-axis.

One flavour of pace map uses a line chart to show the cumulative gap between a selected driver and other drivers. The following chart, for example, shows the progress of Thierry Neuville on WRC Rally Italia Sardegna, 2020. On the y-axis is the accumulated gap in seconds, at the end of each stage, to the identified drivers. Dani Sordo was ahead for the whole of the rally, building a good lead over the first four stages, then holding pace, then losing pace in the back end of the rally. Neuville trailed Seb Ogier over the first five stages, then, after borrowing from Sordo’s set-up, made a comeback and fought a good battle with Ogier from SS6 right to the end.

Off the pace chart.

The different widths of the stage identify the stage length; the graph is thus a transposed distance-time graph, with the gradient showing the difference in pace in seconds per kilometer.
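The cumulative gap calculation behind this kind of chart can be sketched as follows; the driver codes and stage times are made up for illustration:

```python
from itertools import accumulate

def cumulative_gaps(stage_times, rebase_to):
    """Accumulated end-of-stage gap (seconds) of each driver to `rebase_to`;
    negative means ahead of the rebased driver."""
    base = list(accumulate(stage_times[rebase_to]))
    return {
        driver: [round(sum(times[:i + 1]) - base[i], 3) for i in range(len(times))]
        for driver, times in stage_times.items()
        if driver != rebase_to
    }

# Toy two-stage example (made-up times, seconds):
stages = {"NEU": [310.0, 295.5], "SOR": [305.0, 296.0]}
```

Plotting each driver’s gap series against cumulative stage distance then gives the transposed distance-time view described above.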

The other type of pace chart uses lines to create some sort of stylised onion-skin histogram. In this map, the vertical dimension is seconds per km gained / lost relative to each driver on the stage, with stage distance again on the x-axis. The area is thus an indicator of the total time gained / lost on the stage, which marks this out as a histogram.

In the example below, bars are filled relative to a specific driver. In this case, we’re plotting pace deltas relative to Thierry Neuville, and highlighting Neuville’s pace gap to Dani Sordo.

Pace mapper.

The other sort of chart I’ve been producing is more of a chartable:

Rally review chart.

This view combines a tabular chart with in-cell bar charts and various embedded charts. The intention was to combine glanceable visual +/- deltas with actual numbers attached, as well as graphics that could depict trends and spikes. The chart also allows you to select which driver to rebase the other rows to: this allows you to generate a report that tells the rally story from the perspective of a specified driver.

My thinking is that to create the Rally Review chart, I should perhaps have a range of services that each create one component, along with a tool that lets me construct the finished table from component columns in the desired order.

Some columns may contain a graphical summary over values contained in a set of neighbouring columns, in which case it would make sense for the service to itself be a combination of services: one to generate rebased data, others to generate and return views over the data. (Rebasing data may be expensive computationally, so if we can do it once, rather than repeatedly, that makes sense.)

In terms of transformations, then, there are at least two sorts of transformation service required:

  • pace transformations, that take time quantities on a stage and make pace values out of them by taking the stage distance into account;
  • driver time rebasing transformations, that rebase times relative to a specified driver.
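The two transformations above can be sketched as simple functions (the names are mine, and the driver codes below are made up):

```python
def pace(stage_seconds, stage_km):
    """Pace transformation: stage time to seconds per kilometre."""
    return stage_seconds / stage_km

def rebase(times, driver):
    """Rebasing transformation: per-driver time deltas relative to the named driver."""
    base = times[driver]
    return {d: t - base for d, t in times.items()}
```

Wrapping each as a service endpoint would let the chart components call whichever transformation they need, with the (computationally cheap here, but potentially expensive at scale) rebasing done once and shared.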

Rummaging through other old graphics (I must have the code somewhere, but not sure where; this will almost certainly need redoing to cope with the data I currently have in the form I have it…), I also turn up some things I’d like to revisit.

First up, stage split charts that are inspired by seasonal subseries plots:

Stage split subseries

These plots show the accumulated delta on a stage relative to a specified driver. Positions on the stage at each split are shown by overplotted labels. If we made the x-axis a split distance dimension, the gradient would show pace difference. As it is, the subseries just indicate trend over splits:

Another old chart I used to quite like is based on a variant of quantised slope charts or bump charts (a bit like position charts).

This is fine for demonstrating changes in position for a particular loop but gets cluttered if there are more than three or four stages. The colour indicates whether a driver gained or lost time relative to the overall leader (I think!). The number represents the number of consecutive stage wins in the loop by that driver. The bold font on the right indicates the driver improved overall position over the course of the loop. The rows at the bottom are labelled with position numbers if they rank outside the top 10 (this should really be if they rank lower than the number of entries in the WRC class).

One thing I haven’t tried, but probably should, is a slope graph comparing times for each driver where there are two passes of the same stage.

I did have a go in the past at more general position / bump charts too but these are perhaps a little too cluttered to be useful:

Again, the overprinted numbers on the first position row indicate the number of consecutive stage wins for that driver; the labels on the lower rows are out-of-top-10 position labels.

What may be more useful would be adding the gap to leader or diff to car ahead, either on stage or overall, with colour indicating either that quantity, heatmap style, or, for the overall case, whether that gap / difference increased or decreased on the stage.

Quiz Night Video Chat Server

Our turn to host quiz night this week, so rather than use Zoom, I thought I’d give Jitsi a spin.

Installation onto a Digital Ocean box is easy enough using the Jitsi server marketplace droplet, although it does require you to set up a domain name so the https routing works, which in turn seems to be required to grant Jitsi access via your browser to the camera and microphone.

As per guidance here, adding a domain to a Digital Ocean account simply means adding the domain name (eg) to your account and then, from your domain control panel, adding the Digital Ocean nameservers (,,).

It’s then trivial to create and map a subdomain to a Digital Ocean droplet:

The installation process requires creating the droplet from the marketplace, assigning the domain name, and then from the terminal running some set-up scripts:

  • install server: ./
  • add LetsEncrypt https support: ./

Out of the can, anyone can gain access, although you can add a password to restrict access to a video room once you’ve created it.

To lock access down when it comes to creating new meeting rooms, you can add simple auth via created user accounts. The Digital Ocean tutorial How To Install Jitsi Meet on Ubuntu 18.04 “Step 5 — Locking Conference Creation” provides step-by-step instructions. (It would be nice if a simple script were provided to automate that a little…)

The site is React powered, and I couldn’t spot an easy way to customise the landing page. The text all seems to be provided via language pack files (eg /usr/share/jitsi-meet/lang/main-enGB.json) but I couldn’t see how to actually put any changes into effect? (Again, another simple script in the marketplace droplet to help automate or walk you through that would be useful…)

Fragment: Revealing Otherwise Hidden Answers In Jupyter Notebooks

Some notes culled from an internal feedback forum regarding how to provide answers to questions set in Jupyter notebooks. To keep things simple, this does not extend to providing any automated testing (eg software code tests, computer marked assessments etc); just the provision of worked answers to questions. The sort of thing that might appear in the back of a text book referenced from a question set in the body of the book.

The issue at hand is: how do you provide friction that stops the student just looking at the answer without trying to answer the question themselves? And how much friction should you provide?

In our notebook-using data management and analysis course, some of the original notebooks took a naive approach of presenting answers to questions in a separate notebook. This added lots of friction and made for a quite unpleasant and irritating interaction, involving opening the answer notebook in a new tab, waiting for the kernel to start, and then potentially managing screen real estate and multiple windows to compare the original question and the student’s own answer with the provided answer. Other technical issues include state management, if for example the desired answer was not in and of itself reproducible, but instead required some “boilerplate” setting up of the kernel state to get it into a situation where the answer code could be meaningfully run.

One of the advantages of having answer code in a separate notebook is that a ‘Run All’ action in the parent notebook won’t run the answer code.

Another approach taken in the original notebooks was to use an exercise extension that provided a button that could be clicked to reveal a hidden answer in the original notebook. This required an extension to be enabled in the environment in which the original notebook was authored, so that the question and answer cells could be selected and then marked up with exercise cell metadata via a toolbar button, and on the student machine, so that the exercise button could be rendered and the answer cell initially hidden by the notebook UI when the notebook was loaded in the browser. (I’m not sure if it would also be possible to explicitly code a button using HTML in a markdown cell that could hide/reveal another classed or id‘d HTML element; I suspect any javascript would be sanitised out of the markdown?)

Most recently, all the notebooks containing exercises in our databases course now use a model where we use a collapsible heading to hide the answer. Clicking the collapsed header indicator in the notebook sidebar reveals the answer, which is typically composed of several cells, both markdown and code cells. The answer is typically provided in a narrative form, often in the style of a “tutor at your side”, providing not just an explained walkthrough of the answer but also identifying likely possible incorrect answers and misunderstandings, why they might at first appear to have been “reasonable” answers, and why they are ultimately incorrect.

This answer reveal mechanic is very low friction, but degrades badly: if the collapsible headings extension is not enabled, the answer is rendered just like any other notebook cell.

When using these approaches, care needs to be taken when considering what the Run All action might do when the answer code cells are reached: does the answer code run? Does it create an error? Is the code ignored and code execution continued in cells after the hidden answer cell?

In production workflows where a notebook may be executed to generate a particular output document, such as a PDF with all cells run, how are answer cells to be treated? Should they be displayed? Should they be displayed and answer cells run? This might depend on audience: for example, a PDF output for a tutor may contain answers and answer outputs, while a student PDF may actually be two or three documents: one containing the unanswered text, one containing the answer code but not executed, one containing the answer code and executed code outputs.

Plagiarising myself from a forthcoming post: as well as adding friction, I wonder whether what we’re actually trying to achieve by the way the answer is accessed or otherwise revealed might include various other psychological and motivational factors that affect learning. For example, setting up the anticipation that pays off as a positive reward for a student who gets the answer right, or a slightly negative payoff that makes a student chastise themselves for clicking right through to the answer. At the same time, we also need to accommodate students who may have struggled with a question and are reluctantly appealing to the answer as a request for help. There are also factors relating to the teaching, insofar as what we expect the student to do before they reveal the answer, what expectation students may have (or that we may set up) regarding what the “answer” might provide, how we expect the student to engage with the answer, and what we might expect them to do after reading the answer.

So what other approaches might be possible?

If the answer can be contained in just the contents of a single code cell, the answer can be placed in an external file, for example in the same directory or an inconvenient-to-access solutions directory such as ../../solutions. This effectively hides the solution but allows a student to render it by running the following in a code cell:

%load ../../solutions/

The content of the file will be loaded into the code cell.

One advantage of this approach is that a student has to take a positive action to render the answer code into the cell. You can make this easier or harder depending on how much friction you want to add. Eg low friction: just comment out the magic in the code cell, so all a student has to do is uncomment and run it. Higher friction: make them work out what the file name is and write it themselves, then run the code cell.
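A minimal sketch of the authoring side of that recipe; the directory layout, file name, and answer content are all made up:

```python
from pathlib import Path

def stash_answer(code, name, solutions_dir="solutions"):
    """Write the answer code to a file that %load can later pull into a cell."""
    path = Path(solutions_dir) / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(code)
    return path

# Hypothetical worked answer for an activity:
stashed = stash_answer("answer = 6 * 7\n", "activity1.py")
```

In the student notebook, running `%load solutions/activity1.py` (or the commented-out version of it) would then paste the file contents into the code cell.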

I’m not sure I’m convinced having things strewn across multiple files is the best way of adding the right amount and right sort of friction to the answer lookup problem though. It also causes maintenance issues, unless you are generating separate student activity notebook and student hint or answer notebooks from a single master notebook that contains activities and hints/answers in the same document.

Other approaches I’ve seen to hiding and revealing answers include creating a package that contains the answers, and then running a call onto that package to display the answer. For example, something like:

from course_activities import answers
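A guessed-at shape for such a package; the module contents, activity ids, and answer text below are entirely hypothetical:

```python
# Hypothetical contents of a course_activities/answers.py module:
# answers keyed by activity id, revealed on demand.
ANSWERS = {
    "week2_section1_activity3": "Use a GROUP BY rather than a self-join here…",
}

def show_answer(key):
    """Return the stored worked answer for an activity id."""
    return ANSWERS.get(key, "No answer recorded for that activity.")
```

A call like `answers.show_answer("week2_section1_activity3")` in a code cell then becomes the positive action the student has to take.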

An alternative to rendering answers from a python package would be to take more of a “macro” approach and define a magic which maybe has a slightly different feeling to it. For example:

%hint week2_section1_activity3

In this approach, you could explore different sorts of psychological friction, using nudges:

%oh_go_on_then_give_me_a_hint_for week2_section1_activity3


%ffs_why_is_this_so_hard week2_section1_activity3

If the magic incantation is memorable, and students know the pattern for creating phrases like week2_section1_activity3, eg from the notebook URL pattern, then they can invoke the hint themselves by saying it. You could even have aliases for revealing the hint, which the student could use in different situations to remind themselves of their state of mind when they were doing the activity.

%I_did_it_but_what_did_you_suggest_for week2_section1_activity3
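A plain-Python sketch of the aliasing idea; in a notebook, each alias would be registered as an IPython line magic wrapping the same lookup, and the hint text here is made up:

```python
# Hypothetical hint store and alias set for the %hint-style magics above.
HINTS = {"week2_section1_activity3": "Think about the order of the joins."}

ALIASES = {
    "hint",
    "oh_go_on_then_give_me_a_hint_for",
    "ffs_why_is_this_so_hard",
    "I_did_it_but_what_did_you_suggest_for",
}

def reveal(alias, key):
    """Resolve any of the aliased magic names to the same hint lookup."""
    if alias not in ALIASES:
        raise ValueError(f"unknown magic: %{alias}")
    return HINTS.get(key, "No hint available for that activity.")
```

Because all the aliases funnel into one function, logging which alias a student used would also give a crude signal about their state of mind at reveal time.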

With a little bit of thought, we could perhaps come up with a recipe that combines approaches to provide an experience that does degrade gracefully and that perhaps also allows us to play with different sorts of friction.

For example, suppose we extend the Collapsible Heading notebook extension so that we tag a cell as an answer-header cell and, in a code cell below, put a #%load line. In an unextended notebook, the student has to uncomment the load magic and run the cell to reveal the answer; in an extended notebook, the extension colour codes the answer cell and collapses the answer; when the student clicks to expand the answer, the extension looks for #%load lines in the collapsed area, uncomments and executes them to load in the answer, and expands the answer cells automatically.

We could go further and provide an extension that lets an author write an answer in eg a markdown cell and then “answerify” it by posting it as metadata in a preceding answer-reveal cell. In an unextended student notebook, the answer would be hidden in metadata. To reveal it, a student could inspect the cell metadata. Clunky for them, and the answer wouldn’t be rendered, but doable. In an extended notebook, clicking reveal would cause the extension to extract the answer from metadata and then render it in the notebook.

In terms of adding friction, we could perhaps add a delay between clicking a reveal answer button and displaying an answer, although if the delay is too long, a student may pre-emptively click the reveal button before they actually want the answer, just so they don’t have to wait. The use of audible signals might also provide psychological nudges that subconsciously influence the student’s decision about when to reveal an answer.

There are lots of things to explore I think but they all come with a different set of opportunity costs and use case implications. But a better understanding of what we’re actually trying to achieve would be a start… And that’s presumably been covered in the interactive learning design literature? And if it hasn’t, then why not? (Am I overthinking all this again?!)

PS It’s also worth noting something I haven’t covered: notebooks that include interactive self-test questions, or self-test questions where code cells are somehow cross-referenced with code tests. (The nteract testbook recipe looks like it could be interesting in this respect, eg where a user might be able to run tests from one notebook against another notebook. The otter-grader might also be worth looking at in this regard.)


If you look back not that far in history, the word “computer” was a term applied to a person working in a particular role. According to Webster’s 1828 American Dictionary of the English Language, a computer was defined as “[o]ne who computes or reckons; one who estimates or considers the force and effect of causes, with a view to form a correct estimate of the effects”.

Going back a bit further, to Samuel Johnson’s magnum opus, we see a “computer” is defined more concisely as a “reckoner” or “accountant”.

In a disambiguation page, Wikipedia identifies Computer_(job_description), quoting Turing’s Computing Machinery and Intelligence paper in Mind (Volume LIX, Issue 236, October 1950, Pages 433–460):

The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job.

Skimming through a paper that appeared in my feeds today — CHARTDIALOGS: Plotting from Natural Language Instructions [ACL 2020; code repo] — the following jumped out at me:

In order to further inspect the quality and difficulty of our dataset, we sampled a subset of 444 partial dialogs. Each partial dialog consists of the first several turns of a dialog, and ends with a Describer utterance. The corresponding Operator response is omitted. Thus, the human has to predict what the Operator (the plotting agent) will plot, given this partial dialog. We created a new MTurk task, where we presented each partial dialog to 3 workers and collected their responses.

Humans. As computers. Again.

Originally, the computer was a person doing a mechanical task.

Now, a computer is a digital device.

Now a computer aspires to be AI, artificial (human) intelligence.

Now AI is, in many cases, behind the Wizard of Oz curtain, inside von Kempelen’s “The Turk” automaton (not…), a human.

Human Inside.

A couple of other things that jumped out at me, relating to instrumentation and comparison between machines:

The cases in which the majority of the workers (3/3 or 2/3) exactly match the original Operator, corresponding to the first two rows, happen 72.6% of the time. The cases when at least 3 out of all 4 humans (including the original Operator) agree, corresponding to row 1, 2 and 5, happen 80.6% of the time. This setting is also worth considering because the original Operator is another MTurk worker, who can also make mistakes. Both of these numbers show that a large fraction of the utterances in our dataset are intelligible implying an overall good quality dataset. Fleiss’ Kappa among all 4 humans is 0.849; Cohen’s Kappa between the original Operator and the majority among 3 new workers is 0.889. These numbers indicate a strong agreement as well.

Just like you might compare the performance of different implementations of an algorithm in code, we can also compare the performance of their instantiation in digital or human computers.
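For reference, the Cohen’s kappa figure quoted above measures two-rater agreement corrected for chance; a minimal sketch of the statistic for two label sequences:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' equal-length categorical label sequences."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)       # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)
```

Fleiss’ kappa generalises the same idea to more than two raters, which is how the all-four-humans figure in the quote would be computed.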

At the moment, for “intelligence” tasks (and it’s maybe worth noting that Mechanical Turk has work packages defined as HITs, “Human Intelligence Tasks”) humans are regarded as providing the benchmark gold standard, imperfect as it is.

7.5 Models vs. Gold Human Performance (P3) The gold human performance was obtained by having one of the authors perform the same task as described in the previous subsection, on a subset


See also: Robot Workers?