So I think I have the bare bones of a lOCL (local Open Computing Lab) thing’n’workflow running…
I’m also changing the name… to
VOCL — Virtual Open Computing Lab … which is an example of a VCL, Virtual Computing Lab, that runs VCEs, Virtual Computing Environments. I think…
If you are on Windows, Linux, a Mac or a 32 bit Raspberry Pi, you should be able to do the following:
Next, we will install portainer, a universal browser based management tool:
- on Mac/Linux/RPi, run:
docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce
- on Windows, the start up screen suggests that

docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v \\.\pipe\docker_engine:\\.\pipe\docker_engine portainer/portainer-ce

may be the way to go?
On my to do list is to customise portainer a bit and call it something else…
On first run, portainer will prompt you for an admin password (at least 8 characters).
You’ll then have to connect to a Docker Engine. Let’s use the local one we’re actually running the application with…
When you’re connected, select to use that local Docker Engine:
Once you’re in, grab the feed of lOCL containers: <s>https://raw.githubusercontent.com/ouseful-demos/templates/master/ou-templates.json</s> (I’ll be changing that URL sometime soon… it’s NOW IN the OpenComputingLab/locl-templates Github repo) and use it to feed the portainer templates listing:
From the App Templates page, you should now be able to see a feed of example containers:
[desktop only] containers can only be run on desktop (
amd64) processors, but the others should run on a desktop computer or on a Raspberry Pi using docker on a 32 bit Raspberry Pi operating system.
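If you’re not sure which camp your machine falls into, a quick command line check along the following lines should tell you (a minimal sketch; the docker line only runs if docker is installed):

```shell
# Report the machine architecture: x86_64 on a typical desktop/laptop,
# armv7l on a 32 bit Raspberry Pi OS, aarch64 on a 64 bit one.
uname -m

# Docker's own view of the platform it pulls images for, if docker is available:
command -v docker >/dev/null && docker version --format '{{.Server.Os}}/{{.Server.Arch}}' || true
```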
Access the container from the Containers page:
By default, when you launch a container, it is opened onto the domain
0.0.0.0. This can be changed to the actual required domain via the Endpoints configuration page. For example, my Raspberry Pi appears on
raspberrypi.local, so if I’m running portainer against that local Docker endpoint, I can configure the path as follows:
>I should be able to generate Docker images for the 64 bit RPi O/S too, but need to get a new SD card… Feel free to chip in to help pay for bits and bobs — SD cards, cables, server hosting, an RPi 8GB and case, etc — or a quick virtual coffee along the way…
The magic that allows containers to be downloaded to Raspberry Pi devices or desktop machines is based on:
- Docker cross-builds (
buildx), which allow you to build containers targeted to different processors;
- Docker manifest lists that let you create an index of images targeted to different processors and associate them with a single "virtual" image. You can then
docker pull X and, depending on the hardware you’re running on, the appropriate image will be pulled down.
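For example, using the oulocl/vce-jupyter image mentioned below, you can peek at the manifest list to see which architectures it indexes (note that docker manifest is still flagged as experimental in some Docker versions, so this is a sketch rather than a guaranteed incantation):

```shell
# List the per-architecture images indexed by a multi-arch image name;
# the output includes one manifest entry per platform (e.g. amd64, arm/v7).
docker manifest inspect oulocl/vce-jupyter

# A plain pull then resolves to the image matching the local hardware:
docker pull oulocl/vce-jupyter
```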
For more on cross built containers and multiple architecture support, see Multi-Platform Docker Builds. This describes the use of
manifest lists which let us pull down architecture appropriate images from the same Docker image name. See also Docker Multi-Architecture Images: Let docker figure the correct image to pull for you.
To cross-build the images, and automate the push to Docker Hub, along with an appropriate manifest list, I used a Github Action workflow using the recipe described here: Shipping containers to any platforms: multi-architectures Docker builds.
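For reference, the manual equivalent of what the Action automates looks something like the following (the platform list and tag here are illustrative assumptions, not necessarily the ones the workflow uses):

```shell
# Create (and switch to) a builder instance that supports multi-platform builds:
docker buildx create --name multiarch --use

# Build for both desktop (amd64) and 32 bit RPi (arm/v7) targets and push;
# --push also publishes a manifest list under the single image name.
docker buildx build \
  --platform linux/amd64,linux/arm/v7 \
  -t oulocl/vce-jupyter:latest \
  --push .
```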
Here’s a quick summary of the images so far; generally, they either run just on desktop machines (specifically, these are
amd64 images, but I think that’s the default for Docker images anyway? At least until folk start buying the new M1 Macs…) or on both desktop machines and RPis:
Jupyter notebook (
oulocl/vce-jupyter): a notebook server based on
andresvidal/jupyter-armv7l because it worked on the RPi; this image runs on desktop and RPi computers. I guess I can now start iterating on it to make a solid base Jupyter server image. The image also bundles
sklearn. These seem to take forever to build using
buildx, so I built wheels natively on an RPi and added them to the repo so the packages can be installed directly from the wheels. Python wheels are named according to a convention which bakes in things like the Python version and processor architecture that the wheel is compiled for.
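By way of illustration (the filename below is made up, but follows the real convention: distribution, version, Python tag, ABI tag and platform tag, separated by dashes):

```shell
# An illustrative wheel filename, as might be built natively on a 32 bit RPi
# (Python 3.7, armv7l); the compatibility tags are baked into the name:
WHEEL="scikit_learn-0.23.2-cp37-cp37m-linux_armv7l.whl"

# Split the name into its components on the "-" separators:
OLDIFS=$IFS; IFS=-
set -- ${WHEEL%.whl}
IFS=$OLDIFS
echo "dist=$1 version=$2 python=$3 abi=$4 platform=$5"

# pip will only install a wheel whose tags match the running interpreter, e.g.:
#   pip install scikit_learn-0.23.2-cp37-cp37m-linux_armv7l.whl
```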
the OpenRefine container should run absolutely everywhere: it was built using support for a wide range of processor architectures;
the TM351 VCE image is the one we shipped to TM351 students in October; desktop machines only at the moment…
the TM129 Robotics image is the one we are starting to ship to TM129 students right now; it needs a rebuild because it’s a bit bloated, but I’m wary of doing that with students about to start; hopefully I’ll have a cleaner build for the February start;
the TM129 POC image is a test image to try to get the TM129 stuff running on an RPi; it seems to, but the container is full of all sorts of crap as I tried to get it to build the first time. I should now try to build a cleaner image, but I should really refactor the packages that bundle the TM129 software first because they distribute the installation weight and difficulty in the wrong way.
the Jupyter Postgres stack is a simple Docker Compose proof of concept that runs a Jupyter server in one container and a PostgreSQL server in a second, linked container. This is perhaps the best way to actually distribute the TM351 environment, rather than the monolithic bundle. At the moment, the Jupyter environment is way short of the TM351 environment in terms of installed Python packages etc., and the Postgres database is unseeded.
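As a sketch of what that stack amounts to, the equivalent with plain docker commands would look something like this (assuming stock jupyter/minimal-notebook and postgres images rather than the actual TM351 ones, and a placeholder password):

```shell
# Shared network so the two containers can reach each other by name:
docker network create tm351

# Database container; the password here is just a placeholder:
docker run -d --name tm351-pg --network tm351 \
  -e POSTGRES_PASSWORD=notsecret postgres

# Notebook container, published on the host at port 8888:
docker run -d --name tm351-jupyter --network tm351 -p 8888:8888 \
  jupyter/minimal-notebook

# From a notebook, the database is then reachable at host "tm351-pg", port 5432.
```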
TM351 also runs a Mongo database, but there are no recent or supported 32 bit Mongo databases any more, so that will have to wait till I get a 64 bit O/S running on my RPi. A test demo with an old/legacy 32 bit Mongo image did work okay in a docker-compose portainer stack, and I could talk to it from the Jupyter notebook. It’s a bit of a pain because it means we won’t be able to have the same image running on 32 and 64 bit RPis. And TM351 requires a relatively recent version of Mongo (old versions lack some essential functionality…).