Some quick notes on a quick play with my Raspberry Pi 400 keyboard thing…
Plugging it in to my Mac (having found the USB2ethernet dongle, because Macs are too "thin" to have proper network sockets), and having realised the first USB socket I tried on my Mac doesn’t seem to work (no idea if that’s in general, or just with the dongle), I plugged an ethernet cable between the Mac and the RPi 400 and tried a ping, which seemed to work:
ping raspberrypi.local
then tried to SSH in:
ssh pi@raspberrypi.local
No dice… it seems that SSH is not enabled by default, so I had to find the mouse and HDMI cable, rewire the telly, go into the Raspberry Pi Configuration tool (Interfaces tab) and check the ssh option, unwire everything, reset the telly, redo the ethernet cable between Mac and RPi 400, and try again:
ssh pi@raspberrypi.local
and with the default raspberry password (unchanged, of course, or I might never get back in again!), I’m in. Yeah:-)
> I think the set-up just requires a mouse, but not a keyboard. If you buy a bare bones RPi, I think this means that to get running you need: RPi + PSU + ethernet cable, then for the initial set-up: mouse + micro-HDMI cable + access to a screen with HDMI input.
> You should also be able to just plug your RPi 400 into your home wifi router using an ethernet cable, and the device should appear (mine did…) at the network name raspberrypi.local.
> Security may be an issue, so users need to be told to change the pi password as soon as they have keyboard access. During setup, users could unplug the incoming broadband cable from their home router until they have a chance to reset the password, or switch off the wifi on their laptop if they set up via an ethernet cable connection to the laptop.
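> For example, once in via SSH (or with keyboard access), changing the password is a one-liner at the prompt (a minimal sketch; NEWPASSWORD is obviously a placeholder for a password of your own choosing):
passwd
# or non-interactively:
echo 'pi:NEWPASSWORD' | sudo chpasswd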
Update everything (I’d set up the Raspberry Pi’s connection to our home wifi network when I first got it, though with a direct ethernet cable connection you shouldn’t need to do that?):
sudo apt update && sudo apt upgrade -y
and we’re ready to go…
Being of a trusting nature, I’m lazy enough to use the Docker convenience installation script:
curl -sSL https://get.docker.com | sh
then add the pi user to the docker group:
sudo usermod -aG docker pi
and:
logout
then ssh back in again…
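As a quick sanity check that the group change has taken (just a sketch):
# pi should now be listed in the docker group
groups
# and should be able to run a container without sudo
docker run --rm hello-world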
I’d found an RPi Jupyter container previously at andresvidal/jupyter-armv7l (Github repo: andresvidal/jupyter-armv7l), so does it work?
docker run -p 8877:8888 -e JUPYTER_TOKEN="letmein" andresvidal/jupyter-armv7l
It certainly does… the notebook server is there and running on http://raspberrypi.local:8877, and the token letmein does what it says on the tin…
> For a more general solution, just install portainer (docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce) and then go to http://raspberrypi.local:9000 via a browser, and you should be able to install / manage Docker images and containers via that UI.
Grab a bloated TM129 style container (no content):
docker pull outm351dev/nbev3devsimruns
(note that you may need to free up space on the SD card; suggested deletions are further down this post).
Autostart the container: sudo nano /etc/rc.local and, before the exit 0 line, add:
docker run -d -p 80:8888 --name tm129vce -e JUPYTER_TOKEN="letmein" outm351dev/nbev3devsimruns
Switch off / unplug the RPi and switch it on again; the server should be viewable at http://raspberrypi.local with the token letmein. Note that files are not mounted onto the desktop. They could be, but I think I heard somewhere that repeated backup writes every few seconds may degrade the SD card over time?
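(One possible gotcha with the rc.local route: on the next boot the docker run may fail with a name clash, because a container called tm129vce already exists. Docker’s own restart policy might be a cleaner way to get the same autostart behaviour; a sketch, run once from the command line rather than from rc.local:)
docker run -d -p 80:8888 --restart always --name tm129vce -e JUPYTER_TOKEN="letmein" outm351dev/nbev3devsimruns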
How about if we try docker-compose?
This isn’t part of the docker package, so we need to install it separately:
pip3 install docker-compose
(I think that pip may be set up to implicitly use --extra-index-url=https://www.piwheels.org/simple, which seems to download prebuilt RPi wheels from piwheels.org…?)
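(On Raspberry Pi OS the piwheels index is typically baked in globally, which you can check with:)
cat /etc/pip.conf
# typically something like:
# [global]
# extra-index-url=https://www.piwheels.org/simple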
The following docker-compose.yaml file should load a notebook container wired to PostgreSQL and MongoDB containers.
version: "3.5"
services:
tm351:
image: andresvidal/jupyter-armv7l
environment:
JUPYTER_TOKEN: "letmein"
volumes:
- "$PWD/TM351VCE/notebooks:/home/jovyan/notebooks"
- "$PWD/TM351VCE/openrefine_projects:/home/jovyan/openrefine"
networks:
- tm351
ports:
- 8866:8888
postgres:
image: arm32v7/postgres
environment:
POSTGRES_PASSWORD: "PGPass"
ports:
- 5432:5432
networks:
- tm351
mongo:
image: apcheamitru/arm32v7-mongo
ports:
- 27017:27017
networks:
- tm351
networks:
tm351:
Does it work?
docker-compose up
It does; I can see the notebook server on http://raspberrypi.local:8866/.
Can we get a connection to the database server? Try the following in a notebook code cell:
# Let's install a host of possibly useful helpers...
%pip install psycopg2-binary sqlalchemy ipython-sql
# Load in the magic...
%load_ext sql
# Set up a connection string
PGCONN='postgresql://postgres:PGPass@postgres:5432/'
# Connect the magic...
%sql {PGCONN}
Then in a new notebook code cell:
%%sql
DROP TABLE IF EXISTS quickdemo CASCADE;
DROP TABLE IF EXISTS quickdemo2 CASCADE;
CREATE TABLE quickdemo(id INT, name VARCHAR(20), value INT);
INSERT INTO quickdemo VALUES(1,'This',12);
INSERT INTO quickdemo VALUES(2,'That',345);
SELECT * FROM quickdemo;
And that seems to work too:-)
How about the Mongo stuff?
%pip install pymongo
from pymongo import MongoClient
#Monolithic VM addressing - 'localhost',27351
# docker-compose connection - 'mongo', 27017
MONGOHOST='mongo'
MONGOPORT=27017
MONGOCONN='mongodb://{MONGOHOST}:{MONGOPORT}/'.format(MONGOHOST=MONGOHOST,MONGOPORT=MONGOPORT)
c = MongoClient(MONGOHOST, MONGOPORT)
# And test
db = c.get_database('test-database')
collection = db.test_collection
post_id = collection.insert_one({'test': 'test record'}).inserted_id
c.list_database_names()
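And as a quick check that the write actually landed, read the record back in another cell (a minimal read-back, using the same collection as above):
# Look up the document we just inserted
collection.find_one({'test': 'test record'})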
A quick try at installing the ou-tm129-py package seemed to get stuck on the Installing build dependencies ... step, though I could install most packages separately, even if the builds were a bit slow (scikit-learn seemed to be causing the grief?).
Running pip3 wheel PACKAGENAME seems to build .whl files into the local directory, so it might be worth creating some wheels and popping them on Github… The Dockerfile for the Jupyter container I’m using gives a crib:
# Copyright (c) Andres Vidal.
# Distributed under the terms of the MIT License.
FROM arm32v7/python:3.8
LABEL created_by=https://github.com/andresvidal/jupyter-armv7l
ARG wheelhouse=https://github.com/andresvidal/jupyter-armv7l/raw/master/wheelhouse
#...
RUN pip install \
$wheelhouse/kiwisolver-1.1.0-cp38-cp38-linux_armv7l.whl # etc
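So populating a wheelhouse of our own might look something like the following (a sketch; the package is just an example):
# Build wheels for a package and its dependencies into ./wheelhouse
mkdir -p wheelhouse
pip3 wheel --wheel-dir=./wheelhouse nbev3devsim
# ...then push the wheelhouse directory to a Github repo and
# pip install from the raw file URLs, as in the Dockerfile crib above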
Trying to run the nbev3devsim package to load the nbev3devsimwidget, jp_proxy_widget threw an error, so I raised an issue and it’s already been fixed… (thanks, Aaron:-)
Trying to install jp_proxy_widget from the repo threw an error — npm was missing — but the following seemed to fix that:
#https://gist.github.com/myrtleTree33/8080843
wget https://nodejs.org/dist/latest-v15.x/node-v15.2.0-linux-armv7l.tar.gz
# Unpack the archive
tar xvzf node-v15.2.0-linux-armv7l.tar.gz
sudo mkdir -p /opt/node
sudo cp -r node-v15.2.0-linux-armv7l/* /opt/node
# Add node to your path so you can call it with just "node"
# by appending these lines to ~/.bash_profile
PROFILE_TEXT="
PATH=\$PATH:/opt/node/bin
export PATH
"
echo "$PROFILE_TEXT" >> ~/.bash_profile
source ~/.bash_profile
# linking for sudo node (TO FIX THIS - NODE DOES NOT NEED SUDO!!)
sudo ln -s /opt/node/bin/node /usr/bin/node
sudo ln -s /opt/node/lib/node /usr/lib/node
sudo ln -s /opt/node/bin/npm /usr/bin/npm
sudo ln -s /opt/node/bin/node-waf /usr/bin/node-waf
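A quick check that node and npm are now on the path:
node --version
npm --version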
From the notebook code cell, the nbev3devsim install requires way too much (there’s a lot of crap for the NN packages which I need to separate out… crap, crap, crap:-( Everything just hangs on sklearn AND I DON'T NEED IT.
%pip install https://github.com/AaronWatters/jp_proxy_widget/archive/master.zip nest-asyncio seaborn tqdm nb-extension-empinken Pillow
%pip install sklearn
%pip install --no-deps nbev3devsim
So I have to stop now – way past my Friday night curfew… why the f**k didn’t I do the packaging more (c)leanly?! :-(
Memory Card Issues
There’s a lot of clutter on the memory card supplied with the Raspberry Pi 400, but we can free up some space quite easily:
sudo apt-get purge wolfram-engine libreoffice* scratch -y
sudo apt-get clean
sudo apt-get autoremove -y
# Check free space
df -h
Check O/S: cat /etc/os-release
Installing scikit-learn is an issue. Try adding more support for the build inside the container:
! apt-get update && apt-get install gfortran libatlas-base-dev libopenblas-dev liblapack-dev -y
%pip install scikit-learn
There is no Py3.8 wheel for sklearn on piwheels at the moment (only 3.7)?
More TM351 Components
Look up the processor:
cat /proc/cpuinfo
Returns: ARMv7 Processor rev 3 (v7l)
And:
uname -a
Returns:
Linux raspberrypi 5.4.72-v7l+ #1356 SMP Thu Oct 22 13:57:51 BST 2020 armv7l GNU/Linux
Better, get the family as:
PROCESSOR_FAMILY=`uname -m`
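which could then be used to select an architecture appropriate image, something like (a sketch; the mapping is illustrative):
# Map processor family onto a Docker image flavour
case "$PROCESSOR_FAMILY" in
  armv7l)  IMAGE=arm32v7/postgres ;;
  aarch64) IMAGE=arm64v8/postgres ;;
  *)       IMAGE=postgres ;;
esac
docker pull $IMAGE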
I do think there are 64 bit RPis out there though, using ARMv8? And the RPi 400 advertises as "Featuring a quad-core 64-bit processor"? So what am I not understanding? Ah… https://raspberrypi.stackexchange.com/questions/101215/why-raspberry-pi-4b-claims-that-its-processor-is-armv7l-when-in-official-specif
So presumably, with the simple 32 bit O/S, we can’t use arm64v8/mongo and instead need a 32 bit Mongo, which was deprecated in Mongo 3.2? There’s an old version here: https://hub.docker.com/r/apcheamitru/arm32v7-mongo But TM351 has a requirement on a much more recent MongoDB… so maybe we do need a new SD card image? That could also be built as a much lighter custom image, perhaps with an OU customised desktop…
In the meantime, maybe worth moving straight to Ubuntu 64 bit server? https://ubuntu.com/download/raspberry-pi
Ubuntu installation guide for RPi: https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#1-overview
There also looks to be a 64 bit RPi / Ubuntu image with Docker already baked in here: https://github.com/guysoft/UbuntuDockerPi
Building an OpenRefine Docker container for Raspberry Pi
I’ve previously posted a cribbed Dockerfile for building an Alpine container that runs OpenRefine (How to Create a Simple Dockerfile for Building an OpenRefine Docker Image), so let’s have a go at one for building an image that can run on an RPi:
FROM arm32v7/alpine
#Install bash and a JDK to run OpenRefine (git is a leftover from an earlier clone based recipe)
RUN apk update && apk upgrade && apk add --no-cache git bash openjdk8
MAINTAINER tony.hirst@gmail.com
#Download a couple of required packages
RUN apk update && apk add --no-cache wget bash
#We can pass variables into the build process via --build-arg variables
#We name them inside the Dockerfile using ARG, optionally setting a default value
#ARG RELEASE=3.1
ARG RELEASE=3.4.1
#ENV vars are environment variables that get baked into the image
#We can pass an ARG value into a final image by assigning it to an ENV variable
ENV RELEASE=$RELEASE
#There's a handy discussion of ARG versus ENV here:
#https://vsupalov.com/docker-arg-vs-env/
#Download a distribution archive file
RUN wget --no-check-certificate https://github.com/OpenRefine/OpenRefine/releases/download/$RELEASE/openrefine-linux-$RELEASE.tar.gz
#Unpack the archive file and clear away the original download file
RUN tar -xzf openrefine-linux-$RELEASE.tar.gz && rm openrefine-linux-$RELEASE.tar.gz
#Create an OpenRefine project directory
RUN mkdir /mnt/refine
#Mount a Docker volume against the project directory
VOLUME /mnt/refine
#Expose the server port
EXPOSE 3333
#Create the start command.
#Note that the application is in a directory named after the release
#We use the environment variable to set the path correctly
CMD openrefine-$RELEASE/refine -i 0.0.0.0 -d /mnt/refine
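Built natively on the RPi, usage would be along the lines of (the image name is just an example):
# Build the image, optionally overriding the release
docker build --build-arg RELEASE=3.4.1 -t openrefine-rpi .
# Run it, with a named volume for the project files
docker run -p 3333:3333 -v openrefine_projects:/mnt/refine openrefine-rpi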
Following the recipe here — Building Multi-Arch Images for Arm and x86 with Docker Desktop — we can build an arm32v7 image as follows:
# See what's available...
docker buildx ls
# Create a new build context (what advantage does this offer?)
docker buildx create --name rpibuilder
# Select the build context
docker buildx use rpibuilder
# And cross build the image for the 32 bit RPi o/s:
docker buildx build --platform linux/arm/v7 -t outm351dev/openrefinetest:latest --push .
For more on cross built containers and multiple architecture support, see Multi-Platform Docker Builds. This describes the use of manifest lists, which let us pull down architecture appropriate images from the same Docker image name. For more on this, see Docker Multi-Architecture Images: Let docker figure the correct image to pull for you. For an example Github Action workflow, see Shipping containers to any platforms: multi-architectures Docker builds. For issues around the new Mac Arm processors, see eg Apple Silicon M1 Chips and Docker.
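To peek at which architectures a particular image name actually covers, docker manifest inspect should do the job (it may need experimental CLI features enabled in older Docker versions):
docker manifest inspect python:3.8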
With the image built and pushed, we can add the following to the docker-compose.yaml file to launch the container via port 3333:
  openrefine:
    image: outm351dev/openrefinetest
    ports:
      - 3333:3333
which seems to run okay:-)
Installing the ou-tm129-py package
Trying to build the ou-tm129-py package into an image is taking forever on the sklearn build step. I wonder about setting up a buildx process to use something like Docker custom build outputs to generate wheels. I wonder if this could be done via a Github Action, with the result pushed to a Github repo?
Hmmm… maybe this will help for now? oneoffcoder/rpi-scikit (and its Github repo). There is also a cross-building Github Action demonstrated here: Shipping containers to any platforms: multi-architectures Docker builds. The official Docker Github Action is here: https://github.com/docker/setup-buildx-action#quick-start
Then install node in a child container for the patched jp_proxy_widget build (for some reason, pip doesn’t run in the Dockerfile: need to find the correct py / pip path):
FROM oneoffcoder/rpi-scikit
RUN wget https://nodejs.org/dist/latest-v15.x/node-v15.2.0-linux-armv7l.tar.gz
RUN tar xvzf node-v15.2.0-linux-armv7l.tar.gz
RUN mkdir -p /opt/node
RUN cp -r node-v15.2.0-linux-armv7l/* /opt/node
RUN ln -s /opt/node/bin/node /usr/bin/node
RUN ln -s /opt/node/lib/node /usr/lib/node
RUN ln -s /opt/node/bin/npm /usr/bin/npm
RUN ln -s /opt/node/bin/node-waf /usr/bin/node-waf
and in a notebook cell try:
!pip install --upgrade https://github.com/AaronWatters/jp_proxy_widget/archive/master.zip
!pip install --upgrade tqdm
!apt-get update && apt-get install -y libjpeg-dev zlib1g-dev
!pip install --extra-index-url=https://www.piwheels.org/simple Pillow #-8.0.1-cp37-cp37m-linux_armv7l.whl
!pip install nbev3devsim
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
Bah… the Py 3.low’ness of it is throwing an error in nbev3devsim around the character encoding of loaded-in files. #FFS
There is actually a whole stack of containers at: https://github.com/oneoffcoder/docker-containers
Should I fork this and start to build my own, more recent versions? They seem to use conda, which may simplify the sklearn installation? But it looks like packages supporting recent Python versions aren’t there? https://repo.anaconda.com/pkgs/free/linux-armv7l/ ARRGGHHHH.
Even the "more recent" https://github.com/jjhelmus/berryconda is now deprecated.
Bits and pieces
Where do the packages used by your current Python environment live when you’re using a Jupyter notebook?
from distutils.sysconfig import get_python_lib
print(get_python_lib())
So… I got scikit to pip install, after who knows how long, by installing from a Jupyter notebook code cell into a container that I think was based on the following:
FROM andresvidal/jupyter-armv7l
RUN pip3 install --extra-index-url=https://www.piwheels.org/simple myst-nb numpy pandas matplotlib jupytext plotly
RUN wget https://nodejs.org/dist/latest-v15.x/node-v15.2.0-linux-armv7l.tar.gz && tar xvzf node-v15.2.0-linux-armv7l.tar.gz && mkdir -p /op$
# Pillow support?
RUN apt-get install -y libjpeg-dev zlib1g-dev libfreetype6-dev libopenjp2-7 libtiff5
RUN mkdir -p wheelhouse && pip3 wheel --wheel-dir=./wheelhouse Pillow && pip3 install --no-index --find-links=./wheelhouse Pillow
RUN pip3 install --extra-index-url=https://www.piwheels.org/simple blockdiag blockdiagMagic
#RUN apt-get install -y gfortran libatlas-base-dev libopenblas-dev liblapack-dev
#RUN pip3 install --extra-index-url=https://www.piwheels.org/simple scipy
RUN pip3 wheel --wheel-dir=./wheelhouse durable-rules && pip3 install --no-index --find-links=./wheelhouse durable-rules
#RUN pip3 wheel --wheel-dir=./wheelhouse scikit-learn && pip3 install --no-index --find-links=./wheelhouse scikit-learn
RUN pip3 install https://github.com/AaronWatters/jp_proxy_widget/archive/master.zip
RUN pip3 install --upgrade tqdm && pip3 install --no-deps nbev3devsim
In the notebook, I tried to generate wheels along the way:
!apt-get install -y libopenblas-dev gfortran libatlas-base-dev liblapack-dev libblis-dev
%pip wheel --log skbuild.log --wheel-dir=./wheelhouse scikit-learn
%pip install --no-index --find-links=./wheelhouse scikit-learn
I downloaded the wheels from the notebook home page (select the files, click Download), so at the next attempt I’ll see if I can copy the wheels in via the Dockerfile and install sklearn from the wheel.
The image is way too heavy – and there is a lot of production crap in the ou-tm129-py image that could be removed. But I got the simulator to run :-)
So now’s the decision as to whether to try to pull together as lite a container as possible. Is it worth the effort?
Mounting but not COPYing wheels into a container
The Docker build secret Dockerfile feature looks like it will mount a file into the container and let you use it, without actually leaving the mounted file in a layer. So could we mount a wheel into the container and install from it, essentially getting a COPY … RUN … && rm *.whl in a single statement?
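A BuildKit bind mount looks like it would give the same effect without the secrets machinery (a sketch, assuming a local ./wheelhouse directory and BuildKit enabled, e.g. via DOCKER_BUILDKIT=1):
# syntax=docker/dockerfile:1
FROM arm32v7/python:3.8
# The wheelhouse is only mounted for this RUN step; nothing is left in a layer
RUN --mount=type=bind,source=wheelhouse,target=/wheelhouse \
    pip install --no-index --find-links=/wheelhouse scikit-learn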
A recipe from @kleinee for building wheels (I think):
- run pipdeptree | grep -P '^\w+' > requirements.txt in an installation that works (python 3.7.3)
- shift requirements.txt into your 64 bit container with Python x.x and build-deps
- in the dir containing requirements.txt, run pip3 wheel --no-binary :all: -w . -r ./requirements.txt
In passing, tags for wheels: https://www.python.org/dev/peps/pep-0425/ and then https://packaging.python.org/specifications/platform-compatibility-tags/ See also https://www.python.org/dev/peps/pep-0599/ which goes as far as linux-armv7l (what about armv8???)
Pondering "can we just plug an RPi into an iPad / Chromebook via an ethernet cable?", via @kleinee again, seems like yes, for iPad at least, or at least, using USB-C cable…: https://magpi.raspberrypi.org/articles/connect-raspberry-pi-4-to-ipad-pro-with-a-usb-c-cable See also https://www.hardill.me.uk/wordpress/2019/11/02/pi4-usb-c-gadget/
For Chromebook, there are lots of USB2Ethernet adapters (which is what I am using with my Mac)… https://www.amazon.co.uk/chromebook-ethernet-adapter/s?k=chromebook+ethernet+adapter And there are also USB-C to ethernet dongles? https://www.amazon.co.uk/s?k=usbc+ethernet+adapter
Via @kleinee: an example RPi menu driven tool for creating docker-compose scripts: https://github.com/gcgarner/IOTstack and a walkthrough video: https://www.youtube.com/watch?v=a6mjt8tWUws Also refers to:
- Dropbox upload: https://github.com/andreafabrizi/Dropbox-Uploader
- backup: https://github.com/billw2/rpi-clone
- dynamic DNS: https://www.duckdns.org/
Portainer overview: Codeopolis: Huge Guide to Portainer for Beginners. For a video: https://www.youtube.com/watch?v=8q9k1qzXRk4 For an RPi example: https://homenetworkguy.com/how-to/install-pihole-on-raspberry-pi-with-docker-and-portainer/ (not sure if you must set up the volume?)
Barebones RPi?
So to get this running on a home network, you also need to add an ethernet cable (to connect to the home router) and a mouse (so you can point and click to enable ssh during setup), and have a micro-HDMI-to-HDMI cable and access to a TV/monitor with an HDMI input during setup, and then you’d be good to go?

Pi 4 may run hot, so maybe swap the standard case for a passive heatsink case such as https://thepihut.com/products/aluminium-armour-heatsink-case-for-raspberry-pi-4 (h/t @kleinee again)? Via @svenlatham, “[m]ight be sensible to include an SD reader/writer in requirements? Not only does it “solve” the SSH issue from earlier, Pis are a pain for SD card corruption if (for instance) the power is pulled. Giving students the ability to quickly recover in case of failure.” eg https://thepihut.com/products/usb-microsd-card-reader-writer-microsd-microsdhc-microsdxc maybe?