config.vm.box = "ouseful/ou-robotics-test"
Now I’m thinking I should probably do the same for the TM351 VM, given the hassle it seems to take trying to get the .box file hosted for download on an OU URL…
When we put together the virtual machine for TM351, the data management and analysis course, we built a headless virtual machine that did not contain a graphical desktop, but instead ran a set of services that could be accessed within the machine at a machine level, and via a browser based UI at the user level.
Some applications, however, don’t expose an HTML based graphical user interface over HTTP; instead, they require access to a native windowing system.
One way round this is to run a system that can generate an HTML based UI within the VM and then expose that via a browser. For an example, see Accessing GUI Apps Via a Browser from a Container Using Guacamole.
Another approach is to expose an X11 window connection from the VM and connect to that on the host, displaying the windows natively on host as a result. See for example the Viewing Application UIs and Launching Applications from Shortcuts section of BYOA (Bring Your Own Application) – Running Containerised Applications on the Desktop.
The problem with the X11 approach is that it requires gubbins (technical term!) on the host to make it work. (I’d love to see a version of Kitematic extended not only to support docker-compose but also pre-packaged with something that could handle X11 connections…)
So another alternative is to create a virtual machine that does expose a desktop, and run the applications on that.
Here’s how I think the different approaches look:
As an example of the desktop VM idea, I’ve put together a build script for a virtual machine containing a Linux graphic desktop that runs the V-REP robot simulator. You can find it here: ou-robotics-vrep.
The script uses one Vagrant script to build the VM and another to launch it.
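I haven’t reproduced the actual scripts here, but the general shape of a Vagrantfile for a GUI-capable VM is something like the following sketch (the box name, memory size and provisioning script name are illustrative assumptions, not the actual ou-robotics-vrep settings):

```ruby
# Illustrative Vagrantfile sketch for a desktop (GUI) VM
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"     # assumed base box

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true                       # display the VM's desktop window on the host
    vb.memory = 2048
  end

  # assumed shell provisioner that installs the desktop, V-REP and the notebook server
  config.vm.provision "shell", path: "build.sh"
end
```

The `vb.gui = true` setting is the key difference from a headless build: it tells the VirtualBox provider to open the VM’s console window on the host rather than running it in the background.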
Along with the simulator, I packaged a Jupyter notebook server that can be used to create Python notebooks that can connect to the simulator and control the simulated robots running within it. These notebooks could be viewed via a browser running on the virtual machine desktop, but instead I expose the notebook server so notebooks can be viewed in a browser on the host.
The architecture thus looks something like this:
I’d never used Vagrant to build a Linux desktop box before, so here are a few things I learned about and observed along the way:
- ubuntu-desktop naively installs a whole range of applications as well. I wanted a minimal desktop that contained just the simulator application (though I also added in a terminal). For the minimal desktop:

apt-get install -y ubuntu-desktop --no-install-recommends
- by default, Ubuntu requires a user to login (user: vagrant; password: vagrant). I wanted to have as simple an experience as possible, so wanted to log the user in automatically. This could be achieved by adding the following to the lightdm configuration file:

[SeatDefaults]
autologin-user=vagrant
autologin-user-timeout=0
user-session=ubuntu
greeter-session=unity-greeter
- a screensaver kept kicking in and kicking back to the login screen. I got round this by creating a desktop settings script (/opt/set-gnome-settings.sh):

#dock location
gsettings set com.canonical.Unity.Launcher launcher-position Bottom
#screensaver disable
gsettings set org.gnome.desktop.screensaver lock-enabled false

and then pointing to that from a desktop_settings.desktop file in the /home/vagrant/.config/autostart/ directory (I set execute permissions on the script and the .desktop file):

[Desktop Entry]
Name=Apply Gnome Settings
Exec=/opt/set-gnome-settings.sh
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Type=Application
- because the point of the VM is largely to run the simulator, I thought I should autostart the simulator. This can be done with another .desktop file in the autostart directory:

[Desktop Entry]
Name=V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Type=Application
X-GNOME-Autostart-enabled=true
- the Jupyter notebook server is started as a service and reuses the installation I used for the TM351 VM;
- I thought I should also add a desktop shortcut to run the simulator, though I couldn't find an icon to link to? Create an executable run_vrep.desktop file and place it on the desktop:

[Desktop Entry]
Name=V-REP Simulator
Comment=Run V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Icon=
Terminal=false
Type=Application
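Pulling those pieces together, the provisioning steps for the autostart machinery look something like this sketch (run here against a scratch directory standing in for /home/vagrant, so it can be tried outside the VM; the file contents are the ones shown above):

```shell
# Scratch directory standing in for the VM user's home directory
VMHOME=$(mktemp -d)

# GNOME autostart entries live in ~/.config/autostart/
mkdir -p "$VMHOME/.config/autostart"

# Autostart entry that applies the desktop settings script at login
cat > "$VMHOME/.config/autostart/desktop_settings.desktop" <<'EOF'
[Desktop Entry]
Name=Apply Gnome Settings
Exec=/opt/set-gnome-settings.sh
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Type=Application
EOF

# The .desktop file (and the script it points to) need execute permissions
chmod +x "$VMHOME/.config/autostart/desktop_settings.desktop"

ls "$VMHOME/.config/autostart/"
```

In the actual build script the same steps would run against /home/vagrant directly, with the files owned by the vagrant user.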
Here’s how it looks:
If you want to give it a try, comments on the build/install process would be much appreciated: ou-robotics-vrep.
I will also be posting a set of activities based on the RobotLab activities used in TM129, in anticipation of the possibility that we start using V-REP on TM129. The activity notebooks will be posted in the repo and via the associated uncourse blog if you want to play along.
One issue I have noticed is that if I resize the VM window, V-REP crashes… I also can’t figure out how to open a V-REP scene file from script (issue) or how to connect using a VM hostname alias rather than IP address (issue).
One of the issues with distributing software to distance education students is ensuring that the software package they are trying to install hasn’t been corrupted in some way during transport. For example, one of the ways we ship software to students is via USB memory stick. But in one course last year, it seems that some of the sticks were a bit dodgy, and the files wouldn’t install from them.
Which is where checksums come in.
If a student is having issues installing a piece of software, we can check the checksum of the distributed installer package to see if it matches the checksum of a pristine package. If it doesn’t, we know the problem is a corrupted installer package (rather than a problem downstream of that, for example).
So what is a checksum? Essentially, it’s a single number derived from all the bits in the file you’re generating the checksum for. If any bit in the file is changed, the checksum should change too.
So here are a couple of ways of generating checksums…
WINDOWS:
Download the Windows fciv (File Checksum Integrity Verifier) utility: https://support.microsoft.com/en-gb/kb/841290
Run a command of the form:
This will produce checksums using different hashing algorithms:
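The command itself is along the following lines (the installer filename here is a placeholder; fciv’s -both switch, per the Microsoft KB article, asks for both MD5 and SHA-1 digests):

```
fciv.exe -both myinstaller.exe
```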
MAC / LINUX:
On a Mac or Linux machine, you can generate the checksum using one of the following commands:
openssl sha1 PATH/FILE_CHECK.SUFFIX
openssl md5 PATH/FILE_CHECK.SUFFIX
openssl md5 ~/USERRELATIVEPATH/FILE_CHECK.SUFFIX
If we distribute a copy of the checksum for installer packages, along with the installer packages, assuming that the checksum is not corrupted, a student can check that the installer package is not corrupted by generating the checksum for their installer package and comparing it to the distributed one.
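As a minimal sketch of that check using openssl (the file and checksum names here are made up for the example):

```shell
# Publisher side: generate a reference checksum to ship alongside the package
printf 'installer payload\n' > pkg.dat            # stand-in for the real installer
openssl md5 pkg.dat | awk '{print $NF}' > pkg.dat.md5

# Student side: recompute the checksum locally and compare with the shipped one
local_sum=$(openssl md5 pkg.dat | awk '{print $NF}')
ref_sum=$(cat pkg.dat.md5)

if [ "$local_sum" = "$ref_sum" ]; then
  echo "OK: package matches the distributed checksum"
else
  echo "MISMATCH: package appears to be corrupted"
fi
```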
When it comes to support in the event of a problem, and a call to the help desk, then the first question from support can be: “Have you checked the checksum of the original package?” (Or we can prompt students to do this themselves as part of self-service support…)
Even better if we shipped a simple one-click file integrity checking utility that:
1) runs a checksum test on itself to check that it’s working;
2) runs a checksum test on the distributed package(s) to check that they are not corrupted.
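On the Linux side, such a utility might look something like the following sketch (file names are illustrative; md5sum -c checks each listed file against a shipped manifest of checksums; a real version would also ship a checksum for the checker itself, verified first, so a corrupted checker doesn’t give false reassurance):

```shell
#!/bin/sh
# Hypothetical one-click integrity checker (sketch)
cd "$(mktemp -d)"

# Toy "distribution": two packages plus a manifest of their checksums,
# as generated on the publisher's side before shipping
printf 'package one\n' > pkg1.dat
printf 'package two\n' > pkg2.dat
md5sum pkg1.dat pkg2.dat > manifest.md5

# The student-side check: recompute each checksum and compare to the manifest
if md5sum -c manifest.md5; then
  echo "All packages intact"
else
  echo "One or more packages corrupted - contact the help desk"
fi
```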
One of the issues I keep coming up against when trying to encourage folk to at least give Jupyter notebooks a try is that “it’s too X to install” (for negatively sentimented X…). One way round this is to use something like Binder, which allows you to create a Docker image from the contents of a public Github repository and run a container instance from that image on somebody else’s server, accessing the notebook running “in the cloud” via your own browser. A downside to this approach is that the public binder service can be a bit slow – and a bit flaky. And it probably doesn’t scale well if you’re running a class. (Institutions running their own elastic Binder service could work round this, of course…)
So here’s yet another possible way in for folk – O’Reilly Media’s LaunchBot (via O’Reilly Media CTO Andrew Odewahn’s draft article Computational Publishing with Jupyter) – though admittedly it also requires the possibly (probably?!) “too hard” requirement to install Docker as a pre-requisite. Which means it’s possibly doomed from the start in my institution…
Anyway, for anyone not in a Computing HE department, and who’s willing to install Docker first (it isn’t really hard at all unless your IT department has got their hands on your computer first: just download the appropriate Community Edition for your operating system (Mac, Windows, Linux) from here), the LaunchBot app provides a handy way to run Jupyter environments in a custom Docker container, “preloaded” with notebooks from a public Github repo.
By the by, if you’re on a Mac, you may be warned off installing LaunchBot.
Simply select the app in a Finder window, right click on it, and then select “Open” from the pop-up menu; agree that you do want to launch it, and it should run happily thereafter.
In a sense, LaunchBot is a bit like running a personal Binder instance: the LaunchBot app, which runs from the desktop but is accessed via a browser, allows you to clone projects from public Github repos that contain a Dockerfile, and perhaps a set of notebooks. (You can use this url as an example – it’s a set of notebooks from the original OU/FutureLearn Learn to Code for Data Analysis course.) The project README is displayed as part of the project homepage, and an option given to view, edit and save the project Dockerfile. The Dockerfile can then be used to launch an instance of a corresponding container using your own (locally running) Docker instance:
(A status bar in the top margin tracks progress as image layers are downloaded and Dockerfile commands run.)
The base image specified in the Dockerfile will be downloaded, the Dockerfile run, and I’m guessing links to all services running on exposed ports are displayed:
In the free plan, you are limited to cloning from public repos, with no support for pushing changes back to a repo. In the $7 a month fee based plan, pulls from private repos and pushes to all repos are supported.
From a quick play, LaunchBot is perhaps easier to use than Kitematic. Whilst neither of them supports launching linked containers using docker-compose, LaunchBot’s ability to clone a set of notebooks (and other files) from a repo, as well as the Dockerfile, makes it more attractive for delivering content as well as the custom container runtime environment.
I had a quick look around to see if the source code for LaunchBot was around anywhere, but couldn’t find it offhand. The UI is likely to be a little bit scary for many people who don’t really get the arcana of git (which includes me! I prefer to use it via the web UI ;-) and could be simplified for users who are unlikely to want to push commits back. For example, students might only need to pull a repo once from a remote master. On the other hand, it might be useful to support some simplified mechanism for pulling down updates (or restoring trashed notebooks?), with conflicts on the client side managed in a very sympathetic and hand-holdy way (perhaps by backing up any files that would otherwise be overwritten from the remote master?!). (At the risk of feature creeping LaunchBot, more git-comfortable users might find some way of integrating the nbdime notebook diff-er useful?)
Being able to brand the LaunchBot UI would also be handy…
From a distributed authoring perspective, the git integration could be handy. As far as the TM351 module team experiments in coming up with our own distributed authoring processes go, the use of a private Github repo means we can’t use the LaunchBot approach, at least under the free plan. (From the odd scraps I’ve found around the OU’s new OpenCreate authoring system, it supposedly supports versioning in some way, so it’ll be interesting to see how that works…) The TM351 dockerised VM also makes use of multiple containers linked using docker-compose, so I guess to use it in a LaunchBot context I’d need to build a monolithic container that runs all the required services (Postgres and MongoDB, as well as Jupyter) in a single image (or maybe I could run docker-compose inside a Docker container?).
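For reference, the multi-container arrangement is the sort of thing docker-compose describes; a hypothetical compose file along those lines (image names and port mappings are illustrative, not the actual TM351 configuration) might look like:

```yaml
# Hypothetical docker-compose.yml sketch - not the actual TM351 config
version: "2"
services:
  notebook:
    image: jupyter/base-notebook    # assumed Jupyter image
    ports:
      - "8888:8888"
    links:
      - postgres
      - mongodb
  postgres:
    image: postgres:9.5
  mongodb:
    image: mongo:3.2
```

A LaunchBot-friendly monolithic image would instead have to bake all three services into a single Dockerfile and start them from one entrypoint.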
[See also Seven Ways of Running IPython / Jupyter Notebooks, which I guess probably needs another update…]
PS this could also be a handy tool for supporting authoring and maintenance workflows around notebooks: nbval, a “py.test plugin to validate Jupyter notebooks”. I wonder if it’s happy to test the raising of particular errors/warnings, as well as cleanly executed code fragments? Certainly, one thing we’re lacking in our Jupyter notebook production route at the moment is a sensible way of testing notebooks. At the risk of complicating things further, I wonder how that might fit into something like LaunchBot?
PPS Binder also lacks support for docker-compose, though I have asked about this and there may be a way in if someone figures out a spawner that invokes docker-compose.
Somewhen around 2002-3, when we first ran the short course “Robotics and the Meaning of Life”, my colleague Jon Rosewell developed a drag and drop text based programming environment – RobotLab – that could be used to programme the behaviour of a Lego Mindstorms/RCX brick controlled robot, or a simulated robot in an inbuilt 2D simulator.
The environment was also used in various OU residential schools, although an update in 2016 saw us move from the old RCX bricks to Lego EV3 robots, and with it a move to the graphical Lego/Labview EV3 programming environment.
RobotLab is still used in the introductory robotics unit of a level 1 undergrad OU course, a unit that is itself an update of the original Robotics and the Meaning of Life short course. And although the software is largely frozen – the screenshot below shows it running under Wineskin on my Mac – it continues to do the job admirably:
- the environment is drag and drop, to minimise errors, but uses a text based language (inspired originally by Lego scripting code, which it generated to control the actual RCX powered robots);
- the simulated robot could be configured by the user, with noise being added to sensor inputs and motor outputs, if required, and custom background images could be loaded into the simulator:
- a remote control panel could be used to control the behaviour of the real – or simulated – robot to provide simple tele-operation of it. A custom remote application for the residential school allowed a real robot to be controlled via the remote app, with a delay in the transmission of the signal that could be used to model the signal flight time to a robot on the moon! The RobotLab remote app provided a display to show the current power level of each motor, as well as the values of any sensor readings.
- the RobotLab environment allowed instructions to be stepped through an instruction at a time, in order to support debugging;
- a data logging tool allowed real or simulated logged data to be “uploaded” and viewed as a time series line chart.
Time moves on, however, and we’re now starting to think about revising the robotics material. We had started looking at an updated, HTML5 version of RobotLab last year, but that effort seems to have stalled. So I’ve started looking for an alternative.
Robot Operating System (ROS)
Following on from a capital infrastructure bid a couple of years ago, we managed to pick up a few Baxter robots that are intended to be used in the “real, remote experiment” OpenSTEM Lab. (Baxter is also demoed at the level 1 engineering residential school.) Baxter runs under ROS, the Robot Operating System, and can be programmed using Python. Both 2D and 3D simulators are available for ROS, which means we could go down the ROS route as the programming environment for any revised level 1 course.
At this point, it’s also worth saying that the level 1 course is something of a Frankenstein course, including introductory units to Linux and networking, as well as robotics. If the course is rewritten, the current idea is to replace the Linux module with one on more general operating systems. This means that we could try to create a set of modules that all complement each other, yet stand alone as separate modules. For example, ROS works on a client server model, which allows us to foreshadow, or counterpoint, ideas arising in the other units.
The relative complexity of the ROS environment means that we can also use it to motivate the idea of using virtual machines for running scientific software with complex dependencies and rather involved installation processes. However, from a quick look at the ROS tools, they do look rather involved for a first intro course and I think would require quite a bit of wrapping to hide some of the complexity.
If you’re interested, here’s a quick run down of what needs adding to a base Linux 16.04 install:
The simulator and the keyboard remote need launching from separate terminal processes.
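For a flavour of what that pattern looks like, the standard ROS demo of it (assuming a ROS Kinetic install on Ubuntu 16.04 with the turtlesim package available) runs the 2D simulator and the keyboard teleop node from separate terminals:

```shell
# Terminal 1: start the ROS master
roscore

# Terminal 2: launch the 2D turtlesim simulator
rosrun turtlesim turtlesim_node

# Terminal 3: drive the simulated turtle from the keyboard
rosrun turtlesim turtle_teleop_key
```

Each node connects back to the master, which is the client server model mentioned above in action.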
A rather more friendly environment we could work with is the blockly based RobertaLab. Scratch is already being used in one of the new first level courses to introduce basic programming, of a sort, so the blocks style environment is one that students will see elsewhere, albeit in a rather simplistic fashion. (I’d argued for using BlockPy instead, but the decision had already been taken…)
In keeping with complementing the operating systems unit, we could use a Docker containerised version of RobertaLab to allow students to run RobertaLab on their own machines:
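Launching it might then be a one-liner along these lines (the image name and port here are assumptions for the sake of the example, not the actual published image):

```shell
# Hypothetical: run a containerised RobertaLab server and expose its web UI
docker run -d -p 1999:1999 --name robertalab openroberta/lab

# ...then point a browser at http://localhost:1999
```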
A couple of other points to note about RobertaLab. Firstly, it has a simulator, with several predefined background environments. (I’m not sure if new ones can be easily uploaded, but if not, the repo can be forked and new ones added from source.)
As with BlockPy, there’s the ability to look at Python script generated from the blocks view. In the EV3Dev selection, Python code compatible with the Ev3dev library is generated. But other programming environments are available too, in which case different code is generated. For example, the EV3lejos selection will generate Java code.
This ability to see the code behind the blocks is a nice stepping stone towards text based programming, although unlike BlockPy I don’t think you can go from code to blocks? The ability to generate code for different languages from the same blocks programme could also be used to make a powerful point, I think?
Whether or not we go with the robotics unit rewrite, I think I’ll try to clone the current RobotLab activities using the RobertaLab environment, and also add some extra value bits by commenting on the Python code that is generated.
By the by, here’s a clone of the Dockerfile used by exmatrikulator for building the RobertaLab container image:
In the original robotics unit, one of the activities looked at how a simple neural network could be trained using data collected by the robot. I’m not sure if Dale Lane’s Machine Learning for Kids application, which also looks to be written using a blockly style Scratch environment, could support a similar activity. It probably can’t be integrated with Open RobertaLab, but even if it isn’t, it would perhaps make a nice complement to both the use of OpenRobertaLab to control a simple simulated robot, and the use of Scratch in the other level 1 module as an environment for building simple games as a way of motivating the teaching of basic programming concepts.
No-one’s interested in Jupyter notebooks, but folk seem to think Scratch is absolutely bloody lovely for teaching programming to novice ADULTS, so I might as well go with them in adopting that style of interface…
Trying to get my thoughts in order and lay bare some of my assumptions…
Comments / sanity checking appreciated…
…there seems to be so much resistance in OU to Jupyter notebooks, when I’m seeing this sort of thing more and more….
Folk creating open educational resources to support their technical ramblings using IPython (which is to say, Jupyter) notebooks…
I just, …., whatever… #ffs
PS see also: Introducing learnr. I can just imagine what sort of response that would get… Whuurrr? Wossat? No idea…