DIT4C Inspired RobotLab Container

A couple of days ago I posted a quick demo of how to run an old OU Windows app under wine in a Docker container that exposed the app running on a graphical desktop via a browser.

One of the problems with that demo is that it doesn’t provide a way of uploading or downloading files, so unless the user runs the container with a volume mounted against something that does support file transfer, they can’t persist files outside the container.

I don’t know whether the audio works either (though I didn’t try…), but as that recipe was based on a container I’d previously used to run the Audacity audio editor, I’m guessing it should…?

Anyway… regarding the file transfer issue, I recalled putting together a container last year for my PhD student, who was looking for a portable way of demoing a desktop Ruby app. Digging that out, it was easily repurposed to run RobotLab. The base container is the DIT4C X11 container. The DIT4C project lost its funding last year, I think, but the repos and Docker images still exist and they provide a really great set of examples of prebuilt containerised apps. Exactly the sort of apps I’d want on my HE Library Digital App shelf. A bit dated now, perhaps, but when I get a chance I’ll start trying to refresh them.

The base Dockerfile is relatively straightforward:

# Base container
FROM dit4c/dit4c-container-x11:debian

# Required in order to add the backports repo
RUN apt-get update && apt-get install -y software-properties-common

# Install wine
RUN dpkg --add-architecture i386
RUN echo "deb http://httpredir.debian.org/debian jessie-backports main" | sudo tee /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y -t jessie-backports wine

# /var contains HTML site for container homepage
COPY var /var

# RobotLab windows files
COPY Apps/ /opt/Apps

# profile.d environment vars
COPY etc /etc

# Desktop shortcuts and application icons
COPY usr /usr
RUN chmod +x /usr/share/applications/robotlab.desktop
RUN chmod +x /usr/share/applications/neural.desktop
RUN chmod +x /usr/share/applications/remote.desktop

# Add items to the top toolbar on the desktop
RUN LNUM=$(sed -n '/launcher_item_app/=' /etc/tint2/panel.tint2rc | head -1) && \
  sed -i "${LNUM}ilauncher_item_app = /usr/share/applications/robotlab.desktop" /etc/tint2/panel.tint2rc && \
  sed -i "${LNUM}ilauncher_item_app = /usr/share/applications/neural.desktop" /etc/tint2/panel.tint2rc && \
  sed -i "${LNUM}ilauncher_item_app = /usr/share/applications/remote.desktop" /etc/tint2/panel.tint2rc

The local build / run / push to dockerhub process is trivial:

cp Dockerfile_dit4c_x11 Dockerfile
docker build -t psychemedia/robtolab2 .
docker run -d -p 8082:8080 psychemedia/robtolab2
docker push psychemedia/robtolab2

As to what it looks like, here’s the home page:

Files can be uploaded to / downloaded from the container using the File Management link:

The Linux Desktop Session link takes you to a Linux desktop. Several tools are available in the toolbar at the top of the desktop, including a terminal and the RobotLab application.

Clicking on the RobotLab icon runs it under wine – on first run we seem to get an update message. Maybe I need to run wine once during the image build so that the update step is handled before a user first launches the app?

As well as the RobotLab application, we can run other tools in the RobotLab suite, such as the Neural neural network package:

Note that I haven’t tested the student activities yet – this is still early days in just trying to work out what the tech requirements are and how the workflow / student user experience might play out…

Repo is here (it contains redundant RobotLab stuff such as drivers for the original Lego Mindstorms infra-red towers, but I haven’t yet worked out what can be dropped and what RobotLab requires…): ouseful-course-containers/ou-tm129-robotlab.

A container is also available on Dockerhub as ousefulcoursecontainers/ou-tm129-robotlab but I haven’t had a chance to check it yet… (it’s still in the build queue…).

The container can be imported into a desktop Docker environment using Kitematic:

robotlab_kitematic0

Once added, the container can also be run using Kitematic, with the user clicking on a link to be taken directly to an appropriate location in their web browser:

robotlab_kitematic

The desktop / Kitematic route also enables file sharing with a local directory mounted into the container (though by the looks of it, I may need to tidy that up a little?).

The pieces are slowly starting to come together…?

Fragment – TM351 Services Architected for Online Access

When we put together the original TM351 VM, we wanted a single, self-contained installable environment capable of running all the services required to complete the practical activities defined for the course. We also had a vision that the services should be capable of being accessed remotely.

With a bit of luck, we’ll have access to an OU OpenStack environment before too long that will let us start experimenting with a remote / online VM provision, at least for a controlled number of students. But if we knew that a particular cohort of students were only ever going to access the services remotely, would a VM be the best solution?

For example, the services we run are:

  • Jupyter notebooks
  • OpenRefine
  • PostgreSQL
  • MongoDB

Jupyter notebooks could be served via a single JupyterHub instance, albeit with persistence enabled on individual accounts so students could save their own notebooks.
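
For example, a minimal sketch of how per-user persistence might be configured, assuming a DockerSpawner-based hub (the image name and mount path here are illustrative, not a tested config):

# jupyterhub_config.py – minimal sketch, assuming DockerSpawner
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'jupyter/scipy-notebook'

# A named volume per user ({username} is expanded by DockerSpawner) keeps
# each student's notebooks across container restarts
c.DockerSpawner.volumes = {'jupyterhub-user-{username}': '/home/jovyan/work'}
c.DockerSpawner.notebook_dir = '/home/jovyan/work'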

Access to PostgreSQL could be provided via a single Postgres DB with students logging in under their own accounts and accessing their own schema.
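
As a sketch of what that provisioning might look like via psycopg2 (the names and passwords below are placeholders, not our actual setup):

# Hypothetical sketch: one login role and one schema per student
import psycopg2

conn = psycopg2.connect(dbname='tm351', user='admin', password='...')  # placeholder credentials
cur = conn.cursor()

# Create the student's login role, then a schema owned by (and named after) it
cur.execute("CREATE USER student123 WITH PASSWORD 'changeme'")
cur.execute("CREATE SCHEMA AUTHORIZATION student123")

conn.commit()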

Similarly – presumably? – for MongoDB (individual user accounts accessing individual databases). We might need to think about something different for the sharded Mongo activity, such as a containerised solution (which could also provide an opportunity to revisit the network partitioning activity I started to sketch out way back when).
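
Again as a sketch only (account and database names are made up), pymongo can create a user scoped to a single database:

# Hypothetical sketch: per-student database with a user restricted to it
from pymongo import MongoClient

client = MongoClient('mongodb://admin:password@localhost:27017')  # placeholder admin connection
db = client['student123_db']

# createUser is a standard MongoDB command; the role limits access to this database only
db.command('createUser', 'student123', pwd='changeme',
           roles=[{'role': 'readWrite', 'db': 'student123_db'}])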

OpenRefine would require some sort of machinery to fire up an OpenRefine container on demand, perhaps with a linked persistent data volume. It would be nice if we could use Binderhub for that, or perhaps DIT4C style infrastructure…
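
As a crude placeholder for that machinery, the docker Python SDK can already fire up a container with a linked volume on demand (the image name, volume name and mount path here are illustrative):

# Sketch: launch an OpenRefine container on demand with a persistent data volume
import docker

client = docker.from_env()
container = client.containers.run(
    'someuser/openrefine',     # illustrative image name
    detach=True,
    ports={'3333/tcp': None},  # expose OpenRefine's default port on a free host port
    volumes={'refine-student123': {'bind': '/data', 'mode': 'rw'}},  # hypothetical per-user volume
)
container.reload()
print(container.ports)  # see which host port was assigned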

R HTMLWidgets in Jupyter Notebooks

A quick note on displaying R htmlwidgets in Jupyter notebooks without requiring pandoc – there may be a more native way, but if not, this works as a workaround in the meantime:

# htmlwidgets provides saveWidget(); IRdisplay provides display_html() for IRkernel notebooks
library(htmlwidgets)
library(IRdisplay)
library(leaflet)

# Create a simple leaflet map widget
m = leaflet() %>% addTiles()

# selfcontained = FALSE writes supporting files alongside the page, avoiding the pandoc dependency
saveWidget(m, 'demo.html', selfcontained = FALSE)

# Embed the saved page in the notebook output via an iframe
display_html('<iframe src="demo.html"></iframe>')

PS and from the other side, using reticulate for Python powered Shiny apps.

Guacamole and Wine – Running Small Legacy Windows Apps Via a Browser

A sketch some time ago of Accessing GUI Apps Via a Browser from a Container Using Guacamole.

Today, we’re faced with trying to keep an old, old Windows app:

a) running;
b) available across various platforms

for another couple of presentations / another year of a course…

So what to do?

I had a look at the dockerfile from the above sketch, installed wine, and tweaked the launch script as per the embedded gist at the end of the post. The image is on dockerhub for now as psychemedia/robtolab.

Here it is running on Digital Ocean via DockerCloud…

First select the app (I need to figure out how to allow selection of different ones…)

Then run it…

And in action…

UPDATE: out of the can, images for changing the simulator background can be found here: Z:\opt\Apps\RobotLab\images

So… University of the Cloud, right?!

PS the next step is to see if I can get something like the above running via binderhub under nbserverproxy eg as per https://github.com/betatim/openrefineder See maybe https://github.com/jupyterhub/binder/issues/87

Scripted Forms Magic

One of the things I think we could be looking at more in open ed in general, and the OU in particular, is authoring environments that allow educators to create their own interactives.

A significant blocker is the range of skills required to create and deploy such things:

  • activity design in general;
  • front end / UI coding (HTML / js);
  • back end coding (application logic, js, or py, for example);
  • runtime management (js can run in a browser; R or py likely to require a backend kernel to execute the code).

One of the things that appeals to me about the Jupyter and RStudio environments / ecosystems is the availability of packages that support end-user development by wrapping high level UI components as templated widgets that can be automatically generated from code. (See for example my own attempts at creating an HTML hexjson widget for rendering a hex map given an R dataframe.)

One of the things I started to come round to when creating various binderhub demos showing how Jupyter notebooks can be used to generate and embed rich media assets for different topic areas (see slides 25-33 of the Dev.ac.uk presentation embedded below, for example) was that a good collection of IPython magics could provide the basis for end-user developed online materials featuring embedded interactives.

For example, the py / folium magic I started working on makes it relatively straightforward to create an embedded map within a Jupyter notebook, that can then be saved and rendered as a standalone HTML page featuring an embedded interactive map.
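
The actual folium magic does rather more, but the underlying pattern is simple; here’s a toy line magic along the same lines (the %folium_map name and signature are made up for illustration):

# Toy sketch of a map-embedding line magic; run inside IPython / Jupyter
import folium
from IPython.core.magic import register_line_magic

@register_line_magic
def folium_map(line):
    """Render a folium map centred on 'lat,lon', e.g. %folium_map 52.03,-0.71"""
    lat, lon = (float(x) for x in line.split(','))
    return folium.Map(location=[lat, lon], zoom_start=12)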

It struck me that the magic should also work in the context of ScriptedForms, eg as per Creating Simple Interactive Forms Using Python + Markdown Using ScriptedForms + Jupyter.

And it does…

The markdown (text) file, scriptedforms_magic.md, on the left can be fired up as the HTML + interactive form on the right with a single command:

scriptedforms scriptedforms_magic.md

You can also do “live development” by editing and saving the markdown and then reloading the browser page.

By developing a sensible set of IPython magics, authors with some limited technical skills should be able to write enough “code”, as per the “code” in the markdown document above, to:

  • identify form elements;
  • bind those form elements as parameters in an IPython magic call.

After all, we keep saying everyone should be able to code… and that’s a level of coding where you can be immediately productive…

Datasette ClusterMap Plugin – Querying UK Food Standards Agency (FSA) Food Hygiene Ratings Open Data

Earlier today, Simon Willison announced a new feature in the datasette landscape: plugins. I’ve not had a chance to explore how to go about creating them, but I did have a quick play with the first of them – datasette-cluster-map.

This looks through the columns of a query results table and, if it finds latitude and longitude columns, displays each row as a marker on an interactive leaflet map. Nearby markers are clustered together.

So I thought I’d have a play… Assuming you have python installed, run the following on the command line (omit the comments). The data I’m going to use for the demo is UK Food Standards Agency (FSA) food hygiene ratings opendata.

# Install the base datasette package
pip install datasette

# Install the cluster map plugin
pip install datasette-cluster-map 

# Get some data - I'm going to grab some Food Standards Agency ratings data
# Install ouseful-datasupply/food_gov_uk downloader
pip install git+https://github.com/ouseful-datasupply/food_gov_uk.git
# Grab the data into default sqlite db - fsa_ratings_all.db
oi_fsa collect

# Launch datasette
datasette serve fsa_ratings_all.db

This gives me a local URL where I can view the datasette UI in a browser (though it didn’t open the page automatically in the browser?). For example: http://127.0.0.1:8001

By default we see a listing of all the tables in the database:

I want the ratingstable table:

We can use selectors to query into the table, but I want to write some custom queries and see the maps:

So – what can we look for? How about a peek around Newport on the Isle of Wight, whose postcodes fall roughly in the PO30 district:

Looking closely, there don’t seem to be any food-rated establishments around the Isle of Wight prisons?

This is a bit odd, because I’ve previously used FSA data as a proxy for trying to find prison locations. Here’s a crude example of that:

So… all good fun, and no coding really required, just glueing stuff together.

Here are some more example queries:

# Search around a postcode district
SELECT BusinessName, BusinessType, RatingValue, AddressLine1, AddressLine2, AddressLine3, Latitude, Longitude FROM ratingstable WHERE postcode LIKE 'PO30%'

# Crude search for prisons
SELECT BusinessName, BusinessType, RatingValue, AddressLine1, AddressLine2, AddressLine3, Latitude, Longitude FROM ratingstable WHERE (AddressLine1 LIKE '%prison%' COLLATE NOCASE OR BusinessName LIKE '%prison%' COLLATE NOCASE OR AddressLine1 LIKE '%HMP%' COLLATE NOCASE OR BusinessName LIKE '%HMP%' COLLATE NOCASE)

# Can we find Amazon distribution centres?
SELECT BusinessName, BusinessType, RatingValue, AddressLine1, AddressLine2, AddressLine3, Latitude, Longitude FROM ratingstable WHERE AddressLine1 LIKE 'Amazon%' OR BusinessName LIKE 'Amazon%'

# How about distribution centres in general?
SELECT BusinessName, BusinessType, RatingValue, AddressLine1, AddressLine2, AddressLine3, Latitude, Longitude FROM ratingstable WHERE AddressLine1 LIKE '%distribution%' COLLATE NOCASE OR BusinessName LIKE '%distribution%' COLLATE NOCASE

# Another handy trick is to get your eye in to queryable things by looking at where
# contract catering businesses ply their trade. For example, Compass
SELECT BusinessName, BusinessType, RatingValue, AddressLine1, AddressLine2, AddressLine3, Latitude, Longitude FROM ratingstable WHERE AddressLine1 LIKE '%compass%' COLLATE NOCASE OR BusinessName LIKE '%compass%' COLLATE NOCASE

One thing I could do is share the database via an online UI. datasette makes this easy – datasette publish heroku fsa_ratings_all.db would do the trick if you have Heroku credentials handy, but I’ve already trashed my monthly Heroku credits…

Creating Simple Interactive Forms Using Python + Markdown Using ScriptedForms + Jupyter

Just… wow…

Every time I miss attending the OER conf (which is most years, which is stupid because all the best peeps are there) I try to assuage my guilt by tinkering with some OpenLearn stuff.

This year, I revisited my OpenLearn XML scraper – first sketch notebook here – scraping the figure data into a SQLite db that I then popped onto Heroku as a datasette (a single command line command – datasette publish heroku openlearn.sqlite – posts the sqlite database to Heroku as a running app, so thanks to Simon Willison, it’s not hard, right?).

That’s okay as far as it goes – there are some notes in the notebook about more nuggets that can be pulled from the OpenLearn OU-XML – but it falls short of giving a demo gallery of images.

The latest version of datasette supports plugins, as well as the original templating, which means I should be able to put together a simple gallery viewer, allowing figure caption / description data to be searched then showing the results with the corresponding images alongside. But there’s still learning for me to be done there (I still don’t grok jinja templates) and I wanted a quick win…

Tinkering with Simon Biggs’ ScriptedForms Jupyter hack has been on my to do list for a bit, and with its markdown + python interactive app scripting, it looked like it contained the promise of a quick win. And it did…

For example, here’s a markdown script in a file openlearnq.md:

# OpenLearn Images

<section-start>
```python

from IPython.display import Image

import pandas as pd
import sqlite3
conn = sqlite3.connect('openlearn.sqlite')
```
</section-start>


<section-live>
<variable-string>image_text</variable-string>
```python
q="SELECT * FROM xmlfigures WHERE caption LIKE '%{q}%' LIMIT 10".format(q=image_text)
pd.read_sql(q, conn)
```
</section-live>

and here’s the running app (launched from the command line as scriptedforms openlearnq.md).

I have a github repo here to try to run this via Binderhub: Binder

PS as to setting up the github repo, it was done without git: the repo was created online, the binder/ dir files (cribbed from SimonBiggs/scriptedforms-examples) were created and edited online via the Github web UI, and the openlearn-example.md and openlearn.sqlite files were automatically uploaded once I’d dragged them from my desktop onto the repo web page, leaving it to me to just commit them (which I forgot the first time!). See also: Easy Web Publishing With Github

PPS I’m also wondering if it would be possible to run datasette in the same Binder container and then query the sqlite db via the datasette API service?
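
If that did work, the database could be queried from notebook code via datasette’s JSON API, along these lines (the URL assumes datasette serve openlearn.sqlite is listening on port 8001, as per the defaults above):

# Sketch: query a running datasette instance via its JSON API
import requests

url = 'http://localhost:8001/openlearn/xmlfigures.json'
rows = requests.get(url).json()['rows']  # datasette returns the current page of rows as JSON
print(rows[:3])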