Quick First Look At Moodle CodeRunner

One of the tools we have to support programming activities in our Moodle VLE is a CodeRunner backed interactive question type.

From the blurb, CodeRunner is “a free open-source question-type plug-in for Moodle that can run program code submitted by students in answer to a wide range of programming questions in many different languages”.

Of interest to me on the courses I’m involved with is support for Python3, SQL (or at least, the dialect supported by SQLite), and R, which looks like it has hacky support via a command-line call to R from a Python3 question type…

The question set up includes an execution environment selection, but I’m not sure how easy it is to define bespoke ones (e.g. a Python 3 environment with particular packages preinstalled):

The R support looks like it’s not offered natively, but seems to be hacked together via a system call from the Python environment:
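
As a purely illustrative sketch of what that hack amounts to (this is not the actual CodeRunner question template), a Python3 question could write the submitted R code out to a file and shell out to Rscript, printing whatever comes back for the output match; the student_code string here is made up:

import subprocess
import tempfile

student_code = 'cat(sum(1:10), "\\n")'  # hypothetical student answer

# Write the submission to a temporary .R file...
with tempfile.NamedTemporaryFile("w", suffix=".R", delete=False) as f:
    f.write(student_code)
    script_path = f.name

# ...then run it with Rscript and echo its stdout for the expected-output match
result = subprocess.run(["Rscript", "--vanilla", script_path],
                        capture_output=True, text=True, timeout=30)
print(result.stdout, end="")
if result.returncode != 0:
    print(result.stderr)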

(I guess that means we could also hack a way of running code against an arbitrary Jupyter kernel?)

A slot is provided for a valid example answer, but it doesn’t look like there’s a way I can interactively edit and test that code (a simple terminal onto the underlying execution environment would be really handy). (The jupytergraffiti Jupyter notebook extension has some interesting ideas about inline terminals and workflows around a similar sort of use case, in which the contents of a code cell are saved into a Python file that is then executed from a terminal.)

I can save the code and have it validated automatically, but that’s not really interactive. (Also, I’m not sure about the semantics of ‘Validate on save’? Does that mean it runs the code against the defined tests?)

The test definitions also look like they don’t let me interactively test them? It’s also not clear where I’m expected to pick up the Expected output from. I’m guessing this is used as an exact match string, so, erm, I really need it to be right? That cell really should be automatically populated by running the test case against a correct answer? Human hands should have nothing to do with it…
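
For what it’s worth, here’s a sketch of the sort of helper I have in mind, run on my own machine rather than inside CodeRunner: execute the model answer plus each test’s code and capture the output that would go in the Expected output box (the answer and tests below are invented for illustration):

import subprocess

model_answer = "def double(x):\n    return 2 * x\n"
tests = ["print(double(2))", "print(double(-3))"]

for test in tests:
    completed = subprocess.run(
        ["python3", "-c", model_answer + test],
        capture_output=True, text=True, timeout=10)
    # This captured string is what would be pasted (or scripted) into the
    # question's Expected output field for an exact match.
    print(repr(completed.stdout))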

I’ve not got far enough into it yet to know if I can call on arbitrary packages that need installing on top of the base environment (I suspect I can’t?).

It looks like there is an opportunity to provide files that are available at run time…

so I guess if I upload a Python package zip file there I might be able to install from it?
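
By way of a sketch (untested inside CodeRunner itself), if a package sdist were uploaded as a support file — the mypkg.zip name below is made up — the question’s template code might try installing it into a local directory and putting that directory on sys.path before the student code runs:

import subprocess
import sys

# Install the uploaded archive into a throwaway directory...
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--quiet",
     "--target", "./localpkgs", "./mypkg.zip"],
    check=True)

# ...and make that directory importable for the rest of the run
sys.path.insert(0, "./localpkgs")
import mypkg  # assuming the install worked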

Hmm… nearly…. so it looks like we could hack a way round package requirements by tweaking settings… but are things like time limits defined globally? Or at least, at a level above the question type level? (What if I have a question for which I know any legitimate answer will take a longish time to execute?)

I also haven’t figured out how to properly inspect the test environment:

Co-opting a question preview as an error displaying terminal seems really ineffective? (There has to be a better way, surely?)

Developing sample answers / tests in my own environment means I need to make sure my environment is exactly the same as the one used by CodeRunner, and even then, copying across exact-match expected answers is fraught with danger?

Keeping student environments (where students might try out sample code before submitting it) and test environments in sync could also be a real issue. Our TM351 VM environment has maybe 30 or 40 custom packages installed, in various known versions (although there’s nothing to stop students updating them), which may change year on year. Trying to use the default CodeRunner Python3 environment is not likely to work for anything other than trivial questions that don’t use the packages that form the core part of the teaching.

One of the things it would be useful for us to test is chart generation, but it’s not obvious that that would work within the CodeRunner context. (We could get students to generate a py/matplotlib or R/ggplot2 chart (that is, a chart object) and then try to introspect on that as part of the tests, I suppose, but the defined tests would be at a state level students are never exposed to? Or we could maybe try to take a hash of an image file against a correct answer image file hash?)
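
To make that concrete, here are sketches of both approaches, assuming a matplotlib answer (the “answer” chart is stood in for below; none of this is CodeRunner-specific):

import hashlib
import matplotlib
matplotlib.use("Agg")              # headless backend for a test environment
import matplotlib.pyplot as plt

# Stand-in for the student's answer: a figure left lying around at the end
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_title("Squares")

# 1. Introspect the chart object's state (the "state level" students never see)
line = ax.get_lines()[0]
assert list(line.get_ydata()) == [0, 1, 4]
assert ax.get_title() == "Squares"

# 2. Or hash a rendered image and compare it against a reference hash
#    (fragile: any rendering difference between environments changes the hash)
fig.savefig("answer.png", dpi=72)
with open("answer.png", "rb") as f:
    print(hashlib.md5(f.read()).hexdigest())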

Code execution itself seems to be handled via something called Jobe (Job Engine), “a server that supports running of small compile-and-run jobs in a variety of programming languages … developed as a remote sandbox for use by CodeRunner”. This looks to largely be a solo project, although I note some commits from the OU, which is good to see. One of the things I’ve been repeatedly told is we can’t trust solo code projects for our own production use cases (too risky / too unsustainable), so, erm….

(At this point, I should declare I lobbied early on for us to look at using CodeRunner, I think round about the time we were first looking at Jupyter notebooks (or IPython notebooks, as they were then). By the time it arrived, our course needs had moved on as we developed the course’s computational environment, and if anything, nbgrader was looking like a better bet: CodeRunner didn’t seem flexible enough on the back end, or as an environment for naturally (i.e. quickly and easily) developing and testing questions. I think there are various other test regimes out there that have been demonstrated for use in an academic quiz/test/automated assessment context, and I need to do a quick overview of them all to see what they offer…)

It does look like you’re largely stuck with the provided environments though, which is less than useful, particularly when considered across a range of courses at different levels, each with a different focus, each with its own environmental context needs.

Once again, I wonder if anyone has looked at using a Jupyter powered backend, rather than Jobe, which would give access to all the Jupyter kernels, as well as custom environments? This could make CodeRunner a bit more useable for us, and would allow bespoke course/presentation code environments to be more easily defined.
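
For the sake of argument, a minimal sketch of what a Jupyter-kernel-backed grader might do — send the submitted code to any installed kernel by name and collect what it prints — using the jupyter_client API (the submitted code below is made up):

from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")   # could equally be "ir", etc.
outputs = []

def collect(msg):
    # Keep anything the kernel writes to stdout/stderr
    if msg.get("msg_type") == "stream":
        outputs.append(msg["content"].get("text", ""))

kc.execute_interactive("print(sum(range(10)))", output_hook=collect)
print("".join(outputs))   # this is what would be matched against the expected output

kc.stop_channels()
km.shutdown_kernel()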

Creating Team and Individual WordPress Blogs From Your VLE Via An LTI WordPress Plugin

Via a series of Twitter posts, of which this is the first, from @ammienoot, the self-styled “Edtech lady leader @EdinburghUni”, I learn of uoe-dlam/ed-lti, “Learning Tools Interoperability (LTI) integration for creating WordPress blogs with appropriate user roles based on roles set within the Virtual Learning Environment (VLE)”.

The ed-lti WordPress plugin is an LTI Tool Provider, called from a VLE LTI Consumer such as Moodle, that:

allows Virtual Learning Environment (VLE) users to create blogs on a WordPress multisite installation through the use of Learning Tools Interoperability (LTI) tools. The plugin is designed to make it easy to integrate WordPress with a VLE course to provide an individual or group working space for students. It isn’t designed to support course materials on an external WordPress site being included in the VLE.

FWIW, LTI is not something I’m really familiar with (to do, to do…; here are the docs), but from the README it appears that the plugin will map VLE roles (how does LTI expose those, I wonder?) onto appropriate WordPress roles.
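
(Partially answering my own question: an LTI 1.x launch POSTs a roles parameter — comma-separated values such as Instructor or Learner, or the longer urn:lti:role:… forms — so for the course-blog case described below the mapping is presumably something along these lines; this is illustrative Python, not the plugin’s actual PHP:)

def wordpress_role(lti_roles):
    """Map an LTI launch's roles parameter onto a WordPress role."""
    roles = [r.strip().lower() for r in lti_roles.split(",")]
    if any("instructor" in r for r in roles):
        return "administrator"   # teachers administer the course blog
    return "author"              # students post to it

print(wordpress_role("urn:lti:role:ims/lis/Instructor"))  # administrator
print(wordpress_role("Learner"))                          # author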

(Thinks… hmmm… I’m pretty sure we use the JupyterHub LTI Authenticator from our Moodle installation? I’m not sure if that’s using the Moodle LTI external tool on the Moodle side, or what???)

Anyway, the WordPress plugin seems to support a couple of different use cases. In particular, it can manage:

  • group course blogs in the form of a single blog shared by all students on a course; it seems that this can also be used to support “sub-course” group blogs, e.g. for tutor groups, by dropping an instance of the blog tool into each group and using VLE permissions to restrict access to the group and hence to the blog.
    • The first VLE user to click on a course blog tool link will be taken to the WordPress multisite install and a blog will be created for the course. The user will also be made a member of the blog.
    • Any subsequent users that click on the link are added to the blog as members.
    • Teachers on the course will get the WordPress Administrator role and students will be added as WordPress Authors.
  • individual student blogs for each student on a course.
    • If a VLE user with the student role clicks on a student blog tool link they are taken to the WordPress multisite install and a blog is created for them. All blogs are cloned from a master template so you can pre-configure the set up that students receive. The student is added to the blog with the WordPress Administrator role.
    • Teachers on the course who follow the student tool link will be taken to a WordPress page with a list of links to all the student blogs associated with the course.
    • If the Teacher clicks on one of the links, they are given the WordPress Author role and taken to the home page of the blog.

In terms of docs, it’d be really nice if there were a simple architectural diagram or two… plus a couple of screenshots showing what the integration looks like to the user… Anne-Marie’s tweeted presentation also had several screenshots which would be handy in a blog post, or even in the docs…

I think we’ve had issues before sorting out WordPress blogs for particular student activities on at least one of our courses, but I’m not sure how, or even if, they were resolved. If it was an auth issue, or a user management issue, this sort of approach might help?

By the by, Edinburgh are also behind Noteable, a Jupyter notebook service run by Edina that looks likely to be offered as a commercially hosted service. I’m not sure how VLE integration is likely to work for that and whether tools are in place to support similarly convenient user account creation, single sign-on, and personal file persistence? But I wonder if the Jupyter multi-user solution would look something like this WordPress multi-user one, at least architecturally?

PS the OU Moodle / JupyterHub user journey is as follows. In the VLE, a link (I think this is the Moodle External tool LTI link?):

which logs the student in to a JupyterHub (temporary) notebook server launch page:

Simples… Though we haven’t sorted out persistent notebooks for students in their own JupyterHub account yet, let alone one linked to their Moodle / OU single sign-on credentials.

PPS in passing, there is also Jupyter LTI integration available for Canvas.

PPPS Now I’m wondering what a Docker launching LTI Provider might look like? This LTI tool provider for sandboxed Docker Containers maybe? (although that example is docs-less…)

Related: [Fragment] Jupyter Notebooks and Moodle

Less Related: Using ThebeLab to Run Python Code Embedded in HTML Via A Jupyter Kernel

Running Legacy Windows Desktop Applications Under Wine Directly in the Browser Via XPRA Containers

Okay, so here’s another way of trying to run legacy Windows applications under Wine in a Docker container via the browser.

This variant improves on Running RobotLab (Legacy Windows App) Remotely or Locally in a Docker Container Under Wine Via RDP by not requiring the user to update Wine and by launching directly into either the RobotLab application or the Neural application.

If you have Docker running, you should be able to just type:

#Run RobotLab (default)
docker run --name tm129robotlab --shm-size 1g -p 3395:10000 -d ousefuldemos/tm129robotics-xpra-html5

#Run RobotLab (explicitly)
docker run --name tm129robotlabx --shm-size 1g -p 3396:10000 -e start=robotlab -d ousefuldemos/tm129robotics-xpra-html5

#Run Neural (explicitly)
docker run --name tm129neuralx --shm-size 1g -p 3397:10000 -e start=neural -d ousefuldemos/tm129robotics-xpra-html5

Here’s the Dockerfile (also see the repo):

#This container has been removed
#and the original repo archived (it used an old Linux base container)
#FROM lanrat/docker-xpra-html5

#I forked the lanrat/docker-xpra-html5 and rebuilt it using ubuntu:bionic
#https://github.com/ouseful-backup/docker-xpra-html5
FROM ousefuldemos/docker-xpra-html5

USER root

#Required to add repo
RUN apt-get update && apt-get install -y software-properties-common wget

#Install wine
RUN dpkg --add-architecture i386

RUN wget -qO- https://dl.winehq.org/wine-builds/winehq.key | apt-key add -
RUN apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main'
RUN apt update && apt-get install -y --install-recommends winehq-stable

#Install the wine packages wine wants to load if they aren't already there
#There are lots of warnings in the install but they seem to work in use?
RUN mkdir -p /home/user/.cache/wine
RUN wget http://dl.winehq.org/wine/wine-mono/4.8.1/wine-mono-4.8.1.msi -O /home/user/.cache/wine/wine-mono-4.8.1.msi
RUN wget http://dl.winehq.org/wine/wine-gecko/2.47/wine_gecko-2.47-x86.msi -O /home/user/.cache/wine/wine_gecko-2.47-x86.msi
RUN wget http://dl.winehq.org/wine/wine-gecko/2.47/wine_gecko-2.47-x86_64.msi -O /home/user/.cache/wine/wine_gecko-2.47-x86_64.msi

USER user
RUN wine msiexec /i /home/user/.cache/wine/wine_gecko-2.47-x86_64.msi
RUN wine msiexec /i /home/user/.cache/wine/wine_gecko-2.47-x86.msi
RUN wine msiexec /i /home/user/.cache/wine/wine-mono-4.8.1.msi

USER root

#Use the recipe in https://blog.ouseful.info/2019/03/11/running-microsoft-vs-code-remotely-xpra-and-rdp/
#for starting with RobotLab

#Copy over Win application folders
COPY Apps/  /opt/Apps


#Add some start commands

ADD robotlab.sh /usr/local/bin/robotlab
RUN chmod +x /usr/local/bin/robotlab

ADD neural.sh /usr/local/bin/neural
RUN chmod +x /usr/local/bin/neural



#Pulseaudio also has a switch in cmd
#Can't get this working atm...
#Does it even make sense to try?
#i.e. can XPRA HTML be used to play audio in a browser anyway?
#RUN apt-get install -y pulseaudio

#Go back to user...
USER user


ENV start robotlab

#Start with robotlab
CMD xpra start --bind-tcp=0.0.0.0:10000 --html=on  --exit-with-children --daemon=no --xvfb="/usr/bin/Xvfb +extension Composite -screen 0 1920x1080x24+32 -nolisten tcp -noreset" --pulseaudio=no --notifications=no --bell=no --start-child=${start}

#Example image pushed as 
#docker build -t ousefuldemos/tm129robotics-xpra-html5 .
#Default runs robotlab
#docker run --name tm129x --shm-size 1g -p 3395:10000 -d ousefuldemos/tm129robotics-xpra-html5
#docker run --name tm129x --shm-size 1g -p 3395:10000 -e start=robotlab -d ousefuldemos/tm129robotics-xpra-html5

One thing I’ve started wondering now is: could we run this via a Jupyter notebook UI using jupyter-server-proxy (I tried, and it doesn’t seem to work atm: the proxy just goes into an infinite redirect loop?); or launch it as a standalone container using JupyterHub DockerSpawner and a Dockerfile shim to change the start CMD so things start on port 8888 (a naive attempt at that didn’t seem to work either)?
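
For the record, the jupyter-server-proxy route involves a config fragment along these lines (a sketch, not a working recipe — as noted, my attempt just looped on redirects), dropped into a jupyter_notebook_config.py in the container:

c.ServerProxy.servers = {
    "robotlab": {
        # Start xpra's HTML5 server on whatever port the proxy hands us
        "command": [
            "xpra", "start",
            "--bind-tcp=0.0.0.0:{port}",
            "--html=on", "--daemon=no",
            "--start-child=robotlab", "--exit-with-children",
        ],
        "timeout": 60,
        "launcher_entry": {"title": "RobotLab"},
    }
}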

At the very least, this seems to offer a reasonably natural way of launching a containerised desktop application directly into the browser?

(It would be useful to know if PulseAudio can be used to play sound from a container launched on something like Digital Ocean through the XPRA HTML5 desktop viewed in a browser before I waste any more time chasing that, and if so, it would be really handy to see a minimal working example Dockerfile ;-)

I’m guessing it should also work in the Google Cloud Run serverless container context [howto]? I still need to try this out…

The Long Road From Proof of Concept / Quick Demo Through Reference Architecture to Production System

I tinker at the level of proof of concept, playful demo and half-hour hack (when I try things out, it’s my intention that I should be able to make some good progress and get something running in half an hour; it may end up taking an hour, a couple of hours, half a day, even a couple of days if I get really obsessed/frustrated and think it’s worth spending that extra time (?!) on, but the initial intention typically is: could I get something working to proof of concept level quickly?)

As I’ve written before, one reason is funnels: if it takes 3 weeks to try something out, not many people will get to try it out. If you see something new in a tweet and it takes ten minutes to try, you might. And from that, whatever the thing is might get traction more widely if within that 10 minutes you see enough promise to want to spend more time on the thing. Or it might just help you on a temporary problem, and you can use it, move on, drop it, perhaps remembering it as yet another of those weirdly shaped screwdrivers that only fits very peculiarly headed screws, but is useful for them nonetheless.

Through trying lots of things out you also get a feel for what’s new, what’s interesting, what’s more of the same, what’s actually different. Downes knows this too…

So, playful demos. I spent a chunk of time last night trying to launch an OpenRefine container directly from JupyterHub using DockerSpawner. (It didn’t work.) My thinking is that being able to launch arbitrary containers from behind JupyterHub means that have-a-go educators could co-opt JupyterHub as a multi-user front end to launch anything in a container that returns something on port 8888. (I’m still not sure what JupyterHub DockerSpawner requires of a container it launches (is it just an http response on port 8888?) or what it sends to the container when it tries to launch it (does it send a command to append to an ENTRYPOINT? does it send environment variables in?). If you can point me to docs, or transparent debug examples/logs, that’d be much appreciated.)
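
For anyone further along than me, this is the level of jupyterhub_config.py sketch I’m working from (the image and network names are placeholders). My current understanding — which may be wrong — is that the spawned container isn’t just “anything answering on port 8888”: by default DockerSpawner expects it to run jupyterhub-singleuser, and it passes in JUPYTERHUB_* environment variables (API URL, token, service prefix) that the server uses to register back with the hub.

c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "ousefuldemos/some-notebook-image"   # placeholder image
c.DockerSpawner.network_name = "jupyterhub"                  # placeholder network
c.DockerSpawner.remove = True        # clean up stopped containers
c.Spawner.http_timeout = 120         # give slow-starting containers a chance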

I’ve not really used JupyterHub and didn’t want to use The Littlest JupyterHub (although I guess you can change that to use DockerSpawner? Hmmm… Bah…), so it also provided an opportunity for me to find a (quick) way of firing up JupyterHub servers.

I ended up following the recipe for the simple example in the jupyterhub/dockerspawner repo. It comes with these caveats:

This is a simple example of running jupyterhub in a docker container.

This shows the very basics of running the Hub in a docker container (mainly setting up the network). To run for real, you will want to:

– …

jupyterhub-deploy-docker does all of these things.

So: enough to get up and running, no more than that… That’s the level I tend to work at.

One of the nice things about the Jupyter ecosystem is that I can get started at this quick level, and produce containers that can be launched by production systems in just the same way as from my quick local demo. I might even be able to tinker around with JupyterHub customisation, tweaking style templates and so on to explore different ways of customising the presentation, which might also be relevant to the final production system.

The jupyterhub/jupyterhub-deploy-docker setup, which provides a [r]eference deployment of JupyterHub with docker goes a bit further than I need for simple personal testing / proof of concept and requires more investment in setup time.

As a reference deployment, the README suggests use cases include (but are not necessarily limited to):

  • creating a JupyterHub demo environment that you can spin up relatively quickly.
  • providing a multi-user Jupyter Notebook environment for small classes, teams, or departments.

The reference deployment is useful for me because it provides a logical diagram /  architectural example showing what other things need to be considered for a production system rather than a plaything, even if the reference deployment does not demonstrate them at production strength.

(Note to self: it would be useful to annotate the reference deployment with commentary about why each piece is there and what sorts of criteria you might bring to bear when deciding one way of implementing it versus another.)

It also comes with a disclaimer:

This deployment is NOT intended for a production environment. It is a reference implementation that does not meet traditional requirements in terms of availability nor scalability.

If you are looking for a more robust solution to host JupyterHub, or you require scaling beyond a single host, please check out the excellent zero-to-jupyterhub-k8s project.

(It might also be worth noting that for a small scale production use-case, The Littlest JupyterHub (TLJH) [jupyterhub/the-littlest-jupyterhub], a “[s]imple JupyterHub distribution for 1-100 users on a single server” might also be appropriate?)

The Zero to JupyterHub with Kubernetes [jupyterhub/zero-to-jupyterhub-k8s] deployment adds further complexity, providing a comprehensive set of “[r]esources for deploying JupyterHub to a Kubernetes Cluster” (the docs are actually targeted at Google Kubernetes Engine, but we (well, not me, obvs..;-) managed to use them to bootstrap an Azure install). This is moving into production territory now (we use this for our TM112 disposable notebook optional activity), although by following the instructions, if you have a couple of hours, or perhaps half a day, to start with (rather than half an hour to start with…) plus access to a Kubernetes cluster, you can still give it a spin. (I tried last year to get it running with a local k8s cluster running via Docker on my local machine, but couldn’t get it to work at the time. It may be worth trying this again now, and finding, or posting, a recipe for doing this…)

A large part of my frustration in working at the OU arises from not being able to explore technology ideas more rapidly. It’s easy to be quick at the proof of concept level, harder to get things into production. I know that. But things like the Jupyter ecosystem provide an opportunity for end-user-development in one part of the ecosystem (eg within a container launched by Dockerspawner, or within a notebook via notebook extensions) whilst another part gets the production side right. Or even just facilitates the playfulness.

For example, yesterday I spotted this spacy-course [repo]. If you haven’t come across it, spacy is a really powerful, easy to use, natural language processing library.

The course is split into chapters, with sections in chapters and pages in sections.

Some of the sections are slide displays, with central teaching points and commentary on the side. (Methinks it should be easy enough to add an audio player to read the script on the side, which could be quite interesting?)

Other sections, containing practical activities, are arranged as collapsible elements.

The course supports code execution using MyBinder (it looks like it uses juniper.js to manage this; I wonder how easy it would be to use Voila instead?):

From looking at the repo, the course seems to have been around some time, so now I’m wondering why it took me so long to find it?!

[Prompted by @betatim, it seems there’s a backstory: the course was on DataCamp, but the course developer, @_inesmontani, got frustrated with that provider and instead “wanted to make a free version of my spaCy course so you don’t have to sign up for their service – and ended up building my own interactive app. Powered by the awesome @mybinderteam & @gatsbyjs” What’s more, “[t]he app and framework are 100% open-source and based on Markdown + custom elements. I built it for my content, but if you want to use it to publish your own DIY online course…” By the by, for a course revision, we’re looking at ways we can take all the course content out of the VLE and deliver it via our Jupyter fronted VM… There are three main reasons for this: 1) students should be allowed to take away a copy of the course materials, not just be given access to them for the duration of the course and a couple of years after; 2) getting errata addressed is a nightmare with the current document workflow — the version controlled, issue tracked, workflow we’re trying to work to improves this; 3) we’re interested in exploring how to present the course material in a more structured, searchable and interactive / interesting way. I really take heart from this spacy course example…]

I’m not sure how the content was created. If there’s a transform from Jupyter notebooks into this course format (perhaps using Jupytext, or a Jupyter Book style production route?), that could be really interesting… (At least, to me…. [REDACTED SNARK].)
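
(For reference, the Jupytext side of such a transform is simple enough — this is just the generic notebook-to-Markdown round trip, with made-up filenames, not the spacy course’s actual tooling:)

import jupytext

nb = jupytext.read("chapter01.ipynb")          # load the notebook
jupytext.write(nb, "chapter01.md", fmt="md")   # write it back out as Markdown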

If you want to try it yourself, Ines has put together this forkable [s]tarter repo for building interactive Python courses.

When it comes to production systems, end user development like this is perhaps part of the problem, though? Production systems folk don’t want end users producing things…?

PS Yes and no to that…paraphrasing something else I saw yesterday, I tend to assume excellence, and tend to only provide negative feedback. A lot of my commentary tends to be more neutral — X does this; I had to do Y then Z to get that to work; etc. As a rule of thumb, I only comment on public activities that I come across and I don’t comment on things that are only discoverable behind authentication.

On the other hand, Tracking Jupyter is a personal experiment into finding a way of providing synoptic feedback about an open system. That that community is open, and that a large number of the activities carried out within it are transparent and discoverable, makes such feedback possible.

Sometimes, my commentary comes with added snark in my personal comms channels (social media, this blog). Which is part of the point. That, and the f****g swearing, are deliberately used to limit the readership, and the willingness of people to link to the content (it’s inappropriate; not properAcademic). And they’re channels where I vent frustration.

I know how to maintain Chinese Walls. Contrary to what folk may think, I don’t blog everything. A lot of stuff that appears in this blog is only here because I can’t find anyone to engage in discussion about it internally, despite trying… And a lot of stuff doesn’t appear. (Not as much as didn’t used to appear, though, back when folk did used to talk to me…)

PPS This sort of personal comment is also, in part, a device to limit linking. Plus the blog is my personal notebook, and as such, is what it is…

;-)

Running RobotLab (Legacy Windows App) Remotely or Locally in a Docker Container Under Wine Via RDP

One of our longer running courses (TM129 — Technologies in practice) distributes a Windows desktop application (RobotLab) developed internally 15+ years ago that implements a simple 2D robot simulator.

For the last few years, we’ve been expected to make software available on a cross-platform basis. For Windows users, I think the application is recompiled every so often to cope with Windows OS upgrades; for Linux it’s distributed using Wine (I think?); and for Macs it’s bundled under PlayOnMac.

A recent issue with the Mac version prompted me to revisit my earlier attempt at producing a DIT4C Inspired RobotLab Container with a simple RDP container that a student could connect to via the Microsoft RDP (remote desktop protocol) client. (One advantage of RDP is that sound sort of works, and the RobotLab activities include one that involves sound…)

Here’s a minimal Dockerfile, derived from danielguerra69/ubuntu-xrdp:

FROM danielguerra/ubuntu-xrdp

#Required to add repo
RUN apt-get update && apt-get install -y software-properties-common

RUN dpkg --add-architecture i386

RUN wget -nc https://dl.winehq.org/wine-builds/winehq.key

RUN apt-key add winehq.key

RUN apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'

RUN apt update && apt-get install -y --install-recommends winehq-stable

COPY Apps/  /opt/

To build the Docker image, I put the original Apps/ folder containing the RobotLab and Neural folders in the same directory as the Dockerfile and then run:

docker build -t myimagetagname  .

The container can then be run from that as:

docker run  --name mycontainername  --shm-size 1g -p MYMAPPEDPORT:3389 -d  myimagetagname

For reference, a version of the container can be found here — ousefulcoursecontainers/tm129rdp — and if you need an RDP client they can be found here. There's a repo here, but there are various experiments scattered across various branches and it's not very well documented / clear what's where and what's working yet and how…

If you have docker locally or remotely, the demo container can be run using:

docker run --name tm129 --hostname tm129demo --shm-size 1g -p 3391:3389 -d ousefulcoursecontainers/tm129rdp

(You should be able to run the container remotely on Digital Ocean. See here for a crib.)

In the RDP client application, create a new connection on port 3391 (or whatever you mapped in the docker run command)  as per:

Login with user: ubuntu

The password seems optional but is also: ubuntu

If you need to sudo using the terminal on the remote desktop, the password is: ubuntu

The RobotLab and Neural apps are in the /opt directory (a more recent build uses /opt/Apps, I think?).

When you first run the applications, wine wants to install several packages (gecko twice (?), mono once). (I made a start on trying to run the associated installers in the Dockerfile, but the approach I’ve been taking so far doesn’t seem to work…)

#Use a base XRDP container
FROM danielguerra/ubuntu-xrdp

#Required to add a repo
RUN apt-get update && apt-get install -y software-properties-common

#Add additional repo required for wine install
RUN dpkg --add-architecture i386
RUN wget -nc https://dl.winehq.org/wine-builds/winehq.key
RUN apt-key add winehq.key
RUN apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
RUN apt update && apt-get install -y --install-recommends winehq-stable

#The first time wine is used it wants to download some bits...
#I've tried to add these via the Dockerfile but can't get it to work (yet?!)
#Not sure these are the correct versions, either?
#RUN wget http://dl.winehq.org/wine/wine-mono/4.8.1/wine-mono-4.8.1.msi
#RUN wget http://dl.winehq.org/wine/wine-gecko/2.47/wine_gecko-2.47-x86.msi
#RUN wget http://dl.winehq.org/wine/wine-gecko/2.47/wine_gecko-2.47-x86_64.msi
#RUN wine msiexec /i wine_gecko-2.47-x86_64.msi
#RUN wine msiexec /i wine_gecko-2.47-x86.msi
#RUN wine msiexec /i wine-mono-4.8.1.msi

#Copy over the Windows applications we want to run under wine
COPY Apps/  /opt

#I can't seem to create a user or copy the files to that user home directory?
#If I do, I just get a black screen when I try to connect using RDP client on a Mac.

Files can be saved in RobotLab but you need to select My Documents within the wine context, the files ending up down a path:

I don’t know if there’s a way of configuring this so we can save files more directly  into the host filesystem rather than down the wine path? Would a symlink work?

If the container is halted and then restarted, any updates made to it will be immortalised in the container. So you don’t need to keep installing wine updates each time you want to use the container.

docker stop tm129
docker start tm129

I tried to make a cleaner build with tm129 user and the apps in a more convenient location, and then used a docker commit to make a new image, but the containers don’t seem to start up correctly from the new image.  I also tried tidying where the Apps folder was copied to, creating a tm129 user etc in the Dockerfile, but RDP didn’t seem to work thereafter (black screen on connect).

As far as to do items go, it would be nice to:

  1. create a tm129 user via the Dockerfile and install the applications to that user’s home directory;
  2. install the additional required wine packages via the Dockerfile;
  3. use a symlinked file path, or something, to make saving and loading files in the windows/wine apps a bit simpler.

If we can’t do the above as part of the build using the Dockerfile, we need to find a way to update a container manually and then export a customised Docker image from it.

Nice to haves would be desktop icons pointing to the course applications. A demo of how to start a container that launches the RobotLab application on start (or an image that launches the remote desktop into one of the applications) could also be handy as a reference.

Motorsport Stats — A Comprehensive Free Source of Motor Racing Sports Results?

In passing, I note the launch of Motorsport Stats, “the sport’s pre-eminent provider of motorsport results, live data and visualised racing analytics for media owners, rights-holders, bookmakers and broadcasters”, apparently… It looks like it’s free, but I’m not sure what the license terms are yet…

A part of Autosport Media, it looks as if it wraps several motorsport results databases, including Forix.

F1 stats looks like a scrape of the FIA timing sheets:

and whilst there are some graphics, I’ve no idea how you find them other than by luck…

The WRC results go back a good few years, but the presentation is limited to stage / overall classifications presented in a not totally useful way, to my mind:

By way of reference, here’s how I’m currently looking at stage results:

and overall classification:

Part of the business model seems to be selling things like widget displays. For example, here’s some marketing blurb from last year for the  Motorsport Stats Formula One widget suite.

They’re also into the production of dataviz products, from social media infographics to video graphics using Vizrt.

Hmmm… thinks… it’s been ages since I did any TV sports graphics round-ups… looks to be at least a couple of years going by these examples: Augmented TV Sports Coverage & Live TV Graphics and Behind the Scenes of Sports Broadcasting – Virtual Sets, Virtual Signage and Virtual Advertising, etc.

I’ve seen a couple of mentions in reports about an API, but haven’t found any docs around that yet. A peek at the browser developer tools suggests JSON API calls, which is handy:

It shouldn’t be too much of a chore (?!) to create a wrapper around the API if it plays nice and uses structured URLs with obvious ways of picking out record IDs / keys out of the JSON to use with the structured URLs:
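
Something like the following is what I have in mind — entirely speculative, with an invented base URL and paths, just to show the shape a wrapper would take if the API really is structured that way:

import requests

BASE = "https://example.com/motorsportstats/api"   # hypothetical base URL

def get_json(path, **params):
    r = requests.get(f"{BASE}/{path}", params=params, timeout=10)
    r.raise_for_status()
    return r.json()

def season_results(series_id, year):
    # hypothetical structured path, built from IDs picked out of earlier responses
    return get_json(f"series/{series_id}/seasons/{year}/results")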

If you’ve spotted docs for the API, found a Python wrapper for it, or even created your own, please let me know:-)

PS I keep meaning to get restarted with my Wrangling F1 Data With R book, or perhaps in a “More Data Wrangling…” version, or perhaps “…With Python”. But there never seems to be enough hours in the day…

Simple Script for Grabbing BBC Programme Info

It’s been a long time since I used to play with BBC data (old examples here) but whilst trying to grab some top level programme names / identifiers / descriptions of BBC programme content that we might be able to make use of in an upcoming course revision, I thought I’d have a quick play to see if any of the JSON feeds were still there.

It seems they are, so I popped together a quick throwaway thing for grabbing some programme info by series and programme ID, with the code available via this gist.
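
The flavour of it, if not the gist itself: the /programmes JSON feeds hang off a programme identifier (PID), and a minimal fetch looks something like this (the key names are from memory, so treat them as illustrative):

import requests

def programme_info(pid):
    url = f"https://www.bbc.co.uk/programmes/{pid}.json"
    r = requests.get(url, timeout=10)
    r.raise_for_status()
    return r.json()["programme"]

# e.g. b006q2x0 is (I think) the PID for Desert Island Discs
info = programme_info("b006q2x0")
print(info["title"], "-", info.get("short_synopsis", ""))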

(Folk don’t necessarily believe that I write code every day. But I do, most days. Things like this… “disposable helper scripts”, developed over a coffee break rather than doing a Killer Sudoku or trying a crossword, treated as a simple throwaway coding puzzle with the off chance that they’re temporarily OUseful.)

PS I think I’ve posted noticings like this before, but it’s always useful to remark on them every so often… Using contextual natural / organic advertising for recruitment…

[Screenshot: BBC – Programmes page]

I wonder what percentage of computing academics are aware that such things go on?!