Not Quite Serverless Reproducible Environments With Digital Ocean

I’ve been on something of a code rollercoaster over the last few days, fumbling with Jupyter server proxy settings in MyBinder, and fighting with OpenRefine, but I think I’ve stumbled on a new pattern that could be quite handy. In fact, I think it’s frickin’ ace, even though it’s just another riff, another iteration, on what we’ve been able to do before (e.g. along the lines of deploy to tutum, Heroku or zeit.now).

(It’s probably nothing that wasn’t out there already, but I have seen the light that issues forth from it.)

One of the great things about MyBinder is that it helps make your work reproducible. There’s a good, practical review of why in this presentation: Reproducible, Reliable, Reusable Analyses with BinderHub on Cloud. One of the presentation claims is that it costs about $5000 a month to run MyBinder, covered at the moment by the project, though following the reference link I couldn’t see that number anywhere. What I did see, though, was something worth bearing in mind: “[MyBinder] users are guaranteed at least 1GB of RAM, with a maximum of 2GB”.

For OpenRefine running in MyBinder alongside other services, that memory cap shows why it may struggle at times…

So, how can we help?

And how can we get around not knowing what other stuff repo2docker, the build agent for MyBinder, might be putting into the server; or not being able to use Docker Compose to link across several services in the Binderised environment; or having to run MyBinder containers in public (although it looks as though Binder auth in general may now be available)?

One way would be for institutions to chip in the readies to help keep the public MyBinder service free. Another way could be a sprint on a federated Binderhub in which institutions could chip in server resource. Another would be for institutions to host their own Binderhub instances, either publicly available or just available to registered users. (In the latter case, it would be good if the institutions also contributed developer effort, code, docs or community development back to the Jupyter project as a whole.)

Alternatively, we can roll our own server. But setting up a Binderhub instance is not necessarily the easiest of tasks (I’ve yet to try it…) and isn’t really the sort of thing your average postgrad or data journalist who wants to run a Binderised environment should be expected to have to do.

So what’s the alternative?

To my mind, Binderhub offers a serverless sort of experience, though that’s not to say no servers are involved. The model is that I can pick a repo, click a button, a server is started, my image built and a container launched, and I get to use the resulting environment. Find repo. Click button. Use environment. The servery stuff in the middle — the provisioning of a physical web server and the building of the environment — that’s nothing I have to worry about. It’s seamless and serverless.

Another thing to note is that MyBinder use cases are temporary / ephemeral. Launch Binderised app. Use it. Kill it.

This stands in contrast to setting services running for extended periods of time, and managing them over that period, which is probably what you’d want to do if you ran your own Binderhub instance. I’m not really interested in that. I want to: launch my environment; use it; kill it. (I keep trying to find a way of explaining this “personal application server” position clearly, but not with much success so far…)

So here’s where I’m at now: a nearly, or not quite, serverless solution; a bring your own server approach, in fact, using Digital Ocean, which is the easiest cloud hosting solution I’ve found for the sorts of things I want to do.

It’s based around the User data text box on the Digital Ocean droplet creation page:

If you pop a shell script in there, it will run the code that appears in that box once the server is started.

But what code?

That’s the pattern I’ve started exploring.

Something like this:

#!/bin/bash

#Optionally, set your own Jupyter notebook login token:
export JUPYTER_TOKEN=myOwnPA5%w0rD

#Optionally, specify the OpenRefine version:
#export REFINEVERSION=2.8

#Clone the gist containing the docker-compose.yml and Dockerfile
GIST=d67e7de29a2d012183681778662ef4b6
git clone https://gist.github.com/$GIST.git
cd $GIST

#Build and launch the containers in the background
docker-compose up -d

which will grab a script (saved as a set of files in a public gist) to download and install an OpenRefine server inside a token-protected Jupyter notebook server. (The OpenRefine server runs via a Jupyter server proxy; see also OpenRefine Running in MyBinder, Several Ways… for various ways of running OpenRefine behind a Jupyter server proxy in MyBinder.)

Or this (original gist):

#!/bin/bash

#Optionally, set your own Jupyter notebook login token:
#export JUPYTER_TOKEN=myOwnPA5%w0rD

#Clone the gist containing the docker-compose.yml and supporting files
GIST=8fa117e34c62b7f80b6c595b8ba4f488

git clone https://gist.github.com/$GIST.git
cd $GIST

#Build and launch the containers in the background
docker-compose up -d

that will download and install a docker-compose linked set of services:

  • a Jupyter notebook server, seeded with a demo notebook and various SQL magics;
  • a Postgres server (empty; I really need to add a fragment showing how to seed it with data, though there’s a rough sketch of one approach just after this list; or you should be able to figure it out from here: Running a PostgreSQL Server in a MyBinder Container);
  • an AgensGraph server; AgensGraph is a graph database built on Postgres. The demo notebook currently uses the first part of the AgensGraph tutorial to show how to load data into it.
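As far as seeding the Postgres database goes, something along the following lines should work from the droplet command line once the containers are up (a sketch only: the compose service name and the default postgres user are assumptions rather than the actual settings used in the gist):

#Run psql inside the (assumed) postgres service defined in docker-compose.yml
docker-compose exec postgres psql -U postgres -c "CREATE TABLE quickdemo (id INT, label TEXT);"
docker-compose exec postgres psql -U postgres -c "INSERT INTO quickdemo VALUES (1, 'example row');"
docker-compose exec postgres psql -U postgres -c "SELECT * FROM quickdemo;"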

(The latter example includes a zip file that you can’t upload via the Gist web interface; so here’s a recipe for adding binary files (zip files, image files) to a Github gist.)

So what do you need to do to get the above environments up and running?

  • go to Digital Ocean (https://www.digitalocean.com/) (this link will get you $100 credit if you need to create an account);
  • create a droplet;
  • select ‘one-click’ type, and the docker flavour;
  • select a server size (get the cheap 3GB server and the demos will be fine);
  • select a region (or don’t); I generally go for London cos I figure it’s locallest;
  • check the User data check box and paste in one of the above recipes (make sure the script starts with the hashbang line, #!/bin/bash, with no blank lines above it);
  • optionally name the droplet (for your convenience and lack of admin panel eyesoreness);
  • click create;
  • copy the IP address of the server that’s created;
  • after 3 or 4 minutes (it may take some time to download the app containers into the server), paste the IP address into a browser location bar;
  • when presented with the Jupyter notebook login page, enter the default token (letmein; or the token you added in the User data script, if you did), or use it to set a different password at the bottom of the login page;
  • use your apps…
  • destroy the droplet (so you don’t continue to pay for it).

If that sounds too hard / too many steps, there are some pictures to show you what to do in the Creating a Digital Ocean Docker Droplet section of this post.

It’s really not that hard…

Though it could be easier. For example, imagine a “deploy to Digital Ocean” button that took something like the form http://deploygist.digitalocean.com/GIST and that looked for user_data and maybe other metadata files (region, server size, etc.) to set a server running on your account and then redirect you to the appropriate webpage.
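In the meantime, much the same effect can be scripted from the command line using Digital Ocean’s doctl tool, something along these lines (a sketch, assuming doctl is installed and authenticated against your account; the droplet name, image slug and size are illustrative rather than recommendations):

#Create a docker droplet, passing in one of the above recipes as the user data script
doctl compute droplet create mydemobox \
  --region lon1 \
  --size s-2vcpu-4gb \
  --image docker-18-04 \
  --user-data-file ./userdata.sh

#Look up the droplet's IP address once it's up and running
doctl compute droplet list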

We don’t need to rely on just web clients either. For example, here’s a recipe for Connecting to a Remote Jupyter Notebook Server Running on Digital Ocean from Microsoft VS Code.

The next thing I need to think about is preserving state. This looks like it may be promising in that regard? Filestash [docs]. This might also be worth looking at: Pydio. (Or this: Graphite h/t @simonperry.)

For anyone still reading and still interested, here are some more reasons why I think this post is a useful one…

The linked gists are both based around Docker deployments (it makes sense to use Docker, I think, because a lot of hard-to-install software is already packaged in Docker containers), although they demonstrate different techniques:

  • the first (OpenRefine) demo extends a Jupyter container so that it includes the OpenRefine server; OpenRefine then hides behind the Jupyter notebook auth and is proxied using Jupyter server proxy;
  • the second (AgensGraph) demo uses Docker compose. The database services are headless and not exposed.

What I have tried to do in the docker-compose.yml and Dockerfile files is show a variety of techniques for getting stuff done. I’ll comment them more liberally, or write a post about them, when I get a chance. One thing I still need to do is a demo using nginx as a reverse proxy, with and without simple http auth. One thing I’m not sure how to do, if indeed it’s doable, is proxy services from a separate container using Jupyter server proxy; nginx would provide a way around that.
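For what it’s worth, the nginx route would probably look something like the following in docker-compose terms, with an nginx service published on port 80 proxying through to an otherwise headless service on the internal Docker network (a rough sketch with placeholder names, not taken from either gist):

#docker-compose.yml fragment (sketch)
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      #A local config file containing something like: proxy_pass http://someapp:8888;
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  someapp:
    image: someorg/someapp
    #No ports mapping, so the service is only reachable over the internal Docker network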

Adding Zip Files to Github Gists

Over the years, I’ve regularly used Github gists as a convenient place to post code fragments. Using the web UI, it’s easy enough to create new text files. But how do you add images, or zip files to a gist..?

…because there’s no way I can see of doing it using the web UI?

But a gist is just a git repo, so we should be able to commit binary files to it.

Via this gist — How to add an image to a gist — here’s a simple recipe. It requires that you have git installed on your computer…

#GISTID is the ID in the gist URL: https://gist.github.com/USERNAME/GISTID
GIST=GISTID

git clone https://gist.github.com/$GIST.git

#MYBINARYFILENAME is something like mydata.zip or myimage.png
cp MYBINARYPATH/MYBINARYFILENAME $GIST/

cd $GIST
git add MYBINARYFILENAME

git commit -m "MY COMMIT MESSAGE"

git push origin master
#If prompted, provide your Github credentials associated with the gist

Handy…

Running OpenRefine in the Clear on Digital Ocean

In a couple of earlier posts, I’ve described how to get OpenRefine up and running remotely over the web by installing the OpenRefine server onto a Digital Ocean Linux server and running it there behind a simple authenticating proxy (Running OpenRefine On Digital Ocean Using Simple Auth and the more automated Authenticated OpenRefine Server on Digital Ocean, Redux).

In this post I’ll show how to set up a simple OpenRefine server, without authentication, using Docker (I’ll show how to add in the authenticating nginx proxy in a follow-on post).

Docker is a virtualisation technology built around the idea of “containers”: isolated computational environments that provide just enough operating system to run a particular application within them.

As well as hosting raw Linux servers, Digital Ocean also provides Linux-servers-with-docker as a one-click application.

Here’s how to start a docker machine on Digital Ocean.

Creating a Digital Ocean Docker Droplet

First up, create a new droplet as a one-click app, selecting docker as the one-click application type:

To give ourselves some space to work with, I’m going to choose the 3GB server (it may work with default settings in a 2GB server, or it may ruin your day…). It’s metered by the hour, but it’ll still only cost a few pennies for a quick demo. (You can also get $100 free credit as a new user if you sign up here.)


Select a data center region (I typically go for a local one):

If you want to, add your SSH key (recipe here, but it’s not really necessary: the ssh key just makes it easier for you to login to the server from your own computer if you need to. If you haven’t heard of ssh keys before, ignore this step!)

Hit the big green button to create your droplet (if you want to, give the server a nicer hostname first…).

Accessing the Digital Ocean Droplet Server Terminal

Your one-click docker server will now start up. Once it’s there (it should take less than a minute) click through to its admin page. Assuming you haven’t added ssh keys, you’ll need to log in through the console. The login details for your server should have been emailed to the email address associated with your Digital Ocean account. Use them to login.

On first login, you’ll be prompted to change the password (it was emailed to you in plain text after all!)

If you choose a really simple replacement password, you may need to choose another one. Also note that the (current) UNIX password was the one you were emailed, so you’ll essentially be providing this password twice in quick succession (once for the first login, then again to authorise the enforced password change). Copying and pasting the password into the console from your email should work…

Once you’ve changed your password, you’ll be logged out and you’ll have to log back in again with your new password. (Isn’t security a faff?! That’s why ssh keys…!;-)

Now you get to install and launch OpenRefine. I’ve got an example image here, and the recipe for creating it here, but you don’t really need to look at that if you trust me…

What you do need to do is run:

docker run -d -p 3333:3333 --name openrefine psychemedia/openrefinedemo

What this command does is download and run the container psychemedia/openrefinedemo, naming it (purely for our convenience) as openrefine.

You can learn how to create an OpenRefine docker image here: How to Create a Simple Dockerfile for Building an OpenRefine Docker Image.
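For reference, the general shape of such a Dockerfile is something like the following (a sketch of the idea rather than the actual psychemedia/openrefinedemo recipe; the base image and OpenRefine version are assumptions):

#Start from a Java runtime base image
FROM openjdk:8-jre

#Download and unpack an OpenRefine release, and create a directory for project files
RUN apt-get update && apt-get install -y wget && \
    wget -q -O /tmp/openrefine.tar.gz https://github.com/OpenRefine/OpenRefine/releases/download/2.8/openrefine-linux-2.8.tar.gz && \
    tar xzf /tmp/openrefine.tar.gz -C /opt && rm /tmp/openrefine.tar.gz && \
    mkdir /mnt/refine

EXPOSE 3333

#Listen on all interfaces so the container port can be mapped to a host port
CMD ["/opt/openrefine-2.8/refine", "-i", "0.0.0.0", "-p", "3333", "-d", "/mnt/refine"]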

The -d flag runs the container in “detached”, standalone mode (in the background, essentially). The -p 3333:3333 mapping is read as -p PUBLICPORT:INTERNALPORT. The OpenRefine server is started on INTERNALPORT=3333 and we’re also exposing it publicly on port 3333, which is the port we’ll use in the URL.

The container will take a few seconds to download if this is the first time you’ve called for it:

and then it’ll print out a long container ID once it’s launched and running in the background.

(You can check it’s running by running the command docker ps.)

In both the terminal and the droplet admin pages, as well as the droplet status line in the current droplet listing pages, you should see the public IP address associated with the droplet. Copy that address into your browser and add the port mapping (:3333). You should now be able to see a running version of OpenRefine. (And so should anyone else who wanders by that URL:PORT combination…)

Let’s now move the application to another port. We could do this by launching another container, with a new unique name (container names, when we assign them, need to be unique) and assigned to another port. (The OpenRefine internal service port will remain the same). For example:

docker run -d -p 3334:3333 --name openrefine2 psychemedia/openrefinedemo

This creates a new container running a fresh instance of the OpenRefine server. You should see it on IPADDRESS:3334.

(Alternatively we can omit the name and a random one will be assigned, for example, docker run -d -p 3335:3333 psychemedia/openrefinedemo)

Note that the docker image does not need to be downloaded again. We simply reuse the one we downloaded previously, and spawn a new instance of it as a new container.

Each container does take up memory though, so kill the original container:

docker kill openrefine

and remove it:

docker rm openrefine

For a last quick demo, let’s create a new instance of the container, once again called openrefine (assuming we’ve removed the one previously called that) and run it on port 80, the default http port, which means we should be able to see it directly by going to just the IPADDRESS (with no port specified) in our browser:

docker run -d -p 80:3333 --name openrefine psychemedia/openrefinedemo

When you’re done, you can halt the droplet (in which case, you’ll keep on paying rent for it) or destroy it (which means you won’t be billed for any additional hours, or parts thereof, on top of the time you’ve already been running the droplet):

You don’t need to tidy up around the docker containers, they’ll die with the droplet.

So, not all that hard, is it? Probably a darn sight easier than trying to get anything out of your local IT unit?!

In the next post, I’ll show how to combine the container with another one containing nginx to provide some simple authentication. (There are lots of prebuilt containers out there we can just take “off-the-shelf”, and nginx is one of them.) I’ll maybe also have a look at how you might persist projects in hibernating container / droplet, perhaps look at how we might be able to upload files that OpenRefine can work on, and maybe even try to figure out a way to simply synch your project files from Digital Ocean to your own file storage location somewhere. Maybe…

PS third party nginx proxy example: https://github.com/beevelop/docker-nginx-basic-auth

Converting Pandas Generated HTML Data Tables to PNG Images

Over the weekend, I noticed the Dakar 2019 rally was on, which resulted in my spending Sunday evening putting a scraper together to grab timing data down from the official website (notebook code here).

The data on its own is all a bit “so what?”; it only comes alive when you start playing with it. One of the displays I was still tinkering with at the end of last year’s WRC season was a tabular stage report that tries to capture a chunk of information from stage timing, such as split times, so it made sense to start riffing on that.

The Rally Dakar timing screen presents split times like this:

You can get a view showing either the running time in stage at each split / waypoint, or the gap, which is to say, the time difference to the person who was fastest at each split. (I think sometimes Gap times may report the time difference to the person who ranked first on the stage overall, rather than the gap to the person ranked first at a particular split.)

One of the things that interests me (from a data storytelling point of view) is how much time a driver gains, or loses, within a split compared to other drivers. We can use this to spot parts of the stage where a driver has hit a problem, or pushed hard.

The sort of display I’ve been working up looks, at least with the Dakar data, like this so far (there are a few columns missing compared to my WRC tables, but there’s also an extra one: the in-line bimodal sparkline chart).

This particular view displays split times rebased relative to Peterhansel (it’s easy enough to generate views rebased relative to any other specified driver). That is, the table shows how much time Peterhansel gained/lost relative to each other driver at each waypoint. The table is ordered by stage rank. The columns on the left show how much time was gained/lost going from one waypoint to the next. The columns on the right show how the gap relative to each driver evolved over the stage. The inline chart tracks the gap evolution.

The table is a styled pandas table, rendered as HTML. After applying styling, you can get a preview in a notebook using something of the form:

from IPython.display import display, HTML
display( HTML( df.style.render() ) )

I’ve previously posted a recipe for Grabbing Screenshots of folium Produced Choropleth Leaflet Maps from Python Code Using Selenium so here’s the latest iteration of my code fragment (which built on the previous example) for taking a chunk of HTML and using selenium to open it in a browser and grab a screenshot of it.

The code is h/t to several Stack Overflow posts.

import os
import time
from selenium import webdriver

#Via https://stackoverflow.com/a/52572919/454773
def setup_screenshot(driver,path):
    ''' Grab screenshot of browser rendered HTML.
        Ensure the browser is sized to display all the HTML content. '''
    # Ref: https://stackoverflow.com/a/52572919/
    original_size = driver.get_window_size()
    required_width = driver.execute_script('return document.body.parentNode.scrollWidth')
    required_height = driver.execute_script('return document.body.parentNode.scrollHeight')
    driver.set_window_size(required_width, required_height)
    # driver.save_screenshot(path)  # has scrollbar
    driver.find_element_by_tag_name('body').screenshot(path)  # avoids scrollbar
    driver.set_window_size(original_size['width'], original_size['height'])

def getTableImage(url, fn='dummy_table', basepath='.', path='.', delay=5, height=420, width=800):
    ''' Render HTML file in browser and grab a screenshot. '''
    browser = webdriver.Chrome()

    browser.get(url)
    #Give the html some time to load
    time.sleep(delay)
    imgpath='{}/{}.png'.format(path,fn)
    imgfn = '{}/{}'.format(basepath, imgpath)
    imgfile = '{}/{}'.format(os.getcwd(),imgfn)

    setup_screenshot(browser,imgfile)
    browser.quit()
    os.remove(imgfile.replace('.png','.html'))
    #print(imgfn)
    return imgpath

def getTablePNG(tablehtml, basepath='.', path='testpng', fnstub='testhtml'):
    ''' Save HTML table as: {basepath}/{path}/{fnstub}.png '''
    # Ensure the output directory exists under basepath
    outdir = '{}/{}'.format(basepath, path)
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    fn='{cwd}/{basepath}/{path}/{fn}.html'.format(cwd=os.getcwd(), basepath=basepath, path=path,fn=fnstub)
    tmpurl='file://{fn}'.format(fn=fn)
    with open(fn, 'w') as out:
        out.write(tablehtml)
    return getTableImage(tmpurl, fnstub, basepath, path)

#call as: getTablePNG(s)
#where s is a string containing html, eg s = df.style.render()

The png image is saved as an image file that can be embedded in other HTML pages, shared via soshul meeja, etc…

Authenticated OpenRefine Server on Digital Ocean, Redux

Following on from yesterday’s recipe, Running OpenRefine On Digital Ocean Using Simple Auth, here’s an even easier way that doesn’t require ssh and doesn’t require console access: just copy and paste some set-up info into a form on the Digital Ocean droplet creation page.

Every little everything helps… this link will set new users up with $100 of free credit on Digital Ocean; somewhere down the line I may get a small amount of affiliate link powered Digital Ocean credit to help keep my own server costs covered…

The Digital Ocean droplet creator has an option to add a User data start-up script when the droplet is created (docs).

This means we can provide a script that will download and install everything we need, including a file to define a service that will run OpenRefine.
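To give a flavour of the service definition part, the general idea is something along these lines (a sketch only, not the actual contents of the gist; the paths, port and unit name are assumptions):

#Write a systemd unit file that keeps the OpenRefine server running...
cat << EOF > /etc/systemd/system/openrefine.service
[Unit]
Description=OpenRefine server

[Service]
ExecStart=/root/openrefine-2.8/refine -i 127.0.0.1 -p 3333 -d /root/openrefine
Restart=always

[Install]
WantedBy=multi-user.target
EOF

#...then enable and start it
systemctl enable openrefine.service
systemctl start openrefine.service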

Copy and paste the script in the gist that can be found here into the user data area before you create the droplet. The code will be executed once the droplet is up and running and should install the nginx proxy and the OpenRefine application. A default user (test) and password (letmein) are defined in the script.

If you are trusting of the gist, you can install it more succinctly from the gist repository. You can also define your own user and password. To take this installation route, use the code that follows below in the userdata area:

Here’s the code…:

#!/bin/bash

#We can over-ride the default credentials
USER_NAME=testuser
USER_PWD=testpwd

#Get the latest version of installer script and run it

source <( curl -s https://gist.githubusercontent.com/psychemedia/f256960c112347dd410c2beec8ce05e3/raw/ )

(There should be no spaces between the less than and open bracket characters in the source command.)

For an explicit link to a particular version of the script, go to the gist, and copy the link for the raw version of the latest file or the version you want.

Note that this route requires you to place considerable trust in the publisher of the remote installation script. Do you really trust a file with such a dodgy filename?

It will probably take two or three minutes to download and install everything, so don’t be too worried if you don’t see anything at the provided IP address immediately. Just keep refreshing the page…

The first sign of life you should see is the nginx default page…

Wait a few moments and reload the page… Next up is a holding page as the OpenRefine application is installed and the service started…

This page should automatically refresh itself every 5 seconds or so. When the service is up and running, you should get a page like this:

Click the link and you should be prompted with the user/password authenticator challenge. Provide the username/password and you should be taken to your OpenRefine server.

Running OpenRefine On Digital Ocean Using Simple Auth

As a cloud host, Digital Ocean provides a really easy way in to getting services up and running on the web.

Here’s a quick recipe for getting OpenRefine up and running behind a simple authentication scheme.

Creating a Server With Digital Ocean

First up, create a Digital Ocean account if you don’t already have one (this link will get you started with $100 free credit).

Creating and launching a server is easy… Select Create then Droplet, choose the server type you want — let’s use a simple Ubuntu box — and choose the server size. For lots of quick tests, I use the smallest box, but from experience OpenRefine really wants more than 2GB of RAM.

Really… If you pick a 2GB server, you may find that OpenRefine hangs on start and ruins your day when you end up trying to debug what you think are other problems… Be warned, kids… stay unfrustrated out there…

The servers are charged at a metered rate, and you can stop them any time, so for a quick test, it’ll cost you pennies… (A $100 credit can go a long way…!)

Next, choose a data center region; I generally pick a local one…

You also have the option of adding an ssh key. This makes life much easier when trying to log in to the server from your own machine using ssh (you can just run ssh root@IP_ADDRESS and it’ll log you straight in; there’s a recipe for setting up an SSH key here).
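(If you haven’t got a key yet, generating one on your own machine and grabbing the public part to paste into the Digital Ocean SSH key form only takes a couple of commands:)

#Generate a new SSH key pair (accept the default file location when prompted)
ssh-keygen -t rsa -b 4096

#Print the public key so you can copy it into the Digital Ocean SSH key form
cat ~/.ssh/id_rsa.pub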

If you don’t want to set up an SSH key, a root password will be emailed to you. You can use this password to log in to your server via a web terminal, which means you can do everything via a web UI if you need to…

Create your server by clicking the big green button…

It should only take a few seconds to start up… And when it has, you’ll be presented with its public IP address.

If you need a web terminal, click through on the server name, and you should see a link to launch a web console.

Installing OpenRefine

From the console, we can install all we need to run OpenRefine. This is a minimum viable example — we should probably find a better place to install OpenRefine, and may want to run it as a particular user with limited permissions. Working as root with everything wide open makes life easier, though not necessarily safer…!

OpenRefine requires a Java environment, so we need to install that:

apt-get update && apt-get install -y openjdk-8-jre

We can download the OpenRefine application via the command-line using a command of the form wget -q -O DOWNLOADED_FILE_NAME URL; the download link for each release can be found on the OpenRefine releases page:

wget -q -O openrefine-2.8.tar.gz https://github.com/OpenRefine/OpenRefine/releases/download/2.8/openrefine-linux-2.8.tar.gz

The downloaded file is provided as a tar archive file, which we need to unpack:

tar xzf openrefine-2.8.tar.gz

This will unbundle the files into the directory ./openrefine-2.8.

Let’s define an environment variable pointing to a working directory in which to place the OpenRefine project files:

OPENREFINE_DIR="$HOME/openrefine"

And create that directory:

mkdir -p $OPENREFINE_DIR

You should now be able to run OpenRefine in the background using the command:

nohup openrefine-2.8/refine -p 3333 -i 0.0.0.0 -d $OPENREFINE_DIR > /dev/null 2>&1 &

This will run OpenRefine on port 3333, listening on all network interfaces (the -i 0.0.0.0 setting) so that it’s reachable from outside the server. If you copy the IP address of your Digital Ocean server and go to http://IP.ADDRESS:3333, you should see OpenRefine running there. (Note that if you’re in Chrome, Google may well tell you that the address is dangerous…)

Adding Simple Authentication

The OpenRefine server is running as a public service on the public web. If you want to add a simple layer of authentication, you can add a web server proxy to the server that will prompt for a password when a new visit is made to the server.

One of the easiest proxies to get up and running is nginx. Let’s install it, along with a simple Apache toolkit that will help us create a simple password:

apt-get install -y nginx apache2-utils

Now create a simple user/password combination. Ever secure, I’ll go with user test and password letmein:

htpasswd -b -c /etc/nginx/.htpasswd test letmein
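(The -c flag creates the password file; if you want to add further users later, drop it so the existing file isn’t overwritten:)

htpasswd -b /etc/nginx/.htpasswd anotheruser anotherpassword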

Now we need to define the proxy. A Digital Ocean tutorial (How To Install Nginx on Ubuntu 18.04) describes how to set up a firewall – I’m selecting the 'Nginx Full' option because I’m working via SSH, but if you’re working in the web terminal, the more restrictive 'Nginx HTTP' may be more appropriate:

sudo ufw allow 'Nginx Full'

If you try loading the OpenRefine server on port 3333, you should now find that it’s blocked: the firewall is only letting web traffic through on port 80.
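You can check exactly what the firewall is letting through with:

sudo ufw status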

We now need to open access back up to the OpenRefine server, albeit via a password challenge. The following will create a default nginx configuration file that will expose the OpenRefine service running on port 3333 via default http port 80, mediated by a simple authorisation challenge:

config='''
server {
  listen 80;
  auth_basic "Protected...";
  auth_basic_user_file /etc/nginx/.htpasswd;
  location / {
    proxy_pass http://127.0.0.1:3333;
  }
}
'''

echo "$config" > /etc/nginx/sites-available/default

Restart the nginx proxy to put the new settings into effect:

nginx -s reload

If you now go to http://IPADDRESS you should be presented with a challenge. Enter the credentials you defined, and you should see your OpenRefine server:

Finishing Up

When you’ve finished your session, you can destroy the droplet. This will tear the server down and you won’t be billed for it anymore.

Alternatively, you can switch the droplet off, but keep it in a shutdown state that you can restart in the future.

However, as the above prompt suggests, you will continue to be billed, even if the service is not running, because it is still consuming Digital Ocean resources.

IPython Magic for Controlling the V-REP Robot Simulator from Jupyter notebooks

Whilst exploring how we might be able to use Jupyter notebooks hooked up to the Coppelia Robotics V-REP robot simulator, it struck me that we needed a fair amount of boilerplate stuff to get the simulator loaded with an appropriate scene file and a connection made to the simulator from the notebook so that we could script the robot actions from the notebooks.

My first approach to trying to simplify presentation was to create some “self-documenting” notebooks that could be used to set-up necessary environmental variables and import default classes and functions:

The %run cell magic loads and runs the referenced notebooks, which can also be inspected (and modified) by students.

(To try to minimise the risk of students introducing breaking changes into the imported notebooks, we could also lock the cells as read-only in the notebooks. Whilst this requires an extension to be installed to implement the read-only behaviour, the intention is that we distribute a customised Jupyter notebook environment to students.)

The loadSceneRelativeToClient() function loads the specified scene into the simulator. Note that this scene should contain a robot model. Once the connection to the simulator is made, a robot object can be instantiated using the connection details. The robot class should contain the definitions required to control the robot model in the loaded scene.

Setting up the connection to the simulator is a bit of a faff, and when code cell execution is stopped we can get an annoying KeyboardInterrupt report:

We can defend against the KeyboardInterrupt by wrapping the code execution in a try/except block:

try:
    with VRep.connect("127.0.0.1", 19997) as api:
        robot = PioneerP3DXL(api)
        while True:
            #do stuff
            pass
except KeyboardInterrupt:
    pass

But it struck me that it would be much nicer to be able to use some magic along the lines of the following, in which we set up the simulator with a scene, identify the robot we want to control, automatically connect to the simulator and then just run the robot control program:

So here’s a first attempt at some IPython cell magic to do that:

from __future__ import print_function

from pyrep import VRep
from IPython.core.magic import (Magics, magics_class, line_magic,
                                cell_magic, line_cell_magic)
import shlex

# The class MUST call this class decorator at creation time
@magics_class
class Vrep_Sim(Magics):

    @cell_magic
    def vrepsim(self, line, cell):
        "V-REP magic"

        #Use shlex.split to handle quoted strings containing a space character
        loadSceneRelativeToClient(shlex.split(line)[0])

        #Get the robot class from the string
        robotclass=eval(shlex.split(line)[1])

        #Handle default IP address and port settings; grab from globals if set
        ip = self.shell.user_ns['vrep_ip'] if 'vrep_ip' in self.shell.user_ns else '127.0.0.1'
        port = self.shell.user_ns['vrep_port'] if 'vrep_port' in self.shell.user_ns else 19997

        #The try/except block exits from a keyboard interrupt cleanly
        try:
            #Create a connection to the simulator
            with VRep.connect(ip, port) as api:
                #Set the robot variable to an instance of the desired robot class
                robot = robotclass(api)
                #Execute the cell code - define robot commands as calls on: robot
                exec(cell)
        except KeyboardInterrupt:
            pass

    #@line_cell_magic
    @line_magic
    def vrep_robot_methods(self, line):
        "Show methods"
        robotclass = eval(line)
        methods = [method for method in dir(robotclass) if not method.startswith('_')]
        print('Methods available in {}:\n\t{}'.format(robotclass.__name__ , '\n\t'.join(methods)))

#Could install as magic separately
ip = get_ipython()
ip.register_magics(Vrep_Sim)

I’ve also added a bit of line magic to display the methods defined on a robot model class:

The tension now is a pedagogical one: for example, should I be providing students with the robot model, or should they be building up the various control functions (.move_forwards(), .turn_left(), etc.) themselves?

I’m also wondering whether I should push the while True: component into the magic? On balance, I think students need to see it in their code block because getting them to think about control loops rather than one shot execution of command statements is something they often don’t get the first, or even second, time round. But for reducing clutter, it’d make for far cleaner cell block code.