Simple 2D ev3devsim Javascript Simulator Running as an ipywidget in Jupyter Notebooks

So…

…for an upcoming course revision, I’ve been tweaking a thing.

The thing is ev3devsim [repo], a Javascript powered 2D robot simulator that allows you to execute Python code, via Skulpt, in the browser to control a simple simulated robot.

The Python package used to control the robot is a Skulpt port of ev3dev-lang-python, a Python wrapper for the ev3dev Linux distribution for Lego EV3 robots. (Long time readers may recall I explored ev3dev for use in an OU residential school way back when and posted a few related notebooks.)

Anyway… we want to use Python in the module revision; the legacy activities we want to update look similar, ish, sort of, almost, so we may be able to reuse some of them; and I want to deliver the activities via Jupyter notebooks.

So I’ve had a poke around and think I’ve managed to make the fumblings of a start on an ipywidget wrapper for the simulator that will allow us to embed it in a notebook.

Because I don’t understand ipywidgets at all, I’m using the jp_proxy_widget, which I first played with in the context of wrapping wavesurfer.js (Rapid ipywidgets Prototyping Using Third Party Javascript Packages in Jupyter Notebooks With jp_proxy_widget).

Here’s where I’m at [nbev3devsim; Binder demo available, if the code I checked in works!]:

The first thing to notice is that the terminal has gone. The idea is that you write the code in a code cell and inject it into the simulator. My model for doing this is via cell block magic or by passing code in a variable into the simulator (for generality, I should probably also allow a link to a .py file).
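For example, the cell magic route might look something like the following. This is just a sketch: the magic name is illustrative, and the robot code assumes the ev3dev-lang-python style API the Skulpt port provides.

%%sim_magic
# Code in this cell is posted into the simulator and run there via Skulpt
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C

# Drive the simulated robot forwards for a few wheel rotations
tank_drive = MoveTank(OUTPUT_B, OUTPUT_C)
tank_drive.on_for_rotations(50, 50, 4)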

The cell block magic still needs some work, I think; eg a temporary alert with a coloured background to say “code posted to simulator” that disappears on its own after a couple of seconds. I probably also need an easy way to preview the code currently assigned to the simulated robot.

You might also notice a chart display. This is actually a Plotly streaming line chart that updates with sensor values (at the moment, just the ultrasound sensor; other sensors have different ranges, so do I scale those, or perhaps use different charts?)
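(If I do go down the scaling route, simple min-max normalisation would let sensors with different ranges share the same 0..1 axis; a quick sketch:)

def scale(value, min_val, max_val):
    # Map a raw sensor reading onto a common 0..1 range
    return (value - min_val) / (max_val - min_val)

# eg an ultrasound reading out of 255 and a light sensor percentage
scale(128, 0, 255), scale(40, 0, 100)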

There is also an output window your code can print messages to, as the following hello-world magic shows:
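Something along these lines (the magic name is again illustrative):

%%sim_magic
# print() output is routed to the simulator's output window
print("Hello world from the simulated robot")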

We can read state out of the simulator, though the way the callbacks work, this seems to require running code across two cells to get the result into the Python environment.
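The two-cell pattern looks something like this (a sketch assuming jp_proxy_widget’s get_value_async() callback mechanism; the JavaScript expression naming the simulator state is hypothetical):

# Cell 1: request a value from the simulator; the callback fires asynchronously
results = {}

def capture(value):
    results['state'] = value

# "sim.robotState" is a hypothetical path into the simulator's state
sim_widget.get_value_async(capture, "sim.robotState")

# Cell 2: by the time this runs, the callback should have fired
results.get('state')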

I’ve also experimented with another approach where the widget’s parent object grabs (or could be regularly updated to mirror, maybe?) logged sensor readings from inside the simulator, which means I can interrogate that object, even as the simulator runs. (I also started to explore using a streaming dataframe for this data, but I’m not convinced that’s the best approach; certainly trying to stream logged data from the simulator into a streaming chart in the notebook context is laggy compared to the chart embedded in the simulator context.)

With the data in the Python context, we can do what we like with it, like charting it.
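For example, assuming the grabbed readings come back as a list of (time, reading) pairs (a hypothetical structure), we can pull them into a pandas dataframe and chart them:

import pandas as pd

# robot_data: assumed to be a list of (time, ultrasound_reading) tuples
df = pd.DataFrame(robot_data, columns=['time', 'ultrasound'])
df.plot(x='time', y='ultrasound');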

There’s a lot of tweaks that need to be made and things to be added to run the full complement of activities we ran in the original presentation of the course.

I’d already started to explore what’s required to add Python functions to Skulpt (eg Simple Text to Speech With Skulpt), although I’m not sure if that’s blocking (could it be handled asynchronously if so?). Today I managed to learn enough from this SO answer on making objects draggable to make the robot draggable on the canvas (I think; a PR is here, but it’s not tested in a fresh/isolated environment yet, and I only made a PR to give me s/thing to link to here!). The biggest issue I had was converting mouse co-ordinates to robot world canvas co-ordinates. There are still issues there, eg in getting the size of the robot correct, but the co-ordinate management in the simulator looks a bit involved to me and I want to get my head round it enough that if I do start trying to simplify things, I don’t break other things!

Other things that really need adding:

  • ability to reset canvas in one go;
  • ability to rotate robot using mouse;
  • ability to add noise to motors and sensors;
  • configure the robot from a notebook code cell rather than the simulator UI? (This could also be seen as an issue about whether to strip as much out of the widget as possible.)
  • predefine sensible robot configurations (can we also have a single, centreline front mounted light sensor?);
  • add pen-up / pen-down support (perhaps have a drawing layer in the simulator for this?);
  • explore supporting multiple simulators embedded in one notebook (currently it’s at most one, I suspect in large part because of specific id values assigned to DOM elements?).

The layout is also really clunky, the main issue being how to see the code against the simulator (if you need to). Two columns might be better — notebook text and code cells in one, the simulator windows stacked in the other? — but then again, a wide simulator window is really useful. A floating / draggable simulator window might be another option? I did think the simulator window might be tear-offable in JupyterLab, but I have never managed to successfully tear off any jp_proxy_widget in JupyterLab (my experiences using JupyterLab for anything are generally really miserable ones).

The original module simulator allowed you to step through the code, but: a) I don’t know if that would be possible; b) I suspect my coding knowledge / skills aren’t up to it; c) I really should be trying to write the activities, not sinking yet more time into the simulator. (One thing I do need to do is see if any of the code I wrote years ago when scoping things for the residential school is reusable, which could save some time…)

I also need to see if the simulator is actually any good for the practical activities we used in the original version of the course, or whether I need to write a whole new set of activities that do work in this simulator… Erm…

First Attempt At Using IPywidgets in Jupyter Notebooks to Display V-REP Robot Simulator Telemetry

Having got a thing together that lets me use some magic to load a V-REP robot simulator scene, connect to it and control a robot contained inside it, I also started to wonder about how we could build instrumentation on the Jupyter notebook client side.

The V-REP simulator itself has graph objects that can record and display logged data within the simulator:

But we can also capture data from the simulator as part of the Python control loop running via a notebook.

(I’m not sure if streaming data from the simulator is possible, or how to go about either setting that up in the simulator connection or rendering it in the notebook?)

So here’s my quick starter for ten: getting a simple data display running in a notebook using IPython widgets.

To start with, here’s a simple text display to give a real time (ish) view of a couple of sensor values:
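Something like the following (the sensor access method on the robot object is illustrative):

import ipywidgets as widgets
from IPython.display import display

# Text widgets to mirror a couple of sensor values
left_sonar = widgets.Text(description='Left:')
right_sonar = widgets.Text(description='Right:')
display(left_sonar, right_sonar)

# ...then, inside the robot control loop, update them with something like:
# left_sonar.value = str(robot.left_ultrasound())   # hypothetical robot API call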

As the robot runs, the widget values update in real-time-ish.

I couldn’t figure out offhand how to generate a live-updating chart, and couldn’t quickly see how to return data from inside the magic cell as part of the magic function. (In fact, I’m not convinced I understand the scoping in there at all!)

But it seems that if we set a global variable inside the magic cell, we can get data out and plot it when the simulation is stopped:
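The pattern is something like this (the sensor call is again illustrative):

# In an earlier cell, initialise a global log: sensor_log = []
# Inside the magic cell, append readings to it:
#   global sensor_log
#   sensor_log.append(robot.left_ultrasound())   # hypothetical sensor method

# Then, once the simulation has stopped, plot the logged values in a new cell:
import matplotlib.pyplot as plt

plt.plot(sensor_log)
plt.xlabel('sample')
plt.ylabel('sensor value')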

Example notebook here.

If anyone can show me how to create and update a live chart, that would be fantastic :-)

IPython Magic for Controlling the V-REP Robot Simulator from Jupyter notebooks

Whilst exploring how we might be able to use Jupyter notebooks hooked up to the Coppelia Robotics V-REP robot simulator, it struck me that we needed a fair amount of boilerplate stuff to get the simulator loaded with an appropriate scene file and a connection made to the simulator from the notebook so we could script the robot actions from the notebooks.

My first approach to trying to simplify presentation was to create some “self-documenting” notebooks that could be used to set-up necessary environmental variables and import default classes and functions:

The %run line magic loads and runs the referenced notebooks, which can also be inspected (and modified) by students.
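For example (the notebook name here is illustrative):

# Load boilerplate definitions from a shared set-up notebook
%run 'vrep_setup.ipynb'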

(To try to minimise the risk of students introducing breaking changes into the imported notebooks, we could also lock the cells as read-only in the notebooks. Whilst this requires an extension to be installed to implement the read-only behaviour, the intention is that we distribute a customised Jupyter notebook environment to students.)

The loadSceneRelativeToClient() function loads the specified scene into the simulator. Note that this scene should contain a robot model. Once the connection to the simulator is made, a robot object can be instantiated using the connection details. The robot class should contain the definitions required to control the robot model in the loaded scene.
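In use, that looks something like the following (the scene path is illustrative):

# Load a scene containing a robot model into the simulator
loadSceneRelativeToClient('../scenes/Pioneer.ttt')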

Setting up the connection to the simulator is a bit of a faff, and when code cell execution is stopped we can get an annoying KeyboardInterrupt report:

We can defend against the KeyboardInterrupt by wrapping the code execution in a try/except block:

try:
    with VRep.connect("127.0.0.1", 19997) as api:
        robot = PioneerP3DXL(api)
        while True:
            #do stuff
            pass
except KeyboardInterrupt:
    pass

But it struck me that it would be much nicer to be able to use some magic along the lines of the following, in which we set up the simulator with a scene, identify the robot we want to control, automatically connect to the simulator and then just run the robot control program:

So here’s a first attempt at some IPython cell magic to do that:

#__future__ imports must appear before any other code
from __future__ import print_function

from pyrep import VRep
from IPython.core.magic import (Magics, magics_class, line_magic,
                                cell_magic, line_cell_magic)
import shlex

# The class MUST call this class decorator at creation time
@magics_class
class Vrep_Sim(Magics):

    @cell_magic
    def vrepsim(self, line, cell):
        "V-REP magic"

        #Use shlex.split to handle quoted strings containing a space character
        loadSceneRelativeToClient(shlex.split(line)[0])

        #Get the robot class from the string
        robotclass=eval(shlex.split(line)[1])

        #Handle default IP address and port settings; grab from globals if set
        ip = self.shell.user_ns.get('vrep_ip', '127.0.0.1')
        port = self.shell.user_ns.get('vrep_port', 19997)

        #The try/except block exits from a keyboard interrupt cleanly
        try:
            #Create a connection to the simulator
            with VRep.connect(ip, port) as api:
                #Set the robot variable to an instance of the desired robot class
                robot = robotclass(api)
                #Execute the cell code - define robot commands as calls on: robot
                exec(cell)
        except KeyboardInterrupt:
            pass

    #@line_cell_magic
    @line_magic
    def vrep_robot_methods(self, line):
        "Show methods"
        robotclass = eval(line)
        methods = [method for method in dir(robotclass) if not method.startswith('_')]
        print('Methods available in {}:\n\t{}'.format(robotclass.__name__ , '\n\t'.join(methods)))

#Could install as magic separately
ip = get_ipython()
ip.register_magics(Vrep_Sim)
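With the magic registered, a robot control program can be run from a single cell along the following lines (the scene path is illustrative, and the method name is the sort of thing the robot class might define):

%%vrepsim '../scenes/Pioneer.ttt' PioneerP3DXL
# The magic loads the scene, makes the connection, and binds: robot
while True:
    robot.move_forwards()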

I’ve also added a bit of line magic to display the methods defined on a robot model class:
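For example, to list the control methods defined on the PioneerP3DXL class:

%vrep_robot_methods PioneerP3DXL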

The tension now is a pedagogical one: for example, should I be providing students with the robot model, or should they be building up the various control functions (.move_forwards(), .turn_left(), etc.) themselves?

I’m also wondering whether I should push the while True: component into the magic? On balance, I think students need to see it in their code block because getting them to think about control loops rather than one shot execution of command statements is something they often don’t get the first, or even second, time round. But for reducing clutter, it’d make for far cleaner cell block code.
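If I did push the loop into the magic, it might be as simple as wrapping the cell code before executing it; a minimal sketch:

import textwrap

#Inside the vrepsim magic, wrap the cell code in a loop before exec'ing it
exec('while True:\n' + textwrap.indent(cell, '    '))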

Distributing Virtual Machines That Include a Virtual Desktop To Students – V-REP + Jupyter Notebooks

When we put together the virtual machine for TM351, the data management and analysis course, we built a headless virtual machine that did not contain a graphical desktop, but instead ran a set of services that could be accessed within the machine at a machine level, and via a browser based UI at the user level.

Some applications, however, don’t expose an HTML based graphical user interface over http; instead, they require access to a native windowing system.

One way round this is to run a system that can generate an HTML based UI within the VM and then expose that via a browser. For an example, see Accessing GUI Apps Via a Browser from a Container Using Guacamole.

Another approach is to expose an X11 window connection from the VM and connect to that on the host, displaying the windows natively on host as a result. See for example the Viewing Application UIs and Launching Applications from Shortcuts section of BYOA (Bring Your Own Application) – Running Containerised Applications on the Desktop.

The problem with the X11 approach is that it requires gubbins (technical term!) on the host to make it work. (I’d love to see a version of Kitematic extended not only to support docker-compose but also pre-packaged with something that could handle X11 connections…)

So another alternative is to create a virtual machine that does expose a desktop, and run the applications on that.

Here’s how I think the different approaches look:

[Image: vm_styles, comparing the different approaches]

As an example of the desktop VM idea, I’ve put together a build script for a virtual machine containing a Linux graphic desktop that runs the V-REP robot simulator. You can find it here: ou-robotics-vrep.

The repo uses one Vagrant script to build the VM and another to launch it.

Along with the simulator, I packaged a Jupyter notebook server that can be used to create Python notebooks that can connect to the simulator and control the simulated robots running within it. These notebooks could be viewed via a browser running on the virtual machine desktop, but instead I expose the notebook server so notebooks can be viewed in a browser on the host.

The architecture thus looks something like this:

I’d never used Vagrant to build a Linux desktop box before, so here are a few things I learned about and observed along the way:

  • installing ubuntu-desktop naively installs a whole range of applications as well. I wanted a minimal desktop that contained just the simulator application (though I also added in a terminal). For the minimal desktop, apt-get install -y ubuntu-desktop --no-install-recommends;
  • by default, Ubuntu requires a user to login (user: vagrant; password: vagrant). I wanted to have as simple an experience as possible so wanted to log the user in automatically. This could be achieved by adding the following to /etc/lightdm/lightdm.conf:
[SeatDefaults]
autologin-user=vagrant
autologin-user-timeout=0
user-session=ubuntu
greeter-session=unity-greeter
  • a screensaver kept kicking in and kicking back to the login screen. I got round this by creating a desktop settings script (/opt/set-gnome-settings.sh):
#!/bin/bash

#dock location
gsettings set com.canonical.Unity.Launcher launcher-position Bottom

#screensaver disable
gsettings set org.gnome.desktop.screensaver lock-enabled false

and then pointing to that from a desktop_settings.desktop file in the /home/vagrant/.config/autostart/ directory (with execute permissions set on both the script and the .desktop file):

[Desktop Entry]
Name=Apply Gnome Settings
Exec=/opt/set-gnome-settings.sh
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Type=Application
  • because the point of the VM is largely to run the simulator, I thought I should autostart the simulator. This can be done with another .desktop file in the autostart directory:
[Desktop Entry]
Name=V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Type=Application
X-GNOME-Autostart-enabled=true
  • the Jupyter notebook server is started as a service and reuses the installation I used for the TM351 VM;
  • I thought I should also add a desktop shortcut to run the simulator, though I couldn’t find an icon to link to. Create an executable run_vrep.desktop file and place it on the desktop:
[Desktop Entry]
Name=V-REP Simulator
Comment=Run V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Icon=
Terminal=false
Type=Application

Here’s how it looks:

If you want to give it a try, comments on the build/install process would be much appreciated: ou-robotics-vrep.

I will also be posting a set of activities based on the RobotLab activities used in TM129, in anticipation of the possibility that we start using V-REP on TM129. The activity notebooks will be posted in the repo and via the associated uncourse blog if you want to play along.

One issue I have noticed is that if I resize the VM window, V-REP crashes… I also can’t figure out how to open a V-REP scene file from script (issue) or how to connect using a VM hostname alias rather than IP address (issue).