Simple 2D ev3devsim Javascript Simulator Running as an ipywidget in Jupyter Notebooks

So…

…for a course revision upcoming, I’ve been tweaking a thing.

The thing is ev3devsim [repo], a Javascript powered 2D robot simulator that allows you to execute Python code, via Skulpt, in the browser to control a simple simulated robot.

The Python package used to control the robot is a Skulpt port of ev3dev-lang-python, a Python wrapper for the ev3dev Linux distribution for Lego EV3 robots. (Long-time readers may recall I explored ev3dev for use in an OU residential school way back when and posted a few related notebooks.)

Anyway… we want to use Python in the module revision; the legacy activities we want to update look similar-ish (sort of, almost), so we may be able to reuse some of them; and I want to do the activities via Jupyter notebooks.

So I’ve had a poke around and think I’ve managed to make the fumblings of a start around an ipywidget wrapper for the simulator that will allow us to embed it in a notebook.

Because I don’t understand ipywidgets at all, I’m using the jp_proxy_widget, which I first played with in the context of wrapping wavesurfer.js (Rapid ipywidgets Prototyping Using Third Party Javascript Packages in Jupyter Notebooks With jp_proxy_widget).

Here’s where I’m at [nbev3devsim; Binder demo available, if the code I checked in works!]:

The first thing to notice is that the terminal has gone. The idea is that you write the code in a code cell and inject it into the simulator. My model for doing this is via cell block magic or by passing code in a variable into the simulator (for generality, I should probably also allow a link to a .py file).
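As a very rough sketch of the cell magic route (the `sim.set_code()` method here is purely hypothetical, standing in for whatever the jp_proxy_widget plumbing actually ends up looking like):

```python
from IPython.core.magic import register_cell_magic

# `sim` is the simulator widget instance created earlier in the notebook;
# `set_code()` is a hypothetical stand-in for however the widget actually
# pushes a program string across to the simulator's code editor.
@register_cell_magic
def sim_magic(line, cell):
    """Post the contents of this code cell to the simulator."""
    sim.set_code(cell)
```

The “code in a variable” route would then just be a direct `sim.set_code(my_program)` call.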

The cell block magic still needs some work, I think; eg a temporary alert with a coloured background to say “code posted to simulator” that disappears on its own after a couple of seconds. I probably also need an easy way to preview the code currently assigned to the simulated robot.

You might also notice a chart display. This is actually a plotly streaming line chart that updates with sensor values (at the moment, just the ultrasound sensor; other sensors have different ranges, so do I scale those, or use different charts perhaps?)
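One possible way to duck the scaling question would be to normalise each sensor onto a common 0 to 1 range before it hits the chart; a minimal sketch, with the ranges themselves just assumptions rather than values taken from the simulator:

```python
# Nominal sensor ranges; illustrative values only, not taken from the simulator
SENSOR_RANGES = {
    "ultrasonic": (0, 255),   # EV3 ultrasonic distance, cm
    "light": (0, 100),        # reflected light, percent
    "gyro": (-180, 180),      # degrees
}

def normalise(sensor, value):
    """Map a raw reading onto 0..1 so different sensors can share one chart."""
    lo, hi = SENSOR_RANGES[sensor]
    return (value - lo) / (hi - lo)
```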

There is also an output window your code can print messages to, as the following hello-world magic shows:
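Something along these lines, say (reusing the placeholder `%%sim_magic` name from the sketch above):

```python
%%sim_magic
# Runs inside the simulator via Skulpt; print() output appears in the
# simulator's output window rather than in the notebook
print("hello world")
```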

We can read state out of the simulator, though the way the callbacks work, this seems to require running code across two cells to get the result into the Python environment?
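The pattern I’ve ended up with looks roughly like this (a sketch only; `get_state()` is a hypothetical stand-in for the actual widget callback mechanism):

```python
# Cell 1: request the simulator state; the response arrives asynchronously via
# the callback, so it typically isn't there until after this cell has finished
robot_state = {}

def _store_state(data):
    # Called back from the browser side with the simulator's current state
    robot_state.update(data)

sim.get_state(_store_state)   # hypothetical wrapper round the widget callback plumbing
```

```python
# Cell 2: by now the callback should have fired and the values can be inspected
robot_state
```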

I’ve also experimented with another approach where the widget’s parent object grabs (or could be regularly updated to mirror, maybe?) logged sensor readings from inside the simulator, which means I can interrogate that object, even as the simulator runs. (I also started to explore using a streaming dataframe for this data, but I’m not convinced that’s the best approach; certainly trying to stream logged data from the simulator into a streaming chart in the notebook context is laggy compared to the chart embedded in the simulator context.)
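The mirroring approach would look something like this (again a sketch; the `on_sensor_reading()` registration hook is hypothetical):

```python
# Keep a Python-side mirror of the simulator's logged sensor readings
sensor_log = []

def _on_reading(reading):
    # reading assumed to be a dict along the lines of {"t": 0.1, "ultrasonic": 34.2}
    sensor_log.append(reading)

sim.on_sensor_reading(_on_reading)   # hypothetical registration hook
```

The `sensor_log` list can then be poked at whenever you like, even while the simulator is still running.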

With the data in the Python context, we can do what we like with it, like chart it, etc.
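For example (the column names here just follow the assumed shape of the logged readings above):

```python
import pandas as pd

# Pull the mirrored log into a dataframe and chart it with the usual tools
df = pd.DataFrame(sensor_log)
df.plot(x="t", y="ultrasonic");
```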

There are a lot of tweaks that need to be made and things to be added to run the full complement of activities we ran in the original presentation of the course.

I’d already started to explore what’s required to add Python functions to Skulpt (eg Simple Text to Speech With Skulpt), although I’m not sure if that’s blocking (and if so, could it be handled asynchronously?). Today I managed to learn enough from this SO answer on making objects draggable to make the robot draggable on the canvas (I think; a PR is here, but it hasn’t been tested in a fresh/isolated environment yet; I only made the PR to give me s/thing to link to here!). The biggest issue I had was converting mouse co-ordinates to robot world canvas co-ordinates. There are still issues there, eg in getting the size of the robot right, but the co-ordinate management in the simulator looks a bit involved to me and I want to get my head round it enough that if I do start trying to simplify things, I don’t break other things!

Other things that really need adding:

  • ability to reset canvas in one go;
  • ability to rotate robot using mouse;
  • ability to add noise to motors and sensors;
  • configure robot from notebook code cell rather than simulator UI? (This could also be seen as an issue about whether to strip as much out of the widget as possible; a sketch of what notebook-side configuration might look like follows this list.)
  • predefine sensible robot configurations; (can we also have a single, centreline front-mounted light sensor?)
  • add pen-up / pen-down support (perhaps have a drawing layer in the simulator for this?)
  • explore supporting multiple simulators embedded in one notebook (currently it’s at most one, I suspect in large part because of specific id values assigned to DOM elements?)
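On the configuration front, what I have in mind is something along these lines (the `set_config()` method and the keys shown are illustrative placeholders, not an actual simulator schema):

```python
# Illustrative robot configuration passed in from a notebook code cell; the
# method name and keys are placeholders, not the simulator's real schema
robot_config = {
    "wheel_diameter": 56,     # mm
    "wheel_base": 120,        # mm between wheel centres
    "sensors": {
        "light_centre": {"x": 0, "y": 30},   # single centreline, front-mounted light sensor
        "ultrasonic": {"x": 0, "y": 20},
    },
    "motor_noise": 0.05,      # eg +/-5% jitter on commanded speeds
    "sensor_noise": 0.02,
}

sim.set_config(robot_config)
```

Having the configuration in the notebook like that would also make it easier to predefine and share sensible robot setups.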

The layout is also really clunky, the main issue being how to see the code against the simulator (if you need to). Two columns might be better (notebook text and code cells in one, the simulator windows stacked in the other?), but then again, a wide simulator window is really useful. A floating / draggable simulator window might be another option? I did think the simulator window might be tear-offable in JupyterLab, but I have never managed to successfully tear off any jp_proxy_widget in JupyterLab (my experiences using JupyterLab for anything are generally really miserable ones).

The original module simulator allowed you to step through the code, but: a) I don’t know if that would be possible; b) I suspect my coding knowledge / skills aren’t up to it; c) I really should be trying to write the activities, not sinking yet more time into the simulator. (One thing I do need to do is see if any of the code I wrote years ago when scoping things for the residential school is reusable, which could save some time…)

I also need to see if the simulator is actually any good for the practical activities we used in the original version of the course, or whether I need to write a whole new set of activities that do work in this simulator… Erm…