OpenRobertaLab – Simple Robot Programming Simulator and UI for Lego EV3 Bricks

Rather regretting not having done a deep dive into programming environments for the Lego EV3 somewhat earlier, I came across the inspired OpenRobertaLab (code, docs) only a couple of days ago.


(Way back when, in the first incarnation of the OU Robotics Outreach Group, we were part of the original Roberta project, which was developing a European educational robotics pack, so it’s nice to see it’s continued.)

OpenRobertaLab is a browser-accessible environment that lets users program a simulated robot using graphical blocks.


I’m not sure how easy it is to change the test track used in the simulator? That said, the default does have some nice features – a line to follow, colour bars to detect, a square to drive round.

The OU Robotlab simulator supported a pen down option that meant you could trace the path taken by the robot – I’m not sure if RobertaLab has a similar feature?


It also looks as if user accounts are available, presumably so you can save your programmes and return to them at a later date:


Account creation looks to be self-service:


OpenRobertaLab also allows you to program a connected EV3 robot running leJOS, the community-developed Java programming environment for the EV3s. It seems that it’s also possible to connect a brick running ev3dev to OpenRobertaLab using the robertalab-ev3dev connector. This package is preinstalled in ev3dev, although it needs enabling (and the brick rebooting) to run. ssh into the brick and then, from the brick command line, run:

sudo systemctl unmask openrobertalab.service
sudo systemctl start openrobertalab.service

Following a reboot, the Open Robertalab client should now automatically run and be available from the OpenRobertaLab menu on the brick. To stop the service / cancel it from running automatically, run:

sudo systemctl stop openrobertalab.service
sudo systemctl mask openrobertalab.service

If the brick has access to the internet, you should now be able to simply connect to the OpenRobertaLab server.

Requesting a connection from the brick gives you an access code you need to enter on the OpenRobertaLab server. From the robots menu, select connect...:


and enter the provided connection code (use the connection code displayed on your EV3):


On connecting, you should hear a celebratory beep!

Note that this was as far as I got – OpenRobertaLab told me a more recent version of the brick firmware was available and suggested I install it. Whilst it claimed it may still be possible to run commands using the old firmware, that didn’t seem to be the case?

As well as accessing the public OpenRobertaLab environment on the web, you can also run your own server. There are a few dependencies required for this, so I put together a Docker container, psychemedia/robertalab (Dockerfile), containing the server, which means you should be able to run it using Kitematic:


(For persisting things like user accounts and saved programmes, there should probably be a shared data container to persist that info?)
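One possible approach, as an untested sketch: rather than a data container, a named Docker volume would do much the same job. Note that the /data mount path here is a guess on my part, not taken from the image – check the image/Dockerfile for where the server actually keeps its state.

```shell
# Create a named volume and mount it into the server container so that
# user accounts and saved programmes survive container rebuilds.
# NB: /data is a guessed location for the server state, not confirmed.
docker volume create robertalab-data
docker run -d -p 1999:1999 -v robertalab-data:/data psychemedia/robertalab
```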

A random port will be assigned, though you can change this to the original default (1999):


The simulator should run fine using the IP address assigned to the docker machine, but in order to connect a robot on the same local WiFi network to the Open RobertaLab server, or connect to the programming environment from another computer on the local network, you will need to set up port forwarding from the Docker VM:


See Exposing Services Running in a Docker Container Running in Virtualbox to Other Computers on a Local Network for more information on exposing the containerised Open Robertalab server to a local network.

On the EV3, you will need to connect to a custom Open Robertalab server. The settings you need are the IP address of the computer the server is running on (on a Mac, you can find this in the Mac Network settings), along with the port number the server is listening on:

So for example, if Kitematic has assigned the port number 32567, and you didn’t otherwise change it, and your host computer IP address is, you should connect to: from the Open Robertalab connection settings on the brick. On connecting, you will be presented with a pass code as above, which you should enter in your local OpenRobertaLab webpage.

Note that when trying to run programmes on a connected brick, I suffered the firmware mismatch problem again.

Exposing Services Running in a Docker Container Running in Virtualbox to Other Computers on a Local Network

Most of my experiments with Docker on my desktop machine to date have been focused on reducing installation pain and side-effects by running applications and services that I can access from a browser on the same desktop.

The services are exposed against the IP address of the virtual machine running docker, rather than localhost of the host machine, which also means that the containerised services can’t be accessed by other machines connected to the same local network.

So how do we get the docker container ports exposed on the host’s localhost network IP address?

If docker is running the containers via Virtualbox in the virtual machine named default, it seems all we need to do is tweak a couple of port forwarding rules in Virtualbox. So if I’m trying to get port 32769 on the docker IP address relayed to the same port on the host localhost, I can issue the following terminal command if the Docker Virtualbox is currently running:

VBoxManage controlvm "default" natpf1 "tcp-port32769,tcp,,32769,,32769"

which has syntax:

natpf<1-N> [<rulename>],tcp|udp,[<hostip>],<hostport>,[<guestip>],<guestport>
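By way of illustration, the rule string for a given port could be built up programmatically; here’s a minimal Python sketch (the helper function name is mine, not part of VBoxManage):

```python
def natpf_rule(port, proto="tcp", name=None):
    """Build a Virtualbox natpf port-forwarding rule string of the form
    [<rulename>],tcp|udp,[<hostip>],<hostport>,[<guestip>],<guestport>,
    forwarding the same port on host and guest, with the host and guest
    IP fields left blank (so they bind to the defaults)."""
    name = name or f"{proto}-port{port}"
    return f"{name},{proto},,{port},,{port}"

print(natpf_rule(32769))  # → tcp-port32769,tcp,,32769,,32769
```

The resulting string is what gets passed as the final argument to `VBoxManage controlvm "default" natpf1 …`.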

Alternatively, the rule can be created from the Network – Port Forwarding Virtualbox settings for the default box:


To clear the rule, use:

VBoxManage controlvm "default" natpf1 delete "tcp-port32769"

or delete from the Virtualbox box settings Network – Port Forwarding rule dialogue.

If the box is not currently running, use:

VBoxManage modifyvm "default" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage modifyvm "default" --natpf1 delete "tcp-port32769"

The port should now be visible at localhost:32769 and, by extension, may be exposed to machines on the same network as the host machine by calling the host machine’s IP address with the forwarded port number.
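To sanity-check that a forwarded port is actually reachable, a quick Python sketch (the host and port values are placeholders for whatever your setup uses):

```python
import socket

def port_open(host, port, timeout=1.0):
    # Try a TCP connection; True if something is listening at host:port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused and timeouts
        return False

# e.g. port_open("localhost", 32769) to test the forwarded container port
```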

On a Mac, you can find the local IP address of the machine from the Mac’s Network settings:



Sharing Files With a Lego EV3 Brick Over Wifi Using Filezilla

A handy comment on the ev3dev github repository shows how easy it is to share files between a laptop/desktop computer and a networked Lego EV3 brick over wifi.

For a brick on the network, from the Filezilla File -> Site Manager… menu:

We can also set the default directory on the brick (as well as on the host):


From the Filezilla View -> Filename Filters… option we can hide the display of hidden files:


To copy files, simply drag them from one machine to the other:


To run python files on the brick, you need to make them executable. Right-click on a Python file and select the File permissions… option:


then set the execute file permission as required:



See also: Running Python Programmes on the Lego EV3 via the EV3 File Browser for a command line/scp route for file transfer, and Setting Up PyCharm To Work With a Lego Mindstorms EV3 Brick for a guide on file synching via git checkins using the PyCharm Community Edition editor / IDE.

Setting Up PyCharm To Work With a Lego Mindstorms EV3 Brick

Notes based on Setting Up a Python Development Environment with PyCharm for setting up the PyCharm editor (I use the free Community Edition) to work with the EV3. Requires passwordless ssh into the brick, with the brick on the local network.

We’re going to go round the houses with git checkins to move stuff from the Mac and the PyCharm editor to the brick.

Get into the brick and do some minimal config stuff:

[Mac] ssh robot@

[EV3] sudo apt-get update
[EV3] sudo apt-get install git
[EV3] mkdir -p /home/robot/demoproj
[EV3] mkdir -p /home/robot/demoproj/.demoproj.git
[EV3] git init --bare /home/robot/demoproj/.demoproj.git

Now use the nano editor on the brick to create a post-receive hook that will populate the demoproj dir with the files we push, setting them to be executable.

[EV3] nano /home/robot/demoproj/.demoproj.git/hooks/post-receive

In the nano editor, change the file to:

#!/bin/sh
git --work-tree=/home/robot/demoproj --git-dir=/home/robot/demoproj/.demoproj.git checkout -f
find /home/robot -iname \*.py | xargs chmod +x

The chmod on the py files makes them executable, so as long as you hashbang the first line of any python files (with #!/usr/bin/python), they should be runnable from the file browser menu on the brick.

Then ctrl-x to exit, saying Yes to save the file on the way out and accepting the default file name. Now make that hook executable:

[EV3] chmod +x /home/robot/demoproj/.demoproj.git/hooks/post-receive

Should we prepend hashbangs as well? This will add one to the start of all py files not containing one:

grep -rL '^#!/usr/bin/python$' /home/robot/demoproj/*.py | xargs sed -i '1i #!/usr/bin/python'
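The same shebang-prepending step can also be sketched in Python, which sidesteps the shell quoting issues with awkward filenames (the function name is mine, for illustration):

```python
from pathlib import Path

SHEBANG = "#!/usr/bin/python"

def ensure_shebang(pyfile):
    # Prepend the shebang if the file doesn't already start with one;
    # a second call on the same file is a no-op.
    p = Path(pyfile)
    text = p.read_text()
    if not text.startswith("#!"):
        p.write_text(SHEBANG + "\n" + text)

# e.g. for f in Path("/home/robot/demoproj").glob("*.py"): ensure_shebang(f)
```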

In PyCharm, VCS->Check Out From Version Control to create a new project, selecting git as the checkout target (so you’ll also need git installed on the Mac…).

The VCS Repository URL is: robot@, with the Parent Directory (for example, /Users/me/projects) and Directory name (e.g. testProj – note, this must be a new folder) specifying the location on the Mac where you want to keep a local copy of the project files.

Say yes to all the crap PyCharm wants to create, and Yes when it prompts if you want to open the newly created directory. Create a new python file containing the following test program:

#!/usr/bin/python
#The shebang above runs this file with the python shell
from import Sound
#Make a short sound

Save it, press the VCS /UP ARROW/ button top right in PyCharm to commit, add a commit message, then bottom right Commit and Push.

This should commit the file locally and also push it into the git repo on the ev3; the commit hook on the ev3 will copy the file into the demoproj folder and, if it’s a .py file, mark it as executable.

You should now be able to locate it and run it from the ev3dev file browser.

See also: Using IPython on Lego EV3 Robots Running Ev3Dev and Running Python Programmes on the Lego EV3 via the EV3 File Browser.

PS to enable autocompletion in PyCharm, check what Python shell the project is using (in the settings/preferences somewhere, though I’m damned if I know where to find them; I HATE IDEs with a passion – way too cluttered and overly complex…). Then, using the same Python shell:

git clone
cd ev3dev-lang-python
python install
cd ..
rm -r ev3dev-lang-python/

Run the programme in PyCharm (or at least, the import lines) and autocompletion should be enabled. The complete code won’t run properly though, because you’re not running it in an ev3dev environment…

Running Python Programmes on the Lego EV3 via the EV3 File Browser

A quick note to self… If we test python scripts on an EV3 brick interactively via a Jupyter notebook and then save the notebook as a python file, we can get the Python file over to the EV3 brick and into a state where we can run it from the file browser by using the following recipe:

echo '#!/usr/bin/python'|cat - > /tmp/out && mv /tmp/out
scp robot@
ssh robot@ 'chmod ugo+x /home/robot/'

Then in the file browser on the brick, just click on the file to run it.

It would be easy enough to script this so a single button in the notebook does it automatically? (In which case, it would probably make more sense to modify the template used to generate the Python save file so that it includes the shebang at the start of the file anyway?)
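A sketch of what that scripted version might build under the hood – the helper name, the host, and the paths here are illustrative placeholders, not values from the recipe above:

```python
import shlex
from pathlib import Path

def deploy_commands(local_py, host="robot@ev3dev.local", remote_dir="/home/robot"):
    # Build the three shell commands from the recipe: prepend a shebang,
    # copy the file over to the brick, and mark it executable there.
    # (Host and remote_dir are hypothetical defaults.)
    remote = f"{remote_dir}/{Path(local_py).name}"
    return [
        f"echo '#!/usr/bin/python' | cat - {local_py} > /tmp/out && mv /tmp/out {local_py}",
        f"scp {local_py} {host}:{remote}",
        f"ssh {host} {shlex.quote('chmod ugo+x ' + remote)}",
    ]
```

A notebook button handler could then hand each of these strings to subprocess in turn.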

PS see also Setting Up PyCharm To Work With a Lego Mindstorms EV3 Brick and Sharing Files With a Lego EV3 Brick Over Wifi Using Filezilla.

From StringLE to DockLE… Erm… Maybe…

Getting on for a decade ago, I doodled an approach for creating a user-defined personal learning environment based around online applications and resources: StringLE, the string’n’glue learning environment (more and more and more).


StringLe was of the web. A StringLE environment configuration comprised three things:

  • the username and tag for a particular delicious social bookmark feed that contained a list of web applications that were linked from the top menu bar of a StringLE setup; tools included things like online office tools, drawing packages, and audio and image editors
  • a link to an OPML feed: this feed populated a Grazr widget in the left hand sidebar that could be used to load in further RSS feeds to act as more comprehensive nested menu options that could be used to pull resources into the environment.
  • a homepage URL for the environment.

The idea was simple – through managing various RSS feeds, the user could create a workspace configured to load in a variety of tools (web applications) arranged along a top tabbed menu bar, and online content pages accessed via an OPML navigation widget and in-page RSS viewer widget (actually, the OPML and RSS viewer widgets were one and the same: Grazr? Remember Grazr? I went through a phase of packaging MIT courses as OPML files for rendering in Grazr, eg here and here), as well as using the environment as a normal web browser, and bookmarking links visited within the space into one of the feeds exposed via the navigational OPML feed.

It was a throwaway toy, but one that demonstrated a way of pulling a range of separate applications together that might be used within a particular course or course activity. For different courses or activities, it was easy enough to pull together different collections of tools for that particular purpose.

This notion of online tool collections – workbenches, perhaps? or tool suites? – is used, in part, in things like SagemathCloud, or the IBM Data Workbench, where a user is provided access to a range of different online applications within a single online environment.

Those workbenches go much further, however, and provide an online workspace in the form of a user filestore that can be accessed by all the applications within the suite. Services like Sandstorm also resemble StringLE in the sense that they provide an environment from within which a user can launch a range of user selected online applications.

For anyone who’s followed this blog for any period of time, you’ll probably know the same ideas just keep getting revisited and recycled. So now, for example, it strikes me that a Docker Compose file provides another way of defining a user-configured workbench comprising multiple linked applications, along with a workspace in the sense of a shared file area that can be used to pass files between the applications.

So for example, here’s an example from earlier this year of the Docker Compose script I used to pull together a set of containers to re-implement the TM351 monolithic VM as a set of linked containers (raw dockerfiles on Github).

What might be quite nice would be to add a Flask container to the mix to set up a simple webserver, and generate a simple HTML file from the docker-compose file to act as a floating menu panel with a set of buttons, one for each application, that would let me click through to that application. A simple wrapper script would then: take the docker-compose file, add a flask control panel container to it, create the menu HTML from the docker-compose file, run docker-compose over the docker-compose file to launch the containers, and pop the HTML menu up in a browser window.
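As a very rough sketch of the menu-generation step, assuming a version 2 style compose file with a top-level services: block – this is a naive line-based scrape of the service names rather than a proper YAML parse, and the URL scheme is made up:

```python
def compose_services(compose_text):
    # Crudely pull service names from a docker-compose file: two-space
    # indented keys under the top-level "services:" block. Not a YAML parser.
    services, in_services = [], False
    for line in compose_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        if not line.startswith(" "):
            in_services = (stripped == "services:")
        elif in_services and line.startswith("  ") and not line.startswith("   "):
            if stripped.endswith(":"):
                services.append(stripped[:-1])
    return services

def menu_html(services, host="localhost"):
    # One link per service; mapping services to their published ports
    # is left open here.
    links = "\n".join(f'<a href="http://{host}/{s}">{s}</a>' for s in services)
    return f'<div class="menu">\n{links}\n</div>'
```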

Anyway… suffice to say that the app collection idea reminded me of StringLE, but reimplemented via a Docker Compose script – as a DockLE?! – and using applications running in linked containers, rather than more loosely connected web apps identified via RSS and OPML feeds.