Category: OU2.0

Authoring Interactive Diagrams and Explorable Explanations

One of the things that the OU has always tended to do well is create clear – and compelling – diagrams and animations to help explain often complex topics. These include interactive diagrams that a learner can engage with and explore for themselves.

At a time when the OU is looking to reduce costs across the board, finding more cost-effective ways of supporting the production, maintenance, presentation and updating of our courses, along with the components contained within them, is ever more pressing.

As a have-a-go technology optimist, I’m generally curious as to how technology may help us come up with, as well as produce, such activities.

I’m a firm believer in using play as a tool for self-directed discovery and learning, and practice as a way of identifying or developing, erm, new practice, and I’m also aware that new technology and tools themselves can sometimes require a personal time investment before you start to get productive with them. However, for many, if you don’t get to play often, knowing how to install or start using a new piece of software, let alone how to start playing with it once you are in, can be a blocker. And that’s if you’ve got – or make – the time to explore new tools in the first place.

Changing a workflow is also not just down to one person changing their own practice – it can heavily depend on immediate downstream factors, such as what the person you hand over your work to is expecting from you in order for them to do their job.

(Upstream considerations can also make life more or less easy. For example, if you want to analyse a data set that the person before you has handed over as a table in a PDF document, you have to do work to get the data out of the document before you can analyse it.)

And that’s part of the problem: tech can often help in several ways, but it is sometimes most effective when you change the whole process; if you stick with the old process and just update one step of the workflow, that can often make things worse, not better.

Sometimes, a workflow can just be bonkers. When we produced material for the FutureLearn Learn to Code MOOC, we used an authoring tool that could generate markdown content. The FutureLearn authoring environment is (I was told) a markdown environment. I was keen to explore an authoring route that would let us publish from the authoring environment to FutureLearn (in the absence of a FutureLearn API, I’d have been happy to finesse one by scraping form controls and bodging my own automation route.) As it was, we exported content from the markdown producing environment into Word, iterated through it there with the editor (introducing errors into code elements), and then someone cut and pasted the content into the FutureLearn editor, presumably restyling it as they did so. Then we had to fix the errors that were either introduced by the editing process, or made it through the editing process, by checking back against code in the original authoring environment. The pure markdown workflow was stymied because even though we could produce markdown, and FutureLearn could (presumably) accept it, the intermediate workflow was a Word based one. (The lesson from this? Innovation can be halted if you have to use legacy processes in a workflow rather than reengineering all of it.)

The OU-XML authoring route has similar quirks: authors typically author in Word, then someone has to copy, paste and retag the content in an XML authoring tool so it’s marked up correctly.

But that’s all by the by, and more than enough for the subject of another post…

Because the topic of this post is a quick round-up of some tools that support the creation – and deployment – of interactive diagrams and explorable explanations. I first came across this phrase in a 2011 post by Bret Victor – Explorable Explanations, and I’ve posted about them a couple of times (for example, Time to Revisit Tangle?).

One of the most identifiable aspects of many explorable explanations is the interactive diagram, where you can explore some dynamic feature of an explanation directly. For example, exploring the effect of changing parameter values in an equation:

One of the things I’m interested in is frameworks and environments that support “direct authoring” of interactive components that could be presented to students. Ideally, the authoring environment should produce some sort of source code from which the final application can be previewed as well as published. Ideally, there should also be a separation between style and “content”, allowing the same asset to be rendered in multiple ways (this might include print as well as online static or interactive content).

Unfortunately, in many cases, direct authoring is replaced by a requirement to use some sort of “source code”. (That’s partly because building UIs that naive users can use can be really difficult, especially if those users refuse to use the UI because it’s a bit clunky – even though the code the UI generates, which is the thing you actually want to produce, may actually be quite simple, and it might be much easier if authors just wrote that source code directly.)

For example, I recently came across Idyll [view the code and/or read the docs], a framework for creating interactive documents. See the following couple of examples to get a feel for what it can do:

The example online editor gives an example of the markup language (markdown, with extensions) and the rendered, interactive document:

(It’d be quite interesting to see how closely this maps onto the markdown export from a Jupyter notebook that incorporates ipywidgets.)

Moving the sliders in the rendered document changes the variable values and dynamically replots the curve in the chart.
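To give a flavour of the markup, here’s a minimal sketch using what I understand to be Idyll’s core built-in components (the variable name, ranges and wording are purely illustrative, not taken from the Idyll examples):

[var name:"gradient" value:1 /]

Drag the slider to change the gradient: [Range min:-5 max:5 step:0.1 value:gradient /]
The current value is [Display value:gradient /].

Declaring a variable with a [var /] tag and binding it to a [Range /] slider and a [Display /] output is enough to make a piece of text reactive; chart components can be bound to the same variables (or values derived from them) in a similar way to get the replotting behaviour described above.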

I can see Idyll becoming a component of the forthcoming OpenCreate tool, so it’ll be interesting to see if anyone else can too – partly because it would presumably require downstream buy-in to using the interactive components Idyll bundles.

Whilst Idyll is a live project, the next one – Apparatus –  looks to have stalled. It has good provenance, though, with one of the examples coming from Bret Victor himself.

Here’s an example of the sort of thing it can produce:

The view can also reveal the underlying configuration:

The scene is built up from a set of simple objects, or from previously created objects (for example, the “Wheel with mark”). This feature is important because it encourages a useful behaviour in new users: creating simple building blocks that do a particular thing, and then assembling those building blocks to do more complex things later on.

The apparatus “manual” fits in one diagram:

The third tool – Loopy – also looks like it may have recently stalled (again, the code is available and the UI runs in a browser). This tool allows the creation, through direct manipulation, of a particular sort of “systems diagram” in which influence at one node can positively or negatively influence another node:

To create a node, simply draw a circle; to connect nodes, draw a line from one node to another.

You can set the weight, positive or negative:

 

As well as adding and editing text, and moving or deleting items:

You can also animate the diagram, feeding in positive or negative elements from one item and seeing how those changes feed through to influence the rest of the system:

The defining setup of the diagram can be saved in a URI and then shared.

All three of these applications encourage the user to explore a particular explanation.

Apparatus and LOOPY both provide direct authoring environments that allow the user to create their own scenes through adding objects to a canvas, although Apparatus does require the user to add arithmetic or geometrical constraints to some items when they are first created. (Once a component has been created, it can just be reused in another diagram.)

Apparatus and LOOPY also carry their own editor with them, so a user could change the diagram themselves. In Idyll, you would need access to the underlying enhanced markdown.

If you know of any other browser based, open source frameworks for creating and deploying standalone, iframe/web page embeddable interactive diagrams and explorable explanations, please let me know via the comments.

PS for a range of other explorable explanations, see this awesome list of explorables.

Fragment – DIT4C – Docker Base Containers for Edu Remote Computing Labs

What’s an effective way of helping a student run a desktop application when their own computer won’t run the application locally, for whatever reason? Virtualised software, running remotely, provides one solution. So here’s an example of a project that looks at doing just that: DIT4C (“Data Intensive Tools for the Cloud”), a platform for hosting data analysis tools “in the cloud” using containers [repo].

Prepackaged, standalone containers are defined for a range of applications, including RStudio, Jupyter notebooks, Jupyter+R and OpenRefine.

Standalone Containers With Branded Landing Page

The application containers are built on top of a base container that includes an nginx webserver/proxy, a GoTTY shell and a file uploader. The individual containers then have a “homepage” that links to the particular application:

So what do we have at this point?

  • a branded landing page;
  • a browser-accessed shell;
  • a browser-accessed file uploader.

These services are all running within a single container. I don’t know if there’s a way of linking multiple containers using docker-compose? This would require finding some way of announcing the services provided by each container to a central nginx server, which could then link to each from a single homepage. But this would mean separate terminals and file uploaders into each one (though maybe the shared files could be handled as a single mounted volume shared across all the linked containers?).

Once again, I’m coming round to the idea that using a single container to run multiple services, rather than several linked containers each running a single service, is simpler, even if it does go against the (ideal?) model of using containers as part of a small-pieces-loosely-joined architecture. I think I need to post a simple recipe (or recipes) somewhere that shows different ways of running multiple services within a single container. The docker docs – Run multiple services in a container – provide a crib for this at the moment.
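By way of a crib, the simplest pattern I know of is a start-up script that launches each service in the background and leaves the last one running in the foreground to keep the container alive – something along these lines (the particular services and flags here are illustrative, not lifted from the DIT4C images):

#!/bin/bash
# start.sh – illustrative single-container start-up script
nginx &                                                   # web server / proxy
jupyter notebook --no-browser --ip=0.0.0.0 --port=8888 &  # notebook server
exec gotty -w bash                                        # browser shell, kept in the foreground

The container’s CMD then just points at that script. A process supervisor such as supervisord is the more robust version of the same idea, and is one of the approaches the docker docs page describes.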

X11 Applications

Skimming the docs, I notice reference to a base X11 desktop container. Interesting… I have a PhD student looking for an easy way to host a Qt widget running application in the cloud for evaluation purposes. To this end I’ve just started looking around for X11/noVNC web client containers that would allow us to package the app in a simple container then access it from something like Digital Ocean (given there’s no internal OU docker container hosting service that I’m allowed to access (or am aware of… Maybe on the Faculty cluster?)).

So things like this show the way – a container that offers a link to a containerised “desktop” application, in this case QGIS (dit4c/dockerfile-dit4c-container-qgis). (Does the background colour mean anything, I wonder? How could we make use of background colour in OU containers?):


Following the X11 Session link, we get to a desktop:

There’s an icon in the toolbar to the application we want – QGIS:

What I’m thinking now is this could be handy for running the V-REP robot simulator, and maybe Gephi…

It also makes me think that things could be simplified a little further by offering a link to QGIS, rather than to a generic X11 Application, and opening the application in full screen mode (on the virtualised desktop) on start-up. (See Distributing Virtual Machines That Include a Virtual Desktop To Students – V-REP + Jupyter Notebooks for some thoughts on how to use VMs to distribute a single desktop application, launched automatically on start-up, to try to simplify the student experience.)

It also makes me even more concerned about the apparent lack of interest in, and even awareness of, the possibilities of virtualised software offerings within the OU. For example, at a recent SIG group on (interactive) maps/mapping, brief mention was made of using QGIS, and problems arising therefrom (though I forget the context of the problems). Here we have a solution – out there for all to see and anyone to find – that demonstrates the use of QGIS in a prebuilt container. But who, internally, would think to mention that? I don’t think any of the Tech Enhanced Learning folk I’ve spoken to would even consider it, if they are even aware of it as an option?

(Of course, in testing, it might be rubbish… how much bandwidth is required for a responsive experience when creating detailed maps? See also one of my earlier related experiments: Accessing GUI Apps Via a Browser from a Container Using Guacamole, which demonstrated remotely accessing the Audacity audio editor running in a cloud hosted container.)

The Platform Offering

Skimming through the repos, I (mistakenly, as it happens) thought I saw a reference to resbaz (ResBaz Cloud – Containerised Research Apps as a Service). I was mistaken in thinking I had seen a reference in the code I skimmed through – but not, it seems, in thinking that there is a relationship:

And so it seems that, perhaps more interestingly than the standalone containers, DIT4C is a platform offering (architecture docs), providing authenticated access to users, file persistence (presumably?) and the ability to launch prebuilt docker images as required.

That said, looking at the Github repository commits for the project, there appears to have been little activity since March 2017 and the gitter channel appears to have gone silent at the end of 2016. In addition, the docs for getting an instance of the platform up and running are a little bit too sparse for me to follow easily… [UPDATE: it seems as if the funding did run out/get pulled:-(]

So maybe, as a project, DIT4C is now “of historical interest” only, rather than being a live project we might have been able to jump on the back of to get an OU hosted remote computing lab up and running? :-( That said, the ResBaz (Research Bazaar) initiative, a “worldwide festival promoting the digital literacy emerging at the center of modern research”, still seems to be around…

My Personal TEL Mission Statement

Technology Enhanced Learning  (TEL) is “a thing” in the OU at the moment. I have no idea what folk (think they) mean by it.

Here’s what I mean by it, in the form of my own, ad hoc eTEL – emerging technology enhanced learning – mission statement.

What I aspire to do is:

  • explore how we might be able to use and repurpose emerging technology to support distance education;
  • use the technology we teach our students about to deliver that teaching;
  • use the technology we teach our students about to support that teaching;
  • use the technology we teach our students about to produce the courses we are teaching;
  • expose our students to emerging technologies that they can take and use in the outside world.

This obviously raises tensions, particularly where courses take two years to produce and then ideally (in the eyes of the organisation) remain unchanged for 5 years. The first step is risky, because it means trying new ways of doing things. The last step relates to my belief that universities should be helping push new ideas, technologies, techniques and processes out into society using our students as a vector.

For All The Corporatisation & “Analytics Everywhere” Hype, We Still Don’t Behave Like The Web Publisher We Are

A few weeks ago I spotted a review paper of “data wrangling” activities at the OU (Making sense of learner and learning Big Data: reviewing five years of Data Wrangling at the Open University UK). I saw it being linked to/promoted again today.

Apparently, “Data Wranglers [DWs] are a group of academics who analyse data about student learning and prepare reports with actionable recommendations based upon that data”. Also apparently, “[i]n practice” they also do “Big Data insights”. Or something. I’m not sure we have any “Big Data” do we? (Big data, meh.)

Furthermore, it seems that “Learning analytics are now increasingly taken into consideration at the OU when designing, writing and revising modules, and in the evaluation of specific teaching approaches and technologies”.

Looks around, confused…

…because something that I’ve been failing to understand for years and years and years and years is why no-one seems interested in taking the view that we are, in a lot of courses, delivering online content just like any other web publisher would, and as such we could be looking at ways of making our content “work better”, for some definition of “better”. Or even “work”.

In the learning analytics world, this possibly means building predictive models based on previous cohorts that show how students who dwelled this long on those content pages did well, while others who didn’t reveal that hidden answer, or visit that page, or who didn’t appear to visit any course pages, failed.

At this point, it’s probably worth mentioning that the OU, as a distance learning organisation, used to deliver course materials to students as print material, but increasingly we deliver material (that looks just like the print material) as HTML via a Moodle VLE. Each section of “as if” print material appears as a separate HTML page. (We also make PDFs available that students can download… It’d be interesting to know how many then print those PDF downloads out…)

It’s also worth mentioning that a lot of the teaching related activity pursued by the OU’s central academics relates to the production of course materials and assessment materials, which is to say, writing stuff, rather than delivery to students: when the course runs, it’s the moderators of online forums (which may include the occasional central academic) and the students’ personal tutors  (Associate Lecturers, in OU parlance), who are the people who actually engage with students directly.

So to a large extent, once the stuff is written, that’s the job done. Despite a laborious editing and publishing process to get the material onto the website, errors do slip through, and when spotted (often by pathfinder/vanguard students studying course material weeks ahead of the course schedule) they are corrected in another lengthy process (authors don’t have edit/write permissions on the course materials), and in some cases errors may be left uncorrected in situ, with students expected to pick up the corrections via errata notices. Just like the print days…

So what I keep on not understanding is why we don’t have someone paying attention to the course material as web content, with a view to helping us better understand the obvious (because there’s nothing f****g difficult I want to learn from the pages), as I demoed nine years ago. For example:

  • what’s the course dynamic in terms of content use (when are most students studying particular parts of the course)? – have we got the pacing about right?
  • what’s the weekly rhythm of the course (what time of day are most students accessing the content pages?) – this could help forum moderators schedule their time;
  • how much time are students spending, on average, in a particular study session, and does this vary (e.g. 1-2 hours on a weekday evening, 3-4 hours for daytime or weekday study, 45 mins over lunch periods), and so on; i.e. what user stories might we create *from the data*? (See the sketch after this list.)
  • how much time are students spending on particular pages? Are some pages just too long, or maybe have an idea or activity that is taking a lot of time to complete – or less time than we expect? Handy to know as a content designer (which is what course authors are). For the learning analytics surveillance freaks, can they spot students who spend more or less time than average on a particular page as a “likely fail” feature that they can celebrate?
  • are those links to external resources clicked on? Ever?
  • are the “optional activities” linked to on separate pages visited? Ever? Again, the learning analytics folk may be able to wet themselves finding correlation features on those pages, but I don’t really care about that. I just want to know, in the first instance, are the pages visited. Ever. (If they are, and it’s only a fraction of students who visit those pages/follow those links, then maybe it becomes useful to track the learning analytics stuff to see if we can figure what sort of student is making use of those resources. But rather than caring about a particular student, I’m more interested in getting a better user story dialled in that I can use to help as one more focal point to motivate content production in future courses.)
  • are students using particular devices, or are the same users using different sorts of devices at different times of day? With our insistence on still delivering software that needs to be installed on a traditional desktop computer, it would be useful to know whether device availability affects what a student is able to study, and when. And if it comes to trying to pitch particular computer requirements, it would be handy to know what the baseline is (which course webstats can provide an indicator of), and the extent to which this may vary across faculties or course levels.
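To be clear about how basic this is, here’s the sort of sketch I have in mind, assuming nothing more than a page-view log with a student identifier, a page URL and a timestamp (the column names, file name and the 30 minute session gap are all illustrative – this isn’t the actual VLE or Google Analytics schema):

import pandas as pd

# Illustrative page-view log: one row per content-page request
views = pd.read_csv("vle_pageviews.csv", parse_dates=["timestamp"])

# Weekly rhythm: what time of day are content pages being accessed?
views["hour"] = views["timestamp"].dt.hour
hourly_profile = views.groupby("hour").size()

# Rough study sessions: split each student's page views on gaps of more
# than 30 minutes, then see how long each session lasts
views = views.sort_values(["student_id", "timestamp"])
new_session = views.groupby("student_id")["timestamp"].diff() > pd.Timedelta("30min")
views["session"] = new_session.groupby(views["student_id"]).cumsum()
session_length = views.groupby(["student_id", "session"])["timestamp"].agg(
    lambda ts: ts.max() - ts.min()
)

print(hourly_profile)
print(session_length.describe())

Nothing clever – but even counts and session lengths like these would start to support (or challenge) the sorts of user stories listed above.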

Sometimes it can be comforting to see that your expectations about how the content would be used appear to be being met. Sometimes it can be revealing to find out that they’re not.

This is all basic stuff, and someone can probably have a fun time building some dashboards to report it. (Maybe there are some already, but no-one’s directed me to them despite my asking everyone I can think of.)

To reiterate on the why: I just want to be able to tell myself more informed stories about how the content appears to be being used en masse, and maybe also identify different audience segments in the data (eg weekend studiers, weekday nighters, full-timers). Looking across courses (faculties, levels), it may be that we get different sorts of pattern/segmentation, which could be interesting from a user / user story informed content design perspective. It may well also prompt “learning analytics” discussions. (Writing this, I’ve come to realise I associate learning analytics with tracking back into individual data from “success” criteria such as assessment scores. For the content analysis, in the first instance, I’m just interested in how it’s generally being consumed. No individual data necessary. Once I’ve got broad usage pattern segments down, then maybe looking at performance level segments would be useful. But then, I’d rather just track the whole cohort score distribution to try and improve that.)

From looking at VLE pages, it looks as if there are Google Analytics and optimizely tracking scripts linked in the pages, although asking around I can’t find anyone who does anything with that data from the VLE pages. (Maybe the “DW”s do?) So I’m guessing the data is there?

 

PS One of the things I think optimizely may be used for is A/B testing by the Marketing folk on other bits of the website. Something I’ve pitched before is A/B testing on course materials (e.g. differently phrased or worked versions of the same activity).

This has generally been treated with disdain, but if it works for medical trials I don’t see why we can’t try it in education too. There is an argument here that we would need to track effect on attainment (the learning analytics thing), but I’m wary of the idea that changing a single page in several hundred could wildly affect attainment, unless it related to a particular key concept that the whole course hinged on. More realistically, if we see a page on average is taking students an hour to work through when we estimated it at 20 minutes, I’d be tempted to do A/B tests on it within a cohort. (Managing that if students chat about the topic in the common forums could represent a challenge!) The idea would be to see if we could improve the content performance more in line with expectations. As it is, the current approach would be to wait until the next presentation and give that whole cohort the new version. Which would of course be previously untested at scale. And may end up with students taking even longer to work through it.

Sharing Folders into VMs on Different Machines Using Dropbox, Google Drive, Microsoft OneDrive etc

Ever since I joined the OU, I’ve believed in trying to deliver distance education courses in an agile and responsive way, which is to say: making stuff up for students whilst the course is in presentation.

This is generally not done (by course/module teams at least) because the aim of most course/module teams is to prepare the course so thoroughly that it can “just” be presented to students.

Whatever.

I personally think we should try to improve the student experience of the course as it presents if we can by being responsive and reactive to student questions and issues.

So… TM351, the data management course that uses a VM, has started again, and issues / questions are already starting to hit the forums.

One of the questions – which I’d half noted but never really thought through in previous presentations (my not iterating/improving the course experience in, or between, previous presentations)  – related to sharing Jupyter notebooks across different machines using Google Drive (equally, Dropbox or Microsoft OneDrive).

The VirtualBox VM we use is fired up using the vagrant provisioner. A Vagrantfile defines various configuration settings – which ports are exposed by the VM, for example. By default, the contents of the folder in which vagrant is started up are shared into the VM. At the same time, vagrant creates a hidden .vagrant folder that contains state relating to the instance of that VM.

The set up on a single machine is something like this:

If a student wants to work across several machines, they need to share their working course files (Jupyter notebooks, and so on) but not the VM machine state. Which is to say, they need a set up more like the following:

For students working across several machines, it thus makes sense to have all project files in one folder and a separate .vagrant settings folder on each separate machine.

Checking the vagrant docs, it seems as if this is quite manageable using the synced folder configuration settings.

The default shares the current project folder (containing the Vagrantfile, and from which vagrant is run), which I’m guessing is a setting something like:

config.vm.synced_folder "./", "/vagrant"

By explicitly setting this parameter, we can decide how we want the mapping to occur. For example:

config.vm.synced_folder "/PATH/ON/HOST", "/vagrant"

allows you to specify the folder you want to share into the VM. Note that the /PATH/ON/HOST folder needs to be created before trying to share it.

To put the new shared directory into effect, reload and reprovision the VM. For example:

vagrant reload --provision

Student notebooks located in the notebooks folder of that shared directory should now be available in the VM. Furthermore, if the shared folder is itself in a webshared folder (for example, a synced Dropbox, Google Drive or Microsoft OneDrive folder) it should be available wherever that folder is synched to.

For example, on a Mac (where ~ is an alias to my home directory), I can create a directory in my dropbox folder ~/Dropbox/TM351VMshare and then map this into the VM by adding the following line to the Vagrantfile:

config.vm.synced_folder "~/Dropbox/TM351VMshare", "/vagrant"

Note the possibility of slight confusion – the shared folder will not now be the folder from which vagrant is run (unless the folder you are running from is /PATH/ON/HOST).

Furthermore, the only thing that needs to be in the folder from which vagrant is run is the Vagrantfile and the hidden .vagrant folder that vagrant creates.
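Putting that together, the Vagrantfile in each per-machine folder ends up looking something like the following sketch (the box name here is just a placeholder – the rest of the settings come from the original course Vagrantfile):

# Illustrative Vagrantfile fragment – box name is a placeholder
Vagrant.configure("2") do |config|
  config.vm.box = "tm351-base"   # whatever base box the course Vagrantfile specifies
  config.vm.synced_folder "~/Dropbox/TM351VMshare", "/vagrant"
  # ...port forwards and provisioning settings as per the original Vagrantfile...
end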

Fingers crossed this recipe works…;-)

Sharing Pre-Built VMs Via Vagrant Cloud

In passing, I noticed yesterday that Vagrant Cloud (docs) can be used to host and distribute public Vagrant base boxes. So I exported a box file from my V-REP’n’Jupyter VM:

vagrant package

uploaded it to Vagrant Cloud – ouseful/ou-robotics-test – and tweaked my Vagrantfile to use that copy as the base box:

config.vm.box = "ouseful/ou-robotics-test"

Now I’m thinking I should probably do the same for the TM351 VM, given the hassle it seems to take trying to get the .box file hosted for download on an OU URL…
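The user-side recipe then reduces to a Vagrantfile that names the Vagrant Cloud box, plus a vagrant up in the same folder (a sketch only – the rest of the configuration comes from the ou-robotics-vrep Vagrantfile):

# Minimal sketch: pull the base box from Vagrant Cloud on first vagrant up
Vagrant.configure("2") do |config|
  config.vm.box = "ouseful/ou-robotics-test"
end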

Distributing Virtual Machines That Include a Virtual Desktop To Students – V-REP + Jupyter Notebooks

When we put together the virtual machine for TM351, the data management and analysis course, we built a headless virtual machine that did not contain a graphical desktop, but instead ran a set of services that could be accessed within the machine at a machine level, and via a browser based UI at the user level.

Some applications, however, don’t expose an HTML based graphical user interface over http; instead, they require access to a native windowing system.

One way round this is to run a system that can generate an HTML based UI within the VM and then expose that via a browser. For an example, see Accessing GUI Apps Via a Browser from a Container Using Guacamole.

Another approach is to expose an X11 window connection from the VM and connect to that on the host, displaying the windows natively on host as a result. See for example the Viewing Application UIs and Launching Applications from Shortcuts section of BYOA (Bring Your Own Application) – Running Containerised Applications on the Desktop.

The problem with the X11 approach is that it requires gubbins (technical term!) on the host to make it work. (I’d love to see a version of Kitematic extended not only to support docker-compose but also pre-packaged with something that could handle X11 connections…)

So another alternative is to create a virtual machine that does expose a desktop, and run the applications on that.

Here’s how I think the different approaches look:


As an example of the desktop VM idea, I’ve put together a build script for a virtual machine containing a Linux graphic desktop that runs the V-REP robot simulator. You can find it here: ou-robotics-vrep.

There’s one Vagrant script to build the VM and another to launch it.

Along with the simulator, I packaged a Jupyter notebook server that can be used to create Python notebooks that connect to the simulator and control the simulated robots running within it. These notebooks could be viewed via a browser running on the virtual machine desktop, but instead I expose the notebook server so the notebooks can be viewed in a browser on the host.

The architecture thus looks something like this:

I’d never used Vagrant to build a Linux desktop box before, so here are a few things I learned about and observed along the way:

  • installing ubuntu-desktop naively installs a whole range of applications as well. I wanted a minimal desktop that contained just the simulator application (though I also added in a terminal). For the minimal desktop, apt-get install -y ubuntu-desktop --no-install-recommends;
  • by default, Ubuntu requires a user to login (user: vagrant; password: vagrant). I wanted to have as simple an experience as possible so wanted to log the user in automatically. This could be achieved by adding the following to /etc/lightdm/lightdm.conf:
[SeatDefaults]
autologin-user=vagrant
autologin-user-timeout=0
user-session=ubuntu
greeter-session=unity-greeter
  • a screensaver kept kicking in and kicking back to the login screen. I got round this by creating a desktop settings script (/opt/set-gnome-settings.sh):
#dock location
gsettings set com.canonical.Unity.Launcher launcher-position Bottom

#screensaver disable
gsettings set org.gnome.desktop.screensaver lock-enabled false

and then pointing to that from a desktop_settings.desktop file in the /home/vagrant/.config/autostart/ directory (I set execute permissions on the script and the .desktop file):

[Desktop Entry]
Name=Apply Gnome Settings
Exec=/opt/set-gnome-settings.sh
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Type=Application
  • because the point of the VM is largely to run the simulator, I thought I should autostart the simulator. This can be done with another .desktop file in the autostart directory:
[Desktop Entry]
Name=V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Type=Application
X-GNOME-Autostart-enabled=true
  • the Jupyter notebook server is started as a service and reuses the installation I used for the TM351 VM;
  • I thought I should also add a desktop shortcut to run the simulator, though I couldn’t find an icon to link to? Create an executable run_vrep.desktop file and place it on the desktop:
[Desktop Entry]
Name=V-REP Simulator
Comment=Run V-REP Simulator
Exec=/opt/V-REP_PRO_EDU_V3_4_0_Linux/vrep.sh
Icon=
Terminal=false
Type=Application

Here’s how it looks:

If you want to give it a try, comments on the build/install process would be much appreciated: ou-robotics-vrep.

I will also be posting a set of activities based on the RobotLab activities used in TM129, in anticipation of the possibility that we start using V-REP on TM129. The activity notebooks will be posted in the repo and via the associated uncourse blog if you want to play along.

One issue I have noticed is that if I resize the VM window, V-REP crashes… I also can’t figure out how to open a V-REP scene file from script (issue) or how to connect using a VM hostname alias rather than IP address (issue).