My Personal TEL Mission Statement

Technology Enhanced Learning (TEL) is “a thing” in the OU at the moment. I have no idea what folk (think they) mean by it.

Here’s what I mean by it, in the form of my own, ad hoc eTEL – emerging technology enhanced learning – mission statement.

What I aspire to is:

  • explore how we might be able to use and repurpose emerging technology to support distance education;
  • use the technology we teach our students about to deliver that teaching;
  • use the technology we teach our students about to support that teaching;
  • use the technology we teach our students about to produce the courses we are teaching;
  • expose our students to emerging technologies that they can take and use in the outside world.

This obviously raises tensions, particularly where courses take two years to produce and then ideally (in the eyes of the organisation) remain unchanged for 5 years. The first step is risky, because it means trying new ways of doing things. The last step relates to my belief that universities should be helping push new ideas, technologies, techniques and processes out into society using our students as a vector.

For All The Corporatisation & “Analytics Everywhere” Hype, We Still Don’t Behave Like The Web Publisher We Are

A few weeks ago I spotted a review paper of “data wrangling” activities at the OU (Making sense of learner and learning Big Data: reviewing five years of Data Wrangling at the Open University UK). I saw it being linked to/promoted again today.

Apparently, “Data Wranglers [DWs] are a group of academics who analyse data about student learning and prepare reports with actionable recommendations based upon that data”. Also, apparently, “[i]n practice” they do “Big Data insights” too. Or something. I’m not sure we have any “Big Data”, do we? (Big data, meh.)

Furthermore, it seems that “Learning analytics are now increasingly taken into consideration at the OU when designing, writing and revising modules, and in the evaluation of specific teaching approaches and technologies”.

Looks around, confused…

…because something that I’ve been failing to understand for years and years and years and years is why no-one seems interested in taking the view that we are, in a lot of courses, delivering online content just like any other web publisher would, and as such we could be looking at ways of making our content “work better”, for some definition of “better”. Or even “work”.

In the learning analytics world, this possibly means building predictive models based on previous cohorts that show how students who dwelled this long on those content pages did well, while others who didn’t reveal that hidden answer or visit that page, or who didn’t appear to visit any course pages, failed.

At this point, it’s probably worth mentioning that the OU, as a distance learning organisation, used to deliver course materials to students as print material, but increasingly we deliver material (that looks just like the print material) as HTML via a Moodle VLE. Each section of “as if” print material appears as a separate HTML page. (We also make PDFs available that students can download… It’d be interesting to know how many then print those PDF downloads out…)

It’s also worth mentioning that a lot of the teaching related activity pursued by the OU’s central academics relates to the production of course materials and assessment materials, which is to say, writing stuff, rather than delivery to students: when the course runs, it’s the moderators of online forums (which may include the occasional central academic) and the students’ personal tutors (Associate Lecturers, in OU parlance) who actually engage with students directly.

So to a large extent, once the stuff is written, that’s job done. Despite a laborious editing and publishing process to get the material onto the website, errors do slip through, and when spotted (often by pathfinder/vanguard students studying course material weeks ahead of the course schedule) they are corrected in another lengthy process (authors don’t have edit/write permissions on the course materials). In some cases errors may be left uncorrected in situ, with students expected to pick up corrections via errata notices. Just like the print days…

So what I keep on not understanding is why we don’t have someone paying attention to the course material as web content, with a view to helping us better understand the obvious (because it’s nothing f****g difficult that I want to learn from the pages), as I demoed nine years ago. For example:

  • what’s the course dynamic in terms of content use (when are most students studying particular parts of the course)? – have we got the pacing about right?
  • what’s the weekly rhythm of the course (what time of day are most students accessing the content pages?) – this could help forum moderators schedule their time;
  • how much time are students spending, on average, in a particular study session, and does this vary (e.g. 1-2 hours on a weekday evening, 3-4 hours for daytime or weekend study, 45 mins over lunch periods), and so on; i.e. what user stories might we create *from the data*? (A sketch of this sort of analysis follows this list.)
  • how much time are students spending on particular pages? Are some pages just too long, or maybe have an idea or activity that is taking a lot of time to complete – or less time than we expect? Handy to know as a content designer (which is what course authors are). For the learning analytics surveillance freaks, can they spot students who spend more or less time than average on a particular page as a “likely fail” feature that they can celebrate?
  • are those links to external resources clicked on? Ever?
  • are the “optional activities” linked to on separate pages visited? Ever? Again, the learning analytics folk may be able to wet themselves finding correlation features on those pages, but I don’t really care about that. I just want to know, in the first instance, are the pages visited. Ever. (If they are, and it’s only a fraction of students who visit those pages/follow those links, then maybe it becomes useful to track the learning analytics stuff to see if we can figure out what sort of student is making use of those resources. But rather than caring about a particular student, I’m more interested in getting a better user story dialled in, one I can use as one more focal point to motivate content production in future courses.)
  • are students using particular devices, or are the same users using different sorts of devices at different times of day? With our insistence on still delivering software that needs to be installed on a traditional desktop computer, it would be useful to know whether device availability affects what a student is able to study, and when. And if it comes to trying to pitch particular computer requirements, it would be handy to know what the baseline is (which course webstats can provide an indicator of), and the extent to which this may vary across faculties or course levels.
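By way of a concrete (if simplistic) example of the sort of low-level reporting I mean, here’s a minimal sketch in pandas, assuming page view events can be exported as one row per view with user, page and timestamp columns. The filename and column names are made up for the purposes of illustration; nothing here is tied to an actual VLE export format.

import pandas as pd

#Hypothetical export: one row per page view
views = pd.read_csv('vle_pageviews.csv', parse_dates=['timestamp'])
views = views.sort_values(['user', 'timestamp'])

#Daily rhythm: what time of day are content pages being hit?
views['hour'] = views['timestamp'].dt.hour
print(views.groupby('hour').size())

#Sessionise: a gap of more than 30 minutes starts a new study session
gap = views.groupby('user')['timestamp'].diff()
views['session'] = (gap > pd.Timedelta(minutes=30)).groupby(views['user']).cumsum()

#How long is a typical study session?
session_length = (views.groupby(['user', 'session'])['timestamp']
                       .agg(lambda ts: ts.max() - ts.min()))
print(session_length.describe())

#Rough per-page dwell time: time to the next view within the same session
views['dwell'] = -views.groupby(['user', 'session'])['timestamp'].diff(-1)
print(views.groupby('page')['dwell'].median().sort_values(ascending=False).head(10))

None of this needs individual-level reporting: the aggregates are the point.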

Sometimes it can be comforting to see that your expectations about how the content would be used appear to be being met. Sometimes it can be revealing to find out that they’re not.

This is all basic stuff, and someone can probably have a fun time building some dashboards to report it. (Maybe there are some already, but no-one’s directed me to them despite my asking everyone I can think of.)

To reiterate on the why: I just want to be able to tell myself more informed stories about how the content appears to be being used en masse, and maybe also to identify different audience segments in the data (e.g. weekend studiers, weekday nighters, full-timers). Looking across courses (faculties, levels), it may be that we get different sorts of pattern / segmentation, which could be interesting from a user / user story informed content design perspective. It may well also prompt “learning analytics” discussions. (Writing this, I’ve come to realise I associate learning analytics with tracking back into individual data from “success” criteria such as assessment scores. For the content analysis, in the first instance, I’m just interested in how it’s generally being consumed. No individual data necessary. Once I’ve got broad usage pattern segments down, then maybe looking at performance level segments would be useful. But then, I’d rather just track the whole cohort score distribution to try and improve that.)

From looking at VLE pages, it looks as if there are Google Analytics and Optimizely tracking scripts linked in the pages, although asking around I can’t find anyone who does anything with that data from the VLE pages. (Maybe the “DW”s do?) So I’m guessing the data is there?


PS One of the things I think Optimizely may be used for is A/B testing by the Marketing folk on other bits of the website. Something I’ve pitched before is A/B testing on course materials (e.g. differently phrased or worked versions of the same activity).

This has generally been treated with disdain, but if it works for medical trials I don’t see why we can’t try it in education too. There is an argument here that we would need to track effect on attainment (the learning analytics thing), but I’m wary of the idea that changing a single page in several hundred could wildly affect attainment, unless it related to a particular key concept that the whole course hinged on. More realistically, if we see a page on average is taking students an hour to work through when we estimated it at 20 minutes, I’d be tempted to do A/B tests on it within a cohort. (Managing that if students chat about the topic in the common forums could represent a challenge!) The idea would be to see if we could improve the content performance more in line with expectations. As it is, the current approach would be to wait until the next presentation and give that whole cohort the new version. Which would of course be previously untested at scale. And may end up with students taking even longer to work through it.
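For what it’s worth, the analysis side of such a test needn’t be heavyweight either. A minimal sketch, assuming we’ve logged time-on-page for students randomly served one of two versions of the activity page; the filename and column names are hypothetical, and since dwell times tend to be skewed I’ve used a rank-based test rather than assuming normality:

import pandas as pd
from scipy import stats

dwell = pd.read_csv('activity_page_dwell.csv')  #hypothetical columns: variant, minutes
a = dwell.loc[dwell['variant'] == 'A', 'minutes']
b = dwell.loc[dwell['variant'] == 'B', 'minutes']

#Rank-based comparison: do the two dwell time distributions differ?
stat, p = stats.mannwhitneyu(a, b, alternative='two-sided')
print('A median: {:.1f} min, B median: {:.1f} min, p={:.3f}'.format(
    a.median(), b.median(), p))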

First Attempt At Using IPywidgets in Jupyter Notebooks to Display V-REP Robot Simulator Telemetry

Having got a thing together that lets me use some magic to load a V-REP robot simulator scene, connect to it and control a robot contained inside it, I also started to wonder about how we could build instrumentation on the Jupyter notebook client side.

The V-REP simulator itself has graph objects that can record and display logged data within the simulator:

But we can also capture data from the simulator as part of the Python control loop running via a notebook.

(I’m not sure if streaming data from the simulator is possible, or how to go about either setting that up in the simulator connection or rendering it in the notebook?)

So here’s my quick starter for 10: getting a simple data display running in a notebook using IPython widgets.

First up, a simple text display to give a real-time(ish) view of a couple of sensor values:
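Something along the following lines, for example. This is a self-contained sketch: the random values stand in for the actual sensor calls, which in the real notebook happen inside the simulator control loop, and the widget types and names are just my choices:

import time, random
import ipywidgets as widgets
from IPython.display import display

left_sensor = widgets.FloatText(description='Left:')
right_sensor = widgets.FloatText(description='Right:')
display(left_sensor, right_sensor)

#Stand-in for the simulator control loop: in the real thing the values
#come from the robot's sensor accessors inside the magic cell
for _ in range(50):
    left_sensor.value = random.uniform(0, 1)
    right_sensor.value = random.uniform(0, 1)
    time.sleep(0.1)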

As the robot runs, the widget values update in real-time-ish fashion.

I couldn’t figure out offhand how to generate a live-updating chart, and couldn’t quickly see how to return data from inside the magic cell as part of the magic function. (In fact, I’m not convinced I understand the scoping in there at all!)

But it seems that if we set a global variable inside the magic cell, we can get the data out and plot it when the simulation is stopped:
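Roughly like this, as I understand the scoping; the scene path and the sensor accessor name are made up for the sketch:

#In the magic cell, log readings into a global:
#
#  %%vrepsim '../scenes/Pioneer.ttt' PioneerP3DXL
#  global sensor_data
#  sensor_data = []
#  while True:
#      sensor_data.append(robot.right_sensor_reading())  #hypothetical accessor
#
#Then, once the simulation has been stopped, plot in an ordinary cell:

import matplotlib.pyplot as plt

plt.plot(sensor_data)
plt.xlabel('sample')
plt.ylabel('sensor reading')
plt.show()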

Example notebook here.

If anyone can show me how to create and update a live chart, that would be fantastic:-)

IPython Magic for Controlling the V-REP Robot Simulator from Jupyter notebooks

Whilst exploring how we might be able to use Jupyter notebooks hooked up to the Coppelia Robotics V-REP robot simulator, it struck me that we needed a fair amount of boilerplate stuff to get the simulator loaded with an appropriate scene file and a connection made to the simulator from the notebook, so that we could script the robot’s actions from the notebook.

My first approach to trying to simplify the presentation was to create some “self-documenting” notebooks that could be used to set up the necessary environment variables and import default classes and functions:

The %run cell magic loads and runs the referenced notebooks, which can also be inspected (and modified) by students.

(To try to minimise the risk of students introducing breaking changes into the imported notebooks, we could also lock the cells as read-only in the notebooks. Whilst this requires an extension to be installed to implement the read-only behaviour, the intention is that we distribute a customised Jupyter notebook environment to students.)

The loadSceneRelativeToClient() function loads the specified scene into the simulator. Note that this scene should contain a robot model. Once the connection to the simulator is made, a robot object can be instantiated using the connection details. The robot class should contain the definitions required to control the robot model in the loaded scene.

Setting up the connection to the simulator is a bit of a faff, and when code cell execution is stopped we can get an annoying KeyboardInterrupt report:

We can defend against the KeyboardInterrupt by wrapping the code execution in a try/except block:

try:
    with VRep.connect("127.0.0.1", 19997) as api:
        robot = PioneerP3DXL(api)
        while True:
            #do stuff
            pass
except KeyboardInterrupt:
    pass

But it struck me that it would be much nicer to be able to use some magic along the lines of the following, in which we set up the simulator with a scene, identify the robot we want to control, automatically connect to the simulator and then just run the robot control program:
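Something like this, say, where the scene path is illustrative and the robot methods are of the sort discussed further below:

%%vrepsim '../scenes/Pioneer.ttt' PioneerP3DXL
while True:
    robot.move_forwards()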

So here’s a first attempt at some IPython cell magic to do that:

from __future__ import print_function

from IPython.core.magic import (Magics, magics_class, line_magic,
                                cell_magic, line_cell_magic)
import shlex

from pyrep import VRep

# The class MUST call this class decorator at creation time
@magics_class
class Vrep_Sim(Magics):

    @cell_magic
    def vrepsim(self, line, cell):
        "V-REP magic"

        #Use shlex.split to handle quoted strings containing a space character
        args = shlex.split(line)
        loadSceneRelativeToClient(args[0])

        #Get the robot class object from its name
        robotclass = eval(args[1])

        #Handle default IP address and port settings; grab from globals if set
        ip = self.shell.user_ns.get('vrep_ip', '127.0.0.1')
        port = self.shell.user_ns.get('vrep_port', 19997)

        #The try/except block exits from a keyboard interrupt cleanly
        try:
            #Create a connection to the simulator
            with VRep.connect(ip, port) as api:
                #Set the robot variable to an instance of the desired robot class
                robot = robotclass(api)
                #Execute the cell code - define robot commands as calls on: robot
                exec(cell)
        except KeyboardInterrupt:
            pass

    #@line_cell_magic
    @line_magic
    def vrep_robot_methods(self, line):
        "Show methods"
        robotclass = eval(line)
        methods = [method for method in dir(robotclass) if not method.startswith('_')]
        print('Methods available in {}:\n\t{}'.format(robotclass.__name__ , '\n\t'.join(methods)))

#Could install as magic separately
ip = get_ipython()
ip.register_magics(Vrep_Sim)

I’ve also added a bit of line magic to display the methods defined on a robot model class:
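Usage is along these lines (the output shown is illustrative; the methods listed are whatever the robot class actually defines):

%vrep_robot_methods PioneerP3DXL

Methods available in PioneerP3DXL:
	move_forwards
	turn_left
	...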

The tension now is a pedagogical one: for example, should I be providing students with the robot model, or should they be building up the various control functions (.move_forwards(), .turn_left(), etc.) themselves?

I’m also wondering whether I should push the while True: component into the magic? On balance, I think students need to see it in their code block because getting them to think about control loops rather than one shot execution of command statements is something they often don’t get the first, or even second, time round. But for reducing clutter, it’d make for far cleaner cell block code.

Oh How I Have Failed Thee, Jupyter Notebooks…

Although I first came across Jupyter – then IPython – notebooks in October 2012 (I think…), it took me another six months or so before I started playing regularly with them and pitched them for the then nascent TM351 course (geeknotes/history). We decided to explore the notebooks when the course/module team first met around about October 2013. Four years ago. Notebooks were also adopted for the Learn to Code for Data Analysis FutureLearn course (H/T to Michel Wermelinger for driving that) and only get the briefest of look-ins in the new level 1 course TM112 (even after I showed we could probably get turtle running in them…).

But to my shame I haven’t lobbied more on campus, and haven’t done the rounds giving talks and workshops and putting together meaningful demos.

Which is possibly the sort of activity that this newly advertised, and hugely attractive, role at the University of Edinburgh (h/t @PhilBarker) is designed to support – eLearning Officer Computational Notebooks.

Do you have a sound knowledge of technology and an enthusiasm for evaluating new approaches in education? We are looking for a learning technologist with a passion for communication and relationship management to lead a pilot of Jupyter notebooks for learning and teaching at the University of Edinburgh.

Jupyter notebooks are open-source web applications that enable learners to create, share and reuse computational narratives. Based within the central Information Services you will work closely with academic colleagues in Science and Engineering. You will analyse user requirements, advise on and support the use of Jupyter and evaluate the success of the pilot.

After clicking on the Apply button, we get to some more detail. Part of the purpose of the job is to “scope, assess demand and support requirements for a computational notebook (Jupyter Notebook) service”, something we’re trying to push through in a very limited form in the OU in order to support the TM112 notebook activity.

Here’s how the responsibilities unpick:

  1. To help academic and support staff make best use of learning technology services (in this case Jupyter Computational Notebook Service) and where required supporting and managing service change. Documenting use cases and sharing good practice and innovative solutions to improve the user experience. (Approx % of time 40%)
  2. To work with the user community and project partners in academic departments in order to continually improve the services and range of tools on offer. To maintain an up-to-date knowledge of the broader e-learning landscape in order to influence strategic direction and to develop innovative and appropriate use of learning technologies. (Approx % of time 30%)
  3. To participate and lead user and partner engagement events, in order to promote collaboration, knowledge sharing and greater awareness of services. To organise testing, training and workshops to support users. To represent the University and its interests both internally and externally. (Approx % of time 20%)
  4. Contribute to process improvement within both ISG and the wider University. Liaise and negotiate with members of University committees, user forums and working groups to formulate policy in accordance with the university strategic aims for learning and teaching. (Approx % of time 10%)

(On process improvement, I think Jupyter notebooks can provide a useful authoring environment (along with things like “written diagrams”) for “reproducible” (which is to say, maintainable) course materials in the OU context. An approach I have had a total lack of success in promoting.)

I couldn’t help but try out a quick search for other notebooks related job ads, and turned up a handful of research posts, including one for a Bioinformatics Training Developer at the University of Cambridge – Cancer Research UK Cambridge Institute. The job duties and requirements provide an interesting complement to the skills required of a data journalist:

The training courses and summer schools already established are very popular and have gained a strong reputation. In this role, you will further develop the existing courses to reflect new advances. You will also create and deliver new training courses and materials in scientific data analysis and visualization, … . You will be responsible for assessing the training needs of research scientists and shaping a programme to meet those needs. This is an excellent opportunity to develop and apply new training approaches, making use of technologies such as R/Python Notebooks, Shiny web applications and Docker.

The successful candidate will have a degree in a scientific or computational discipline and preferably a postgraduate degree (MSc or PhD) and/or significant experience in Bioinformatics or Computational Biology, including the analysis of omics datasets using R. The role requires a high level of interpersonal and organizational skills and previous experience in preparation and delivery of training courses is essential. Strong practical skills in R and/or Python are highly desirable, including the use of version control systems, e.g. GitHub. …

[My emphasis.]

It’s maybe also worth mentioning here the current consultation around the draft Data Scientist Integrated Degree Apprenticeship (level 6) standard. Please comment if you can…

PS I popped together a feed for a search for “notebooks” on jobs.ac.uk using fetchrss.com to try to keep track of future upcoming academic job ads making mention of notebooks.

Writing Diagrams (Incl. Mathematical Diagrams)

Continuing an occasional series of posts on approaches to “writing” diagrams in a textual form and then letting the machine render them, here are some recent examples that caught my eye…

Via this Jupyter notebook on inverse kinematics, I came across Asymptote, “a standard for typesetting mathematical figures, just as TeX/LaTeX is the de-facto standard for typesetting equations” (file suffix: .asy). The language uses LaTeX for labels, and is a high-level programming language in its own right, which means it can do calculations as part of the diagram creation process.

Asymptote is also available via IPython magic, as demonstrated in this Asymptote demo notebook:

The inverse kinematics notebook is also worth reviewing in a couple of other respects. Firstly, it demonstrates embedding of another “written diagram” approach, using Graphviz:

One of the easiest ways to use Graphviz scripting in Jupyter notebooks is via some IPython Graphviz magic.
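If you’d rather not rely on a magic, the graphviz Python package gives much the same convenience from ordinary code; a minimal sketch (the graph itself is an arbitrary example):

from graphviz import Digraph

#Describe the graph as code...
g = Digraph(comment='A toy joint chain')
g.edge('base', 'shoulder')
g.edge('shoulder', 'elbow')
g.edge('elbow', 'wrist')

#...and, as the last expression in a notebook cell, it renders inline
g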

It also demonstrates how to use Sympy to “implement” equations relating to the diagram and then generate animations based on them. (I still think it would be nice if we could unify the various maths rendering and calculating scripts.)

Way back when I learned to program, I remember being given “railroad diagrams” (though I’m not sure they were called that? Syntax diagrams, maybe?) that described, visually, the programming language grammar defined in BNF. Here’s a tool for generating them:

It’s a bit shiny, and a bit of a pain that it doesn’t just take BNF. On the other hand, this Railroad Diagram Generator looks far more powerful:

Unfortunately, it looks to be web only and there’s no source. However, if you’re okay running Java, here’s an alternative – Chrriis/RRDiagram:

I did find a Python railroad diagram generator, Syntrax [code], but it didn’t accept BNF.

Along similar lines to the blockdiag tools I’ve described before is the purely in-browser mermaid.js.

Supported chart types include flowcharts, sequence diagrams and Gantt charts.

Finally, another Grammar of Graphics style charting language – Brunel – for generating d3 output (among other things?). It can be used in notebooks, but it does require Java to be installed (as you might expect of something with an IBM relationship…?!)

PS Although I think of writing diagrams more in the sense of generating rendered diagrams from text (which makes generating the diagram reproducible and maintainable), these are maybe also relevant as a capture step:

Secrets and Lies Amongst Facebook Friends – Surprise Party Planning OpSec

Noting that: surprise parties can be organised and co-ordinated on Facebook between the friends and family of the person who will be the subject of the surprise, using private groups and private events. Potential party-goers can be mined using the friends list of the subject, as well as those of the friends themselves.

Observing that: Facebook users seem to quickly get the hang of operational security (opsec), using the “public” medium of Facebook to mount a clandestine operation against one of the members of their own social circle.

Wondering whether: the Facebook algorithm either helps maintain that form of social/party planning opsec, or could possibly threaten it. For example, if someone accidentally makes a public post about the upcoming surprise party, does the Facebook algo suppress showing that post to the target (algorithmically, noting that a group with a particular social circle seem to be actively excluding one of the people you might expect to be in that circle), or might it prioritise showing that post to the target (algorithmically, on the the grounds that this person should normally be included in a discussions within a particular social circle and for some reason appears to have been excluded  – which Facebook can spot and fix…)