Mediated/Augmented Reality (Un)Course Notes, Part I

Pokemon Go seems to have hit the news this week – though for anyone off social media last week and back to it next week, the whole thing will have passed them by completely – demonstrating that augmented reality apps really haven’t moved on much at all over the last five years or so.

But notwithstanding that, I’ve been trying to make sense of a whole range of mediated reality technologies for myself, as prep for a very short unit on technologies and techniques in that area.

Here’s what I’ve done to date, over on the Digital Worlds uncourse blog. This stuff isn’t official OU course material, it’s just my own personal learning diary of related stuff (technical term!;-)

More to come over the next couple of weeks or so. If you want to comment, and perhaps influence the direction of my meanderings, please feel free to do that here or on the relevant post.

An evolving feed of the posts is available in chronological order and in reverse chronological order.

Dogfooding… and Creating (Learning) for a Purpose

“Eating your own dogfood”, aka dogfooding, refers to the practice of a company testing its own products by using them internally. At a research day held by Somerset College, a quote used in a talk by Lorna Sheppard on Len Deighton’s cookbooks (yes, that Len Deighton…), taken from a 2014 Observer magazine article (Len Deighton’s Observer cookstrips, Michael Caine and the 1960s), caught my attention:

[G]enerally, you stand a better chance of succeeding in something if whatever you create, you also like to consume.

Implicit in this is the idea that you are also creating for a purpose.

In the OU engineering residential school currently running at the University of Bath, one of the four day-long activities the students engage with is a robotics activity using Lego EV3 robots, where at each stage we try to build in a reason for adding another programming construct or learning how to work with a new sensor. That is, we try to motivate the learning by making it purposeful.

The day is structured around a series of challenges that allow students to develop familiarity with programming a Lego EV3 robot, adding sensors to it, logging data from the sensors and then interpreting the data. The activities are contextualised by comparing the work done on the Lego EV3s with the behaviour of a Roomba robot vacuum cleaner – by the end of the morning, students will have programmed their robot to perform the majority of the Roomba’s control functions, including finding its way home to a homing beacon, as well as responding to touch (bumper), colour (line stopper) and proximity (infra-red and ultrasonic) sensors.
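
By way of illustration, here’s a minimal sketch of that kind of sensor-driven behaviour using the ev3dev Python bindings – the port assignments and single-sensor set-up are my own assumptions, not the actual residential school code:

```python
#!/usr/bin/env python3
# Sketch of Roomba-style behaviours on an EV3, using the ev3dev2 bindings.
# Motor ports and sensor auto-detection are assumptions about the build.
from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C
from ev3dev2.sensor.lego import TouchSensor, ColorSensor, UltrasonicSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)
bumper = TouchSensor()
colour = ColorSensor()
sonar = UltrasonicSensor()

tank.on(left_speed=30, right_speed=30)  # set off across the floor

while True:
    # Bumper: back off if we drive into something
    if bumper.is_pressed:
        tank.on_for_seconds(left_speed=-30, right_speed=-30, seconds=1)
        tank.on(left_speed=30, right_speed=30)
    # Line stopper: halt when we cross a black line
    if colour.color == ColorSensor.COLOR_BLACK:
        tank.off()
        break
    # Proximity: stop short of an obstacle ahead
    if sonar.distance_centimeters < 10:
        tank.off()
        break
```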

The day concludes with a challenge in which an autonomous robot must enter – and return from – a closed tunnel network, using sensors to collect data about the internal structure of the tunnel, as well as identifying the location of a casualty who has an infra-red emergency beacon with them.


(The lids are placed on the tunnels so the students can’t see inside.)

As well as the partition walls (which are relocated each time the challenge is run, so I’m not giving anything away!), pipework and cables (aka coloured tape) also run through the tunnel and may be mapped by the students using a downward facing light sensor.


The casualty is actually a small wooden artist’s mannequin – the cuddly teddy we used to use does not respond well to the ultrasound sensor the students use to map the tunnel.


The data logged by the students include motor rotation data to track the robot’s progress, ultrasonic sensor data to map the walls, infra-red sensor data to find the emergency beacon, and light sensor data to identify the cables/pipework.

The data collected looks something like this:

(Data logged by the robot during the final challenge.)

The challenge is then to map the (unseen by the students) tunnel network, and tell the robot’s story from the data.
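
To give a flavour of what “telling the robot’s story” might involve, here’s a sketch of turning such a log into a crude wall profile – the CSV layout and column names are invented for the purposes of illustration:

```python
# Sketch: turn a (hypothetical) challenge log into a crude tunnel wall profile.
# Column names and wheel size are assumptions, not the students' actual format.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("tunnel_log.csv")  # e.g. columns: rotations, wall_cm

WHEEL_CIRCUMFERENCE_CM = 17.6  # approximate, for a standard EV3 wheel
log["along_cm"] = log["rotations"] * WHEEL_CIRCUMFERENCE_CM

# Distance travelled along the tunnel vs. sideways distance to the wall
plt.plot(log["along_cm"], log["wall_cm"])
plt.xlabel("Distance along tunnel (cm)")
plt.ylabel("Ultrasonic distance to wall (cm)")
plt.title("Tunnel wall profile (sketch)")
plt.show()
```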

The result is a narrative that describes the robot’s progress, and a map showing the internal structure of the tunnel:


If time allows, this can then be used as the basis for programming the robot to complete a rescue mission!

The strategies used by the students to log the data, and control the robot to send it into the tunnel and retrieve it safely again, are based on what they learned completing the earlier challenges set throughout the day.

The Internet of Thinking Things – Intelligence at the Edge

Via F1 journalist James Allen’s blog (Insight: Inside McLaren’s Secretive F1 Operations Room, “Mission Control”), I learn that the wheel hub of McLaren’s latest MP4-31 Formula One car crunches its own data. According to McLaren boss Ron Dennis:

Each wheel hub has its own processing power, we don’t even take data from the sensors that surround the wheel [that measure] brake temperatures, brake wear, tyre pressures, G-Forces – all of this gets processed actually in the wheel hub – it doesn’t even get transmitted to the central ECU, the Electronic Control Unit.

If a driver locks a brake or the wheel throws itself out of balance, we’re monitoring the vibration that creates against a model that says, “if the driver continues with this level of vibration the suspension will fail”, or the opposite, “we can cope with this vibration”.

With artificial intelligence and machine learning modelling now available as a commodity service, at least for connected devices, it’ll be interesting to see what the future holds for intelligence at the edge – sensors that don’t just return data (“something moved”, from a security sensor) but that return information (“I just saw a male, 6′, blue trousers, green top, leaving room 27 and going to the water cooler; it looked like… etc., etc.”).

Of course, if you’re happy with your sensors just applying a model rather than building one – which appears to be the case for the MP4-31 wheel hub – it seems you can already do that at the 8-bit level using deep learning, as described by Pete Warden in How to Quantize Neural Networks with TensorFlow.
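
To get a feel for what that involves, here’s a toy numpy illustration of the min/max linear quantisation scheme Warden describes – the idea, that is, not TensorFlow’s actual implementation:

```python
# Toy illustration of min/max linear quantisation: floats are mapped onto
# 8-bit integers spanning the observed value range, then mapped back.
import numpy as np

def quantize(x, num_bits=8):
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** num_bits - 1)
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

weights = np.random.randn(5).astype(np.float32)
q, lo, scale = quantize(weights)
print(weights)
print(dequantize(q, lo, scale))  # close to the originals, in a quarter of the space
```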

By the by, if you want to have a quick play with a TensorFlow learner, check out the TensorFlow Neural Network Playground. Or how about training a visual recognition system with IBM’s Visual Recognition Demo?

Browser Developer Tools Tricks

Noticing that Alan just posted a Little Web Inspector / CSS Trick for extracting logos from web pages, here’s one for cleaning up ads from a web page you want to grab a screen shot of.

For example, I often take screenshots of news web pages for adding to “topical timeline” style presentations. For reference, I include the page URL from the browser navigation bar and the newspaper banner. But some news sites have ads at the top that you can’t scroll away:


Using a browser’s developer tools, you can “live edit” the page HTML in the browser – first select the element you want:


then delete it…


If that doesn’t do the trick, you can always edit the HTML directly – or modify the CSS:


With a bit of tinkering, you can end up with a version of the page that you can grab a clean screenshot of…
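
If you find yourself doing this regularly, the same clean-up can be scripted – here’s a sketch using Selenium, in which the URL and the .top-ad-banner selector are hypothetical stand-ins for whatever the page actually uses:

```python
# Sketch: scripted version of the devtools clean-up. The URL and the
# .top-ad-banner selector are hypothetical - inspect the page for real ones.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://www.example.com/news-story")

# Remove the offending element from the live DOM, as you would by hand...
ad = driver.find_element(By.CSS_SELECTOR, ".top-ad-banner")
driver.execute_script("arguments[0].remove();", ad)

# ...then grab the cleaned-up page.
driver.save_screenshot("clean_page.png")
driver.quit()
```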



By editing the page HTML, you can also add your own local graffiti to web pages to amuse yourself and pass the time…!;-)

For example, here’s me adding a Brython console to a copy of the OU home page in my browser…


This is purely a local copy, but functional nonetheless. And a great way of demonstrating to folk how you’d like a live web page to actually be, rather than how it currently is!-)



Participatory Surveillance – Who’s Been Tracking You Today?

With the internet of things still trying to find its way, I wonder why more folk aren’t talking about participatory surveillance?

For years, websites have been gifting third parties the information that you have visited them (Personal Declarations on Your Behalf – Why Visiting One Website Might Tell Another You Were There), but as more people instrument themselves, the opportunities for mesh-network-based surveillance become ever more apparent.

Take something like thetrackr, for example. The device itself is a small Bluetooth-powered device, the size of a coin, that you attach to your key fob or keep in your wallet:

The TrackR is a Bluetooth device that connects to an app running on your phone. The phone app can monitor the distance between the phone and device by analyzing the power level of the received signal. This link can be used to ring the TrackR device or have the TrackR device ring the phone.

The other essential part is an app that runs permanently on your phone, listening out for TrackR devices – not just yours, but anyone’s. And when it detects one, it posts its location to a central server:

[thetrackr] Crowd GPS is an alternative to traditional GPS and revolutionizes the possibilities of what can be tracked. Unlike traditional GPS, Crowd GPS uses the power of the existing cell phones all around us to help locate lost items. The technology works by having the TrackR device broadcast a unique ID over Bluetooth Low Energy when lost. Other users’ phones can detect this wireless signal in the background (without the user being aware). When the signal is detected, the phone records the current GPS location, sends a message to the TrackR server, and the TrackR server will then update the item’s last known location in its database. It’s a way that TrackR is enabling you to automatically keep track of the location of all your items effortlessly.
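
In outline, the crowd-GPS trick is simple enough to sketch – scan for nearby Bluetooth beacons, then report any sightings, along with your own location, to a central server. The endpoint, payload and co-ordinates below are all made up:

```python
# Sketch of the crowd-GPS reporting loop: scan for BLE advertisements and
# post sightings to a server. Endpoint, payload and location are invented.
import asyncio
import requests
from bleak import BleakScanner

REPORT_URL = "https://example.com/api/sightings"  # hypothetical endpoint

async def report_sightings(lat, lon):
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        # The broadcast address/ID is enough to identify a tag it has seen
        requests.post(REPORT_URL, json={
            "beacon_id": d.address,
            "lat": lat,
            "lon": lon,
        })

# A phone app would run this continuously in the background; here we just
# make one pass, with a fixed, made-up location.
asyncio.run(report_sightings(50.699, -1.288))
```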

And if you don’t trust the trackr folk, other alternatives are available. Such as tile:

The Tile app allows you to anonymously enlist the help of our entire community in your search. It works both ways — if you’re running the app in the background and come within range of someone’s lost item, we’ll let the owner know where it is.

This sort of participatory surveillance can be used to track stolen items too, such as cars. The TRACKER mesh network (which I’ve posted about before: Geographical Rights Management, Mesh based Surveillance, Trickle-Down and Over-Reach) uses tracking devices and receivers fitted to vehicles to locate other similarly fitted vehicles as they pass by them:

TRACKER Locate or TRACKER Plant fitted vehicles listen out for the reply codes being sent out by stolen SVR fitted vehicles. When the TRACKER Locate or TRACKER Plant unit passes a stolen vehicle, it picks up its reply code and sends the position to the TRACKER Control Room.

That’s not the only way fitted vehicles can be used to track each other. A more general way is to fit your car with a dashboard camera, then use ANPR (automatic number plate recognition) to identify and track other vehicles on the road. And yes, there is an app for logging anti-social or dangerous driving acts the camera sees, as described in a recent IEEE Spectrum article on The AI dashcam app that wants to rate every driver in the world. It’s called the Nexar app, and as their website proudly describes:

Nexar enables you to use your mobile telephone to record the actions of other drivers, including the license plates, types and models of the cars being recorded, as well as signs and other surrounding road objects. When you open our App and begin driving, video footage will be recorded. …

If you experience a notable traffic incident recorded through your use of the App (such as someone cutting you off or causing an accident), you can alert Nexar that we should review the video capturing the event. We may also utilize auto-detection, including through the use of “machine vision” and “sensor fusion” to identify traffic law violations (such as a car in the middle of an intersection despite a red stop light). Such auto-detected events will appear in your history. Finally, time-lapse images will automatically be uploaded.

Upon learning of a traffic incident (from you directly or through auto-detection of events), we will analyze the video to identify any well-established traffic law violations, such as vehicle accidents. Our analysis will also take into account road conditions, topography and other local factors. If such a violation occurred, it will be used to assign a rating to the license plate number of the responsible driver. You and others using our App who have subsequent contact with that vehicle will be alerted of the rating (but not the nature of the underlying incidents that contributed to the other driver’s rating).
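
Nexar’s own pipeline is proprietary, but the basic ANPR step it relies on can be sketched with off-the-shelf tools – OpenCV’s bundled number-plate cascade to find candidate plates, then Tesseract OCR to read them; the dashcam frame filename is a placeholder:

```python
# Sketch of a generic ANPR step (not Nexar's actual pipeline): find candidate
# plates with OpenCV's bundled cascade, then OCR them with Tesseract.
import cv2
import pytesseract

frame = cv2.imread("dashcam_frame.jpg")  # placeholder frame grab
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml")
plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

for (x, y, w, h) in plates:
    roi = gray[y:y + h, x:x + w]
    # --psm 7 tells Tesseract to treat the region as a single line of text
    text = pytesseract.image_to_string(roi, config="--psm 7").strip()
    print("Candidate plate:", text)
```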

And of course, this is a social thing we can all participate in:

Nexar connects you to a network of dashcams, through which you will start getting real-time warnings to dangers on the road

It’s not creepy though, because they don’t try to relate number plates to actual people:

Please note that although Nexar will receive, through video from App users, license plate numbers of the observed vehicles, we will not know the recorded drivers’ names or attempt to link license plate numbers to individuals by accessing state motor vehicle records or other means. Nor will we utilize facial recognition software or other technology to identify drivers whose conduct has been recorded.

So that’s all right then…

But be warned:

Auto-detection also includes monitoring of your own driving behavior.

so you’ll be holding yourself to account too…

Folk used to be able to go to large public places and spaces to be anonymous. Now it seems that the more populated the place, the more likely you are to be located, timestamped and identified.

The Future Is Already Here, It Just Hasn’t Been Approved Yet

Whether or not William Gibson actually said – either exactly, or approximately – “The future is already here. It’s just not evenly distributed yet”, it’s undoubtedly the case that many of the technologies that will come to influence our lives in the near future have already been invented; they just haven’t been fully tested, regulated, insured against or officially approved yet.

So to get an idea about what’s upcoming, one thing we can do is track the regulators and testing agencies, as well as new offerings from the insurers, such as the Driverless Car Insurance from Adrian Flux:

Our new driverless policy will cover you against:

  • Loss or damage to your car caused by hacking or attempted hacking of its operating system or other software
  • Updates and patches to your car’s operating system, firewall, and mapping and navigation systems that have not been successfully installed within 24 hours of you being notified by the manufacturer
  • Satellite failure or outages that affect your car’s navigation systems
  • Failure of the manufacturer’s software or failure of any other authorised in-car software
  • Loss or damage caused by failing, when able, to use manual override to avoid an accident in the event of a software or mechanical failure

Getting on for fifteen years ago now, the UK Health and Safety Executive commissioned a report on The future health and safety implications of global positioning satellite and machine automation, looking at the health and safety implications of automated machinery, particularly in a quarrying context – the sort of thing introduced by Rio Tinto’s “Mine of the Future” in 2008. (The HSE also has a report from 2004 that, among other things, considers risks associated with autonomous underwater vehicles: Risk implications in site characterisation and analysis for offshore engineering and design. Which reminds me – when does the Unmanned Warrior exercise take place?)

Another place we might look is registers of clinical trials. For example, how are robots being tested in UK clinical trials?


We could also run a similar search on the US register, or the ISRCTN Registry.
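
That sort of search can also be scripted – here’s a sketch against the ClinicalTrials.gov API, where the endpoint and response layout reflect my reading of the v2 API docs rather than anything battle-tested:

```python
# Sketch: querying the ClinicalTrials.gov API for robot-related studies.
# Endpoint and response structure are my reading of the v2 API docs.
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.term": "robot", "pageSize": 10},
)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], "-", ident["briefTitle"])
```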

Or how about software related clinical trials?


Hmm.. thinks.. I wonder: is “software” being prescribed in the UK? If so, it should be recorded in the GP prescribing opendata… But as what, I wonder?!

PS One for the librarians out there – where else should I be looking? Tracking legislation and government codes of practice is one source (eg as per Regulating Autonomous Vehicles: Land, Sea and Air…). But what other sources are there?