Recalling Asimov’s Three Laws of Robotics in the Age of Machine Interoceptors…

One of the disservices I think we did to more than a few students who took the OU short course “Robotics and the Meaning of Life” was that we let them go away thinking that Asimov’s Three Laws of Robotics – and the positronic brains that implemented them – were science fiction in the sense of Verne, rather than Wells… (I thought we should take a stronger line on the subsumption architecture reading of the laws – sketched after the list below – rather than a literal interpretation of the mystic values baked into them…)

Anyway – the laws:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
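
Read that way, the laws are less mystic commandments than a priority ordering over behaviours, with higher layers able to veto or override lower ones. A minimal sketch of that subsumption-style reading – everything here (the world model and its methods) is hypothetical, invented for illustration:

```python
# Hypothetical sketch: the Three Laws as a subsumption-style priority stack.
# Higher-priority layers veto or override lower-priority behaviours.

def choose_action(proposed_action, world):
    # First Law: override everything if inaction would let a human come
    # to harm, and veto any action that would itself harm a human.
    if world.human_at_risk():
        return world.protective_action()
    if world.would_harm_human(proposed_action):
        return None  # refuse the action outright

    # Second Law: obey a standing human order, provided it survives
    # the First Law check above.
    order = world.pending_order()
    if order is not None and not world.would_harm_human(order):
        return order

    # Third Law: self-preservation, the lowest-priority layer.
    if world.threat_to_self():
        return world.self_preserving_action()

    return proposed_action
```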

As digital technologies take more control over their own actions, in part as a result of increasing numbers of built-in interoceptors (think: akin to the human senses of hunger, or needing to go to the bathroom) and proprioceptors (akin to the human senses of joint angle and motion) that allow them to take decisions based on what they know “about themselves” as well as on what we want to use them for, the designers of their control systems appear to be building in self-preservation responses that we might start to anthropomorphise in Three Laws terms. (That sounds way too academic, doesn’t it?!)

For example: I tried to take a photo using the camera on my phone just now. My phone wouldn’t let me because the battery is low. I think it has enough juice to take the photo, but it won’t let me. So what else won’t it let me do?
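
I have no idea what policy the phone actually applies, but the effect is as if a guard like this hypothetical one sits in front of the camera (the threshold is invented; the real cut-off is opaque to me):

```python
# Hypothetical guard: refuse power-hungry features below a battery threshold.
LOW_BATTERY_THRESHOLD = 0.05   # invented figure, not the phone's real policy

def take_photo(battery_level, camera):
    if battery_level < LOW_BATTERY_THRESHOLD:
        # Third Law behaviour: conserve remaining charge for "essential"
        # functions, overriding the Second Law (my order to take a photo).
        raise RuntimeError("Battery too low to use the camera")
    return camera.capture()
```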

Another example – I run quite a few virtual machines on my laptop. As the laptop battery runs down, the VMs shut down silently, whilst the other apps and services running on the host seem to carry on.
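
I haven’t found where that policy lives or how to change it, but a host-side watcher along these lines would produce the behaviour I’m seeing – the psutil battery call is real, while the VM names are invented and the suspend command assumes VirtualBox:

```python
import subprocess
import time

import psutil  # sensors_battery() reports charge level and mains status

VMS = ["dev-vm", "db-vm"]   # hypothetical VM names
SUSPEND_BELOW = 20          # percent; an invented threshold

def watch_battery():
    while True:
        battery = psutil.sensors_battery()
        if battery and not battery.power_plugged and battery.percent < SUSPEND_BELOW:
            for vm in VMS:
                # Save each VM's state rather than hard-stopping it
                # (VirtualBox syntax; other hypervisors differ).
                subprocess.run(["VBoxManage", "controlvm", vm, "savestate"])
            break
        time.sleep(60)

if __name__ == "__main__":
    watch_battery()
```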

For several years, Mac laptops (at least) with hard-disk drives have had a sudden motion sensor built in. If you drop the laptop – or move it quickly – it parks the drive heads to minimise the risk of damage (“a robot must protect its own existence…”).
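
The underlying logic is straightforward: free fall reads as near-zero acceleration on all three axes, so a short run of near-zero samples is enough to trigger a head park. A toy version, with an invented sensor API and invented thresholds:

```python
import math

FREE_FALL_G = 0.3      # invented: total acceleration (in g) below this suggests free fall
TRIGGER_SAMPLES = 3    # consecutive low readings required before acting

def monitor(accelerometer, drive):
    low_count = 0
    while True:
        ax, ay, az = accelerometer.read()   # hypothetical sensor API
        magnitude = math.sqrt(ax**2 + ay**2 + az**2)
        if magnitude < FREE_FALL_G:
            low_count += 1
            if low_count >= TRIGGER_SAMPLES:
                drive.park_heads()          # "...protect its own existence..."
        else:
            low_count = 0
```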

Increasingly, carmakers are building in sensor-based safety features, such as automatic braking systems that kick in when a collision detection system raises an alert. In-car airbag systems that used to rely on mechanical crash sensors now rely on accelerometers and decision policies (for example, Air Bag Deployment Criteria [PDF]).
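
Published deployment criteria boil down to thresholds over the accelerometer trace: deploy only if the deceleration is both severe and sustained, so that kerb strikes and potholes don’t fire the bag. A caricature of that decision policy, with invented numbers:

```python
DECEL_THRESHOLD_G = 20.0   # invented: sustained deceleration suggesting a crash
MIN_DURATION_MS = 10       # invented: ignore momentary spikes (kerbs, potholes)

def should_deploy(decel_samples_g, sample_interval_ms=1):
    """Deploy only if deceleration stays above threshold for long enough."""
    run_ms = 0
    for g in decel_samples_g:
        if g > DECEL_THRESHOLD_G:
            run_ms += sample_interval_ms
            if run_ms >= MIN_DURATION_MS:
                return True
        else:
            run_ms = 0
    return False
```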

So – what other examples can you think of, or have you witnessed, where a machine acts in a self-preserving way, particularly where those actions are antagonistic to what you want to use the device or machine for?

Time to start critiquing the ways that as-if versions of Asimov’s Three Laws of Robotics are finding their way into our self-sensing devices?

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...
