Legislating Autonomous Robots

Fifteen years or so ago now, I worked on an OU short course – T184: Robotics and the Meaning of Life. The course took a broad view of robotics, from the technical (physical design, typical control systems, relevant AI – artificial intelligence – techniques and their limitations) through to the social and political consequences. The course also included the RobotLab simulator, which could be used to program a simple 2D robot, or a HEK – a self-purchased Home Experiment Kit in the form of Lego Mindstorms.

The course was delivered as part of the Technology Faculty Relevant Knowledge programme, originally championed by John Naughton. There’s a lot folk don’t know – or understand – about how the technology world works, and the Relevant Knowledge programme helped address that. The courses were for credit – 10 CAT points at level 1 – and fairly priced: 100 hours of study for a hundred and fifty quid, with the CAT points thrown in as a bonus.

One of the things I was keen to put into T184 was a section on robot law, which complemented a section on “robot rights”; this reviewed laws that had been applied to slaves, children, animals and the mentally infirm – “sentient creatures”, in other words, whose behaviour or actions might be the responsibility of someone else – and asked whether such laws might be a useful starting point for legislating around the behaviour of intelligent, self-adaptive robots and their owners/creators. The course also drew on science fiction depictions of robots, making the case that while positronic brains were a fiction, the “Three Laws” they implemented could be seen as useful design principles for robot researchers:

whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;

That phrase does not come from the course, but it does appear in a draft report, published in May this year, from the European Parliament Committee on Legal Affairs [2015/2103(INL)]. The report includes “recommendations to the Commission on Civil Law Rules on Robotics” and, for the EU at least, perhaps fires the starting pistol for due consideration of what I assume will come to be referred to as “robot law”.
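
That “cannot be converted into machine code” clause is worth making concrete. Here’s a minimal sketch of my own (every function name is invented) showing that the hard part of coding the First Law isn’t the control flow but the predicates it depends on:

```python
# A naive, entirely hypothetical attempt to encode Asimov's First Law
# as a guard over a robot's candidate actions. All names are invented.

def causes_human_harm(action, world_state):
    # This is where 'conversion into machine code' breaks down: 'harm'
    # presupposes models of wellbeing, causality and counterfactuals
    # that nobody knows how to specify, let alone compute.
    raise NotImplementedError("no computable definition of 'harm'")

def inaction_allows_harm(world_state):
    # The 'through inaction' clause is harder still: it quantifies over
    # everything the robot might have done but didn't.
    raise NotImplementedError("no computable model of counterfactual harm")

def first_law_permits(action, world_state):
    """First Law: a robot may not injure a human being or, through
    inaction, allow a human being to come to harm."""
    # The control flow is trivial; the predicates are the open problem -
    # which is why the report redirects the Laws at designers instead.
    return not (causes_human_harm(action, world_state)
                or inaction_allows_harm(world_state))
```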

As well as considering robots as subjugated things that might be deserving of rights, I’d also explored the extent to which robots might be treated as “legal entities”, much as companies are, although I’m not sure that ever made it into the course.

whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage;

Again – that’s the EU report from a couple of months ago. So what exactly is it proposing, and what does it cover? Well, the report:

Calls on the Commission to propose a common European definition of smart autonomous robots and their subcategories by taking into consideration the following characteristics of a smart robot:

  • acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data
  • is self-learning (optional criterion)
  • has a physical support
  • adapts its behaviours and actions to its environment;

So not software robots, then? (Which raises a question – how might adaptive algorithms be regulated, and treated under law? Or algorithms that are manifest via “human” UIs, such as conversational chatbots?) Or would such things be argued as having “physical support”?
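
To make the definitional game concrete, here’s a minimal sketch (my own, in Python – nothing like this appears in the report) of what the checklist looks like if you take it literally; note that a conversational chatbot seems to fail only the “physical support” test:

```python
from dataclasses import dataclass

@dataclass
class CandidateRobot:
    # The four characteristics from the draft report, taken literally.
    acquires_autonomy_via_sensors_or_data: bool
    is_self_learning: bool          # flagged as an *optional* criterion
    has_physical_support: bool
    adapts_behaviour_to_environment: bool

def is_smart_autonomous_robot(r: CandidateRobot) -> bool:
    # Self-learning is optional in the report's wording, so it is
    # deliberately left out of the conjunction here.
    return (r.acquires_autonomy_via_sensors_or_data
            and r.has_physical_support
            and r.adapts_behaviour_to_environment)

# A conversational chatbot ticks every box except 'physical support' -
# which is exactly the gap queried above.
chatbot = CandidateRobot(True, True, False, True)
print(is_smart_autonomous_robot(chatbot))  # False
```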

Hmmm… because whilst the report further notes:

… that there are no legal provisions that specifically apply to robotics, but that existing legal regimes and doctrines can be readily applied to robotics while some aspects appear to need specific consideration;

which is fine, but it then seems to go off at a tangent as it:

calls on the Commission to come forward with a balanced approach to intellectual property rights when applied to hardware and software standards, and codes that protect innovation and at the same time foster innovation;

I can see the sense in this, though we maybe need to think about the IPR of control models arising from the way an adaptive system is trained, as compared to the way it was originally programmed to enable it to be trained and acquire its own models – particularly where a third party, rather than the manufacturer, does the training. But then the report seems to go off the rails a bit as it:

calls on the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots;

That last sentence surely suggests that they’re talking about algorithms rather than robots? Or are they saying that if I write an adaptive computer program that generates a PNG, it’s not copyrightable, but if I program an adaptive robot with a pen on its back and it draws a picture, that is copyrightable? (I can see the IPR issues here may get a bit messy, though presumably contracts and licences associated with collaborative generative systems already start to address this?)
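
To put some flesh on that PNG example, here’s a trivial “generative” sketch (hypothetical, using the Pillow imaging library); exactly the same logic could drive a pen-carrying robot, so does the physical embodiment change the copyright answer?

```python
import random
from PIL import Image, ImageDraw

# A trivial 'generative' program: the same drawing logic could render
# to a screen image or drive a pen-plotter robot. Is either output an
# 'own intellectual creation', and does embodiment change the answer?
def draw_picture(seed: int, size: int = 256) -> Image.Image:
    rng = random.Random(seed)
    img = Image.new("RGB", (size, size), "white")
    pen = ImageDraw.Draw(img)
    for _ in range(40):
        # Forty random red-ish strokes: 'creative' output with no
        # human hand on the pen at drawing time.
        x1, y1, x2, y2 = (rng.randrange(size) for _ in range(4))
        pen.line((x1, y1, x2, y2), fill=(rng.randrange(256), 0, 0))
    return img

draw_picture(42).save("robot_art.png")
```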

The report then seems to go off on another tangent, as it:

Points out that the use of personal data as a ‘currency’ with which services can be ‘bought’ raises new issues in need of clarification; stresses that the use of personal data as a ‘currency’ must not lead to a circumvention of the basic principles governing the right to privacy and data protection;

I’m not sure I see how that’s relevant here? There then follow a few sections relating to specific sorts of robot (autonomous cars, medical robots, drones) before addressing employment issues:

Bearing in mind the effects that the development and deployment of robotics and AI might have on employment and, consequently, on the viability of the social security systems of the Member States, consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;

So… robots in the workforce mean you have to pay a national insurance contribution for what? FTE human jobs replaced? And there’s also a call for a general basic income?!

Then we return to what I thought the report was about – liability:

Considers that robots’ civil liability is a crucial issue which needs to be addressed at EU level so as to ensure the same degree of transparency, consistency and legal certainty throughout the European Union for the benefit of consumers and businesses alike;

Considers that, in principle, once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be; notes, in particular, that skills resulting from ‘education’ given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot’s harmful behaviour is actually due;
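
That proportionality principle invites a (slightly mischievous) back-of-envelope formalisation. The toy model below is entirely my own sketch, not anything the report proposes:

```python
def apportion_liability(autonomy: float, training_share: float) -> dict:
    """Toy apportionment of liability for a robot's harmful act.

    autonomy: 0.0 (fully scripted) .. 1.0 (fully self-learning)
    training_share: fraction of the robot's 'education' supplied by a
        third-party 'teacher' rather than by the manufacturer
    """
    # Per the report's principle: the greater the autonomy, the lower
    # the other parties' share; the larger the 'education', the more of
    # that human share falls on the teacher.
    human_share = 1.0 - autonomy
    return {
        "manufacturer": round(human_share * (1.0 - training_share), 2),
        "teacher": round(human_share * training_share, 2),
        "robot_itself?": round(autonomy, 2),  # whatever *that* would mean
    }

print(apportion_liability(autonomy=0.7, training_share=0.8))
# {'manufacturer': 0.06, 'teacher': 0.24, 'robot_itself?': 0.7}
```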

The current recommendation appears to be that liability issues be addressed via a compulsory insurance scheme:

Points out that a possible solution to the complexity of allocating responsibility for damage caused by increasingly autonomous robots could be an obligatory insurance scheme, as is already the case, for instance, with cars; notes, nevertheless, that unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics could be based on the obligation of the producer to take out an insurance for the autonomous robots it produces;

which is fine, and other paragraphs explore that further; but then the report goes off on one again:

creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

Which is firmly in the territory I wanted to explore in T184 way back when. For example, is the suggestion that we have some sort of “Intelligent Robot/Algorithm Capacity Act”, akin to the 2005 Mental Capacity Act, perhaps?! Or is it more akin to corporate liability, which seems to be under-legislated? And here’s where I start to wonder – where do you draw the line between robots as autonomous things that are legislated against, algorithms as autonomous things that are legislated against, sets of interacting algorithms creating complex adaptive systems as autonomous things that are legislated against, complex adaptive systems such as companies that are legislated against, and so on… (I maybe need to read Iain M. Banks’ sci-fi books about The Culture again!)

The report then goes on to suggest a draft Code of Ethical Conduct for Robotics Engineers, a Licence for Designers and a Licence for Users. But not a Licence for Robots themselves. Nor any mention of the extent to which the built environment should be made accessible for mobile robots. (“Robot accessibility” was another thing I was interested in!;-)

Another document that came out recently hails from the DfT’s Centre for Connected and Autonomous Vehicles: a UK consultation on Advanced driver assistance systems and automated vehicle technologies: supporting their use in the UK [Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies – PDF]. Apparently:

The only immediate change that we have identified [as requiring] primary legislation now is to update our insurance framework. This will give insurers and manufacturers time to consider what insurance products can come to market in time for when this technology arrives.

This reflects the likely short-term arrival of “motorway assist systems for travel on high speed roads (i.e. motorways and major trunk roads); and remote control parking”. Platooning trials are also to take place.

For the longer term, the report distinguishes between “conventional driving, assisted driving and fully automated driving”:

[Two figures from the consultation document, illustrating the distinctions between conventional, assisted and fully automated driving.]

The consultation doc is worth reading in full, but here are a couple of points that jumped out at me:

a vehicle owner who is ‘driving’ the highly automated vehicle might have legitimately disengaged from the driving task, with the vehicle having taken over control. If the technology fails and injures the ‘driver’, the current legislation only requires insurance to cover third parties and not the driver. It is up to the policy owner to seek additional insurance to cover any injury they do to themselves as a result of their own actions or negligence. If the AVT fails then the driver, in effect, becomes a victim as their injuries are not as a result of their own actions or negligence. We therefore need to protect the driver as a potential victim.

So you’ll need to insure yourself against the car?

The last line of this amused me:

We have considered whether a different definition of ‘user’ is needed in the Road Traffic Act for automated vehicles for the purposes of insurance obligation. For the first generation of AVT (where the driver is only ‘hands-off’ and ‘eyes-off’ for parts of the journey) we think that the driver falls under the current definition of a ‘user’. Once fully automated vehicles are available – which would drive themselves for the entire journey – it might be more appropriate to put the insurance obligation solely on the registered keeper.

“Registered keeper”. This may well be the current wording relating to vehicle ownership, but it made me think of a wild animal keeper. So harking back to Robot Law, would it be worth looking at the Dangerous Wild Animals Act 1976 or the Dangerous Dogs Act 1991? (Hmm… code sharing libraries, model sharing algorithms – “breeding” new code from old code…!)

We are not currently proposing any significant change in our rules on liability in road traffic accidents to reflect the introduction of automated cars. We still think a fault based approach combined with existing product liability law, rather than a new strict liability regime, is the best approach for our legal system. We think that the existing common law on negligence should largely be able to adapt to this new technology.

So the car won’t be a legal entity in its own right… though I wonder whether a class of vehicles running under the same model/operating system would be, under the EU approach hinted at above?

If you were of a suspicious mind, you might think that there could be an ulterior motive for pushing forward various forms of automotive automation…

Data will clearly be required to determine whether the driver or the vehicle was responsible for any collision, such as establishing who was in control at the time of the incident. This is likely to come from in-vehicle data recorders. Many vehicles already have data recorders fitted, although the data collected is not accessible without special equipment.

We expect that the out-of-the-loop motorway driving vehicles that are coming to market soon will have an event data recorder fitted. There are inevitably different views as to what data is essential and of course data protection and privacy considerations are important. It seems likely that data recorders would be regulated on an international basis, like most vehicle technologies. We will participate fully in this debate, equipped with views from the UK manufacturing and insurance industries, evidence from the various trials taking place and the first automated technologies that are coming to market.

Presumably, it’s easiest to just make everyone install a box… (see Geographical Rights Management, Mesh based Surveillance, Trickle-Down and Over-Reach and Participatory Surveillance – Who’s Been Tracking You Today? for examples of how you can lose control of your car and/or data…) That said, boxes can be useful for crash investigations, and may be used in the defence of the vehicle’s actions, or perhaps in its praise: Tesla’s Autopilot May Have Saved A Life.
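
For what it’s worth, here’s the sort of minimal record such a box might log around a control handover – all the field names are my own invention – sketched to show how “who was in control at the time?” could be made a mechanically answerable question:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ControlEvent:
    # Hypothetical event-data-recorder entry: just enough to answer
    # the liability question 'who was in control at the time?'
    timestamp: datetime
    controller: str          # "human" or "vehicle"
    reason: str              # e.g. "motorway assist engaged"
    speed_kmh: float
    software_version: str    # matters if liability follows the model

log = [
    ControlEvent(datetime(2016, 7, 12, 9, 15, tzinfo=timezone.utc),
                 "vehicle", "motorway assist engaged", 96.0, "av-1.3.2"),
    ControlEvent(datetime(2016, 7, 12, 9, 42, tzinfo=timezone.utc),
                 "human", "driver takeover", 88.0, "av-1.3.2"),
]

def controller_at(when: datetime) -> str:
    # The last handover logged before the incident tells an insurer or
    # investigator who was in control at that moment.
    return max((e for e in log if e.timestamp <= when),
               key=lambda e: e.timestamp).controller
```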

The following just calls out to be gamed – and also raises questions around updates, over-the-air or via a factory recall…

We do not think an insurer should be able to avoid paying damages to a third party victim where an automated vehicle owner fails to properly maintain and update the AVT or attempts to circumvent the AVT in breach of their insurance policy. Nor do we think that an insurer should be able to avoid paying damages to a third party victim if the vehicle owner or the named drivers on the policy attempt to use the vehicle inappropriately.

The following point starts to impinge on things like computer misuse as well as emerging robot law?

If an accident occurred as a result of an automated vehicle being hacked then we think it should be treated, for insurance purposes, in the same way as an accident caused by a stolen vehicle. This would mean that the insurer of the vehicle would have to compensate a collision victim, which could include the ‘not at fault driver’ for damage caused by hacking but, where the hacker could be traced, the insurer could recover the damages from the hacker.

In respect of the following point, I wonder how many of the products we buy at the moment integrate statistical computational models (rather than just relying on physics!). Is the whole “product liability” thing due a review in more general terms?!

Currently the state of the art defence (section 4(1)(e) of the Consumer Protection Act 1987) provides a defence to product liability if, at the time the product was in the manufacturer’s control, the state of scientific and technical knowledge was not such that a manufacturer could have been expected to discover the defect. We could either leave manufacturers’ liability and product liability as it currently is or, instead, extend the insurance obligation to cover these circumstances so that the driver’s insurance would have to cover these claims.

To keep tabs on the roll out of autonomous vehicles in the UK, see the Driverless vehicles: connected and autonomous technologies policy area.

PS via Ray Corrigan, some interesting future law workshops under the banner Geek Law: GikII 2013, GikII 2014, GikII 2015. The 2016 programme (for the London event, Sept 30) is available in an unreadable font here: GikII 2016 programme.
