EU Legal Affairs – Proposals on Civil Law Rules on Robotics

The EU Legal Affairs Committee voted last week to call for legislation that is likely to include a definition of “smart autonomous robots”, regulation of the same, and an ethical code of conduct for designers, producers and users.

Note that the full proposal to be put forward is not currently available – the draft is here: COMMITTEES-JURI-PR-2017-01-12-1095387EN, with amendments.

One thing I noticed in the original document was that it seemed limited in its definition of “smart autonomous robots” as physically instantiated things, and ignored more general “smart systems” – AI software systems, for example. An amendment in PE592.405 addressed this, broadening the scope to cover “AI” more generally: “smart autonomous robots and their subcategories by taking into consideration the following characteristics of a smart robot and an autonomous system that comprises a physical support or is connected to a software programme without being embedded in a physical support”.

When the topic was originally announced, much was made in the news about the calls for robots to be classed as “electronic persons”. A policy study – European Civil Law Rules In Robotics – that fed into the deliberations attempted to debunk this:

3.1. Incongruity of establishing robots as liable legal persons
The motion for a resolution proposes creating a new category of individual, specifically for robots: electronic persons. Paragraph 31(f) calls upon the European Commission to explore the legal consequences of “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause [to third parties], and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties”.

When considering civil law in robotics, we should disregard the idea of autonomous robots having a legal personality, for the idea is as unhelpful as it is inappropriate.

Traditionally, when assigning an entity legal personality, we seek to assimilate it to humankind. This is the case with animal rights, with advocates arguing that animals should be assigned a legal personality since some are conscious beings, capable of suffering, etc., and so of feelings which separate them from things. Yet the motion for a resolution does not tie the acceptance of the robot’s legal personality to any potential consciousness. Legal personality is therefore not linked to any regard for the robot’s inner being or feelings, avoiding the questionable assumption that the robot is a conscious being. Assigning robots such personality would, then, meet a simple operational objective arising from the need to make robots liable for their actions.

… the motion for a resolution would appear more inclined to fully erase the human presence. In viewing as an electronic person any “robots [which] make smart autonomous decisions or otherwise interact with third parties” (end of paragraph 31(f)), the motion seems to suggest that the robot itself would be liable and become a legal actor. This analysis finds support in paragraph S, which states that “the more autonomous robots are, the less they can be considered simple tools in the hands of other actors […] [and this] calls for new rules which focus on how a machine can be held — partly or entirely — responsible for its acts or omissions”. Once a robot is no longer controlled by another actor, it becomes the actor itself. Yet how can a mere machine, a carcass devoid of consciousness, feelings, thoughts or its own will, become an autonomous legal actor? How can we even conceive this reality as foreseeable within 10 to 15 years, i.e. within the time frame set in paragraph 25 of the motion for a resolution? From a scientific, legal and even ethical perspective, it is impossible today — and probably will remain so for a long time to come — for a robot to take part in legal life without a human being pulling its strings.

What is more, considering that the main purpose of assigning a robot legal personality would be to make it a liable actor in the event of damage, we should note that other systems would be far more effective at compensating victims; for example, an insurance scheme for autonomous robots, perhaps combined with a compensation fund (paragraphs 31(a) to (e)).

We also have to bear in mind that this status would unavoidably trigger unwanted legal consequences. Paragraph T of the motion states that creating a legal personality would mean that robots’ rights and duties had to be respected. How can we contemplate conferring rights and duties on a mere machine? How could a robot have duties, since this idea is closely linked with human morals? Which rights would we bestow upon a robot: the right to life (i.e. the right to non-destruction), the right to dignity, the right to equality with humankind, the right to retire, the right to receive remuneration (an option explicitly explored in paragraph 31(b) of the motion), etc.? …

In reality, advocates of the legal personality option have a fanciful vision of the robot, inspired by science-fiction novels and cinema. They view the robot — particularly if it is classified as smart and is humanoid — as a genuine thinking artificial creation, humanity’s alter ego. We believe it would be inappropriate and out-of-place not only to recognise the existence of an electronic person but to even create any such legal personality. Doing so risks not only assigning rights and obligations to what is just a tool, but also tearing down the boundaries between man and machine, blurring the lines between the living and the inert, the human and the inhuman. Moreover, creating a new type of person – an electronic person – sends a strong signal which could not only reignite the fear of artificial beings but also call into question Europe’s humanist foundations. Assigning person status to a nonliving, non-conscious entity would therefore be an error since, in the end, humankind would likely be demoted to the rank of a machine. Robots should serve humanity and should have no other role, except in the realms of science-fiction.

Yes but, no but… replace “robot” with “company” in the above. Companies have a legal basis and are defined as legal entities, albeit requiring (in the UK at least) at least one human officer to be responsible for them. To what extent could smart systems be similarly treated, or incorporated?

Or consider this – employees, who have human agency, have responsibilities, and liabilities incurred by them may or may not become liabilities for their employer. To the extent that smart systems have “machine agency”, might they also be classed as employees, e.g. for insurance or liability purposes? (Related: Fragments – Should Algorithms, Deep Learning AI Models and/or Robots be Treated as Employees?) To be an “employee”, do you also have to be a legal entity?

PS there’s also a clause in the amendments that expresses neo-Luddite tendencies with which I am not totally unsympathetic:

“whereas robotics and AI that can perform similar tasks to those performed by humans should be used mainly to support and boost the abilities of man, as opposed to trying to replace the human element completely”

I haven’t read all the amendments yet – there may be more nuggets in there…

