Fragments – Should Algorithms, Deep Learning AI Models and/or Robots be Treated as Employees?

Ever ones for pushing their luck when it comes to respecting codes, regulations, and maybe even the law, Uber hit the news again last week when it turned out that at least one of its automated vehicles was caught running a red light (Uber blames humans for self-driving car traffic offenses as California orders halt).

My first thought on this was to wonder who’s nominally in control of an automated vehicle if the vehicle itself detects a likely accident and quickly hands control over to a human driver (“Sh********t… you-drive”), particularly if reaction times are such that even the most attentive human operator doesn’t have time to respond properly.
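To put that timing question in concrete terms, here’s a deliberately toy Python sketch (my own illustration, with an assumed reaction-time constant, not anything drawn from a real vehicle control stack): if the time-to-collision is shorter than the human’s reaction time, “handing over” control arguably hands over nothing at all.

```python
TYPICAL_HUMAN_REACTION_S = 1.5  # illustrative assumption, not a measured figure

def handover_is_meaningful(time_to_collision_s: float,
                           reaction_time_s: float = TYPICAL_HUMAN_REACTION_S) -> bool:
    """Return True only if the human plausibly has time to respond at all."""
    return time_to_collision_s > reaction_time_s

print(handover_is_meaningful(3.0))  # True: the driver can at least react
print(handover_is_meaningful(0.5))  # False: the "you-drive" arrives too late to matter
```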

My second was to start pondering the agency associated with an algorithm, particularly a statistical one where the mapping from inputs to outputs is not necessarily known in advance, but rests on an expectation that the model used by the algorithm will give an “appropriate” response, given its training and testing regime.
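By way of illustration, here’s a minimal scikit-learn sketch (again my own, purely illustrative): nobody writes the input-to-output rule down; it is induced from training data, and we then trust that it generalises “appropriately” to inputs it has never seen.

```python
from sklearn.linear_model import LogisticRegression

# The "training regime": a handful of labelled examples.
X_train = [[0.1], [0.4], [0.6], [0.9]]
y_train = [0, 0, 1, 1]

# Nobody specifies the input->output mapping; it is whatever
# the fitted parameters happen to imply after training.
model = LogisticRegression().fit(X_train, y_train)

# For unseen inputs, the response is an *expected* one, not a guaranteed one.
print(model.predict([[0.2], [0.75]]))  # e.g. [0 1]
```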

[This is a v quick and poorly researched post, so some of the references are first stabs as I go fishing… They could undoubtedly be improved upon… If you can point me to better readings (though many appear to be stuck in books), please add them to the comments…]

In the UK, companies can be registered as legal entities; as such, they can act as an employer and become “responsible” for the behaviour of their employees through vicarious liability ([In Brief] Vicarious Liability: The Liability of an Employer).

According to solicitor Liam Lane, writing for HR Magazine (Everything you need to know about vicarious liability):

Vicarious liability does not apply to all staff. As a general rule, a business can be vicariously liable for actions of employees but not actions of independent contractors such as consultants.

This distinction can become blurred for secondees and agency workers. In these situations, there are often two ‘candidates’ for vicarious liability: the business that provided the employee, and the business that received him or her. To resolve this, courts usually ask:

(i) which business controls how the employee carries out his or her work; and

(ii) which business the employee is more integrated into.

Similarly, vicarious liability does not apply to every wrongful act that an employee carries out. A business is only vicariously liable for actions that are sufficiently close to what the employee was employed to do.

The CPS guidance on corporate prosecutions suggests that “A corporate employer is vicariously liable for the acts of its employees and agents where a natural person would be similarly liable (Mousell Bros Ltd v London and North Western Railway Co [1917] 2 KB 836).”

Findlaw (and others) also explore limitations, or otherwise, to an employer’s liability by noting the distinction between an employee’s “frolics and detours”: a detour is a deviation from explicit instructions, but one so related to the original instructions that the employer will still be held liable. A frolic, on the other hand, is simply the employee acting in his or her own capacity rather than at the instruction of an employer.

Also on limitations to employer liability, a briefing note from Gaby Hardwicke Solicitors (Briefing Note: Vicarious Liability and Discrimination) brings all sorts of issues to mind. Firstly, on scope:

Vicarious liability may arise not just where an employment contract exists but also where there is temporary deemed employment, provided either the employer has an element of control over how the “employee” carries out the work or where the “employee” is integrated into the host’s business. The employer can be vicariously liable for those seconded to it and for temporary workers supplied to it by an employment business. In Hawley v Luminar Leisure Ltd, a nightclub was found vicariously liable for serious injuries as a result of an assault by a doorman, which it engaged via a third party security company.

The Equality Act 2010 widens the definition of employment for the purposes of discrimination claims so that the “employer” is liable for anything done by its agent, under its authority, whether within its knowledge or not. This would, therefore, confer liability for certain acts carried out by agents such as solicitors, HR consultants, accountants, etc.

In addition, the briefing describes how “Although vicarious liability is predominantly a common law concept, for the purposes of anti-discrimination law, it is enshrined in statute under section 109 Equality Act 2010. This states that anything that is done by an individual in the course of his employment, must be treated as also done by the employer, unless the employer can show that it took all reasonable steps to prevent the employee from doing that thing or from doing anything of that description.”

So, I’m wondering… Companies act through their employees and agents. To what extent might we start to see “algorithms” and/or “computational models” (e.g. trained “deep learning”/neural network models) being treated as legal entities in their own right, at least in so far as they may be identified qua employees or agents when it comes to acting on behalf of a company? When one company licenses an algorithm/model to another, how will any liability be managed? Will algorithms and models start to have their own (employment) agents? Or are statistical models and algorithms (with parameters set) actually just patentable inventions, albeit with very specifically prescribed dimensions and attribute values?
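On that last point, it’s worth noting how mechanically inert a model “with parameters set” is. A minimal standalone sketch (mine, in the same toy style as above; joblib is scikit-learn’s usual persistence route) of what “licensing an algorithm/model to another company” amounts to in practice:

```python
# Once trained, a model is just a bundle of numbers that company A can
# serialise and ship, and company B can run unchanged - part of what makes
# the liability question feel so slippery.
import joblib
from sklearn.linear_model import LogisticRegression

model = LogisticRegression().fit([[0.1], [0.4], [0.6], [0.9]], [0, 0, 1, 1])

joblib.dump(model, "licensed_model.joblib")   # company A ships this file...
clone = joblib.load("licensed_model.joblib")  # ...company B loads and runs it as-is
print(clone.coef_, clone.intercept_)          # the "specifically prescribed" attribute values
```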

In terms of liability, companies currently seem keen to wriggle out of accountability by waving their hands in the air when an algorithm goes wrong. But where does the accountability lie, and where does the agency lie (in the sense that algorithms and models make (automated) decisions on behalf of their operators)? Are there any precedents for ruling that decision-making algorithms are something akin to “employees” when it comes to liability? Or companies arguing for (or against) such claims? Can an algorithm be defined as a company, with its articles and objects enshrined in code, and if so, how is liability then limited as far as its directors are concerned?

I guess what I’m wondering is: are we going to see algorithms/models/robots becoming entities in law, whether defined as their own class of legal entity, as companies, employees, agents, or under some other designation?

PS pointers to case law and examples much appreciated. E.g. is this sort of thing relevant: Search engine liability for autocomplete suggestions: personality, privacy and the power of the algorithm, a discussion of whether an algorithm’s operator is liable for the actions of the algorithm?

