From Linked Data to Linked Applications?

Pondering how to put together some Docker IPython magic for running arbitrary command line functions in arbitrary docker containers (this is as far as I’ve got so far), I think the commands must include a couple of things:

  1. the name of the container (perhaps rooted in a particular repository): psychemedia/contentmine or dockerhub::psychemedia/contentmine, for example;
  2. the actual command to be called: for example, one of the contentmine commands: getpapers -q {QUERY} -o {OUTPUTDIR} -x

We might also optionally specify mount directories shared between the calling and called containers, using a conventional default otherwise.
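As a rough illustration of the sort of call such a magic might wrap (not the actual implementation), here's a minimal sketch using the docker Python SDK; the image name, query and mount paths are made up for the example:

```python
# Minimal sketch: run a single command in a disposable container.
# The image name, command and mount paths are illustrative only.
import docker

client = docker.from_env()

output = client.containers.run(
    "psychemedia/contentmine",                    # the container image ("namespace")
    "getpapers -q 'aardvark' -o /workspace -x",   # the actual command to be called
    volumes={"/tmp/cm": {"bind": "/workspace", "mode": "rw"}},  # shared mount directory
    remove=True,                                  # throw the container away afterwards
)
print(output.decode("utf-8"))
```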

This got me thinking that the called functions might be viewed as operating in a namespace (psychemedia/contentmine or dockerhub::psychemedia/contentmine, for example). And this in turn got me thinking about “big-L, big-A” Linked Applications.

According to Tim Berners-Lee’s four rules of Linked Data, the web of data should:

  1. Use URIs as names for things
  2. Use HTTP URIs so that people can look up those names.
  3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL)
  4. Include links to other URIs, so that they can discover more things.

So how about a web of containerised applications, that would:

  1. Use URIs as names for container images
  2. Use HTTP URIs so that people can look up those names.
  3. When someone looks up a URI, provide useful information (in the minimal case, this corresponds to a Dockerhub page, for example; in a user-centric world, this could just return a help file identifying the commands available in the container, along with help for individual commands)
  4. Include a Dockerfile, so that they can discover what the application is built from (it may also link to other Dockerfiles).
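For the look-up step, a minimal sketch might be as simple as asking Docker Hub about a repository; this assumes the public Docker Hub v2 repositories endpoint, and the repository name is just illustrative:

```python
# Sketch: resolving a container "name" to useful information via Docker Hub.
# Assumes the public Docker Hub v2 repositories endpoint; repo name is illustrative.
import requests

repo = "psychemedia/contentmine"
resp = requests.get(f"https://hub.docker.com/v2/repositories/{repo}/")
resp.raise_for_status()
info = resp.json()

# In the minimal case, the "useful information" is just the repository description.
print(info.get("description"))
print(info.get("full_description"))
```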

Compared with Linked Data, where the idea is about relating data items one to another, the identifying HTTP URI here actually represents the ability to make a call into a functional execution space. Linkage into the world of linked web resources might be provided through Linked Data relations that specify that a particular resource was generated from an instance of a Linked Application, or that the resource can be manipulated by an instance of a particular application.

So for example, files linked to on the web might have a relation that identifies the filetype, and the filetype is linked by another relation that says it can be opened in a particular linked application. Another file might link to a description of the workflow that created it, and the individual steps in the workflow might link to function/command identifiers that are linked to linked applications through relations that associate particular functions with a particular linked application.
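To make that a little more concrete, here's a hand-waving sketch of those relations expressed as triples using rdflib; the vocabulary (the ex: namespace and property names) and the URIs are invented purely for illustration:

```python
# Illustrative sketch of file -> filetype -> linked application relations.
# The ex: namespace, property names and URIs are all made up for the example.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/linkedapps/")
g = Graph()

datafile = URIRef("http://example.org/data/experiment1.csv")
filetype = URIRef("http://example.org/filetypes/csv")
app = URIRef("http://example.org/apps/psychemedia/contentmine")

g.add((datafile, EX.hasFiletype, filetype))   # file -> filetype
g.add((filetype, EX.openableWith, app))       # filetype -> linked application
g.add((datafile, EX.generatedBy, app))        # provenance-style relation

print(g.serialize(format="turtle"))
```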

Workflows may be defined generically, and then instantiated within a particular experiment. So for example: *load file with particular properties, run FFT on particular columns, save output file* becomes instantiated within a particular run of an experiment as *load file with this URI, run the FFT command from this linked application on particular columns, save output file with this URI*.
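For example, a generic workflow might be represented as a simple list of steps, with the run-specific values (file URIs, application identifiers) bound in at instantiation time; the step names, URIs and parameters below are made up for the sake of the sketch:

```python
# Sketch: a generic workflow instantiated for a particular experimental run.
# Step names, application identifiers and URIs are illustrative only.
generic_workflow = [
    {"step": "load_file", "params": {"filetype": "csv"}},
    {"step": "fft", "app": "linkedapp:signal-tools", "params": {"columns": ["x", "y"]}},
    {"step": "save_file", "params": {}},
]

def instantiate(workflow, bindings):
    """Bind run-specific values (e.g. input/output URIs) into a generic workflow."""
    run = []
    for step in workflow:
        run.append(dict(step, params=dict(step["params"], **bindings.get(step["step"], {}))))
    return run

run_workflow = instantiate(
    generic_workflow,
    {
        "load_file": {"uri": "http://example.org/data/experiment1.csv"},
        "save_file": {"uri": "http://example.org/data/experiment1-fft.csv"},
    },
)
```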

Hmm… thinks.. there is a huge amount of work already done in the area of automated workflows and workflow execution frameworks/environments for scientific computing. So this is presumably already largely solved? For example, Integrating Containers into Workflows: A Case Study Using Makeflow, Work Queue, and Docker, C. Zheng & D. Thain, 2015 [PDF]?

A handful of other quick points:

  • the model I’m exploring in the Docker magic context is essentially a stateless/serverless computing approach, where a commandline container is created on demand and treated as disposable, just running a particular function before being destroyed (see also the OpenAPI approach);
  • the Linked Application notion extends to other containerised applications, such as ones that expose an HTML user interface over HTTP that can be accessed via a browser. In such cases, things like WSDL (or WADL; remember WADL?) provided a machine-readable, formalised way of describing functional resource availability;
  • in the sense that commandline containerised Linked Applications are actually services, we can also think about web services publishing an HTTP API in a similar way?
  • services such as Sandstorm, which have the notion of self-running containerised documents, have the potential to bind a specific document within an interactive execution environment for that document.

Hmmm… so how much nonsense is all of the above, then?

Fragments – Should Algorithms, Deep Learning AI Models and/or Robots be Treated as Employees?

Ever ones for pushing their luck when it comes to respecting codes, regulations, and maybe even the law, Uber hit the news again recently when it turned out that at least one of its automated vehicles was caught running a red light (Uber blames humans for self-driving car traffic offenses as California orders halt).

My first thought on this was to wonder who’s nominally in control of an automated vehicle if the vehicle itself detects a likely accident and quickly hands control over to a human driver (“Sh********t… you-drive”), particularly if reaction times are such that even the most attentive human operator doesn’t have time to respond properly.

My second was to start pondering the agency associated with an algorithm, particularly a statistical one where the mapping from inputs to outputs is not necessarily known in advance but is based on an expectation that the model used by the algorithm will give an “appropriate” response based on the training and testing regime.

[This is a v quick and poorly researched post, so some of the references are first stabs as I go fishing… They could undoubtedly be improved upon… If you can point me to better readings (though many appear to be stuck in books), please add them to the comments…]

In the UK, companies can be registered as legal entities; as such, they can act as an employer and become “responsible” for the behaviour of their employees through vicarious liability ([In Brief] Vicarious Liability: The Liability of an Employer).

According to solicitor Liam Lane, writing for HR Magazine (Everything you need to know about vicarious liability):

> Vicarious liability does not apply to all staff. As a general rule, a business can be vicariously liable for actions of employees but not actions of independent contractors such as consultants.
>
> This distinction can become blurred for secondees and agency workers. In these situations, there are often two ‘candidates’ for vicarious liability: the business that provided the employee, and the business that received him or her. To resolve this, courts usually ask:
>
> (i) which business controls how the employee carries out his or her work; and
>
> (ii) which business the employee is more integrated into.
>
> Similarly, vicarious liability does not apply to every wrongful act that an employee carries out. A business is only vicariously liable for actions that are sufficiently close to what the employee was employed to do.

The CPS guidance on corporate prosecutions suggests that “A corporate employer is vicariously liable for the acts of its employees and agents where a natural person would be similarly liable (Mousell Bros Ltd v London and North Western Railway Co [1917] 2 KB 836).”

Findlaw (and others) also explore limitations, or otherwise, to an employer’s liability by noting the distinction between an employee’s “frolics and detours”: a detour is a deviation from explicit instructions, but so related to the original instructions that the employer will still be held liable. A frolic, on the other hand, is simply the employee acting in his or her own capacity rather than at the instruction of an employer.

Also on limitations to employer liability, a briefing note from Gaby Hardwicke Solicitors (Briefing Note: Vicarious Liability and Discrimination) brings all sorts of issues to mind. Firstly, on scope:

> Vicarious liability may arise not just where an employment contract exists but also where there is temporary deemed employment, provided either the employer has an element of control over how the “employee” carries out the work or where the “employee” is integrated into the host’s business. The employer can be vicariously liable for those seconded to it and for temporary workers supplied to it by an employment business. In Hawley v Luminar Leisure Ltd, a nightclub was found vicariously liable for serious injuries as a result of an assault by a doorman, which it engaged via a third party security company.
>
> The Equality Act 2010 widens the definition of employment for the purposes of discrimination claims so that the “employer” is liable for anything done by its agent, under its authority, whether within its knowledge or not. This would, therefore, confer liability for certain acts carried out by agents such as solicitors, HR consultants, accountants, etc.

In addition, the briefing describes how “Although vicarious liability is predominantly a common law concept, for the purposes of anti-discrimination law, it is enshrined in statute under section 109 Equality Act 2010. This states that anything that is done by an individual in the course of his employment, must be treated as also done by the employer, unless the employer can show that it took all reasonable steps to prevent the employee from doing that thing or from doing anything of that description.”

So, I’m wondering… Companies act through their employees and agents. To what extent might we start to see “algorithms” and/or “computational models” (eg trained “Deep Learning”/neural network models) being treated as legal entities in their own right, at least in so far as they may be identified qua employees or agents when it comes to acting on behalf of a company? When one company licenses an algorithm/model to another, how will any liability be managed? Will algorithms and models start to have their own (employment) agents? Or are statistical models and algorithms (with parameters set) actually just patentable inventions, albeit with very specifically prescribed dimensions and attribute values?

In terms of liability, companies currently seem keen to wriggle around accountability by waving their hands in the air when an algorithm is involved. But where does the accountability lie, and where does the agency lie (in the sense that algorithms and models make (automated) decisions on behalf of their operators)? Are there any precedents for ruling that decision-making algorithms are something akin to “employees” when it comes to liability? Or companies arguing for (or against) such claims? Can an algorithm be defined as a company, with its articles and objects enshrined in code, and if so, how is liability then limited as far as its directors are concerned?

I guess what I’m wondering is: are we going to see algorithms/models/robots becoming entities in law? Whether defined as their own class of legal entity, companies, employees, agents, or some other designation?

PS pointers to case law and examples much appreciated. Eg this sort of thing is maybe relevant: *Search engine liability for autocomplete suggestions: personality, privacy and the power of the algorithm*, a discussion of how far an algorithm operator is liable for the actions of the algorithm?

Personal Health Calendar Feeds and a Social Care Annunciator?

Over the last few weeks and months I’ve started pondering all sorts of health and care related stuff that may help when trying to support family members a couple of hundred miles away. One of the things we picked up on was a “friendly” digital clock display (often sold as a “dementia clock” or “memory loss calendar”), with a clear screen, and easy to read date and time.

The clock supports a variety of daily reminders (“Take your pills…”) and can also be programmed to display images or videos at set dates and times (“Doctor’s today”).

One of the things this reminded me of was the parliamentary annunciators, that detail the current activity in the House of Commons and House of Lords, and that can be found all over the parliamentary estate.

Which got me thinking:

  • what if I could send a short text message or reminder to the screen via SMS?
  • what if the device could subscribe to a calendar feed that could be interpreted to generate a range of alerts leading up to an event (*”Doctor’s tomorrow morning”*, *”Hospital this afternoon at 2pm”*)? (A rough sketch of what this might look like follows below.)

(Lots of other ideas came to mind too, but the primary goal is to keep the device as simple as possible and the display as clear as possible, which includes being able to read it from a distance.)
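By way of a sketch of the calendar feed idea, the following uses the icalendar package to turn a subscribed feed into simple next-day alerts; the feed URL and the message wording are illustrative only:

```python
# Sketch: generate simple next-day reminders from a subscribed calendar feed.
# The feed URL and alert wording are made up for the example.
from datetime import date, datetime, timedelta

import requests
from icalendar import Calendar

FEED_URL = "https://example.org/patient/calendar.ics"  # illustrative feed address

ics = requests.get(FEED_URL).text
cal = Calendar.from_ical(ics)

tomorrow = date.today() + timedelta(days=1)
for event in cal.walk("VEVENT"):
    start = event.get("dtstart").dt
    event_date = start.date() if isinstance(start, datetime) else start
    if event_date == tomorrow:
        # e.g. a "Doctor's tomorrow morning" style reminder on the display
        print(f"Tomorrow: {event.get('summary')}")
```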

The calendar feed idea also sparked a far more interesting idea – one of the issues of trying to support family members with ongoing health appointments is knowing when they are taking place, whether you need to go along to provide advocacy or support, whether hospital stays are being planned, and so on. Recent experience suggests that different bits of the NHS act independently of each other:

  • the GP doesn’t know when hospital surgery has been booked, let alone when pre-op assessments requiring a hospital visit are scheduled;
  • district nurses don’t know when hospital visits are planned;
  • different parts of the hospital don’t know when other parts of the hospital have visits planned,

and so on…

In short, the hospital doesn’t seem to have a calendar associated with each patient.

As with “student first” initiatives in HE, I suspect “patient first” initiatives are more to do with tracking internal performance metrics and KPIs rather than initiatives formulated from the patient perspective, but a personal “health and social care calendar” could benefit a whole range of parties:

  • the patient, wanting to keep track of appointments;
  • health and social care agencies wanting to book appointments and follow up on appointments with other parts of the service;
  • family members looking to support the patient.

So I imagine a situation where a patient books a GP appointment, and the receptionist adds it to the patient’s personal calendar.

A hospital appointment is generated by a consultant and, along with the letter informing the patient of the date, the event is added to the patient’s calendar (possibly with an option to somehow acknowledge it, confirm it, cancel it?).

A patient asks the GP to add a family member to the calendar/calendar feed so they can also access it.

A range of privacy controls allow different parts of the health and social care system to request/make use of read access to a patient’s health and social care calendar.

The calendar keeps a memory of historical appointments as well as future ones. Options may be provided to say whether an appointment was attended, cancelled or rescheduled. Such information may be useful to a GP (“I see you had your appointment with the consultant last week…”) or consultant (“I see you have an appointment with your GP next week? It may be worth mentioning to them…”)

Hmmm… thinks… has this sort of thing been explored anywhere (or has it been in the past?), or maybe considered at an NHS Hack Day? Or is it the sort of thing I could put together as an NHS tech funding pitch?

PS Some of the features of the Amazon Echo Show could also work well in the context of a health/care annunciator, but… the Echo Show is too feature rich and could easily lead to feature creep and complexity in use; and I’d have “privacy concerns” about using the Amazon backend and the always-on Alexa/Echo mic.