Legally Does It…

Included in the several hundred blog feeds I subscribe to are several legal ones. I don’t really understand the law well enough to read it properly, or think through the consequences of how it might be applied, misapplied, or gamed, so I tend to rely on other commentators for the interpretation and then just skim their reviews for choice phrases or ideas.

So here’s a quick round-up of several law-related issues that crossed my wires over the last few days, some from law blogs, some not…

First up, it seems Wyoming has been working on a “data trespassing” law – In Wyoming it’s now illegal to collect data about pollution – ENROLLED ACT NO. 61, SENATE SIXTY-THIRD LEGISLATURE OF THE STATE OF WYOMING 2015 GENERAL SESSION:

6-3-414. Trespassing to unlawfully collect resource data; unlawful collection of resource data.
(a) A person is guilty of trespassing to unlawfully collect resource data if he:
(i) Enters onto open land for the purpose of collecting resource data; and
(ii) Does not have:
(A) An ownership interest in the real property or, statutory, contractual or other legal authorization to enter or access the land to collect resource data; or
(B) Written or verbal permission of the owner, lessee or agent of the owner to enter or access the land to collect the specified resource data.
(b) A person is guilty of unlawfully collecting resource data if he enters onto private open land and collects resource data without:
(i) An ownership interest in the real property or, statutory, contractual or other legal authorization to enter the private land to collect the specified resource data; or
(ii) Written or verbal permission of the owner, lessee or agent of the owner to enter the land to collect the specified resource data.


(d) As used in this section:
(i) “Collect” means to take a sample of material, acquire, gather, photograph or otherwise preserve information in any form from open land which is submitted or intended to be submitted to any agency of the state or federal government;
(ii) “Open land” means land outside the exterior boundaries of any incorporated city, town, subdivision approved pursuant to W.S. 18-5-308 or development approved
pursuant to W.S. 18-5-403;

(iv) “Resource data” means data relating to land or land use, including but not limited to data regarding agriculture, minerals, geology, history, cultural artifacts, archeology, air, water, soil, conservation, habitat, vegetation or animal species. “Resource data” does not include data:
(A) For surveying to determine property boundaries or the location of survey monuments;
(B) Used by a state or local governmental entity to assess property values;
(C) Collected or intended to be collected by a peace officer while engaged in the lawful performance of his official duties.
(e) No resource data collected in violation of this section is admissible in evidence in any civil, criminal or administrative proceeding, other than a prosecution for violation of this section or a civil action against the violator.
(f) Resource data collected in violation of this section in the possession of any governmental entity as defined by W.S. 1-39-103(a)(i) shall be expunged by the entity from all files and data bases, and it shall not be considered in determining any agency action.

So, it seems as if you are guilty of trespassing on “open land” if you collect air monitoring or pollution data, for example, with the intention of submitting it to a government agency, state or federal, without permission. And if you do collect it, it can’t be admitted in evidence (and if it is, you presumably admit liability for trespass if you collected it and try to submit it as evidence?); and if that data finds its way into a government database, it has to be deleted and can’t be used by the government entity. Note that “collect” also includes photographing. I’m not sure whether a drone collecting such data would result in the drone operator committing the trespass? Would a drone intrude onto such land? What about aerial photography? Or satellite imagery? Or air/dust data collected outside the boundary on a windy day, with the wind blowing across the land in question at you?

One of the things that the OUseful.info blog helps me with is not forgetting. This is handy, not only for cheap told-you-so-years-ago moments, but also for keeping track of events that seemed notable at the time, which can help when folk later try to rewrite history. A post today on the IALS Information Law and Policy blog by Hugh Tomlinson QC – “Right to be forgotten” requires anonymisation of online newspaper archive – reports on a Belgian ruling that seems to have implications for news archives (I don’t count Google’s index as an archive). Apparently:

Digital archiving of an article which was originally lawfully published is not exempt from the application of the right to be forgotten. The interferences with freedom of expression justified by the right to be forgotten can include the alteration of an archived text.

The Court of Appeal had correctly decided that the archiving of the article online constituted a new disclosure of a previous conviction which could interfere with his right to be forgotten.

Balancing the right to be forgotten and the right of the newspaper to constitute archives corresponding to historical truth and of the public to consult these, the applicant should benefit from the right to be forgotten. As the Court of Appeal held, the maintenance of the online article, many years after the events it describes, is likely to cause the applicant disproportionate damage compared to the benefits of the strict respect for freedom of expression.

This is the first case that I am aware of in which a Court has ordered that an online archive should be anonymised – as opposed to the less drastic remedy of ordering the newspaper to take steps to ensure that the article was not indexed by search engines. The Belgian courts were not impressed by arguments in favour of keeping the integrity of online archives.

The English courts have yet to engage with the issue as to whether and to what extent “rehabilitated offenders” should be protected from continuing online dissemination of information about their spent convictions. There are powerful arguments – under both data protection and privacy law – that such protection should be provided in appropriate cases. Online news archives do not possess any “absolute immunity” – they are regularly amended in defamation cases – and effective privacy protection may sometimes require their amendment. It remains to be seen how the English courts will deal with these issues.

What do the librarians think about this?

And what happens when the historical record isn’t? I guess historians really won’t be able to trust press reports as first drafts any more?!

Over on the Inforrm blog, Dan Tench writes about the Digital Economy Bill: new offences for the disclosure of information and the risk to journalists:

Part 5 creates a number of new criminal offences (at clauses 33, 34, 42, 50 and 58) imposing criminal liability on those who receive the information and then disclose it to third parties. For the offence to be committed, the information in question must constitute “personal information”, which is information which relates to and identifies a particular “person”, including a body corporate (clause 32(4)). This is a bizarre definition which means that, contrary to ordinary language and the use of the term in other legal contexts, any information about an identified company would be “personal information” – even something as anodyne as information that a particular company has a number of government contracts.

Defining legal entities such as companies as “persons” to whom data protection clauses apply?! Seriously? (Will that also apply to robots, as per Legislating Autonomous Robots?)

Dan goes on:

Even more significantly, these provisions would also impose criminal liability on the third parties who receive the information if they subsequently disseminate it. In both cases, the offences would be committed even if the disclosure of the information by the original public authority (absent the provisions of the Bill) would not itself constitute a criminal offence.

So imagine if an official at the Environment Agency discloses some information to, say, a local authority, to “improve public service delivery” pursuant to the provisions in clause 29. An individual at the local authority considers that this information reveals a serious iniquity relating to a corporate entity and passes it on to a journalist on a national newspaper. The newspaper then publishes the information. It would appear that under these provisions the individual at the local authority, the journalist and most probably the newspaper would all be committing criminal offences.

By contrast, if the official at the Environment Agency had equally taken umbrage with the information in question, he or she had revealed it to the journalist and it had been published in those circumstances, it is unlikely that any offence would have been committed.

There seems no logic in that. It is true that it might be a somewhat rare circumstance when these conditions might apply but making disclosure of any information criminal, in any situation, is surely something which should be done only with the greatest of care, not least because of the consequences for freedom of expression.

Also today, Out-Law report that the UK government is testing whether ‘online activity history’ can serve to verify identity:

“We have been looking at projects that consider the use of different sources of activity history when proving an individual is who they say they are,” [said] Livia Ralph, industry engagement lead at the GDS.

Ralph said that if data from social media accounts can be used for digital ID verification purposes then it could increase UK adults’ use of Verify by 9% and by up to 38% in the case of 16-25 year olds.

Under the Verify system, individuals using government online services choose a certified ID assurance provider with which to verify their identity. This involves answering security questions and entering a unique code sent to an individual’s mobile number, email address or issued in a call to their fixed-line telephone number.

When using government services online thereafter, government bodies are able to rely on the third party verifications of individuals’ identities. The system is still in development but is aimed at streamlining the identity verification process for both government bodies and the public.

The phrase that jumped out at me first? “When using government services online thereafter, government bodies are able to rely on the third party verifications of individuals’ identities”. And then you just have to flip this to realise that every time you log on to a government or public service – which presumably doesn’t itself have Facebook (or whoever) tracking set on it – the verification step will provide Facebook (or whoever) with exactly that information. Good oh – everyone helping everyone else track everyone and everything.

And finally – an email went round the OU a few days ago about some new whistleblowing and anti-fraud policies. One reason for whistleblowing is to get information out about nefarious or fraudulent activities that are either being conducted in secret, or where oversight is failing. I note that public bodies are free to set up operating companies to conduct particular bits of their business (FutureLearn in the OU’s case, for example, or companies set up by local councils). I also note that such companies are not necessarily subject to FOI (the Unison Branch guide to local authority trading companies suggests this is the case if they are not solely owned, for example? FutureLearn is solely owned by the OU – so is it FOIable? It seems so…). With many of the FutureLearn papers tabled to OU committees labelled as “confidential” (and as such not viewable by members of the university not on those committees), presumably on grounds of commercial confidentiality, I wonder more generally about the extent to which universities and public bodies may create companies to limit information sharing? Particularly if such companies come to be classed as “persons” about whom “personal” information, sensitive or otherwise, may not be shared.

Legislating Autonomous Robots

Fifteen years or so ago, now, I worked on an OU short course – T184: Robotics and the Meaning of Life. The course took a broad view of robotics, from the technical (physical design, typical control systems, relevant AI – artificial intelligence – techniques, and their limitations) through to the social and political consequences. The course also included the RobotLab simulator, which could be used to program a simple 2D robot, or a HEK – a self-purchased Home Experiment Kit in the form of Lego Mindstorms.

The course was delivered as part of the Technology Faculty Relevant Knowledge programme, originally championed by John Naughton. There’s a lot folk don’t know – or understand – about how the technology world works, and the Relevant Knowledge programme helped address that. The courses were for credit – 10 CAT points at level 1 – and were fairly priced: a hundred hours of study for a hundred and fifty quid, with the CAT points as a bonus.

One of the things I was keen to put in T184 was a section on robot law, which complemented a section on “robot rights”; this reviewed laws that had been applied to slaves, children, animals and the mentally infirm, “sentient creatures”, in other words, whose behaviour or actions might be the responsibility of someone else, and asked whether such laws might be a useful starting point for legislating around the behaviour of intelligent, self-adaptive robots and their owners / creators. The course also drew on science fiction depictions of robots, making the case that while positronic brains were a fiction, the “Three Laws” that they implemented could be seen as useful design principles for robot researchers:

whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;

That phrase does not come from the course, but it does appear in a draft report, published in May this year, from the European Parliament Committee on Legal Affairs [2015/2103(INL)]. The report includes “recommendations to the Commission on Civil Law Rules on Robotics” and, for the EU at least, perhaps acts as a starting pistol for a due consideration of what I assume will come to be referred to as “robot law”.

As well as considering robots as things deserving of rights that could be subjugated, I’d also explored the extent to which robots might be treated as “legal entities” in much the way that companies are legal entities, although I’m not sure that ever made it into the course.

whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage;

Again – that’s the EU report from a couple of months ago. So what exactly is it proposing, and what does it cover? Well, the report:

Calls on the Commission to propose a common European definition of smart autonomous robots and their subcategories by taking into consideration the following characteristics of a smart robot:

  • acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and trades and analyses data
  • is self-learning (optional criterion)
  • has a physical support
  • adapts its behaviours and actions to its environment;

So not software robots, then? (Which raises a question – how might adaptive algorithms be regulated, and treated under law? Or algorithms that are manifest via “human” UIs, such as conversational chatbots?) Or would such things be argued as having “physical support”?

Hmmm… because whilst the report further notes:

… that there are no legal provisions that specifically apply to robotics, but that existing legal regimes and doctrines can be readily applied to robotics while some aspects appear to need specific consideration;

which is fine, it then seems to go off at a tangent as it:

calls on the Commission to come forward with a balanced approach to intellectual property rights when applied to hardware and software standards, and codes that protect innovation and at the same time foster innovation;

I can see the sense in this, though we maybe need to think about the IPR of control models arising from the way an adaptive system is trained, compared to the way it was originally programmed to enable it to be trained and acquire its own models – particularly where a third party, rather than the manufacturer, does the training. But then the report seems to go off the rails a bit as it:

calls on the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots;

That last sentence surely suggests that they’re talking about algorithms rather than robots? Or are they saying that if I write an adaptive computer program that generates a PNG, it’s not copyrightable, but if I program an adaptive robot with a pen on its back and it draws a picture, that is copyrightable? (I can see the IPR issues here may get a bit messy, though presumably contracts and licences associated with collaborative generative systems already start to address this?)

The report then seems to go off on another tangent, as it:

Points out that the use of personal data as a ‘currency’ with which services can be ‘bought’ raises new issues in need of clarification; stresses that the use of personal data as a ‘currency’ must not lead to a circumvention of the basic principles governing the right to privacy and data protection;

I’m not sure I see how that’s relevant here? There then follow a few sections relating to specific sorts of robot (autonomous cars, medical robots, drones) before addressing employment issues:

Bearing in mind the effects that the development and deployment of robotics and AI might have on employment and, consequently, on the viability of the social security systems of the Member States, consideration should be given to the possible need to introduce corporate reporting requirements on the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions; takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so;

So… robots in the workforce mean you have to pay a national insurance contribution for what? FTE human jobs replaced? But there’s also a call for a general basic income?!

Then we return to what I thought the report was about – liability:

Considers that robots’ civil liability is a crucial issue which needs to be addressed at EU level so as to ensure the same degree of transparency, consistency and legal certainty throughout the European Union for the benefit of consumers and businesses alike;

Considers that, in principle, once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be; notes, in particular, that skills resulting from ‘education’ given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot’s harmful behaviour is actually due;

The current recommendation appears to be that liability issues be addressed via a compulsory insurance scheme:

Points out that a possible solution to the complexity of allocating responsibility for damage caused by increasingly autonomous robots could be an obligatory insurance scheme, as is already the case, for instance, with cars; notes, nevertheless, that unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics could be based on the obligation of the producer to take out an insurance for the autonomous robots it produces;

which is fine, and other paragraphs explore that further; but then the report goes off on one again:

creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

Which is firmly in the territory I wanted to explore in T184 way back when. For example, is the suggestion that we have some sort of “Intelligent Robot/Algorithm Capacity Act”, akin to the 2005 Mental Capacity Act perhaps?! Or is it more akin to corporate liability which seems to be under-legislated? And here’s where I start to wonder – where do you distinguish between robots as autonomous things that are legislated against, algorithms as autonomous things that are legislated against, sets of interacting algorithms creating complex adaptive systems as autonomous things that are legislated against, complex adaptive systems such as companies that are legislated against, and so on… (I maybe need to read Iain M. Banks’ sci-fi books about The Culture again!)

The report then goes on to suggest a draft Code of Ethical Conduct for Robotics Engineers, a Licence for Designers and a Licence for Users. But not a Licence for Robots themselves. Nor any mention of the extent to which the built environment should be made accessible for mobile robots. (“Robot accessibility” was another thing I was interested in!;-)

Another document that came out recently, from the DfT’s Centre for Connected and Autonomous Vehicles, is a consultation (in the UK) around Advanced driver assistance systems and automated vehicle technologies: supporting their use in the UK [Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies – PDF]. Apparently:

The only immediate change that we have identified primary legislation that is required now is to update our insurance framework. This will give insurers and manufacturers time to consider what insurance products can come to market in time for when this technology arrives.

This reflects the likely short-term arrival of “motorway assist systems for travel on high speed roads (i.e. motorways and major trunk roads); and remote control parking”. Platooning trials are also to take place.

For the longer term, the report distinguishes between “conventional driving, assisted driving and fully automated driving”:

[Screenshots: tables from the consultation document characterising conventional, assisted and fully automated driving.]

The consultation doc is worth reading in full, but here are a couple of points that jumped out at me:

a vehicle owner who is ‘driving’ the highly automated vehicle might have legitimately disengaged from the driving task, with the vehicle having taken over control. If the technology fails and injures the ‘driver’, the current legislation only requires insurance to cover third parties and not the driver. It is up to the policy owner to seek additional insurance to cover any injury they do to themselves as a result of their own actions or negligence. If the AVT fails then the driver, in effect, becomes a victim as their injuries are not as a result of their own actions or negligence. We therefore need to protect the driver as a potential victim.

So you’ll need to insure yourself against the car?

The last line of this amused me:

We have considered whether a different definition of ‘user’ is needed in the Road Traffic Act for automated vehicles for the purposes of insurance obligation. For the first generation of AVT (where the driver is only ‘hands-off’ and ‘eyes-off’ for parts of the journey) we think that the driver falls under the current definition of a ‘user’. Once fully automated vehicles are available – which would drive themselves for the entire journey – it might be more appropriate to put the insurance obligation solely on the registered keeper.

“Registered keeper”. This may well be the current wording relating to vehicle ownership, but it made me think of a wild animal keeper. So harking back to Robot Law, would it be worth looking at the Dangerous Wild Animals Act 1976 or the Dangerous Dogs Act 1991? (Hmm… code sharing libraries, model sharing algorithms – “breeding” new code from old code…!)

We are not currently proposing any significant change in our rules on liability in road traffic accidents to reflect the introduction of automated cars. We still think a fault based approach combined with existing product liability law, rather than a new strict liability regime, is the best approach for our legal system. We think that the existing common law on negligence should largely be able to adapt to this new technology.

So the car won’t be a legal entity in its own right… though I wonder if a class of vehicles running under the same model/operating system would be, under the EU approach hinted at above?

If you were of suspicious mind, you might think that there could be an ulterior motive for pushing forward various forms of automotive automation…

Data will clearly be required to determine whether the driver or the vehicle was responsible for any collision, such as establishing who was in control at the time of the incident. This is likely to come from in-vehicle data recorders. Many vehicles already have data recorders fitted, although the data collected is not accessible without special equipment.

We expect that the out-of-the-loop motorway driving vehicles that are coming to market soon will have an event data recorder fitted. There are inevitably different views as to what data is essential and of course data protection and privacy considerations are important. It seems likely that data recorders would be regulated on an international basis, like most vehicle technologies. We will participate fully in this debate, equipped with views from the UK manufacturing and insurance industries, evidence from the various trials taking place and the first automated technologies that are coming to market.

Presumably, it’s easiest to just make everyone install a box… (see Geographical Rights Management, Mesh based Surveillance, Trickle-Down and Over-Reach and Participatory Surveillance – Who’s Been Tracking You Today? for examples of how you can lose control of your car and/or data…) That said, boxes can be useful for crash investigations, and may be used in the defence of the vehicle’s actions, or perhaps in its praise: Tesla’s Autopilot May Have Saved A Life.

The following just calls out to be gamed – and also raises questions around updates, over-the-air or via a factory recall…

We do not think an insurer should be able to avoid paying damages to a third party victim where an automated vehicle owner fails to properly maintain and update the AVT or attempts to circumvent the AVT in breach of their insurance policy. Nor do we think that an insurer should be able to avoid paying damages to a third party victim if the vehicle owner or the named drivers on the policy attempt to use the vehicle inappropriately.

The following point starts to impinge on things like computer misuse as well as emerging robot law?

If an accident occurred as a result of an automated vehicle being hacked then we think it should be treated, for insurance purposes, in the same way as an accident caused by a stolen vehicle. This would mean that the insurer of the vehicle would have to compensate a collision victim, which could include the ‘not at fault driver’ for damage caused by hacking but, where the hacker could be traced, the insurer could recover the damages from the hacker.

In respect of the following point, I wonder, of the many products we buy at the moment, how many of them integrate statistical computational models (rather than just relying on physics!)? Is the whole “product liability” thing due a review in more general terms?!

Currently the state of the art defence (section 4(1)(e) of the Consumer Protection Act 1987) provides a defence to product liability if, at the time the product was in the manufacturer’s control, the state of scientific and technical knowledge was not such that a manufacturer could have been expected to discover the defect. We could either leave manufacturers’ liability and product liability as it currently is or, instead, extend the insurance obligation to cover these circumstances so that the driver’s insurance would have to cover these claims.

To keep tabs on the roll out of autonomous vehicles in the UK, see the Driverless vehicles: connected and autonomous technologies policy area.

PS via Ray Corrigan, some interesting future law workshops under the banner Geek Law: GikII 2013, GikII 2014, GikII 2015. The 2016 programme (for the London event, Sept 30) is available in an unreadable font here: GikII 2016 programme.

Community Detection? (And Is Your Phone a Cookie?)

A few months ago, I noticed that the Google geolocation service would return a lat/long location marker when provided with the MAC address of a wifi router (Using Google to Look Up Where You Live via the Physical Location of Your Wifi Router [code]) and in various other posts I’ve commented on how communities of bluetooth users can track each other’s devices (eg Participatory Surveillance – Who’s Been Tracking You Today?).
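As an aside, that sort of lookup is easy enough to script. Here’s a minimal sketch in Python against the Google Maps Geolocation API – you need your own API key, the current service generally wants at least two access points, and the MAC addresses here are made up:

import requests

# made-up MAC addresses of a couple of nearby wifi access points
payload = {
    "considerIp": False,
    "wifiAccessPoints": [
        {"macAddress": "00:25:9c:cf:1c:ac"},
        {"macAddress": "00:25:9c:cf:1c:ad"},
    ],
}

# the response contains an estimated lat/lng and an accuracy radius in metres
r = requests.post(
    "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY",
    json=payload,
)
print(r.json())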

Which got me wondering… are there any apps out there that let me detect the MAC address of Bluetooth devices in my vicinity, and is there anyone aggregating the data, perhaps as a quid pro quo for making such an app available?

Seems like the answer is yes, and yes…

For example, John Abraham’s Bluetooth 4.0 Scanner [Android] app will let you [scan] for Bluetooth devices… The information recorded includes: device name, location, RSSI signal strength, MAC address, MAC address vendor lookup.

In a spirit of sharing, the Bluetooth 4.0 Scanner app “supports the earthping.com project – crowdsourced Bluetooth database. Users are also reporting usage to find their lost Bluetooth devices”.

So when you run the app to check the presence of Bluetooth devices in your own vicinity, you also gift the location of those devices – along with their MAC addresses – to a global database – earthping. Good stuff… not.
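FWIW, you don’t even need a dedicated app to do the scanning part. A few lines of Python with the PyBluez library will run a classic Bluetooth device discovery – a sketch, assuming a Bluetooth adapter is available (and note it doesn’t cover Bluetooth LE):

import bluetooth

# scan for discoverable Bluetooth devices for ~8 seconds
devices = bluetooth.discover_devices(duration=8, lookup_names=True)

for addr, name in devices:
    print(addr, name)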

We’re all familiar (at least in the UK) with surveillance cameras everywhere, and as object recognition and reconciliation tools improve it seems as if tracking targets across multiple camera views will become a thing, as demonstrated by the FX Pal Dynamic Object Tracking System (DOTS) for “office surveillance”.

It’s also increasingly the case that street furniture is appearing that captures the address of our electronic devices as we pass them. For example, in New York, LinkNYC “is a first-of-its-kind communications network that will replace over 7,500 pay phones across the five boroughs with new structures called Links. Each Link will provide superfast, free public Wi-Fi, phone calls, device charging and a tablet for Internet browsing, access to city services, maps and directions”. The points will also allow passers-by to ‘view public service announcements and more relevant advertising on two 55” HD displays’ – which is to say they track everything that passes, try to profile anyone who goes online via the service, and then deliver targeted advertising to exactly the sort of people passing each Link.

LinkNYC is completely free because it’s funded through advertising. Its groundbreaking digital OOH advertising network not only provides brands with a rich, context-aware platform to reach New Yorkers and visitors, but will generate more than a half billion dollars in revenue for New York City.

[Update: 11/16 – it seems that offering pavement wifi hubs had consequences: “It took less than a year for New Yorkers to lose sidewalk internet privileges. … Soon came the reports of people gathered for hours around these digital campfires, streaming music or watching movies and porn. …LinkNYC disabled web browsing …” Public In/Formation]

So I wondered just what sorts of digital info we leak as we walk down the street. Via Tracking people via WiFi (even when not connected), I learn that devices operate in one of two modes: a listening mode, where they essentially listen out for access point beacons, but at a high battery cost; or a lower-energy ping mode, where they announce themselves (along with their MAC address) to anyone who’s listening.

If you want to track passers-by, many of whom will be pinging their credentials to anyone who’s listening, you can set up things like wifi routers in monitor mode to listen out for – and log – such pings. Edward Keeble describes how to do it in the post Passive WiFi Tracking.
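For a flavour of what that involves, here’s a minimal sketch using the Python scapy library – not Edward’s actual implementation – which assumes a wifi card already switched into monitor mode as mon0, and root privileges:

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

def log_probe(pkt):
    # probe requests carry the MAC address of the device doing the pinging
    if pkt.haslayer(Dot11ProbeReq):
        print(pkt[Dot11].addr2)

# requires an interface in monitor mode (the name mon0 is an assumption)
sniff(iface="mon0", prn=log_probe)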

If you’d rather not hack together such a device yourself, you can always buy something off the shelf to log the MAC addresses of passers-by, e.g. Libelium’s Meshlium Scanner [datasheet – PDF]. So for example:

  • Meshlium Scanner AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. This model can receive and store data from Waspmotes with GPRS, 3G or WiFi, sending via HTTP protocol. The collected data can be send (sic) to the Internet by using the Ethernet.
  • Meshlium Scanner 3G/GPRS-AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. This model can receive and store data from Waspmotes with GPRS, 3G or WiFi, sending via HTTP protocol. The collected data can be send (sic) to the Internet by using the Ethernet, and 3G/GPRS connectivity
  • Meshlium Scanner XBee/LoRa -AP – It allows to detect (sic) Smartphones (iPhone, Android) and in general any device which works with WiFi or Bluetooth interfaces. It can also capture the sensor data which comes from the Wireless Sensor Network (WSN) made with Waspmote sensor devices. The collected data can be send (sic) to the Internet by using the Ethernet and WiFi connectivity.

So have any councils started installing that sort of device I wonder? And if so, on what grounds?

On the ad-tracking/marketing front, I’m also wondering whether there are extensions to cookie matching services that can match MAC addresses to cookies?

PS you know that unique tat you’ve got?! FBI Develops tattoo tracking technology!

PPS capturing data from wifi and bluetooth devices is easy enough, but how about listening out for mobile phones as phones? Seems that’s possible too, though perhaps not off-the-shelf for your everyday consumer…? What you need, apparently, is an IMSI catcher such as the Harris Corp Stingray. Examples of use here and here.

See also: Tin Foil Hats or Baseball Caps? Why Your Face is a Cookie and Your Data is midata and We Are Watching, and You Will be Counted.

PS Interesting piece from the Bristol Cable Oct 2016: Revealed: Bristol’s police and mass mobile phone surveillance. Picked up by the Guardian: Controversial snooping technology ‘used by at least seven police forces’.

Accessible Jupyter Notebooks?

Pondering the extent to which Jupyter notebooks provide an accessible UI, I had a naive play with the Mac VoiceOver screen reader over Jupyter notebooks the other day: markdown cells were easy enough to convert to speech, but the code cells and their outputs are nested block elements which seemed to take a bit more navigation. Suffice to say, I really should learn how to use screen-reader software properly, because as it stands I can’t really tell how accessible the notebooks are…

A quick search around for accessibility related extensions turned up the jupyter-a11y: reader extension [code], which looks like it could be a handy crib. This extension will speak aloud the contents of a code cell or markdown cell, as well as navigational features such as whether you are in the cell at the top or the bottom of the page. I’m not sure it speaks aloud the output of a code cell though? But the code looks simple enough, so this might be worth a play with…

On the topic of reading aloud code cell outputs, I also started wondering whether it would be possible to generate “accessible” alt or longdesc text for matplotlib generated charts and add those to the image element inserted into the code cell output. This text could also be used to feed the reader extension’s narration. (See also First Thoughts on Automatically Generating Accessible Text Descriptions of ggplot Charts in R for some quick examples of generating textual descriptions from matplotlib charts.)
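As a crude proof of concept, quite a lot of describable structure can be pulled straight off a matplotlib Axes object – a sketch:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([2010, 2012, 2014, 2016], [3, 5, 4, 7])
ax.set_title("Widget sales")
ax.set_xlabel("Year")
ax.set_ylabel("Sales (millions)")

def describe_chart(ax):
    # build a simple textual description from the chart's own labelling
    desc = "A chart"
    if ax.get_title():
        desc += " titled '{}'".format(ax.get_title())
    desc += " with {} line(s)".format(len(ax.lines))
    if ax.get_xlabel():
        desc += "; the x axis shows '{}'".format(ax.get_xlabel())
    if ax.get_ylabel():
        desc += ", the y axis shows '{}'".format(ax.get_ylabel())
    x0, x1 = ax.get_xlim()
    desc += "; the x range runs from {:g} to {:g}".format(x0, x1)
    return desc + "."

print(describe_chart(ax))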

Another way of complementing the jupyter-a11y reader extension might be to use the python pindent [code] tool to annotate the contents of code cells with accessible comments (such as comments that identify the end of if/else blocks, and function definitions). Another advantage of having a pindent extension to annotate the content of notebook python code cells is that it might help improve the readability of code for novices. So for example, we could have a notebook toolbar button that will toggle pindent annotations on a selected code cell.
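For example, if I remember right, running pindent.py in its “complete” mode adds block-closing comments along these lines (worth checking the exact output format against the tool itself):

# roughly what pindent.py -c might produce from an unannotated version of this function
def classify(x):
    if x > 0:
        return "positive"
    else:
        return "non-positive"
    # end if
# end def classify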

For code read aloud by the reader extension, I wonder if it would be worth running the content of any (python) code cells through pindent first?

PS FWIW, here’s a related issue on Github.

PPS another tool that helps make python code a bit more accessible, in an active sense, in a Jupyter notebook is this pop-up variable inspector widget.

Simple Live Timing Data Scraper…

A couple of weeks ago, I noticed an F1 live timing site with an easy-to-hit endpoint… here’s the Mac command-line script I used to grab the timing info, once every five seconds or so…

mkdir f1_silverstone
# wait 15 minutes for the start, then append the feed to a new numbered file every 5 seconds or so
i=1; sleep 900; while true ; do curl http://www.livesportstreaming24.com/live.php >> f1_silverstone/f1output_race_${i}.txt ; i=$((i+1)); sleep 5 ; done

Now I just need to think what I’m going to do with the data! Maybe an opportunity to revisit this thing and try out some realtime dashboard widget toys?

PS to get the timestamp of each file in python:

import os
# note: on a Mac, getctime() returns the file's creation time;
# on most Linux filesystems it's the last metadata-change time
os.path.getctime(filename)
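And to pull the capture files back in the order they were created, something like:

import glob
import os

# list the capture files in creation-time order
for fn in sorted(glob.glob("f1_silverstone/f1output_race_*.txt"), key=os.path.getctime):
    print(fn, os.path.getctime(fn))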

Mediated/Augmented Reality (Un)Course Notes, Part I

Pokemon Go seems to have hit the news this week – though I’m sure for anyone off social media last week and back to it next week, the whole thing will have completely passed them by – demonstrating that augmented reality apps really haven’t moved on much at all over the last five years or so.

But notwithstanding that, I’ve been trying to make sense of a whole range of mediated reality technologies for myself as prep for a very short unit on technologies and techniques on that topic.

Here’s what I’ve done to date, over on the Digital Worlds uncourse blog. This stuff isn’t official OU course material, it’s just my own personal learning diary of related stuff (technical term!;-)

More to come over the next couple of weeks or so. If you want to comment, and perhaps influence the direction of my meanderings, please feel free to do that here or on the relevant post.

An evolving feed of the posts is available in chronological order and in reverse chronological order.

Dogfooding… and Creating (Learning) for a Purpose

“Eating your own dogfood”, aka dogfooding, refers to the practice of a company testing its own products by using them internally. At a research day held by Somerset College, a quote in a talk by Lorna Sheppard on Len Deighton’s cookbooks (yes, that Len Deighton…), taken from a 2014 Observer magazine article (Len Deighton’s Observer cookstrips, Michael Caine and the 1960s), caught my attention:

[G]enerally, you stand a better chance of succeeding in something if whatever you create, you also like to consume.

Implicit in this is the idea that you are also creating for a purpose.

In the OU engineering residential school currently running at the University of Bath, one of the four day-long activities the students engage with is a robotics activity using Lego EV3 robots, where at each stage we try to build in a reason for adding another programming construct or learning how to work with a new sensor. That is, we try to motivate the learning by making it purposeful.

The day is structured around a series of challenges that allow students to develop familiarity with programming a Lego EV3 robot, adding sensors to it, logging data from the sensors and then interpreting the data. The activities are contextualised by comparing the work done on the Lego EV3s with the behaviour of a Roomba robot vacuum cleaner – by the end of the morning, students will have programmed their robot to perform the majority of the Roomba’s control functions, including finding its way home to a homing beacon, as well as responding to touch (bumper), colour (line stopper) and proximity (infra-red and ultrasonic) sensors.
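For a flavour of the sort of program involved, here’s a minimal sketch using the ev3dev2 Python bindings (not the environment we actually use on the day): drive forward until the bumper is pressed or the ultrasonic sensor spots a nearby obstacle:

from ev3dev2.motor import MoveTank, OUTPUT_B, OUTPUT_C
from ev3dev2.sensor.lego import TouchSensor, UltrasonicSensor

tank = MoveTank(OUTPUT_B, OUTPUT_C)
bumper = TouchSensor()
sonar = UltrasonicSensor()

# drive forward until the bumper is pressed or a wall is closer than 10cm
tank.on(left_speed=30, right_speed=30)
while not bumper.is_pressed and sonar.distance_centimeters > 10:
    pass
tank.off()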

The day concludes with a challenge, where an autonomous robot must enter – and return from – a closed tunnel network, using sensors to collect data about the internal structure of the tunnel, as well as identifying the location of a casualty who has an infra-red emergency beacon with them.

[Photo: the closed tunnel network used for the final challenge.]

(The lids are placed on the tunnels so the students can’t see inside.)

As well as the partition walls (which are relocated each time the challenge is run, so I’m not giving anything away!), pipework and cables (aka coloured tape) also run through the tunnel and may be mapped by the students using a downward facing light sensor.

[Photo: taped “pipework” and “cables” running through the tunnel.]

The casualty is actually a small wooden artist’s mannequin – the cuddly teddy we used to use does not respond well to the ultrasound sensor the students use to map the tunnel.

[Photo: the wooden mannequin “casualty”.]

The data logged by the students include motor rotation data to track the robot’s progress, ultrasonic sensor data to map the walls, infra-red sensor data to find the emergency beacon and light sensor data to identify the cables/pipework.

The data collected looks something like this:

[Image: logged sensor data from the final challenge.]

The challenge is then to map the (unseen by the students) tunnel network, and tell the robot’s story from the data.
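The processing itself can be quite simple – something like the following sketch, in which the log file, column names and wheel size are all made up for illustration:

import pandas as pd
import matplotlib.pyplot as plt

# hypothetical log file and column names
df = pd.read_csv("tunnel_log.csv")

WHEEL_CIRCUMFERENCE_CM = 17.6  # made-up wheel size

# motor rotation counts give the distance travelled along the tunnel
df["distance_cm"] = df["rotations"] * WHEEL_CIRCUMFERENCE_CM

# wall distance along the route reveals the tunnel's internal structure
plt.plot(df["distance_cm"], df["ultrasonic_cm"])
plt.xlabel("Distance into tunnel (cm)")
plt.ylabel("Distance to wall (cm)")
plt.show()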

The result is a narrative that describes the robot’s progress, and a map showing the internal structure of the tunnel:

[Photo: a student map of the tunnel’s internal structure.]

If time allows, this can then be used as the basis for programming the robot to complete a rescue mission!

The strategies used by the students to log the data, and control the robot to send it into the tunnel and retrieve it safely again, are based on what they learned completing the earlier challenges set throughout the day.

The Internet of Thinking Things – Intelligence at the Edge

Via F1 journalist James Allen’s blog (Insight: Inside McLaren’s Secretive F1 Operations Room, “Mission Control”), I learn that the wheel hub of McLaren’s latest MP4-31 Formula One car hacks its own data. According to McLaren boss, Ron Dennis:

Each wheel hub has its own processing power, we don’t even take data from the sensors that surround the wheel [that measure] brake temperatures, brake wear, tyre pressures, G-Forces – all of this gets processed actually in the wheel hub – it doesn’t even get transmitted to the central ECU, the Electronic Control Unit.

If driver locks a brake or the wheel throws itself out of balance, we’re monitoring the vibration that creates against a model that says, “if the driver continues with this level of vibration the suspension will fail”, or the opposite, “we can cope with this vibration”.

With artificial intelligence and machine learning modeling now available as a commodity service, at least for connected devices, it’ll be interesting to see what the future holds for intelligence at the edge – sensors that don’t just return data (“something moved”, from a security sensor, say) but that return information (“I just saw a male, 6′, blue trousers, green top, leaving room 27 and going to the water cooler; it looked like… etc etc.”).

Of course, if you’re happy with your sensors just applying a model, rather than building one – which appears to be the case for the MP4-31 wheel hub – it seems that you can already do that at the 8-bit level using deep learning, as described by Pete Warden in How to Quantize Neural Networks with TensorFlow.
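The basic idea behind that sort of 8-bit quantization is simple enough to sketch with numpy – map each float onto one of 256 evenly spaced levels between an array’s min and max values:

import numpy as np

def quantize(weights):
    # map each float onto one of 256 levels between min and max
    lo, hi = float(weights.min()), float(weights.max())
    q = np.round((weights - lo) / (hi - lo) * 255).astype(np.uint8)
    return q, lo, hi

def dequantize(q, lo, hi):
    # recover approximate float values from the 8-bit codes
    return lo + (q.astype(np.float32) / 255) * (hi - lo)

w = np.random.randn(5).astype(np.float32)
q, lo, hi = quantize(w)
print(w)
print(dequantize(q, lo, hi))  # close to, but not exactly, the originals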

By the by, if you want to have a quick play with a TensorFlow learner, check out the TensorFlow Neural Network Playground. Or how about training a visual recognition system with IBM’s Visual Recognition Demo?

Browser Developer Tools Tricks

Noticing that Alan just posted a Little Web Inspector / CSS Trick for extracting logos from web pages, here’s one for cleaning up ads from a web page you want to grab a screen shot of.

For example, I often take screenshots of new web pages for adding to “topical timeline” style presentations. As a reference, I often include the page URL from the browser navigation bar and the newspaper banner. But some news sites have ads at the top that you can’t scroll away:

[Screenshot: Guardian article, “Tesla driver dies in first fatal crash while using autopilot mode”, with an ad banner at the top of the page.]

Using a browser’s developer tools, you can “live edit” the page HTML in the browser – first select the element you want:

[Screenshot: selecting the ad element in the browser developer tools.]

then delete it…

[Screenshot: the page with the ad element deleted.]

If that doesn’t do the trick, you can always edit the HTML directly – or modify the CSS:

[Screenshot: editing the page HTML/CSS directly in the developer tools.]

With a bit of tinkering, you can get a version of the page that you can get a clean screenshot of…

[Screenshot: the cleaned-up page, ready for a screen grab.]
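If you find yourself doing this a lot, the same trick can be scripted. Here’s a sketch using Python and Selenium, where the URL and the CSS selector for the ad slot are placeholders you’d have to find with the element inspector:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.example.com/some-news-story")

# delete the offending element before grabbing the screenshot
# (the selector here is a made-up placeholder)
driver.execute_script("document.querySelector('.top-banner-ad').remove();")

driver.save_screenshot("clean_page.png")
driver.quit()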

 

By editing the page HTML, you can also add your own local graffiti to web pages to amuse yourself and pass the time…!;-)

For example, here’s me adding a Brython console to a copy of the OU home page in my browser…

[Screenshot: a Brython console embedded in a local copy of the OU home page.]

This is purely a local copy, but functional nonetheless. And a great way of demonstrating to folk how you’d like a live web page to actually be, rather than how it currently is!;-)