A few weeks ago, the UK Department for Transport (DfT) published a summary report and action plan entitled The Pathway to Driverless Cars, setting out the UK's response to car manufacturers and tech companies pushing to develop, test and produce autonomous vehicles on public roads. This was followed by the March 2015 Budget, in which the Chancellor announced that “we are going to back our brilliant automotive industry by investing £100 million to stay ahead in the race to driverless technology” (Hansard), and by a new code of practice around the testing of driverless vehicles due to appear sometime this Spring (2015).
One of the claims widely made in favour of autonomous vehicles/driverless cars by the automotive lobby is that in testing they have a better safety record than human drivers. (Human error plays a role in many accidents.) Whilst the story of a human driver crashing Google’s self-driving car is regularly wheeled out to illustrate how Google’s cars are much safer than humans, we don’t tend to see so many stories about the occasions when the human test driver had to take control of the vehicle, whether to avoid an accident or because road and/or traffic conditions were “inappropriate” for “driverless” operation.
Nor do we hear much about the technology firms making road transport safer by doing something to mitigate the role that their technology plays in causing accidents.
As Nick Carr writes in his history of automation, The Glass Cage:
It’s worth noting Silicon Valley’s concern with highway safety, though no doubt sincere, has been selective. The distractions caused by cellphones and smartphones have in recent years become a major factor in car crashes. An analysis by the National Safety Council [in the US] implicated phone use in one-fourth of all accidents on US roads in 2012. Yet Google and other top tech firms have made little or no effort to develop software to prevent people from calling, texting or using apps while driving — surely a modest undertaking compared with building a car that can drive itself.
To see what proportion of road traffic incidents involved distractions caused by mobile phones, I thought I’d check the STATS19 dataset. This openly published dataset records details of UK road accidents involving casualties reported to the police. The form used to capture the data includes information relating to up to six contributory factors, including “Driver using mobile phone”.
Unfortunately, the data collected on this part of the form is deemed RESTRICTED rather than UNCLASSIFIED (the latter classification applying to the elements released in the STATS19 dataset), which means we can’t do any stats around this from the raw data. (I suspect the data is not released because it could be used to help de-anonymise incident data, for example by triangulating information contained in the dataset with details gleaned from local news reports about an incident.)
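By way of a quick sanity check, here's a minimal sketch of what you can do with the open release: load it with pandas and look for any contributory factor fields. The filename below is just a placeholder for wherever the published accidents extract gets saved; the point is that no such fields appear.

```python
import pandas as pd

# Placeholder filename for the published STATS19 accidents extract
# (downloaded from data.gov.uk and saved locally).
accidents = pd.read_csv("Accidents_2013.csv", low_memory=False)

# List the columns we actually get in the open release...
print(accidents.columns.tolist())

# ...and check whether anything resembling a contributory factor
# (e.g. "Driver using mobile phone") made it into the published fields.
factor_cols = [c for c in accidents.columns
               if "contributory" in c.lower() or "mobile" in c.lower()]
print(factor_cols)  # expected: [] - the contributory factor fields are not released
```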
The closest it seems we can get are the DfT’s annual reported road casualties reports (eg 2013) and an old DfT mobile phone usage survey.
The release of the 2013 annual report is supported by a set of statistical tables that break accidents down in all sorts of ways, including two tables (ras50012.xls and ras50016.xls) that summarise, on a local authority basis, counts of reported incidents in which a mobile phone was recorded as a contributory factor. In England in 2013, for example, there were 384 such incidents. (It is not clear how many of the 2,669 incidents that included a “distraction in vehicle” might also have related to distractions caused by mobile phones in particular, nor what the severity or impact of incidents with mobile phones recorded as a contributory factor actually was.)
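For anyone wanting to poke at those tables programmatically, something along the lines of the following sketch might work. The header row offset and column labels are guesses on my part; the published spreadsheets carry presentation headers and footnotes, so they really need inspecting by eye first.

```python
import pandas as pd

# Rough sketch of pulling counts out of one of the DfT tables. The
# skiprows value and column labels below are guesses and need checking
# against the actual spreadsheet layout. (Reading .xls files also
# requires the xlrd package to be installed.)
df = pd.read_excel("ras50016.xls", sheet_name=0, skiprows=6)

# Look for the column recording incidents where "Driver using mobile
# phone" was a contributory factor, then total it across authorities.
mobile_cols = [c for c in df.columns if "mobile" in str(c).lower()]
if mobile_cols:
    total = pd.to_numeric(df[mobile_cols[0]], errors="coerce").sum()
    print(total)  # for England in 2013, this should be in the region of the 384 quoted above
```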
In terms of autonomous vehicle safety, and how the lobbying groups pitch their case, it would also be interesting to know how autonomous vehicles are likely to cope in the context of other contributory factors, such as “vision affected by external factors” (10,272 (11%) in England in 2013), “pedestrian factors only” (11,877 (12%)), “vehicle defects” (1,757 (2%)), or “road environment contributed” (12,436 (13%)). For cases where there was an “impairment or distraction” in general (12,162 (13%)), it would be interesting to know what would be likely to happen in an autonomous vehicle if the vehicle tried to hand control back to a supervising human driver. (Note that percentages across contributory factors do not sum to 100%: a single incident may have several contributory factors recorded against it.)
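As a back-of-the-envelope illustration of how those counts map onto the quoted percentages: the total below is a placeholder, and the real denominator (the total number of reported accidents in England in 2013) comes from the DfT tables.

```python
# TOTAL_ACCIDENTS is a placeholder; substitute the published figure for
# reported accidents in England in 2013 from the DfT statistical tables.
TOTAL_ACCIDENTS = 93_000

# Contributory factor group counts quoted above.
factor_counts = {
    "vision affected by external factors": 10_272,
    "pedestrian factors only": 11_877,
    "vehicle defects": 1_757,
    "road environment contributed": 12_436,
    "impairment or distraction": 12_162,
}

for factor, count in factor_counts.items():
    print(f"{factor}: {count} ({100 * count / TOTAL_ACCIDENTS:.0f}%)")

# The percentages deliberately don't sum to 100%: a single incident can
# have several contributory factors recorded against it.
```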
As technology continues to offer ever more “solutions” to claimed problems, I’m really mindful that we need to start being more critical of it, and of the claims made in pushing particular solutions. In particular, three things concern me: 1) if we look at the causes of the problems that technology claims to fix, we may find that technology is contributing to the problem (and the answer is not to apply more technology to treat problems caused by other technology); 2) we don’t tend to look at the systemic consequences of applying a particular technology; 3) we don’t tend to recognise how adopting a particular technology can lock us in to a particular set of (inflexible) technology mediated practices, nor how we change our own behaviour to suit that technological solution.
On balance, I’m probably negative on the whole tech thing, even though I guess I work within it…