Category: Infoskills

Using Google to Look Up Where You Live via the Physical Location of Your Wifi Router

During a course team meeting today, I idly mentioned that we should be able to run a simple browser-based activity involving the geolocation of a student’s computer, based on Google knowing the location of their wifi router. I was challenged about the possibility of this, so I did a quick bit of searching to see if there was an easy way of looking up the MAC addresses (BSSIDs) of wifi access points that were in range but not connected to:

[Screenshot: Google search for “show wifi access point mac address chrome os x”]

which turned up:

The airport command with '-s' or '-I' options is useful: /System/Library/PrivateFrameworks/Apple80211.framework/Resources/airport

[Screenshot: airport command scan output]

(On Windows, the equivalent appears to be something like netsh wlan show networks mode=bssid.)
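
For a quick sanity check, here’s a minimal sketch (OS X only, and not something from the original search results) that shells out to the airport command and pulls anything that looks like a MAC address out of the scan listing:

#Scan for nearby access points on OS X and list their BSSIDs
import re
import subprocess

AIRPORT='/System/Library/PrivateFrameworks/Apple80211.framework/Resources/airport'

def scan_bssids():
    out=subprocess.check_output([AIRPORT, '-s']).decode('utf8')
    #Grab anything that looks like a MAC address from the scan listing
    return re.findall(r'(?:[0-9a-f]{1,2}:){5}[0-9a-f]{1,2}', out, re.I)

print(scan_bssids())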

The second part of the jigsaw was to find a way of looking up a location from a wifi access point MAC address – it seems that the Google geolocation API does that out of the box:

[Screenshot: The Google Maps Geolocation API documentation, Google Developers]

An example of how to make a call is also provided, as long as you have an API key… So I got a key and gave it a go:

[Screenshot: an example geolocation API call and the location returned]

:-)

Looking at the structure of the example Google calls, you can enter several wifi MAC addresses, along with signal strength, and the API will presumably triangulate based on that information to give a more precise location.
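
By way of illustration, here’s a minimal sketch of that kind of lookup using the requests library; the MAC addresses and API key are just placeholders:

import requests

API_KEY='YOUR_API_KEY'
url='https://www.googleapis.com/geolocation/v1/geolocate?key={}'.format(API_KEY)

#One or more access points, optionally with signal strengths
payload={'wifiAccessPoints': [
    {'macAddress': '00:11:22:33:44:55', 'signalStrength': -65},
    {'macAddress': '66:77:88:99:aa:bb', 'signalStrength': -72}
]}

r=requests.post(url, json=payload)
print(r.json())
#Response is of the form {'location': {'lat': ..., 'lng': ...}, 'accuracy': ...}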

The geolocation API also finds locations from cell tower IDs.

So back to the idea of a simple student activity to sniff out the MAC addresses of wifi routers their computer can see from the workplace or home, and then look up the location using the Google geolocation API and pop it on a map.

Which is actually the sort of thing your browser will do when you turn on geolocation services:

[Screenshot: Geolocation in Firefox – Mozilla]

But maybe when you run the commands yourself, it feels a little bit more creepy?

PS Sort of very loosely related, in terms of trying to map spaces from signals in the surrounding aether: a technique for trying to map the inside of a room based on its audio signature in response to a click of the fingers: http://www.r-bloggers.com/intro-to-sound-analysis-with-r/

Grabbing Screenshots of folium Produced Choropleth Leaflet Maps from Python Code Using Selenium

I had a quick play with the latest updates to the folium python package today, generating a few choropleth maps around some of today’s Gov.UK data releases.

The problem I had was that folium generates an interactive Leaflet map as an HTML5 document (eg something like an interactive Google map), but I wanted a static image of it – a png file. So here’s a quick recipe showing how I did that, using a python function to automatically capture a screengrab of the map…

First up, a simple snippet to get a rough centre point for the extent of the boundaries in a geoJSON boundary file containing the LSOA boundaries for the Isle of Wight:

#GeoJSON from https://github.com/martinjc/UK-GeoJson
import fiona

#Path to the boundary file - the same file is used for the choropleth below
GEO_JSON='../IWgeodata/lsoa_by_lad/E06000046.json'

fi=fiona.open(GEO_JSON)
bounds=fi.bounds

#Rough centre point from the bounding box (minx, miny, maxx, maxy)
centre_lon, centre_lat=((bounds[0]+bounds[2])/2,(bounds[1]+bounds[3])/2)

Now we can get some data – I’m going to use the average travel time to a GP from today’s Journey times to key services by lower super output area data release and limit it to the Isle of Wight data.

import pandas as pd

gp='https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/485260/jts0505.xls'
xl=pd.ExcelFile(gp)
#xl.sheet_names

tname='JTS0505'
dbd=xl.parse(tname,skiprows=6)

iw=dbd[dbd['LA_Code']=='E06000046']

The next thing to do is generate the map – folium makes this quite easy to do: all I need to do is point to the geoJSON file (geo_path), declare where to find the labels I’m using to identify each shape in that file (key_on), include my pandas dataframe (data), and state which columns include the shape/area identifiers and the values I want to visualise (columns=[ID_COL, VAL_COL]).

import folium
m = folium.Map([centre_lat,centre_lon], zoom_start=11)

m.choropleth(
    geo_path='../IWgeodata/lsoa_by_lad/E06000046.json',
    data=iw,
    columns=['LSOA_code', 'GPPTt'],
    key_on='feature.properties.LSOA11CD',
    fill_color='PuBuGn', fill_opacity=1.0
    )
m

The map object is included in the variable m. If I save the map file, I can then use the selenium testing package to open a browser window that displays the map, generate a screen grab of it and save the image, and then close the browser. Note that I found I had to add in a slight delay because the map tiles occasionally took some time to load.

#!pip install selenium

import os
import time
from selenium import webdriver

delay=5

#Save the map as an HTML file
fn='testmap.html'
tmpurl='file://{path}/{mapfile}'.format(path=os.getcwd(),mapfile=fn)
m.save(fn)

#Open a browser window...
browser = webdriver.Firefox()
#..that displays the map...
browser.get(tmpurl)
#Give the map tiles some time to load
time.sleep(delay)
#Grab the screenshot
browser.save_screenshot('map.png')
#Close the browser
browser.quit()
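
To make that easier to reuse, for example as part of an automated reporting script, the steps above can be wrapped up into a single helper function. Here’s a minimal sketch (the function name and arguments are my own, not part of folium or selenium):

import os
import time
from selenium import webdriver

def folium_map_png(m, png='map.png', html='testmap.html', delay=5):
    '''Save a folium map as HTML, render it in Firefox via selenium,
       grab a screenshot of it to a png file, then close the browser.'''
    m.save(html)
    tmpurl='file://{path}/{mapfile}'.format(path=os.getcwd(), mapfile=html)
    browser = webdriver.Firefox()
    browser.get(tmpurl)
    #Give the map tiles some time to load
    time.sleep(delay)
    browser.save_screenshot(png)
    browser.quit()
    return png

#Usage: folium_map_png(m)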

Here’s the image of the map that was captured:

[Image: the captured map.png screenshot of the choropleth map]

I can now upload the image to WordPress and include it in an automatically produced blog post:-)

PS before I hit on the Selenium route, I dabbled with a less useful, but perhaps still handy library for taking screenshots: pyscreenshot.

#!pip install pyscreenshot
import pyscreenshot as ImageGrab

im=ImageGrab.grab(bbox=(157,200,1154,800)) # X1,Y1,X2,Y2
#To grab the whole screen, omit the bbox parameter

#im.show()
im.save('screenAreGrab.png',format='png')

The downside was I had to find the co-ordinates of the area of the screen I wanted to grab by hand, which I couldn’t find a way of automating… Still, could be handy…

Finding Common Phrases or Sentences Across Different Documents

As mentioned in the previous post, I picked up on a nice little challenge from my colleague Ray Corrigan a couple of days ago to find common sentences across different documents.

My first, rather naive, thought was to segment each of the docs into sentences and then compare sentences using a variety of fuzzy matching techniques, retaining the ones that sort-of matched. That approach was a bit ropey (I’ll describe it in another post), but whilst pondering it over a dog walk a much neater idea suggested itself – compare n-grams of various lengths across the two documents. At its heart, all we need to do is find the intersection of the n-grams that occur in each document.

So here’s a recipe to do that…

First, we need to get documents into a text form. I started off with PDF docs, but it was easy enough to extract the text using textract.

!pip install textract

import textract
txt = textract.process('ukpga_19840012_en.pdf')

The next step is to compare docs for a particular size n-gram – the following bit of code finds the common ngrams of a particular size and returns them as a list:

import nltk
from nltk.util import ngrams as nltk_ngrams

def common_ngram_txt(tokens1,tokens2,size=15):
    print('Checking ngram length {}'.format(size))
    ng1=set(nltk_ngrams(tokens1, size))
    ng2=set(nltk_ngrams(tokens2, size))

    match=set.intersection(ng1,ng2)
    print('..found {}'.format(len(match)))

    return match
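
For example, running it over a couple of made-up one-liner “documents”:

tokens1=nltk.word_tokenize('the quick brown fox')
tokens2=nltk.word_tokenize('the quick brown fox and the quick brown dog')

common_ngram_txt(tokens1, tokens2, size=3)
#Returns something like: {('the', 'quick', 'brown'), ('quick', 'brown', 'fox')}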

I want to be able to find common ngrams of various lengths, so I started to put together the first fumblings of an n-gram sweeper.

The core idea was really simple – starting with the largest common n-gram, detect increasingly smaller n-grams; then do a concordance report on each of the common ngrams to show how that ngram appeared in the context of each document. (See n-gram / Multi-Word / Phrase Based Concordances in NLTK.)

Rather than generate lots of redundant reports – if I detected the common 3gram “quick brown fox”, I would also find the common ngrams “quick brown” and “brown fox” – I started off with the following heuristic: if a common n-gram is part of a longer common n-gram, ignore it. But this immediately turned up a problem. Consider the following case:

Document 1: the quick brown fox
Document 2: the quick brown fox and the quick brown cat and the quick brown dog

Here, there is a common 4-tuple: the quick brown fox. There is also a common 3-tuple: the quick brown, which a concordance plot would reveal as being found in the context of a cat and a dog as well as a fox. What I really need to do is keep the locations in the second document where a shorter common n-gram appears outside the context of a longer common n-gram, but drop the locations where it is subsumed within an already found longer n-gram.

Indexing on token number within the second doc, I need to return something like this:

([('the', 'quick', 'brown', 'fox'),
  ('the', 'quick', 'brown'),
  ('the', 'quick', 'brown')],
 [[0, 3], [10, 12], [5, 7]])
which shows up the shorter common n-grams only in the places where they are not part of the longer common n-gram.

In the following, n_concordance_offset() finds the location of a phrase token list within a document token list. The ngram_sweep_txt() function scans down a range of n-gram lengths, starting with the longest, trying to identify locations that are not contained within an already discovered longer n-gram.

def n_concordance_offset(text,phraseList):
    c = nltk.ConcordanceIndex(text.tokens, key = lambda s: s.lower())
    
    #Find the offset for each token in the phrase
    offsets=[c.offsets(x) for x in phraseList]
    offsets_norm=[]
    #For each token in the phraselist, find the offsets and rebase them to the start of the phrase
    for i in range(len(phraseList)):
        offsets_norm.append([x-i for x in offsets[i]])
    #We have found the offset of a phrase if the rebased values intersect
    #via http://stackoverflow.com/a/3852792/454773
    intersects=set(offsets_norm[0]).intersection(*offsets_norm[1:])
    
    return intersects
    
def ngram_sweep_txt(txt1,txt2,ngram_min=8,ngram_max=50):    
    tokens1 = nltk.word_tokenize(txt1)
    tokens2 = nltk.word_tokenize(txt2)

    text1 = nltk.Text( tokens1 )
    text2 = nltk.Text( tokens2 )

    ngrams=[]
    strings=[]
    ranges=[]
    for i in range(ngram_max,ngram_min-1,-1):
        #Find long ngrams first
        newsweep=common_ngram_txt(tokens1,tokens2,size=i)
        for m in newsweep:
            localoffsets=n_concordance_offset(text2,m)

            #We need to avoid the problem of masking shorter ngrams by already found longer ones
            #eg if there is a common 3gram in a doc2 4gram, but the 4gram is not in doc1
            #so we need to see if the current ngram is contained within the doc index of longer ones already found
            
            for o in localoffsets:
                fnd=False
                for r in ranges:
                    if o>=r[0] and o<=r[1]:
                        fnd=True
                if not fnd:
                    ranges.append([o,o+i-1])
                    ngrams.append(m)
    return ngrams,ranges,txt1,txt2

def ngram_sweep(fn1,fn2,ngram_min=8,ngram_max=50):
    txt1 = textract.process(fn1).decode('utf8')
    txt2 = textract.process(fn2).decode('utf8')
    ngrams,ranges,txt1,txt2=ngram_sweep_txt(txt1,txt2,ngram_min=ngram_min,ngram_max=ngram_max)
    return ngrams,ranges,txt1,txt2
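
As a quick sanity check, running the sweep over the toy example from earlier gives the sort of thing I was after (the output is shown as comments; the ordering may vary):

doc1='the quick brown fox'
doc2='the quick brown fox and the quick brown cat and the quick brown dog'

ngrams, ranges, _, _ = ngram_sweep_txt(doc1, doc2, ngram_min=3, ngram_max=4)
#ngrams: [('the','quick','brown','fox'), ('the','quick','brown'), ('the','quick','brown')]
#ranges: [[0, 3], [5, 7], [10, 12]]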

What I really need to do is automatically detect the largest n-gram and work back from there, perhaps using a binary search starting with an n-gram the size of the number of tokens in the shortest doc… But that’s for another day…
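
As a first stab at that, a binary search over the n-gram length using the common_ngram_txt() helper might look something like the following sketch (not part of the original recipe); it relies on the fact that if a common n-gram of length N exists, common n-grams of every shorter length must also exist:

def longest_common_ngram_size(tokens1, tokens2):
    #Binary search for the length of the longest common ngram
    lo, hi = 1, min(len(tokens1), len(tokens2))
    best=0
    while lo <= hi:
        mid=(lo+hi)//2
        if common_ngram_txt(tokens1, tokens2, size=mid):
            #A common ngram of this length exists, so look for longer ones
            best=mid
            lo=mid+1
        else:
            hi=mid-1
    return best

The result could then be used to seed ngram_max in ngram_sweep_txt().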

Having discovered common phrases, we need to report them. The following n_concordance() function (based on the recipe linked in the code comment below) does just that; the concordance_reporter() function manages the outputs.

import textract

def n_concordance(txt,phrase,left_margin=5,right_margin=5):
    #via https://simplypython.wordpress.com/2014/03/14/saving-output-of-nltk-text-concordance/
    tokens = nltk.word_tokenize(txt)
    text = nltk.Text(tokens)

    phraseList=nltk.word_tokenize(phrase)

    intersects= n_concordance_offset(text,phraseList)
    
    concordance_txt = ([text.tokens[max(offset-left_margin,0):offset+len(phraseList)+right_margin]
                        for offset in intersects])
                         
    outputs=[''.join([x+' ' for x in con_sub]) for con_sub in concordance_txt]
    return outputs

def concordance_reporter(fn1='Draft_Equipment_Interference_Code_of_Practice.pdf',
                         fn2='ukpga_19940013_en.pdf',fo='test.txt',ngram_min=10,ngram_max=15,
                         left_margin=5,right_margin=5,n=5):
    
    fo=fn2.replace('.pdf','_ngram_rep{}.txt'.format(n))
    
    f=open(fo, 'w+')
    f.close()
     
    print('Handling {}'.format(fo))
    ngrams,strings, txt1,txt2=ngram_sweep(fn1,fn2,ngram_min,ngram_max)
    #Remove any redundancy in the ngrams...
    ngrams=set(ngrams)
    with open(fo, 'a') as outfile:
        outfile.write('REPORT FOR ({} and {})\n\n'.format(fn1,fn2))
        print('found {} ngrams in that range...'.format(len(ngrams)))
        for m in ngrams:
            mt=' '.join(m)
            outfile.write('\n\-------\n{}\n\n'.format(mt.encode('utf8')))
            for c in n_concordance(txt1,mt,left_margin,right_margin):
                outfile.write('<<<<<{}\n\n'.format(c.encode('utf8')))
            for c in n_concordance(txt2,mt,left_margin,right_margin):
                outfile.write('>>>>>{}\n\n'.format(c.encode('utf8')))
    return

Finally, a simple loop makes it easy to compare a document of interest with several other documents:

for f in ['Draft_Investigatory_Powers_Bill.pdf','ukpga_19840012_en.pdf',
          'ukpga_19940013_en.pdf','ukpga_19970050_en.pdf','ukpga_20000023_en.pdf']:
    concordance_reporter(fn2=f,ngram_min=10,ngram_max=40,left_margin=15,right_margin=15)

Here’s an example of the sort of report it produces:

REPORT FOR (Draft_Equipment_Interference_Code_of_Practice.pdf and ukpga_19970050_en.pdf)


\-------
concerning an individual ( whether living or dead ) who can be identified from it

>>>>>personal information is information held in confidence concerning an individual ( whether living or dead ) who can be identified from it , and the material in question relates 

<<<<<section `` personal information '' means information concerning an individual ( whether living or dead ) who can be identified from it and relating—- ( a ) to his 


\-------
satisfied that the action authorised by it is no longer necessary .

>>>>>must cancel a warrant if he is satisfied that the action authorised by it is no longer necessary . 4.13 The person who made the application 

<<<<<an authorisation given in his absence if satisfied that the action authorised by it is no longer necessary . ( 6 ) If the authorising officer 

<<<<<cancel an authorisation given by him if satisfied that the action authorised by it is no longer necessary . ( 5 ) An authorising officer shall 


\-------
involves the use of violence , results in substantial financial gain or is conduct by a large number of persons in pursuit of a common purpose

>>>>>one or more offences and : It involves the use of violence , results in substantial financial gain or is conduct by a large number of persons in pursuit of a common purpose ; or a person aged twenty-one or 

<<<<<if , — ( a ) it involves the use of violence , results in substantial financial gain or is conduct by a large number of persons in pursuit of a common purpose , or ( b ) the offence 


\-------
to an express or implied undertaking to hold it in confidence

>>>>>in confidence if it is held subject to an express or implied undertaking to hold it in confidence or is subject to a restriction on 

>>>>>in confidence if it is held subject to an express or implied undertaking to hold it in confidence or it is subject to a restriction 

<<<<<he holds it subject— ( a ) to an express or implied undertaking to hold it in confidence , or ( b ) to a 


\-------
no previous convictions could reasonably be expected to be sentenced to

>>>>>a person aged twenty-one or over with no previous convictions could reasonably be expected to be sentenced to three years’ imprisonment or more . 4.5 

<<<<<attained the age of twenty-one and has no previous convictions could reasonably be expected to be sentenced to imprisonment for a term of three years 


\-------
considers it necessary for the authorisation to continue to have effect for the purpose for which it was

>>>>>to have effect the Secretary of State considers it necessary for the authorisation to continue to have effect for the purpose for which it was given , the Secretary of State may 

<<<<<in whose absence it was given , considers it necessary for the authorisation to continue to have effect for the purpose for which it was issued , he may , in writing 

The first line in a block is the common phrase, the >>> elements show how the phrase appears in the first doc, and the <<< elements show how it appears in the second doc. The widths of the right and left margins of the contextual/concordance report are parameterised and can easily be increased.

This seems such a basic problem – finding common phrases in different documents – that I’d have expected there to be a standard solution to it, but in the quick search I tried, I couldn’t find one. It was quite a fun puzzle to play with though, and offers lots of scope for improvement (I suspect it’s a bit ropey when it comes to punctuation, for example). But it’s a start…:-)

There’s lots that could be done on the UI front, too. For example, it’d be nice to be able to link documents, so you can click through from the first to show where the phrase came from in the second. But to do that requires annotating the original text, which in turn means being able to accurately identify where in a doc a token sequence appears. But building UIs is hard and time consuming… it’d be so much easier if folk could learn to use a code line UI!;-)

If you know of any “standard” solutions or packages for dealing with this sort of problem, please let me know via the comments:-)

PS The code could also do with some optimisation – eg if we know we’re repeatedly comparing against a base doc, it’s foolish to keep opening and tokenising the base doc…
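
For example, a simple cache around the text extraction step would mean the base doc only gets extracted from the PDF once; here’s a minimal sketch using functools.lru_cache (Python 3), which isn’t in the code above:

from functools import lru_cache

@lru_cache(maxsize=None)
def get_doc_text(fn):
    #Cache the extracted text so repeated comparisons against the same
    #base document don't keep re-processing the PDF
    return textract.process(fn).decode('utf8')

The tokenisation step could be memoised in a similar way.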

Slackbot Data Wire, Initial Sketch

Via a round-up post from Matt Jukes/@jukesie (Interesting elsewhere: Bring on the Bots), I was prompted to look again at Slack. OnTheWight’s Simon Perry originally tried to hook me in to Slack, but I didn’t need another place to go to check messages. Simon had also mentioned, in passing, how it would be nice to be able to get data alerts into Slack, but I’d not really followed it through, until the weekend, when I read again @jukesie’s comment that “what I love most about it [Slack] is the way it makes building simple, but useful (or at least funny), bots a breeze.”

After a couple of aborted attempts, I found a couple of python libraries to wrap the Slack API: pyslack and python-rtmbot (the latter also requires python-slackclient).

Using pyslack to send a message to Slack was pretty much a one-liner:

#Create API token at https://api.slack.com/web
token='xoxp-????????'

#!pip install pyslack
import slack
import slack.chat
slack.api_token = token
slack.chat.post_message('#general', 'Hello world', username='testbot')

[Screenshot: the testbot message posted to the #general Slack channel]

I was quite keen to see how easy it would be to reuse one or more of my data2text sketches as the basis for an autoresponder that could accept a local data request from a Slack user and provide a localised data response using data from a national dataset.

I opted for a JSA (Jobseekers Allowance) textualiser (as used by OnTheWight and reported here: Isle of Wight innovates in a new area of Journalism and also in this journalism.co.uk piece: How On The Wight is experimenting with automation in news) that I seem to have bundled up into a small module, which would let me request JSA figures for a council based on a council identifier. My JSA textualiser module has a couple of demos hardwired into it (one for the Isle of Wight, one for the UK) so I could easily call on those.

To put together an autoresponder, I used the python-rtmbot, putting the botcode folder into a plugins folder in the python-rtmbot code directory.

The code for the bot is simple enough:

from nomis import *
import nomis_textualiser as nt
import pandas as pd

nomis=NOMIS_CONFIG()

import time
crontable = []
outputs = []

def process_message(data):

	text = data["text"]
	if text.startswith("JSA report"):
		if 'IW' in text: outputs.append([data['channel'], nt.otw_rep1(nt.iwCode)])
		elif 'UK' in text: outputs.append([data['channel'], nt.otw_rep1(nt.ukCode)])
	if text.startswith("JSA rate"):
		if 'IW' in text: outputs.append([data['channel'], nt.rateGetter(nt.iwCode)])
		elif 'UK' in text: outputs.append([data['channel'], nt.rateGetter(nt.ukCode)])

[Screenshot: the Slack bot responding to JSA report and JSA rate requests]

Rooting around, I also found a demo I’d put together for automatically looking up a council code from a Johnston Press newspaper title using a lookup table I’d put together at some point (I don’t remember how!).

Which meant that by using just a tiny dab of glue I could extend the bot further to include a lookup of JSA figures for a particular council based on the local rag JP covering that council. And the glue is this, added to the process_message() function definition:

	def getCodeForTitle(title):
		code=jj_titles[jj_titles['name']==title]['code_admin_district'].iloc[0]
		return code

	if text.startswith("JSA JP"):
		jj_titles=pd.read_csv("titles.csv")
		title=text.split('JSA JP')[1].strip()
		code=getCodeForTitle(title)

		outputs.append([data['channel'], nt.otw_rep1(code)])
		outputs.append([data['channel'], nt.rateGetter(code)])

[Screenshot: the Slack bot responding to a JSA JP newspaper-title request]

This is quite an attractive route, I think, for national newspaper groups: anyone in the group can create a bot to generate press release style copy at a local level from a national dataset, and then make it available to reporters from other titles in the group – who can simply key in a request by newspaper title.

But it could work equally well for a community network of hyperlocals, or councils – organisations that are locally based and individually do the same work over and over again on national datasets.

The general flow is something a bit like this:

[Slide: the general flow of the community journalism data wire]

which has a couple of very obvious pain points:

[Slide: the data wire flow, with the main pain points highlighted]

Firstly, finding the local data from the national data, cleaning the data, etc etc. Secondly, making some sort of sense of the data, and then doing some proper journalistic work writing a story on the interesting bits, putting them into context and explaining them, rather than just relaying the figures.

What the automation route does is to remove some of the pain, and allow the journalist to work up the story from the facts, presented informatively.

[Slide: the data wire flow, with the data handling steps automated]

This is a model I’m currently trying to work up with OnTheWight and one I’ll be talking about briefly at the What next for community journalism? event in Cardiff on Wednesday [slides].

PS Hmm.. this just in, The Future of the BBC 2015 [PDF] [announcement].

Local Accountability Reporting Service

Under this proposal, the BBC would allocate licence fee funding to invest in a service that reports on councils, courts and public services in towns and cities across the UK. The aim is to put in place a network of 100 public service reporters across the country.

Reporting would be available to the BBC but also, critically, to all reputable news organisations. In addition, while it would have to be impartial and would be run by the BBC, any news organisation — news agency, independent news provider, local paper as well as the BBC itself—could compete to win the contract to provide the reporting team for each area.

A shared data journalism centre

Recent years have seen an explosion in data journalism. New stories are being found daily in government data, corporate data, data obtained under the Freedom of Information Act and increasing volumes of aggregated personalised data. This data offers new means of sourcing stories and of holding public services, politicians and powerful organisations to account.

We propose to create a new hub for data journalism, which serves both the BBC and makes available data analysis for news organisations across the UK. It will look to partner a university in the UK, as the BBC seeks to build a world-class data journalism facility that informs local, national and global news coverage.

A News Bank to syndicate content

The BBC will make available its regional video and local audio pieces for immediate use on the internet services of local and regional news organisations across the UK.

Video can be time-consuming and resource-intensive to produce. The News Bank would make available all pieces of BBC video content produced by the BBC’s regional and local news teams to other media providers. Subject to rights and further discussion with the industry we would also look to share longer versions of content not broadcast, such as sports interviews and press conferences.

Content would be easily searchable by other news organisations, making relevant material available to be downloaded or delivered by the outlets themselves, or for them to simply embed within their own websites. Sharing of content would ensure licence fee payers get maximum value from their investment in local journalism, but it would also provide additional content to allow news organisations to strengthen their offer to audiences without additional costs. We would also continue to enhance linking out from BBC Online, building on the work of Local Live.

Hmm… Share content – or share “pre-content”. Use BBC expertise to open up the data to more palatable forms, forms that the BBC’s own journalists can work with, but also share those intermediate forms with the regionals, locals and hyperlocals?

Data Literacy – Do We Need Data Scientists, Or Data Technicians?

One of the many things I vaguely remember studying from my school maths days are the various geometric transformations – rotations, translations and reflections – as applied particularly to 2D shapes. To a certain extent, knowledge of these operations helps me use the limited Insert Shape options in Powerpoint, as I pick shapes and arrows from the limited palette available and then rotate and reflect them to get the orientation I require.

But of more pressing concern to me on a daily basis is the need to engage in data transformations, whether summary statistic transformations (finding the median or mean values within several groups of the same dataset, for example, or calculating percentage differences away from within-group means for the members of multiple groups) or shape transformations (reshaping a dataset from a wide to a long format, for example, melting a subset of columns or recasting a molten dataset into a wider format). (If that means nothing to you, I’m not surprised. But if you’ve ever worked with a dataset and copied and pasted data from multiple columns into multiple rows to get it to look right/into the shape you want, you’ve suffered by not knowing how to reshape your dataset!)

Even though I tinker with data most days, I tend to avoid all but the simplest statistics. I know enough to know I don’t understand most statistical arcana, but I suspect there are folk who do know how to do that stuff properly. But what I do know from my own tinkering is that before I can run even the simplest stats, I often have to do a lot of work getting original datasets into a state where I can actually start to work with them.

The same stumbling blocks presumably present themselves to the data scientists and statisticians who not only know how to drive arcane statistical tests but also understand how to interpret and caveat them. Which is where tools like Open Refine come in…

Further down the pipeline are the policy makers and decision makers who use data to inform their policies and decisions. I don’t see why these people should need to be able to write a regexp, clean a dirty dataset, denormalise a table, write a SQL query, run a weird form of multivariate analysis, or reshape a dataset and then create a novel data visualisation from it based on a good understanding of the principles of The Grammar of Graphics; but I do think they should be able to pick up on the stories contained within the data and critique the way it is presented, as well as how the data was sourced and the operations applied to it during analysis, in addition to knowing how to sensibly make use of the data as part of the decision making or policy making process.

A recent Nesta report (July 2015) on Analytic Britain: Securing the right skills for the data-driven economy [PDF] gave a shiny “analytics this, analytics that” hype view of something or other (I got distracted by the analytics-everything overtone), and was thankfully complemented by a more interesting report from Universities UK (July 2015) on Making the most of data: Data skills training in English universities [PDF].

In its opening summary, the UUK report found that “[t]he data skills shortage is not simply characterised by a lack of recruits with the right technical skills, but rather by a lack of recruits with the right combination of skills”, and also claimed that “[m]any undergraduate degree programmes teach the basic technical skills needed to understand and analyse data”. Undergrads may learn basic stats, but I wonder how many of them are comfortable with the hand tools of data wrangling that you need to be familiar with if you ever want to turn real data into something you can actually work with? That said, the report does give a useful review of data skills developed across a range of university subject areas.

(Both reports championed the OU-led urban data school, though I have to admit I can’t find any resources associated with that project? Perhaps the OU’s Smart Cities MOOC on FutureLearn is related to it? As far as I know, OUr Learn to Code for Data Analysis MOOC isn’t?)

From my perspective, I think it’d be a start if folk learned:

  • how to read simple charts;
  • how to identify meaningful stories in charts;
  • how to use data stories to inform decision making.

I also worry about the day-to-day practicalities of working with data in a hands-on fashion, and the roles associated with the various data related tasks that fall along any portrayal of the data pipeline. For example, off the top of my head I think we can distinguish between things like:

  • data technician roles – for example, reshaping and cleaning datasets;
  • data engineering roles – managing storage, building and indexing databases, for example;
  • data analyst/science and data storyteller roles – that is, statisticians who can work with clean and well organised datasets to pull out structures, trends and patterns from within them;
  • data graphics/visualisation practitioners – who have the eye and the skills for developing visual ways of uncovering and relating the stories, trends, patterns and structures hidden in datasets, perhaps in support of the analyst, perhaps in support of the decision-making end-user;
  • and data policymakers and data driven decision makers, who can phrase questions in such a way that makes it possible to use data to inform the decision or policymaking process, even if they don’t have the skills to wrangle or analyse the data that they can then use.

I think there is also a role for data questionmasters who can phrase and implement useful and interesting queries that can be applied to datasets, which might also fall to the data technician. I also see a role for data technologists, who are perhaps strong as a data technician, but with an appreciation of the engineering, science, visualisation and decision/policy making elements, though not necessarily strong as a practitioner in any of those camps.

(Data carpentry as a term is also useful, describing a role that covers many of the practical skills requirements I’d associate with a data technician, but that additionally supports the notion of “data craftsmanship”? A lot of data wrangling does come down to being a craft, I think, not least because the person working at the raw data end of the lifecycle may often develop specialist, hand crafted tools for working with the data that an analyst would not be able to justify spending the development time on.)

Here’s another carving of the data practitioner roles space, this time from Liz Lyon & Aaron Brenner (Bridging the Data Talent Gap: Positioning the iSchool as an Agent for Change, International Journal of Digital Curation, 10:1 (2015)):

[Figure: data practitioner roles, from Lyon & Brenner, International Journal of Digital Curation]

The Royal Statistical Society Data Manifesto [PDF] (September 2014) argues for giving “[p]oliticians, policymakers and other professionals working in public services (such as regulators, teachers, doctors, etc.) … basic training in data handling and statistics to ensure they avoid making poor decisions which adversely affect citizens” and suggest that we need to “prepare for the data economy” by “skill[ing] up the nation”:

We need to train teachers from primary school through to university lecturers to encourage data literacy in young people from an early age. Basic data handling and quantitative skills should be an integral part of the taught curriculum across most A level subjects. … In particular, we should ensure that all students learn to handle and interpret real data using technology.

I like the sentiment of the RSS manifesto, but fear the Nesta buzzword hype chasing and the conservatism of the universities (even if the UUK report is relatively open minded).

On the one hand, we often denigrate the role of the technician, but I think technical difficulties associated with working with real data are often a real blocker; which means we either skill up ourselves, or recognise the need for skilled data technicians. On the other, I think there is a danger of hyping “analytics this” and “data science that” – even if only as part of debunking it – because it leads us away from the more substantive point that analytics this, data science that is actually about getting numbers into a form that tell stories that we can use to inform decisions and policies. And that’s more about understanding patterns and structures, as well as critiquing data collection and analysis methods, than it is about being a data technician, engineer, analyst, geek, techie or quant.

Which is to say – if we need to develop data literacy, what does that really mean for the majority?

PS Heh heh – Kin Lane captures further life at the grotty end of the data lifecycle: Being a Data Janitor and Cleaning Up Data Portability Vomit.

Converting Spreadsheet Rows to Text Based Summary Reports Using OpenRefine

In Writing Each Row of a Spreadsheet as a Press Release? I demonstrated how we could generate a simple textual report template that could “textualise” separate rows of a spreadsheet. This template could be applied to each row from a subset of rows to produce a simple human readable view of the data contained in each of those rows. I picked up on the elements of this post in Robot Journalists or Robot Press Secretaries?, where I reinforced the idea that such an approach was of a similar kind to the approach used in mail merge strategies supported by many office suites.

It also struck me that we could use OpenRefine’s custom template export option to generate a similar sort of report. So in this post I’ll describe a simple recipe for recreating the NHS Complaints review reports from a couple of source spreadsheets using OpenRefine.

This is just a recasting of the approach demonstrated in the Writing Each Row… post, and more fully described in this IPython notebook, so even if you don’t understand Python, it’s probably worth reviewing those just to get a feeling of the steps involved.

To start with, let’s see how we might generate a basic template from the complaints CSV file, loaded in with the setting to parse numerical columns as such.

[Screenshot: the complaints CSV file loaded into OpenRefine]

The default template looks something like this:

[Screenshot: the default custom export template]

We can see how the template provides a header slot for the start of the output, a template applied to each row, a separator to split the rows, and a footer.

The jsonize function makes sure the output is suitable for use in a JSON file. We just want to generate text, so we can forget that.

Here’s the start of a simple report…

Report for {{cells["Practice_Code"].value}} ({{cells["Year"].value}}):

  Total number of written complaints received:
  - by area: {{cells["Total number of written complaints received"].value}} (of which, {{cells["Total number of written complaints upheld"].value}} upheld)
  - by subject: {{cells["Total number of written complaints received 2"].value}} (of which, {{cells["Total number of written complaints upheld 2"].value}} upheld)

[Screenshot: the start of the custom export template in OpenRefine]

The double braces ({{ }}) allow you to access GREL statements. Outside the braces, the content is treated as text.

Note that the custom template doesn’t get saved… I tend to write the custom templates in a text editor, then copy and paste them into OpenRefine.

We can also customise the template with some additional logic using the if(CONDITION, TRUE_ACTION, FALSE_ACTION) construction. For example, we might flag a warning that a lot of complaints were upheld:

[Screenshot: a custom export template tweak that flags when a large number of complaints were upheld]
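
For example, using a completely arbitrary threshold, the row template might include something along these lines (a sketch, not the exact expression I used):

Report for {{cells["Practice_Code"].value}} ({{cells["Year"].value}}):
  {{if(cells["Total number of written complaints upheld"].value > 10, "** WARNING: a large number of complaints were upheld **", "")}}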

The original demonstration pulled in additional administrative information (practice name and address, for example) from another source spreadsheet. Merging Datasets with Common Columns in Google Refine describes a recipe for merging in data from another dataset. In this case, if our source is the epraccur spreadsheet, we can create an OpenRefine project from the epraccur spreadsheet (use no lines as the header – it doesn’t have a header row) and then merge data from the epraccur project into the complaints project, using the practice code (Column 1 in the epraccur project) as the key column matched against the Practice_Code column in the complaints project, to add an additional practice name column: cell.cross("epraccur xls", "Column 1").cells["Column 2"].value[0]

Note that columns can only be merged in one column at a time.

In order to filter the rows so we can generate reports for just the Isle of Wight, we also need to merge in the Parent Organisation Code (Column 15) from the epraccur project. To get Isle of Wight practices, we could then filter on code 10L. If we then used our custom exporter template, we could get textual reports for just the rows corresponding to Isle of Wight GP practices.

[Screenshot: the complaints data filtered down to Isle of Wight practices in OpenRefine]

Teasing things apart a bit, we also start to get a feel for a more general process. Firstly, we can create a custom export template to generate a textual representation of each row in a dataset. Secondly, we can use OpenRefine’s filtering tools to select which rows we want to generate reports from, and order them appropriately. Thirdly, we could also generate new columns containing “red flags” or news signals associated with particular rows, and produce a weighted sum column on which to rank items in terms of newsworthiness. We might also want to merge in additional data columns from other sources, and add elements from those in to the template. Finally, we might start to refine the export template further to include additional logic and customisation of the news release output.

See also Putting Points on Maps Using GeoJSON Created by Open Refine for a demo of how to generate a geojson file using the OpenRefine custom template exporter as part of a route to getting points onto a map.

Fragment – Data Journalism or Data Processing?

A triptych to read and reflect on in the same breath…

String of Rulings Bodes Ill for the Future of Journalism in Europe:

On July 21, 2015, the European Court of Human Rights ruled that making a database of public tax records accessible digitally was illegal because it violated the right to privacy [1]. The judges wrote that publishing an individual’s (already public) data on an online service could not be considered journalism, since no journalistic comment was written alongside it.

This ruling is part of a wider trend of judges limiting what we can do with data online. A few days later, a court of Cologne, Germany, addressed data dumps. In this case, the German state sued a local newspaper that published leaked documents from the ministry of Defense related to the war in Afghanistan. The documents had been published in full so that users could highlight the most interesting lines. The ministry sued on copyright grounds and the judges agreed, arguing that the journalists should have selected some excerpts from the documents to make their point and that publishing the data in its entirety was not necessary [2].

These two rulings assume that journalism must take the form of a person collecting information then writing an article from it. It was true in the previous century but fails to account for current journalistic practices.

ICO: Samaritans Radar failed to comply with Data Protection Act:

It is our view that if organisations collect information from the internet and use it in a way that’s unfair, they could still breach the data protection principles even though the information was obtained from a publicly available source. It is particularly important that organisations should consider the data protection implications if they are planning to use analytics to make automated decisions that could have a direct effect on individuals.

The Labour Party “purge” and social media privacy:

[A news article suggests] that the party has been scouring the internet to find social media profiles of people who have registered. Secondly, it seems to suggest that for people not to have clearly identifiable social media profiles is suspicious.

The first idea, that it’s ‘OK’ to scour the net for social media profiles, then analyse them in detail is one that is all too common. ‘It’s in the public, so it’s fair game’ is the essential argument – but it relies on a fundamental misunderstanding of privacy, and of the way that people behave.

Collecting “public” data and processing or analysing it may bring the actions of the processor into the scope of the Data Protection Act. Currently, the Act affords protections to journalists. But if these protections are eroded, it weakens the ability of journalists to use these powerful investigatory tools.