OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Creating Interactive Election Maps Using folium and IPython Notebooks


During the last couple of weeks of Cabinet Office Code Clubs, we’ve started to explore how we can use the python folium library to generate maps. Last week we looked at getting simple markers onto maps along with how to pull data down from a third party API (the Food Standards Agency hygiene ratings), and this week we demonstrated how to use shapefiles.

As a base dataset, I used Chris Hanretty et al.’s election forecasts data as a foil for making use of Westminster parliamentary constituency shapefiles. The dataset gives a forecast of the likelihood of each party winning a particular seat, so within a party we can essentially generate a heat map of how likely that party is to win each seat. For example, here’s a forecast map for the Labour party:

Week_8_-_Shapefiles_likelihood_LAB
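For anyone wanting to try something similar, here’s a rough sketch of the idea using the current folium API (which has changed since this post was written); the boundary file, its key property and the forecast column names below are assumptions rather than the ones used in the actual notebook:

import folium
import pandas as pd

#Sketch only - file names, property names and column names are assumptions
forecasts = pd.read_csv("electionforecast.csv")

m = folium.Map(location=[53.0, -1.5], zoom_start=6)
folium.Choropleth(
    geo_data="westminster_constituencies.geojson",  #constituency boundaries as GeoJSON
    data=forecasts,
    columns=["ConstituencyCode", "Lab"],            #area key and forecast likelihood
    key_on="feature.properties.ConstituencyCode",
    fill_color="Reds",
    legend_name="Forecast likelihood of a Labour win",
).add_to(m)

m.save("labour_forecast_map.html")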

Although the election data table doesn’t explicitly say which party has the highest likelihood of winning each seat, we can derive that from the data with a little bit of code: melt the original dataset into a form where each row represents a constituency and party combination (rather than a single row per constituency, with a column for each party’s forecast), then group by constituency, sort by forecast value and pick the first (highest) value. (Ties will be ignored…)

electionforecast_reshape
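By way of illustration, a minimal pandas sketch of that reshape might look something like the following (the column names are assumptions rather than the ones used in the actual notebook):

import pandas as pd

#Assume one row per constituency, with a forecast column per party (names are guesses)
df = pd.read_csv("electionforecast.csv")

#Melt to one row per constituency/party combination
long = pd.melt(df, id_vars=["Constituency"],
               value_vars=["Con", "Lab", "LD", "UKIP", "Green", "SNP"],
               var_name="party", value_name="forecast")

#Group by constituency, sort by forecast value and keep the highest forecast row for each
likely = (long.sort_values(["Constituency", "forecast"], ascending=[True, False])
              .drop_duplicates("Constituency"))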

We can then generate a map based on the discrete categorical values of which party has the highest forecast likelihood of taking each seat.

Week_8_-_Shapefiles_likelyparty

An IPython notebook showing how to generate the maps can be found here: how to use shapefiles.

One problem with this sort of mapping technique for the election forecast data is that the coloured areas represent geographical area, not population size. Constituency populations are actually roughly similar in size, so our impression that the country is significantly blue is skewed by the relative areas of the forecast blue seats compared to the forecast red ones, for example.

Ways round this are to use cartograms, or regularly sized hexagonal boundaries, such as those described on Benjamin Hennig’s Views of the World website, from which the following image is republished (see also the University of Sheffield’s (old) Social and Spatial Inequalities Research Group election mapping project website):

UK_election2010_MapsCompared

(A hexagonal constituency KML file, coloured by 2010 results, and corresponding to constituencies defined for that election, can be found from this post.)

Written by Tony Hirst

April 17, 2015 at 11:42 am

Posted in Uncategorized


Scraping Web Pages With R


One of the things I tend to avoid doing in R, partly because there are better tools elsewhere, is screenscraping. With the release of the new rvest package, I thought I’d have a go at what amounts to one of the simplest webscraping activities – grabbing HTML tables out of webpages.

The tables I had in my sights (when I can actually find them…) are the tables that appear on the newly designed FIA website that describe a range of timing results for F1 qualifying and races [quali example, race example].

Inspecting an example target web page, a menu lets you select several different results tables, but a quick look at the underlying HTML source code reveals that all the tables relevant to a session (that is, a particular race, or a complete qualifying session) are actually contained within a single page.

So how can we grab those tables down from a target page? The following recipe seems to do the trick:

#install.packages("rvest")
library(rvest)

#URL of the HTML webpage we want to scrape
url="http://www.fia.com/events/formula-1-world-championship/season-2015/qualifying-classification"

fiaTableGrabber=function(url,num){
  #Grab the page
  hh=html(url)
  #Parse out the num'th table in the page as a data frame
  cc=html_nodes(hh, xpath = "//table")[[num]] %>% html_table(fill=TRUE)
  #TO DO - extract table name
  
  #Set the column names from the first row of the table
  colnames(cc) = cc[1, ]
  #Drop the header row and any columns that are all NA
  cc=Filter(function(x)!all(is.na(x)), cc[-1,])
  #Replace blank cells with NA
  cc=apply(cc, 2, function(x) gsub("^$|^ $", NA, x))
  #would the dataframe cast handle the NA?
  as.data.frame(cc)
}

#Usage:
#NUM:
## Qualifying:
### 1 CLASSIFICATION 
### 2 BEST SECTOR TIMES
### 3 SPEED TRAP 
### 4 MAXIMUM SPEEDS
##Race:
### 1 CLASSIFICATION
### 2 FASTEST LAPS
### 3 BEST SECTOR TIMES
### 4 SPEED TRAP
### 5 MAXIMUM SPEEDS
### 6 PIT STOPS
xx=fiaTableGrabber(url,NUM)

The fiaTableGrabber() function grabs a particular table from the page at a particular URL. (I really should grab the page separately and then extract whichever table I want from the fetched page, or at least cache the page, unless there is a caching option built in?)

Depending on the table grabbed, we may then need to tidy it up. I hacked together a few sketch functions that tidy up (and remap) column names, convert “natural times” given in minutes and seconds to a seconds equivalent, and, in the case of the race pits data, separate out two tables that get merged into one.

#1Q
fiaQualiClassTidy=function(xx){
  for (q in c('Q1','Q2','Q3')){
    cn=paste(q,'time',sep='')
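    #timeInS() is a helper function (defined elsewhere, not shown in this post) that converts a natural time string to seconds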
    xx[cn]=apply(xx[q],1,timeInS)
  }
  
  xx=dplyr:::rename(xx, Q1_laps=LAPS)
  xx=dplyr:::rename(xx, Q2_laps=LAPS.1)
  xx=dplyr:::rename(xx, Q3_laps=LAPS.2)
  xx
}

#2Q, 3R 
fiaSectorTidy=function(xx){
  colnames(xx)=c('pos',
                's1_driver','s1_nattime',
                's2_driver','s2_nattime',
                's3_driver','s3_nattime')
  for (s in c('s1','s2','s3')) {
    sn=paste(s,'_time',sep='')
    sm=paste(s,'_nattime',sep='')
    xx[sn]=apply(xx[sm],1,timeInS)
  }
  
  xx[-1,]
}

#3Q, 4R
fiaTrapTidy=function(xx){
  xx
}

# 4Q, 5R
fiaSpeedTidy=function(xx){
  colnames(xx)=c('pos',
                'inter1_driver','inter1_speed',
                'inter2_driver','inter2_speed',
                'inter3_driver','inter3_speed')
  
  xx[-1,]
}

# 2R
fiaRaceFastlapTidy=function(xx){
  xx['time']=apply(xx['LAP TIME'],1,timeInS)
  xx
}

# 6R
fiaPitsSummary=function(xx){
  r=which(xx['NO']=='RACE - PIT STOP - DETAIL')
  xx['tot_time']=apply(xx['TOTAL TIME'],1,timeInS)
  Filter(function(x)!all(is.na(x)), xx[1:r-1,])
}

#6R
fiaPitsDetail=function(xx){
  colnames(xx)=c('NO','DRIVER','LAP','TIME','STOP','NAT DURATION','TOTAL TIME')
  xx['tot_time']=apply(xx['TOTAL TIME'],1,timeInS)
  xx['duration']=apply(xx['NAT DURATION'],1,timeInS)
  r=which(xx['NO']=='RACE - PIT STOP - DETAIL')
  xx=xx[r+2:nrow(xx),]
  #Remove blank row - http://stackoverflow.com/a/6437778/454773
  xx[rowSums(is.na(xx)) != ncol(xx),]
}

So for example:

rscraper

I’m still not convinced that R is the most natural, efficient, elegant or expressive language for scraping with, though…

PS In passing, I note the release of the readxl Excel reading library (no external-to-R dependencies, compatible with various flavours of Excel spreadsheet).

PPS Looking at the above screenshot, it strikes me that if we look at the time of day of each stop and its duration, we can tell if there is a track position (at least) change in the pits… So for example, ROS goes in at 15:11:11 with a 33.689 stop and RIC goes in at 15:11:13 with a 26.714. So ROS enters the pits ahead of RIC and leaves after him? The following lap chart from f1fanatic perhaps reinforces this view?

2015_Malaysian_Grand_Prix_lap_charts_-_F1_Fanatic
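A quick back-of-the-envelope check supports that reading, assuming the time of day marks pit entry and the quoted duration is the total time spent in the pitlane (both assumptions on my part):

ros_exit = 11*60 + 11 + 33.689  #seconds past 15:00 - about 15:11:44.7
ric_exit = 11*60 + 13 + 26.714  #seconds past 15:00 - about 15:11:39.7
print(ros_exit - ric_exit)      #roughly 5s, so ROS would rejoin about five seconds after RIC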

Written by Tony Hirst

April 15, 2015 at 9:56 pm

Posted in Rstats


Printing Out Spreadsheet Cell Values by (Hierarchical) Column Using pandas


Building on from Wrangling Complex Spreadsheet Column Headers, I’ve been hacking the spreadsheet published here a bit more so that I can print out each column value from each sheet in a given spreadsheet for a particular local authority (that is, for a particular key value in a particular column), to get an output of the form:

Housing_Data_printrow

(I guess I could add a print suppressor so that only lines where the value is not 0 are printed? A minimal sketch of that tweak appears after the code below.)

The original notebook can be found here.

The major novelty over the previous post is the colmapbuilder() function, which generates a nested dict from a group of hierarchical column names, terminating in either the column code or the cell value for that column for a given row selector (I need to tidy up the function args…)

import pandas as pd
dfx=pd.ExcelFile('Local_Authority_Housing_Statistics_dataset_2013-14.xlsx')

#Menu sheet parse to identify sheets A-I
import re
def getSheetDetails(dfx):
    sd=re.compile(r'Section (\w) - (.*)$')
    sheetDetails={}
    for row in dfx.parse('Menu')[[1]].values:
        if str(row[0]).startswith('Section'):
            sheetDetails[sd.match(row[0]).group(1)]=sd.match(row[0]).group(2)
    return sheetDetails

def dfgrabber(dfx,sheet):
    #First pass - identify row for headers
    df=dfx.parse(sheet,header=None)
    df=df.dropna(how='all')
    row = df[df.apply(lambda x: (x == "DCLG code").any(), axis=1)].index.tolist()[0]#.values[0] # will be an array
    #Second pass - generate dataframe
    df=dfx.parse(sheet,header=row).dropna(how='all').dropna(how='all',axis=1)
    df=df[df['DCLG code'].notnull()].reset_index(drop=True)
    df.columns=[c.split(' ')[0] for c in df.columns]
    return df,row


import collections
def coldecoder(dfx,sheet,row):
    zz=dfx.parse(sheet,header=None)
    stitle=zz[0][[0]][0]
    
    xx=zz[1:row].dropna(how='all')
    #Fill down
    xx.fillna(method='ffill', axis=0,inplace=True)
    #Fill across
    xx=xx.fillna(method='ffill', axis=1)
    #How many rows in the header?
    keydepth=len(xx)
    header=[i for i in range(0,keydepth)]

    xx=xx.append(zz[row:row+1])
    xx.to_csv('multi_index.csv',header=False,index=False,encoding='utf-8')
    mxx=pd.read_csv('multi_index.csv',header=header,encoding='utf-8')
    for c in mxx.columns.get_level_values(0).tolist():
        if c.startswith('Unnamed'):
            mxx = mxx.drop(c, level=0, axis=1)
    #We need to preserve the order of the header columns
    dd=mxx.to_dict(orient='split')
    ddz=zip(dd['columns'],dd['data'][0])
    keyx=collections.OrderedDict() #{}
    for r in ddz:
        if not pd.isnull(r[1]):
            #print r[1].split(' ')[0]
            keyx[r[1].split(' ')[0]]=r[0]
    return stitle,keyx,keydepth

#Based on http://stackoverflow.com/a/10756547/454773
def myprint(d,l=None):
  if l is None: l=''
  for k, v in d.iteritems():
    if isinstance(v, dict):
      print("{}{}".format(l,k))
      myprint(v,l=l+'-')
    else:
      print "{0} {1} : {2}".format(l,k.encode('utf-8'), v)

def colmapbuilder(dfx,sheet,code=None,retval=True):
    df,row=dfgrabber(dfx,sheet)
    sname,skey,kd=coldecoder(dfx,sheet,row)
    kq=collections.OrderedDict() #{}
    for k in skey:
        kq[k]=[]
        for j in skey[k]:
            if j not in kq[k]: kq[k].append(j)
    colmapper=collections.OrderedDict() #{}
    for kkq in kq:
        curr_level = colmapper
        depth=0
        for path in kq[kkq]:
            depth=depth+1
            if path not in curr_level:
                if depth<len(kq[kkq]):
                    curr_level[path] = collections.OrderedDict() #{}
                    curr_level = curr_level[path]
                else:
                    if retval and code is not None:
                        curr_level[path] = df[df['Current\nONS']==code][kkq].iloc[0]
                    else:
                        curr_level[path] = kkq
            else:
                curr_level = curr_level[path]
    return sname, colmapper

ll=dfx.sheet_names
ll.remove('Menu')
for lll in ll:
    sname,cmb=colmapbuilder(dfx,lll,'E06000046')
    print(sname+'\n')
    myprint(cmb)
    print('\n\n')
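As a sketch of the zero-suppression idea mentioned above (my guess at how it might look, rather than code from the original notebook), the leaf printing branch of myprint() could be guarded with a simple test:

def myprint_nonzero(d,l=None):
  #As myprint(), but skip leaf values that are zero
  if l is None: l=''
  for k, v in d.iteritems():
    if isinstance(v, dict):
      print("{}{}".format(l,k))
      myprint_nonzero(v,l=l+'-')
    elif v != 0:
      print "{0} {1} : {2}".format(l,k.encode('utf-8'), v)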

I’m not sure how this helps, other than demonstrating how we might be able to quickly generate a crude textualisation of the values in a single row of a spreadsheet with a complex set of hierarchical column names?

The code is also likely to be brittle, so the main questions are:

– is the method reusable?
– can the code/approach be generalised or at least made a little bit more robust and capable of handling other spreadsheets with particular properties? (And then – what properties, and how might we be able to detect those properties?)

Written by Tony Hirst

April 14, 2015 at 4:41 pm

Posted in Tinkering


Wrangling Complex Spreadsheet Column Headers


[This isn’t an R post, per se, but I’m syndicating it via RBloggers because I’m interested – how do you work with hierarchical column indices in R? Do you try to reshape the data to something tidier on the way in? Can you autodetect elements to help with any reshaping?]

Not a little p****d off by the Conservative election pledge to extend the right-to-buy to housing association tenants (my response: so extend the right to private tenants too?), I thought I’d have a dig around to see what data might be available, and what I could learn from it, about the housing situation on the Isle of Wight, using a method that could also be used in other constituencies. That is, what datasets are provided at a national level, broken down to local level? (To start with, I wanted to see what I could learn without visiting the DCLG OpenDataCommunities site.)

One source of data seems to be the Local authority housing statistics data returns for 2013 to 2014, a multi-sheet spreadsheet reporting at a local authority level on:

– Dwelling Stock
– Local Authority Housing Disposals
– Allocations
– Lettings, Nominations and Mobility Schemes
– Vacants
– Condition of Dwelling Stock
– Stock Management
– Local authority Rents and Rent Arrears
– Affordable Housing Supply

Local_Authority_Housing_Statistics_dataset_2013-14_xlsx

Something I’ve been exploring lately is “external spreadsheet data source” wrappers for the pandas Python library: wrappers that take frequently released spreadsheets and provide a simple (?!) interface for pulling the data from the spreadsheet into a pandas dataframe.

For example, I got started on the LA housing stats sheet as follows – first a look at the sheets, then a routine to grab sheet names out of the Menu sheet:

import pandas as pd
dfx=pd.ExcelFile('Local_Authority_Housing_Statistics_dataset_2013-14.xlsx')
dfx.sheet_names
#...
#Menu sheet parse to identify sheets A-I
import re
sd=re.compile(r'Section (\w) - (.*)$')
sheetDetails={}
for row in dfx.parse('Menu')[[1]].values:
    if str(row[0]).startswith('Section'):
        sheetDetails[sd.match(row[0]).group(1)]=sd.match(row[0]).group(2)
sheetDetails
#{u'A': u'Dwelling Stock',
# u'B': u'Local Authority Housing Disposals',
# u'C': u'Allocations',
# u'D': u'Lettings, Nominations and Mobility Schemes',
# u'E': u'Vacants',
# u'F': u'Condition of Dwelling Stock',
# u'G': u'Stock Management',
# u'H': u'Local authority Rents and Rent Arrears',
# u'I': u'Affordable Housing Supply'}

All the data sheets have similar columns on the left-hand side, which we can use as a crib to identify the simple, single row of coded column headers.

def dfgrabber(dfx,sheet):
    #First pass - identify row for headers
    df=dfx.parse(sheet,header=None)
    df=df.dropna(how='all')
    row = df[df.apply(lambda x: (x == "DCLG code").any(), axis=1)].index.tolist()[0]#.values[0] # will be an array
    #Second pass - generate dataframe
    df=dfx.parse(sheet,header=row).dropna(how='all').dropna(how='all',axis=1)
    df=df[df['DCLG code'].notnull()].reset_index(drop=True)
    return df

#usage:
dfgrabber(dfx,'H')[:5]

That gives something like the following:

Housing_Data

This is completely usable if we know what the column codes refer to. What is handy is that a single header row is available for the columns, although the metadata that neatly describes what the codes mean is not so tidily presented:

excel_headers_complex

Trying to generate a pandas hierarchical index from this data is a bit messy…

One approach I’ve explored is trying to create a lookup table from the coded column names back into the hierarchical column names.

For example, if we can detect the column multi-index rows, we can fill down each column (for labels that span several rows, the label sits in the topmost cell) and then fill across each row (for labels that span several columns, the label sits in the leftmost cell), so that every cell in the header grid carries the value that spans it.

#row is autodetected and contains the row for the simple header
row=7
#Get the header columns - and drop blank rows
xx=dfx.parse('A',header=None)[1:row].dropna(how='all')
xx

excel_headerparse1

#Fill down
xx.fillna(method='ffill', axis=0,inplace=True)
#Fill across
xx=xx.fillna(method='ffill', axis=1)
xx

excel_headerparse2

#append the coded header row
xx=xx.append(dfx.parse('A',header=None)[row:row+1])
xx

excel_headerparse3

#Now make use of pandas' ability to read in a multi-index CSV
xx.to_csv('multi_index.csv',header=False, index=False)
mxx=pd.read_csv('multi_index.csv',header=[0,1,2])
mxx

excel_headerparse4

Note that the pandas column multi-index can span several columns, but not “vertical” levels.

Get rid of the columns that don’t feature in the multi-index:

for c in mxx.columns.get_level_values(0).tolist():
    if c.startswith('Unnamed'):
        mxx = mxx.drop(c, level=0, axis=1)
mxx

Now start to work on the lookup…

#Get a dict from the multi-index
mxx.to_dict(orient='record')

excel_headerparse5

We can then use this as a basis for generating a lookup table for the column codes.

#dd is assumed here to be the dict built from mxx in the previous step
keyx={}
for r in dd:
    keyx[dd[r][0].split(' ')[0]]=r
keyx

excel_headerparse6

We could also generate more elaborate dicts to provide ways of identifying particular codes.
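For example (a sketch of one possibility, rather than code from the original notebook), we might key each code against both its full hierarchical label and its top level grouping:

#Sketch only: keyx maps each column code to its hierarchical column label (a tuple)
keyx2={}
for code in keyx:
    keyx2[code]={'label': keyx[code], 'group': keyx[code][0]}
keyx2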

Note that the key building required a little bit of tidying, arising from footnote numbers that appear in some of the coded column headings:

excel header footnote

This tidying should also be applied to the code column generation step above…

I’m thinking there really should be an easier way?

PS And then, of course, there are the additional gotchas… like UTF-8 pound signs that break ascii encodings…

non-ascii

Written by Tony Hirst

April 14, 2015 at 12:35 pm

Posted in Rstats, Tinkering


A Couple of Handy ggplot Tricks – Using Environmental Variables and Saving Charts


A couple of handy tricks when working with ggplot that had escaped my radar until today.

First up, I had a problem in a function I was using to generate a ggplot2 chart, in which I wanted to accept a couple of optional arguments into the function and then make use of them in a ggplot aes() element.

Normally, ggplot will try to dereference a variable as a column in the current ggplot data context, rather than as a variable in its own right. So what can we do? Hack around with aes_string()? In actual fact, the following trick, identified via Stack Overflow, did exactly what I needed – make a particular environment context available in ggplot directly:

core_qualifying_rank_slopegraph= function(qualiResults,qm,
                                          spacer=0.25,cutoff=c(16,10)){  
  #http://stackoverflow.com/questions/10659133/local-variables-within-aes
  .e = environment()
  # if we pass this context into ggplot, we can access both spacer and cutoff
  g=ggplot(qualiResults,aes(x=session,y=laptime), environment = .e)
  g= g+geom_text(data=qm[qm['session']=='q1time',],
                 aes(x=1,y=qspos,label=driverName,
                     colour=(qspos>cutoff[1] )
                 ), size=3)
  g= g+geom_text(data=qm[qm['session']=='q2time',],
                 aes(x=2,y=qspos,label=driverName,
                     colour=(qspos>cutoff[2] )
                 ), size=3)
  ...
  g=g+geom_segment(data=qualiResults[!is.na(qualiResults['q2time']),],
                   x=1+spacer,xend=2-spacer,
                   aes(y=q1pos,yend=q2pos,group=driverName),
                   colour='slategrey')
  ...
  g
}

By the by, the complete version of the fragment above generates a chart like the following, heavily influenced by Tufte style slopegraphs, which shows progression through the F1 qualifying session in China this weekend:

f1_chn_2015_qualiprogression

Note that I use a discrete rank rather than continuous laptime scale for the y-axis, which would be more in keeping with the original slope graph idea. (The f1datajunkie.com post on F1 China 2015 – How They Qualified explores another chart type where continuous laptime scales are used, and a two column layout reminiscent of an F1 grid as a trick to try to minimise overlap of drivername labels, along with a 1-dimensional layout that shows all the qualifying session classification laptimes.)

The second useful trick I learned today was a recipe for saving chart objects. (With sales of the Wrangling F1 Data With R book (which provides the context for my learning these tricks) all but stalled, I need a new dream to live for that gives me hope of making enough from F1 related posts to cover the costs of seeing a race for real one weekend, so I’ve started wondering whether I could sell or license particular charts one day (if I can produce them quickly enough), either as PDFs or, perhaps, as actual chart objects, rather than having to give away all the code and wrangled data, for example….)

So in that respect, the ability to save ggplot chart objects and then share them in a way that others can use them (if they have a workflow that can accommodate R/grid grobs) could be quite attractive… and this particular trick seems to do the job…

g=ggplot(..etc..) ...
#Get the grob...
g_out = ggplotGrob(g)
#Save the grob
save(g_out,file='ggrobtest')

#Look - nothing up my sleeves
rm(g_out)
rm(g)
#> g
#Error: object 'g' not found
#> g_out
#Error: object 'g_out' not found

load("ggrobtest")

#The g_out grob is reinstated and can be plotted as follows:
library(grid)
grid.draw(g_out) 

Handy…:-)

Written by Tony Hirst

April 12, 2015 at 9:18 pm

Posted in Rstats

Mixing Numbers and Symbols in Time Series Charts


One of the things I’ve been trying to explore with my #f1datajunkie projects is ways of representing information that work in a glanceable way as well as repaying deeper reading. I’ve also been looking at various ways of using text labels rather than markers to provide additional information around particular data points.

For example, in a race battlemap, with lap number on the horizontal x-axis and gap time on the vertical y-axis, I use a text label to indicate which driver is ahead (or behind) a particular target driver.

battlemaps-postionbattles-1

In the revised version of this chart type shown in F1 Malaysia, 2015 – Rosberg’s View of the Race, an additional numerical label along the x-axis indicates the race position of the target driver at the end of each lap.

What these charts are intended to do is help the eye see particular structural shapes within the data – for example whether a particular driver is being attacked from behind in the example of a battlemap, or whether they are catching the car ahead (perhaps with intervening cars in the way – although more needs to be done on the chart with respect to this for examples where there are several intervening cars; currently, only a single intervening car immediately ahead on track is shown.)

Two closer readings of the chart are then possible. Firstly, by looking at the y-value we can see the actual time a car is ahead (and here the dashed guide line at +/- 1s helps indicate in a glanceable way the DRS activation line; I’m also pondering how to show an indication of pit loss time to indicate what effect a pit stop might have on the current situation). Secondly, we can read off the labels of the drivers involved in a battle to get a more detailed picture of the race situation.

The latest type of chart I’ve been looking at are session utilisation maps, which in their simplest form look something like the following:

simple_session_utilisation

The charts show how each driver made use of a practice session or qualifying – drivers are listed on the vertical y-axis and the time into the session each lap was recorded at is identified along the horizontal x-axis.

This chart makes it easy to see how many stints, and of what length, were completed by each driver and at what point in the session. Other information might be inferred – for example, significant gaps in which no cars are recording times may indicate poor weather conditions or red flags. However, no information is provided about the times recorded for each lap.

We can, however, use colour to identify “purple” laps (fastest lap time recorded so far in the session) and “green” laps (a driver’s fastest laptime so far in the session that isn’t a purple time), as well as laps on which a driver pitted:

augmented_session_utilisation
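As an aside, the purple/green flagging logic might be sketched along the following lines (in pandas rather than the R used for the charts in this post, and with column names mirroring those used in the charting code below):

import pandas as pd

#Sketch only: assume one row per lap, with session time (cuml), laptime (stime) and driver (name)
laps = pd.read_csv("session_laps.csv").sort_values("cuml")

#Running session best and running personal best at each point in the session
laps["session_best"] = laps["stime"].cummin()
laps["driver_best"] = laps.groupby("name")["stime"].cummin()

#Purple: lap sets (or equals) the session best; green: sets a personal best that isn't purple
laps["purple"] = laps["stime"] <= laps["session_best"]
laps["green"] = ~laps["purple"] & (laps["stime"] <= laps["driver_best"])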

But still, no meaningful lap times.

One thing to note about laptimes is that they come in various flavours, such as outlaps, when a driver starts the lap from the pitlane; inlaps, or laps on which a driver comes into the pits at the end of the lap; and flying laps when a driver is properly going for it. There are also those laps on which a driver may be trying out various new lines, slowing down to give themselves space for a flying lap, and so on.

Assuming that inlaps and outlaps are not the best indicators of pace, we can use a blend of symbols and text labels on the chart to identify inlaps and outlaps, as well as showing laptimes for “racing” laps, also using colour to highlight purple and green laps:

session_utlisation_annotated

The chart is produced using ggplot, with a layered approach in which each set of chart elements is added to the chart as a separate layer.

#The base chart with the dataset used to create the original chart
#In this case, the dataset included here is redundant
g = ggplot(f12015test)

#Layer showing in-laps (laps on which a driver pitted) and out-laps
#Use a subset of the dataset to place markers for outlaps and inlaps
g = g + geom_point(data=f12015test[f12015test['outlap'] | f12015test['pit'],],aes(x=cuml, y=name, color=factor(colourx)), pch=1)

#Further annotation to explicitly identify pit laps (in-laps)
g = g + geom_point(data=f12015test[f12015test['pit']==TRUE,],aes(x=cuml, y=name),pch='.')

#Layer showing full laps with rounded laptimes and green/purple lap highlights
#In this case, use the laptime value as a text label, rather than a symbol marker
g = g + geom_text(data=f12015test[!f12015test['outlap'] & !f12015test['pit'],],aes(x=cuml, y=name, label=round(stime,1), color=factor(colourx)), size=2, angle=45)

#Force the colour scale to be one we want
g = g + scale_colour_manual(values=c('darkgrey','darkgreen','purple'))

This version of the chart has the advantage of being glanceable when it comes to identifying session utilisation (number, duration and timing of stints) and when purple and green laptimes were recorded, as well as repaying closer reading when it comes to inspecting the actual laptimes recorded during each stint.

To reduce clutter on the chart, laptimes are rounded to 1 decimal place (tenths of a second) rather than using the full lap time, which is recorded down to thousandths of a second.

Session utilisation charts are described more fully in a recently released chapter of the Wrangling F1 Data With R Leanpub book. Buying a copy of the book gains you access to future updates. A draft version of the chapter can be found here.

Written by Tony Hirst

April 8, 2015 at 1:33 pm

Posted in Rstats


From Front Running Algorithms to Bot Fraud… Or How We’ve Lost Control of the Bits…


I’ve just finished reading Michael Lewis’ Flash Boys, a cracking read about algorithmic high frequency trading and how the code and communication systems that contribute to the way stock exchanges operate can be gamed by front-running bots. (For an earlier take, see also Scott Patterson’s Dark Pools; for more “official” takes, see things like the SEC’s regulatory ideas response to the flash crash of May 6, 2010, an SEC literature review on high frequency trading, or this Congressional Research Service report on High-Frequency Trading: Background, Concerns, and Regulatory Developments).

As the book describes, some of the strategies pursued by the HFT traders were made possible because of the way the code underlying the system was constructed. As Lessig pointed out way back in Code and Other Laws of Cyberspace, and revisited in Codev2:

There is regulation of behavior on the Internet and in cyberspace, but that regulation is imposed primarily through code. The differences in the regulations effected through code distinguish different parts of the Internet and cyberspace. In some places, life is fairly free; in other places, it is more controlled. And the difference between these spaces is simply a difference in the architectures of control — that is, a difference in code.

The regulation imposed on the interconnected markets by code was gameable. Indeed, it could be argued that the system was even designed to be gameable…

Another area in which the bots are gaming code structures is digital advertising. A highly amusing situation is described in the following graphic, taken from The Bot Baseline: Fraud in Digital Advertising (via http://www.ana.net/content/show/id/botfraud):

ANA-White_Ops_-_The_Bot_Baseline_-_Fraud_in_Digital_Advertising_pdf

A phantom layer of “ad laundering” fake websites whose traffic comes largely from bots is used to generate ad-impression revenue. (Compare this with networks of bots on social media networks that connect to each other, send each other messages, and so on, to build up “authentic” profiles of themselves, at least in terms of traffic usage dynamics. Examples: MIT Technology Review on Fake Persuaders; or this preprint on The Rise of Social Bots.)

As the world becomes more connected and more and more markets become exercises simply in bit exchange, I suspect we’ll be seeing more and more of these phantom layer/bot audience combinations on the one hand, and high-speed, market stealing, front running algorithms on the other.

PS Not quite related, but anyway: how you’re being auctioned in realtime whenever you visit a website that carries ads – The Curse of Our Time – Tracking, Tracking Everywhere.

Written by Tony Hirst

April 8, 2015 at 10:05 am

Posted in Anything you want

