OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Archive for the ‘Anything you want’ Category

It was Ever Thus… On the Pace (or Lack of It) in Scholarly Publishing

From 1973, here's Charles Bachman, in his acceptance lecture for that year’s Turing Award (The Programmer as Navigator), commenting on the challenges of shifting the world view of the time about database design:

The publication policies of the technical literature are also a problem. The ACM SIGBDP and SIGFIDET publications are the best available, and membership in these groups should grow. The refereeing rules and practices of Communications of the ACM result in delays of one year to 18 months between submittal and publication. Add to that the time for the author to prepare his ideas for publication and you have at least a two-year delay between the detection of significant results and their earliest possible publication.

1973. We’re now in 2014. Do, as they say, the math…

Written by Tony Hirst

April 4, 2014 at 1:14 pm

Posted in Anything you want

Mixing Stuff Up

Remember mashups? Five years or so ago they were all the rage. At their heart, they provided ways of combining things that already existed to do new things. This is a lazy approach, and one I favour.

One of the key inspirations for me in this idea of combinatorial tech, or tech combinatorics, is Jon Udell. His Library Lookup project blew me away with its creativity (the use of bookmarklets, the way the project encouraged you to access one IT service from another, the use of “linked data”, common/core-canonical identifiers to bridge services and leverage or enrich one from another, and so on) and was the spark that fired many of my own doodlings. (Just thinking about it again excites me now…)

As Jon wrote on his blog yesterday (Shiny old tech) (my emphasis):

What does worry me, a bit, is the recent public conversation about ageism in tech. I’m 20 years past the point at which Vinod Khosla would have me fade into the sunset. And I think differently about innovation than Silicon Valley does. I don’t think we lack new ideas. I think we lack creative recombination of proven tech, and the execution and follow-through required to surface its latent value.

Elm City is one example of that. Another is my current project, Thali, Yaron Goland’s bid to create the peer-to-peer web that I’ve long envisioned. Thali is not a new idea. It is a creative recombination of proven tech: Couchbase, mutual SSL authentication, Tor hidden services. To make Thali possible, Yaron is making solid contributions to Thali’s open source foundations. Though younger than me, he is beyond Vinod Khosla’s sell-by date. But he is innovating in a profoundly important way.

Can we draw a clearer distinction between innovation and novelty?

Creative recombination.

I often think of this in terms of appropriation (e.g. Appropriating Technology, Appropriating IT: innovative uses of emerging technologies, or Appropriating IT: Glue Steps).

Or repurposing, a form of reuse that differs from the intended original use.

Openness helps here. Open technologies allow users to innovate without permission. Open licensing is just part of that open technology jigsaw; open standards another; open access and accessibility a third. Open interfaces accessed sideways. And so on.

Looking back over archived blog posts from five, six, seven years ago, the web used to be such fun. An open playground, full of opportunities for creative recombination. Now we have Facebook, where authenticated APIs give you access to local social neighbourhoods, but little more. Now we have Google using link redirection and link pollution at every opportunity. Services once open are closed according to economic imperatives (and maybe scaling issues; maybe some creative recombinations are too costly to support when a network scales). Maybe my memory of a time when the web was more open is a false memory?

Creative recombination, ftw.

PS just spotted this (Walking on custard), via @plymuni. If you don’t see why it’s relevant, you probably don’t get the sense of this post!

Written by Tony Hirst

April 3, 2014 at 9:21 am

First Signs (For Me) of Linked Data Being Properly Linked…?!

As anyone who’s followed this blog for some time will know, my relationship with Linked Data has been an on-again, off-again one over the years. At the current time, it’s largely off – all my OpenRefine installs seem to have given up the ghost as far as reconciliation and linking services go, and I have no idea where the problem lies (whether with the plugins, the installs, with Java, with the endpoints, or with the reconciliations or linkages I’m trying to establish).

My dabblings with pulling data in from Wikipedia/DBpedia to Gephi (eg as described in Visualising Related Entries in Wikipedia Using Gephi and the various associated follow-on posts) continue to be hit and miss due to the vagaries of DBpedia and the huge gaps in infobox structured data across Wikipedia itself.

With OpenRefine not doing its thing for me, I haven’t been able to use that app as the glue to bind together queries made across different Linked Data services, albeit in piecemeal fashion. And from the occasional sideline view I have of the Linked Data world, I haven’t seen any obvious way of actually linking data sets other than by pulling identifiers into a new OpenRefine column (or wherever) from one service, then using those identifiers to pull in data from another endpoint into another column, and so on…
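(Just to make that identifier-chaining pattern concrete – and to show why it feels like such a faff – here’s a minimal Python sketch of it using the SPARQLWrapper library. The endpoints, class URIs and limits are placeholders rather than anything I actually run, so treat it as an illustration of the shape of the workflow rather than working glue.)

from SPARQLWrapper import SPARQLWrapper, JSON

def run_select(endpoint, query):
    # Run a SPARQL SELECT query and return the result bindings as a list of dicts.
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]

# Step 1: pull a "column" of identifiers from the first service
# (placeholder endpoint and class URI).
ids = [row["x"]["value"] for row in run_select(
    "http://example.org/endpointA/sparql",
    "SELECT ?x WHERE { ?x a <http://example.org/def/Thing> } LIMIT 50")]

# Step 2: use those identifiers to pull related data from a second endpoint
# into another "column", and so on...
values = " ".join("<%s>" % i for i in ids)
labels = run_select(
    "http://example.org/endpointB/sparql",
    "SELECT ?x ?label WHERE { VALUES ?x { %s } ?x <http://www.w3.org/2000/01/rdf-schema#label> ?label }" % values)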

So all is generally not well.

However, a recent post by the Ordnance Survey’s John Goodwin (aka @gothwin) caught my eye the other day: Federating SPARQL Queries Across Government Linked Data. It seems that federated queries can now be made across several endpoints.

John gives an example using data from the Ordnance Survey SPARQL endpoint and an endpoint published by the Environment Agency:

The Environment Agency has published a number of its open data offerings as linked data … A relatively straight forward SPARQL query will get you a list of bathing waters, their name and the district they are in.

[S]uppose we just want a list of bathing water areas in South East England – how would we do that? This is where SPARQL federation comes in. The information about which European Regions districts are in is held in the Ordnance Survey linked data. If you hop over to the Ordnance Survey SPARQL endpoint explorer you can run [a] query to find all districts in South East England along with their names …

Using the SERVICE keyword we can bring these two queries together to find all bathing waters in South East England, and the districts they are in:

And here’s the query John shows, as run against the Environment Agency SPARQL endpoint (the SERVICE block calls out to the Ordnance Survey one):

SELECT ?x ?name ?districtname WHERE {
  ?x a <http://environment.data.gov.uk/def/bathing-water/BathingWater> .
  ?x <http://www.w3.org/2000/01/rdf-schema#label> ?name .
  ?x <http://statistics.data.gov.uk/def/administrative-geography/district> ?district .
  SERVICE <http://data.ordnancesurvey.co.uk/datasets/boundary-line/apis/sparql> {
    ?district <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/within> <http://data.ordnancesurvey.co.uk/id/7000000000041421> .
    ?district <http://www.w3.org/2000/01/rdf-schema#label> ?districtname .
  }
} ORDER BY ?districtname
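(If you want to try John’s query from a script rather than an endpoint’s web form, something along these lines should do it – a minimal sketch using the Python SPARQLWrapper library. Note that the Environment Agency endpoint URL below is a placeholder you’d need to swap for the address given in John’s post.)

from SPARQLWrapper import SPARQLWrapper, JSON

EA_ENDPOINT = "http://environment.data.gov.uk/sparql"  # placeholder - check John's post for the real address

query = """
SELECT ?x ?name ?districtname WHERE {
  ?x a <http://environment.data.gov.uk/def/bathing-water/BathingWater> .
  ?x <http://www.w3.org/2000/01/rdf-schema#label> ?name .
  ?x <http://statistics.data.gov.uk/def/administrative-geography/district> ?district .
  SERVICE <http://data.ordnancesurvey.co.uk/datasets/boundary-line/apis/sparql> {
    ?district <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/within> <http://data.ordnancesurvey.co.uk/id/7000000000041421> .
    ?district <http://www.w3.org/2000/01/rdf-schema#label> ?districtname .
  }
} ORDER BY ?districtname
"""

sparql = SPARQLWrapper(EA_ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)

# Print each bathing water along with the district it sits in.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], "-", row["districtname"]["value"])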

In a follow-on post, John goes even further “by linking up data from Ordnance Survey, the Office of National Statistics, the Department of Communities and Local Government and Hampshire County Council”.

So that’s four endpoints – the original one against which the query is first fired, and three others…

SELECT ?districtname ?imdrank ?changeorder ?opdate ?councilwebsite ?siteaddress WHERE {
  ?district <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/within> <http://data.ordnancesurvey.co.uk/id/7000000000017765> .
  ?district a <http://data.ordnancesurvey.co.uk/ontology/admingeo/District> .
  ?district <http://www.w3.org/2000/01/rdf-schema#label> ?districtname .
  SERVICE <http://opendatacommunities.org/sparql> {
    ?s <http://purl.org/linked-data/sdmx/2009/dimension#refArea> ?district .
    ?s <http://opendatacommunities.org/def/IMD#IMD-rank> ?imdrank . 
    ?authority <http://opendatacommunities.org/def/local-government/governs> ?district .
    ?authority <http://xmlns.com/foaf/0.1/page> ?councilwebsite .
  }
  ?district <http://www.w3.org/2002/07/owl#sameAs> ?onsdist .
  SERVICE <http://statistics.data.gov.uk/sparql> {
    ?onsdist <http://statistics.data.gov.uk/def/boundary-change/originatingChangeOrder> ?changeorder .
    ?onsdist <http://statistics.data.gov.uk/def/boundary-change/operativedate> ?opdate .
  }
  SERVICE <http://linkeddata.hants.gov.uk/sparql> {
    ?landsupsite <http://data.ordnancesurvey.co.uk/ontology/admingeo/district> ?district .
    ?landsupsite a <http://linkeddata.hants.gov.uk/def/land-supply/LandSupplySite> .
    ?landsupsite <http://www.ordnancesurvey.co.uk/ontology/BuildingsAndPlaces/v1.1/BuildingsAndPlaces.owl#hasAddress> ?siteaddress .
  }
}

Now we’re getting somewhere….

Written by Tony Hirst

March 25, 2014 at 3:25 pm

Posted in Anything you want

Recreational Data

Part of my weekend ritual is to buy the weekend papers and have a go at the recreational maths problems that are Sudoku and Killer. I also look for news stories with a data angle that might prompt a bit of recreational data activity…

In a paper that may or may not have been presented at the First European Congress of Mathematics in Paris, July, 1992, Prof. David Singmaster reflected on “The Unreasonable Utility of Recreational Mathematics”.

[Image: unreasonableUtility]

To begin with, it is worth considering what is meant by recreational mathematics.

First, recreational mathematics is mathematics that is fun and popular – that is, the problems should be understandable to the interested layman, though the solutions may be harder. (However, if the solution is too hard, this may shift the topic from recreational toward the serious – e.g. Fermat’s Last Theorem, the Four Colour Theorem or the Mandelbrot Set.)

Secondly, recreational mathematics is mathematics that is fun and used either as a diversion from serious mathematics or as a way of making serious mathematics understandable or palatable. These are the pedagogic uses of recreational mathematics. They are already present in the oldest known mathematics and continue to the present day.

These two aspects of recreational mathematics – the popular and the pedagogic – overlap considerably and there is no clear boundary between them and “serious” mathematics.

How is recreational mathematics useful?

Firstly, recreational problems are often the basis of serious mathematics. The most obvious fields are probability and graph theory where popular problems have been a major (or the dominant) stimulus to the creation and evolution of the subject. …

Secondly, recreational mathematics has frequently turned up ideas of genuine but non-obvious utility. …

Anyone who has tried to do anything with “real world” data knows how much of a puzzle it can represent: from finding the data, to getting hold of it, to getting it into a state and a shape where you can actually work with it, to analysing it, charting it, looking for pattern and structure within it, having a conversation with it, getting it to tell you one of the many stories it may represent, there are tricks to be learned and problems to be solved. And they’re fun.

An obvious definition [of recreational mathematics] is that it is mathematics that is fun, but almost any mathematician will say that he enjoys his work, even if he is studying eigenvalues of elliptic differential operators, so this definition would encompass almost all mathematics and hence is too general. There are two, somewhat overlapping, definitions that cover most of what is meant by recreational mathematics.

…the two definitions described above.

So how might we define “recreational data”? For me, recreational data activities are, in whole or in part, data investigations, involving one or more steps of the data lifecycle (discovery, acquisition, cleaning, analysis, visualisation, storytelling). They are the activities I engage in when I look for, or behind, the numbers that appear in a news story. They’re the stories I read on FullFact, or listen to on the OU/BBC co-pro More or Less; they’re at the heart of the beautiful little book that is The Tiger That Isn’t; recreational data is what I do in the “Diary of a Data Sleuth” posts on OpenLearn.

Recreational data is about the joy of trying to find stories in data.

Recreational data is, or can be, the data journalism you do for yourself or the sense you make of the stats in the sports pages.

Recreational data is a safe place to practice – I tinker with Twitter and formulate charts around Formula One. But remember this: “recreational problems are often the basis of serious [practice]”. The “work” I did around Visualising Twitter User Timeline Activity in R? I can (and do) reuse that code as the basis of other timeline analyses. The puzzle of plotting connected concepts on Wikipedia I described in Visualising Related Entries in Wikipedia Using Gephi? It’s a pattern I can keep on playing with.

If you think you might like to do some doodling of your own with some data, why not check out the School of Data? Or watch out on OpenLearn for some follow-up stories from the OU/BBC co-pro of Hans Rosling’s award-winning Don’t Panic…

Written by Tony Hirst

March 21, 2014 at 9:56 am

Using AdServers Across Networked Organisations

Via my feeds, I noticed this the other day: Google is pushing a new content-recommendation system for publishers, in which VentureBeat quoted a Google originated email sent to them: “Our engineers are working on a content recommendation beta that will present users relevant internal articles on your site after they read a page. This is a great way to drive loyal users and more pageviews.” Hmm.. what’s taken them so long?

(FWIW, using contextual ad servers to serve content has been one of those ideas that I keep coming round to but never really pursue: for example, Contextual Content Server, Courtesy of Google?, Google Banner Ads as On-Campus Signage? or Contextual Content Delivery on Higher Ed Websites Using Ad Servers.)

Reflecting on this, I started thinking again about the uses to which we might be able to put adservers. It struck me that one way is actually to use them to serve… ads.

One of the things I’ve noticed about the Open Knowledge Foundation (disclaimer: I work one day a week for the Open Knowledge Foundation’s School of Data) is that it throws up a lot of websites. Digging out a couple of tricks from Pondering Mapping the Pearson Network, I spot at least these domains, for example:

[Image: OKF sites]

An emergent social positioning map around the @SchoolOfData twitter account also identifies a wealth of OKF related projects and local chapters (bottom region of the map), many of which will also run their own web presence:

[Image: schoolofdata]

One of the issues associated with such a widely dispersed and loosely coupled networked organisation relates to the running of campaigns, and promoting strong single campaign issue messages out across the various websites. So I wonder: would an internal adserver work???

For example, using something like Revive Adserver (formerly OpenX Source)? As Revive Adserver for publishers describes:

… you define web sites, and for each website you then define one or more zones. A zone is a representation of a place on the web pages where the adverts must appear. For every zone, Revive Adserver generates a small piece of HTML code, which must be placed in the site at the exact spot where the ads must be displayed. …

You must also create advertisers, campaigns and advertisements …

The final step is to link the right campaigns to the right zones. This determines which ads will be displayed where. You can combine this with various forms of ‘targeting’, which means you can adjust the advertising to specific situations.

So…each website in the OKF sprawl could include a local adserver zone and display OKF ads. Such ads might be campaign related, or announcements of upcoming dates and events likely to be relevant across the OKF network (for example, international open data days, or open data census days).

Other ad blocks/zones could be defined to serve content from particular ad channels or campaigns.

Ad/content could in part be editorially controlled from the centre – for example, a campaign manager might be responsible for choosing which ads are in the pool for a particular campaign or set of campaigns. Site owners might allocate different zones and sign them up to different ad channels that only serve ad/content on a particular theme?

Members of local groups and project teams could submit ads to the adserver relating to their projects or group activities, with associated campaign codes and topics so that content can be suitably targeted by the platform. The adserver thus also becomes a(nother) possible communications channel across the network.

Written by Tony Hirst

February 13, 2014 at 5:45 pm

Posted in Anything you want

Socially Mapping the Isle of Wight – @onthewight Twitter ESP

Having dusted off and reversioned my Twitter emergent social positioning (ESP) code, and in advance of starting to think about what sorts of analyses I might properly start running, here’s a look back at what I was doing before in terms of charting where particular Twitter accounts sat amongst the other accounts commonly followed by the target account’s followers.

No longer having a whitelisted Twitter API key means the sample sizes I’m running are smaller than they used to be, so maybe that’s a good thing because it means I’ll have to start working properly on the methodology…

Anyway, here’s a quick snapshot of where I think hyperlocal news bloggers @onthewight might be situated on Twitter…

[Image: onthewight twitter esp]

The view aims to map out accounts that are followed by 10 or more people from a sample of about 200 or so followers of @onthewight. The network is laid out according to a force-directed layout algorithm with a dash of aesthetic tweaking; nodes are coloured based on community grouping as identified using the Gephi modularity statistic, which has its issues, but it’s a start. The nodes are sized in the first case according to PageRank.
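(For anyone wanting to have a go themselves, the core of the recipe is easy enough to sketch. The fragment below is illustrative only: it assumes you’ve already grabbed, by whatever means, the friends lists for a sample of the target account’s followers; the variable names and toy data are made up, and the styling – force-directed layout, community detection – then happens in Gephi or with networkx equivalents.)

from collections import Counter
import networkx as nx

# Assumed already collected: {follower_screen_name: set of accounts they follow}
friends_of_followers = {
    "follower1": {"onthewight", "bbciow", "stephenfry"},
    "follower2": {"onthewight", "bbciow"},
    # ... ~200 sampled followers in practice
}

MIN_COMMON = 10  # only keep accounts followed by 10 or more members of the sample

# Count how many sampled followers follow each account, and keep the common ones.
counts = Counter(acct for friends in friends_of_followers.values() for acct in friends)
common = {acct for acct, n in counts.items() if n >= MIN_COMMON}

# Build the follower -> commonly-followed-account graph.
g = nx.Graph()
for follower, friends in friends_of_followers.items():
    for acct in friends & common:
        g.add_edge(follower, acct)

# Size nodes by PageRank, as in the maps above.
pagerank = nx.pagerank(g)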

The quick take home from this original sketchmap is that there are a bunch of key information providers in the middle group, local accounts on the left, and slebs on the right.

If we look more closely at the key information providers, they seem to make sense…

[Image: key info providers IW]

These folk are likely to be either competitors of @onthewight, or prospects who might be worth approaching for advertising on the basis that @onthewight’s followers also follow the target account. (Of course, you could argue that because they share followers, there’s no point also using @onthewight as a channel. Except that @onthewight also has a popular blog presence, which would be where any ads were placed; the @onthewight Twitter feed is generally news announcements and live reporting.) A better case could probably be made by looking at the follower profiles of the prospects, along with the ESP maps for the prospects, to see how well the audiences match, what additional reach could be offered, and so on.

A broad brush view over the island community is a bit more cluttered:

[Image: wightlife1]

If we rerun PageRank to resize the nodes (note this will no longer take into account contributions from the other communities) and tweak the layout a little, again using a force-directed algorithm, we get a bit less of a mess, though the map is still hard to read. Arts to the top, perhaps, Cowes to the right?

[Image: wightlife2]

Again, with a bit more data, or perhaps a bit more of a think about what sort of map would be useful (and hence, what sort of data to collect), this sort of map might become useful for B2B marketing purposes on the Island. (I’m not really interested in, erm, the plebs such as myself… i.e. people rather than bizs or slebs; though a pleb interest/demographic/reach analysis would probably be the one that would be most useful to take to prospects?).

If we look at the celebrity common follows, again resized and re-layed out, we see what I guess is a typical spread (it’s some time since I looked at these – not sure what the base line is, though @stephenfry still seems to feature high up in the background radiation count).

[Image: IW celebrity outlook]

For bigger companies with their own marketing $, I guess this sort of map is the sort of place to look for potential celebrity endorsements to reinforce a message (the folk in this sample who follow these accounts already follow @onthewight, so are already aware of it), as well as potentially to widen reach. But I guess the endorsement as reinforcement is more valuable as a legitimising thing?

Hmm…

Just got to work out what to do next, now, and how to start tightening this up and making it useful rather than just of passing interest…

PS A related chart that could be plotted using Facebook data would be to grab down all the likes of the friends of a person or company on Facebook, though I’m not sure how that would work if their account is a page as opposed to a “person”? I’m not so hot on Facebook API/permissions etc, or what sort of information page owners can get about their audience? Also, I’m not sure about the extent to which I can get likes from folk who aren’t my friends or who haven’t granted me app permissions? I used to be able to grab lists of people from groups and trawl through their likes, but I’m not sure default Facebook permissions make that as easy pickings now compared to a year or two ago? (The advantage of Twitter is that the friend/follow data is open on most accounts…)

Written by Tony Hirst

February 7, 2014 at 6:32 pm

Posted in Anything you want

Polling the News…

Towards the end of last week I attended a two day symposium on Statistics in Journalism Practice and Education at the University of Sheffield. The programme was mixed, with several reviews of what data journalism is or could be, and the occasional consideration of what stats might go into a statistics curriculum for students, but it got me thinking again about the way that content gets created and shunted around the news world.

Take polls, for example. At one point a comment got me idly wondering about the percentage of news copy that is derived from polls or surveys, and how it might be possible to automate the counting of such things. (My default position in this case is usually to wonder what might be possible with the Guardian open platform content API.) But I also started to wonder about how we could map the fan-out from independent or commissioned polls or surveys as they get reported in the news media, and then maybe start to find their way into other reports and documents by virtue of having been reported in the news.
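(As a very rough sketch of the sort of counting I mean, here’s how you might ask the Guardian Content API what proportion of articles in a given month mention a poll or survey. You need a free developer key; the dates and search terms are just examples, and a keyword match is at best a crude proxy for copy actually derived from a poll.)

import requests

API_KEY = "YOUR-API-KEY"  # free developer key from the Guardian open platform
BASE = "http://content.guardianapis.com/search"

def article_count(extra_params):
    # Ask the Content API how many articles match; we only need the total, not the results.
    params = {"api-key": API_KEY, "page-size": 1,
              "from-date": "2014-01-01", "to-date": "2014-01-31"}
    params.update(extra_params)
    return requests.get(BASE, params=params).json()["response"]["total"]

total = article_count({})
mentions = article_count({"q": "poll OR survey"})

print("%d of %d articles (%.1f%%) mention a poll or survey" % (
    mentions, total, 100.0 * mentions / total))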

This sort of thing is a corollary to tracking the way in which news stories might make their way from the newswires and into the papers via a bit of cut-and-pasting, as Nick Davies wrote so damningly about several years ago now in Flat Earth News, his indictment of churnalism and all that goes with it; it also reminds me of this old, old piece of Yahoo Pipes pipework where I tried to support the discovery of Media Release Related News Stories by putting university press release feeds into the same timeline view as news stories about that university.

[Image: aberdeenPressRelease]

I don’t remember whether I also built a custom search engine at the time for searching over press releases and news sites for mentions of universities, but that was what came immediately to mind this time round.

So for starters, here’s a quick Google Custom Search Engine that searches over a variety of polling organisation and news media websites looking for polls and surveys – Churnalism Times (Polls & Surveys Edition).

Here’s part of the setup, showing the page URL patterns to be searched over.

[Image: List the sites you want to search over]

I added additional refinements to the tab that searches over the news organisations so as to only pull out pages where “poll” or “survey” is mentioned. Note that if these words are indexed in the chrome around the news story (e.g. in a banner or sidebar), then we can get a false positive hit on the page (i.e. pull back a page where an irrelevant story is mentioned because a poll is linked to in the sidebar).

[Image: Add refinements]

From way back when, when I took more of an interest in search than I do now, I thought Google was trying to find ways of distinguishing content from furniture, but I’m not so sure any more…

Anyway, here’s an example of a search into polls and surveys published by some of the big pollsters:

[Image: tuitionFeesPoll]

And an example of results from the news orgs:

[Image: Tuition fees media]

For what it’s worth I also put together a custom search engine for searching over press releases – Churnalism Times (PR wires edition):

[Image: PR search]

The best way of using this is to just paste in a quote, or part of a quote, from a news story, in double quotes, to see which PR notice it came from…

[Image: PR search for quote]

To make life easier, an old bookmarklet generator I produced way back when, on an Arcadia fellowship at the Cambridge University Library, can be used to knock up a simple bookmarklet that will let you highlight a chunk of text and then search for it – get-selection bookmarklet generator.

Give it a sensible title; then this is the URL chunk you need to add:

https://www.google.com/cse/publicurl?cx=016419300868826941330:wvfrmcn2oxc&q=

[Image: bookmarklet generator]

Sigh.. I used to have so much fun…

PS it actually makes more sense to enclose the selected quote in quotes. Here’s a tweaked version of the bookmarklet code I grabbed from my installation of it in Chrome:

javascript:(function()%7Bvar t%3Dwindow.getSelection%3Fwindow.getSelection().toString()%3Adocument.selection.createRange().text%3Bwindow.location%3D%27https%3A%2F%2Fwww.google.com%2Fcse%2Fpublicurl%3Fcx%3D016419300868826941330%3Awvfrmcn2oxc%26q%3D"%27%2Bt%2B%27"%27%3B%7D)()
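(If bookmarklets aren’t your thing, the same lookup is easy to script; here’s a purely illustrative Python equivalent that builds the quoted-search URL for the custom search engine above and opens it in a browser.)

import webbrowser
from urllib.parse import quote

CSE_URL = "https://www.google.com/cse/publicurl?cx=016419300868826941330:wvfrmcn2oxc&q="

def churnalism_search(phrase):
    # Enclose the phrase in double quotes, as suggested above, then URL-encode it.
    webbrowser.open(CSE_URL + quote('"%s"' % phrase))

churnalism_search("part of a quote from a news story")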

PPS I’ve started to add additional search domains to the PR search engine to include political speeches.

Written by Tony Hirst

February 6, 2014 at 7:26 pm

Baudelaire, Intellectual Talisman Along the Way to Impressionism

During tumultuous times there is often an individual, an intellectual talisman if you like, who watches events unfold and extracts the essence of what is happening into a text, which then provides a handbook for the oppressed. For the frustrated Paris-based artists battling with the Academy during the second half of the nineteenth century, Baudelaire was that individual, his essay, The Painter of Modern Life, the text.

… He claimed that ‘for the sketch of manners, the depiction of bourgeois life … [sic] there is a rapidity of movement which calls for an equal speed of execution from the artist’. …

… Baudelaire passionately believed that it was incumbent upon living artists to document their time, recognizing the unique position that a talented painter or sculptor finds him or herself in: ‘Few men are gifted with the capacity of seeing; there are fewer still who possess the power of expression …’ … He challenged artists to find in modern life ‘the eternal from the transitory’. That, he thought, was the essential purpose of art – to capture the universal in the everyday, which was particular to their here and now: the present.

And the way to do that was by immersing oneself in the day-to-day of metropolitan living: watching, thinking, feeling and finally recording.

Will Gompertz, What Are You Looking At?, pp.28-29

Written by Tony Hirst

February 6, 2014 at 10:31 am

Posted in Anything you want

Is the UK Government Selling You Off?

Not content with selling off public services, is the government doing all it can to monetise us by means other than taxation, by looking for ways of selling off aggregated data harvested from our interactions as users of public services?

For example, “Better information means better care” (door drop/junk mail flyer) goes the slogan that masks the notice that informs you of the right to opt out [how to opt out] of a system in which your care data may be sold on to commercial third parties, in a suitably anonymised form of course… (as per this, perhaps?).

The intention is presumably laudable – better health research? – but when you sell to one person you tend to sell to another… So when I saw this story – Data Broker Was Selling Lists Of Rape Victims, Alcoholics, and ‘Erectile Dysfunction Sufferers’ – I wondered whether care.data could end up going the same way?

Despite all the stories about the care.data release, I have no idea which bit of legislation covers it (thanks, reporters…not); so even if I could make sense of the legalese, I don’t actually know where to read what the legislation says the HSCIC (presumably) can do in relation to sale of care data, how much it can charge, any limits on what the data can be used for etc.

I did think there might be a clause or two in the Health and Social Care Act 2012, but if there is it didn’t jump out at me. (What am I supposed to do next? Ask a volunteer librarian? Ask my MP to help me find out which bit of law applies, and then how to interpret it, as well as game it a little to see how far the letter if not the spirit of the law could be pushed in commercially exploiting the data? Could the data make it as far as Experian, or Wonga, for example, and if so, how might it in principle be used there? Or how about in ad exchanges?)

A little more digging around the HSCIC Data flows transition model turned up some block diagrams showing how data used for commissioning could flow around, but I couldn’t find anything similar as far as sale of care.data to arbitrary third parties goes.

[Image: NHS commissioning data flows]

(That’s another reason to check the legislation – there may be a list of what sorts of company are allowed to access care.data for now, but the legislation may also use Henry VIII clauses or other schedule devices to define by what ministerial whim additional recipients or classes of recipient can be added to the list…)

What else? Over on the Open Knowledge Foundation blog (disclaimer: I work for the Open Knowledge Foundation’s School of Data for 1 day a week), I see a guest post from Scraperwiki’s Francis Irving/@frabcus about the UK Government Performance Platform (The best data opens itself on UK Gov’s Performance Platform). The platform reports the number of applications for tax discs over time, for example, or the claims for carer’s allowance. But these headline reports make me think: there is presumably much finer grained data below the level of these reports, presumably tied (for digital channel uptake of these services at least) to Government Gateway IDs. And to what extent is this aggregated personal data sellable? Is the release of this data any different in kind to the release of the other national statistics or personal-information-containing registers (such as the electoral roll) that the government publish either freely or commercially?

Time was when putting together a jigsaw of the bits and pieces of information you could find out about a person meant doing a big jigsaw with little pieces. Are we heading towards a smaller jigsaw with much bigger pieces – Google, Facebook, your mobile operator, your broadband provider, your supermarket, your government, your health service?

PS related, in the selling off stakes? Sale of mortgage style student loan book completed. Or this ill thought out (by me) post – Confused by Government Spending, Indirectly… – around government encouraging home owners to take out shared ownership deals with UK gov so it can sell that loan book off at a later date?

Written by Tony Hirst

January 21, 2014 at 12:26 pm

Revisiting Emergent Social Positioning

Prompted by an email request, I’ve revisited the code I used to generate emergent social positioning maps in Twitter as an iPython notebook that reuses chunks of code from, as well as the virtual machine used to support, Matthew A. Russell’s Mining the Social Web (2nd edition) [code].

You can see the notebook here: emergent social positioning iPython notebook [src].

As a reminder, the social positioning maps show folk commonly followed by the followers of a particular twitter user.

Written by Tony Hirst

January 14, 2014 at 11:56 am

Posted in Anything you want
