Programming Pipes With Delicious and Sharing data.gov.uk SPARQL Queries As A Result

In the post Sharing Linked Data Queries With Mortals… I described a rather clunky pattern for sharing SPARQL queries onto a data.gov.uk SPARQL endpoint using delicious, along with a Yahoo pipe to generate a SPARQLProxy URI that identified a CSV formatted output from the query that could be consumed in something like Google spreadsheets (e.g. Viewing SPARQLed data.gov.uk Data in a Google Spreadsheet).

In this post, I’ll refine that pattern a little more and show how to use delicious to bookmark a “processed” form of the output of the query, along with all the ingredients needed to generate that output. In a later post (hopefully before Christmas) I’ll try to show how the pattern can be used to share queries into other datastores, such as Google visualization API queries into a Google spreadsheet.

[The post describes two independent operations – firstly, how to generate a sparqlproxy uri from a set of delicious tags; secondly, how to generate a map from sparqlproxy output.]

In what follows, I’m using delicious as a database, and delicious tags as machine tags that can be used as arguments within a Yahoo pipe. The intention is not to suggest that this is even a good way of sharing SPARQL queries and demo outputs, but it does show an improvised way of sharing them. It also provides just enough raw material to allow UI designers to think about how we might capture, share and display sample use cases that go from SPARQL query to human meaningful output.

So here’s what a “finished” delicious bookmark looks like:

The three “machine tags”:
– endpoint:http://services.data.gov.uk/transport/sparql
– output:csv
– query:http://codepad.org/hPo2XIzx/raw.txt
are used as instructions in a Yahoo pipe that generates a query using SPARQLProxy:

A feed of bookmarked queries is pulled in from delicious, and checked to see that all the required programming arguments are there. (The sparqlproxy_demo tag requirement is just a convenience.)

If all the arguments are available, their values are captured and passed to a routine to construct the SPARQLProxy URIs:
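That routine boils down to reading the machine tags and plugging their values into a SPARQLProxy call. Here is a minimal Python sketch of the idea; the proxy parameter names (query-uri, service-uri, output) are assumptions about the RPI sparqlproxy.php interface, so check them against the service before relying on this:

```python
from urllib.parse import urlencode

PROXY = "http://data-gov.tw.rpi.edu/ws/sparqlproxy.php"

def sparqlproxy_uri(tags):
    """Build a SPARQLProxy call from delicious machine tags.

    `tags` is a list like ["endpoint:http://...", "output:csv", "query:http://..."].
    The proxy parameter names below are assumptions, not documented fact.
    """
    args = dict(tag.split(":", 1) for tag in tags if ":" in tag)
    required = {"endpoint", "output", "query"}
    if not required <= set(args):
        return None  # the bookmark doesn't carry all the programming arguments
    return PROXY + "?" + urlencode({
        "query-uri": args["query"],
        "service-uri": args["endpoint"],
        "output": args["output"],
    })

uri = sparqlproxy_uri([
    "endpoint:http://services.data.gov.uk/transport/sparql",
    "output:csv",
    "query:http://codepad.org/hPo2XIzx/raw.txt",
])
```

The check-then-build split mirrors the pipe: a bookmark missing any of the three machine tags simply drops out of the feed.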

The query bookmarked by the tags is a query onto the transport database that pulls out the locations of traffic monitoring points on the M5. The bookmarked URI is a demonstration of how to use the output of that query. In the current example, the bookmarked demo URI looks like this:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=
&q=http:%2F%2Fpipes.yahoo.com%2Fpipes%2Fpipe.run%3F_id%3D1c77faa95919df9dbea678a6bf4881c6%26_render%3Dkml
&sll=37.0625,-95.677068&sspn=44.25371,70.751953&ie=UTF8&z=8

That is, it is a bookmark to a Google map that is displaying a KML feed pulled in from a Yahoo pipe.
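Constructing that sort of bookmark is just a matter of percent-encoding the KML feed URI into the map’s q= parameter. A quick Python sketch (the extra sll/sspn viewport arguments in the example above are left out here):

```python
from urllib.parse import quote

kml_feed = ("http://pipes.yahoo.com/pipes/pipe.run"
            "?_id=1c77faa95919df9dbea678a6bf4881c6&_render=kml")

# Google Maps will fetch and plot a KML feed passed in the q= parameter;
# the feed URI itself has to be percent-encoded so its own ?, & and =
# don't get read as Google Maps arguments.
map_uri = "http://maps.google.com/maps?f=q&q=" + quote(kml_feed, safe="")
```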

This is the Google Map:

This is the Yahoo pipe the KML is pulled from:

And this is a peek inside that pipe:

The URI for the fetched CSV file is the one that was generated from the bookmarked query tags by the first pipe shown above.

So to recap – this bookmark:

links through to a Google map showing traffic monitoring locations on the M5. The map is an ‘end-user’ demo that shows how Government data may be displayed in a meaningful way. In addition, the tags in the bookmark carry enough information for the user to construct a SPARQL query that will generate the data that is displayed by the map. A utility Yahoo pipe (ID 22059ac6f967d7117b1262586c92b371) can take a delicious RSS feed containing the bookmark and generate a SPARQLProxy URI that calls the tagged endpoint with the tagged query and generates the tag-specified output. There is then a break in the chain: a utility pipe capable of handling SPARQLProxy-generated CSV from data.gov.uk transport data generates a KML version of the data, which is then passed to a Google map.

So what? So I don’t know what… but it has got me thinking that what might be useful is a quick way of sharing:
– info that is sufficient to generate and run a particular SPARQL query.
– along with a link to an ‘end user’ demo showing how that data might be used or displayed
– all in one handy package…

Which is what the above does… sort of.. though it’s still way too complicated for mortals…

PS It occurs to me that it might be possible to define a pipeline using delicious tags too? Maybe something like:
pipeline:22059ac6f967d7117b1262586c92b371?name=psychemedia&tag=transportDemo//
1c77faa95919df9dbea678a6bf4881c6?_render=kml//
gmap

where the first step in the pipeline (22059ac6f967d7117b1262586c92b371?name=psychemedia&tag=transportDemo) says “run the delicious RSS feed from psychemedia/transportDemo (assuming I had recursively bookmarked that record with transportDemo) through the pipe with ID 22059ac6f967d7117b1262586c92b371 to generate the SPARQLProxy URI, then pass that to the pipe with ID 1c77faa95919df9dbea678a6bf4881c6, which should generate a KML output, which in turn should be sent to a Google map”?!
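Parsing such a pipeline tag into ordered steps is straightforward. A hedged Python sketch of the improvised convention (the '//' separator and the whole scheme are just the suggestion above, not an existing format):

```python
def parse_pipeline(tag):
    """Split a delicious 'pipeline:' machine tag into ordered steps.

    Each step is a Yahoo pipe ID (or a renderer keyword like 'gmap')
    with optional query-string style arguments after a '?'.
    """
    assert tag.startswith("pipeline:")
    steps = []
    for step in tag[len("pipeline:"):].split("//"):
        step = step.strip()
        if not step:
            continue
        pipe_id, _, argstring = step.partition("?")
        args = dict(kv.split("=", 1) for kv in argstring.split("&") if "=" in kv)
        steps.append({"id": pipe_id, "args": args})
    return steps

steps = parse_pipeline(
    "pipeline:22059ac6f967d7117b1262586c92b371?name=psychemedia&tag=transportDemo//"
    "1c77faa95919df9dbea678a6bf4881c6?_render=kml//"
    "gmap"
)
```

A dispatcher could then walk the step list, feeding each pipe’s output URI into the next step’s input.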

Viewing SPARQLed data.gov.uk Data in a Google Spreadsheet

This is a stub post as much as anything to help me keep tabs on a technique I’ve not had any time to properly play with, let alone document: consuming Linked Data in a Google spreadsheet.

First up – why might this be useful? In short, I think that many people who might want to make use of data.gov.uk data are probably comfortable with spreadsheets but not with code, so giving them access to SPARQL and RDF is not necessarily useful.

So whilst the recipe shown here is a hacky one, at least it opens up the playground a little to explore what the issues might be – and what things might be facilitated by – providing fluid, live data routes from RDF triple stores into spreadsheets.

So here’s what I’m aiming for – some data from the education datastore in a spreadsheet:

And how did it get there? Via a CSV import using the =importData formula.

And where did the CSV come from? The sparqlproxy webservice:

As well as giving CSV output, the service can also generate HTML and a variety of JSON formats, including the MIT Simile and Google Viz API formats (which means it’s easy to just plug the data into a wide variety of visualisation tools).

To get the data into a Google spreadsheet, simply copy the CSV URI into an =importData(“CSV_URI_HERE”) formula in a spreadsheet cell.
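Under the hood, =importData() is doing nothing more exotic than fetching the URI and parsing it as CSV. The same step in Python, using a made-up two-row fragment standing in for the kind of CSV SPARQLProxy returns (the column names here are illustrative, not real query output):

```python
import csv
import io

# Hypothetical sample of SPARQLProxy-style CSV output -- one header row
# of variable names, then one row per query solution.
sample = """school,name,opendate
http://education.data.gov.uk/id/school/1,Example Primary,2008-09-01
http://education.data.gov.uk/id/school/2,Example High,2009-01-05
"""

# =importData() essentially does this: fetch, then split into rows and cells.
rows = list(csv.DictReader(io.StringIO(sample)))
```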

The sparqlproxy service can also pull in and transform queries that have been posted on the web:

So for example, in the above case the query at http://data-gov.tw.rpi.edu/sparql/stat_ten_triples_default.sparql looks like:

What this means is that someone else can write complex queries and mortals can then access the data and display it however they want. (What I’d really like to see is a social site that supports the sharing of endpoint/query pairs for particular queries; I could probably even hack something to do this using delicious? ;-)

Once the data is in the spreadsheet, it can be played with in the normal way of course. So for example, I can query the spreadsheet using my old prototype Guardian datastore explorer:

In the above example, the route is then:

1) a sparql query onto http://services.data.gov.uk/education/sparql is run through
2) the http://data-gov.tw.rpi.edu/ws/sparqlproxy.php sparqlproxy service to produce a CSV output that is
3) pulled into a Google spreadsheet using an =importData() formula, and in turn
4) queried using my Google Datastore explorer using the Google visualisation API query language and then
5) rendered in a Google Visualisation table widget.

Lurvely… :-)

Mapping Recent School Openings and Closures

Just after I put together the pipework for Getting Started with data.gov.uk, Triplr SPARYQL and Yahoo Pipes, I also cut and pasted some of the code from a previous map based mashup to demo how to make a SPARQL call via a pipe that calls on the UK Gov education Linked Data datastore from within a web page, and then display the geocoded results on a map.

Here’s the demo – School openings and closures in the UK, 1/1/08-1/10/09

If you View Source, you’ll see the code boils down to:

//schools closed between 1/1/08 and 1/10/09
var q="SELECT ?school ?name ?opendate ?closedate ?easting ?northing WHERE {?school a sch-ont:School;  sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Closed ; sch-ont:closeDate ?closedate ; sch-ont:openDate ?opendate . FILTER (?closedate > '2008-01-01'^^xsd:date && ?closedate < '2009-10-01'^^xsd:date)}";

// ur is the base URI of the pipe's output (defined earlier in the page source);
// getPipeGeoData fetches the result and hands it to the named callback.
var u=ur+encodeURIComponent(q);
getPipeGeoData(u, 'parseJSON_purple');

In all I make three calls to a pipe that calls on the data.gov.uk education datastore, one for schools opened between 1/1/08 and 1/10/09:
SELECT ?school ?name ?opendate ?easting ?northing WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:openDate ?opendate . FILTER (?opendate > '2008-01-01'^^xsd:date && ?opendate < '2009-10-01'^^xsd:date)}

one for schools closed between 1/1/08 and 1/10/09:
SELECT ?school ?name ?opendate ?closedate ?easting ?northing WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Closed ; sch-ont:closeDate ?closedate ; sch-ont:openDate ?opendate . FILTER (?closedate > '2008-01-01'^^xsd:date && ?closedate < '2009-10-01'^^xsd:date)}

and one for schools proposed to close:
SELECT ?school ?name ?easting ?northing ?opendate WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing ; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Open__but_proposed_to_close; sch-ont:openDate ?opendate . }

(I cribbed how to write these queries from a Talis blog: SPARQLing data.gov.uk: Edubase Data;-)

The results of each call are displayed using the different coloured markers.

(The rest of the code is really horrible. Note to self: get round to learning JQuery.)

Getting Started with data.gov.uk, Triplr SPARYQL and Yahoo Pipes

RDF, SPARQL and the semantic web are too scary for mortals, right? So here’s a hopefully easier way in for those of us who are put off by the syntactic nightmare that defines the world of formal Linked Data: Yahoo Pipes :-)

A few weeks ago, I was fortunate enough to get one of the developer preview keys for data.gov.uk, the UK answer to Tim Berners-Lee’s call to open up public data.

The UK data.gov.uk solution is currently a hybrid – a growing set of separate Linked Data stores (hosted on the Talis Platform, I think?) covering areas such as education, finance and transport; and a set of links to CSV and Excel spreadsheets available for download on a wide variety of Government department websites. (Some of them have nice URLs, some don’t; if you think they should, how does this sound? Designing URI Sets for the UK Public Sector: Machine- and human-readable formats. Most of the data is public too – it’s just the meta-site – data.gov.uk – that I think is under wraps at the moment?)

I’ve played with online CSV and Excel spreadsheets before (e.g. in the context of the Guardian Datastore), but I’ve always found SPARQL endpoints and RDF a little bit, err, terrifying, so last week I felt it was time to bite the bullet and spend an hour or two trying to do something – anything – with some hardcore Linked Data. Or at least, try to just do something – anything – with some data out of a single data store.

So where to start? Regular readers will know that I try to use free online apps and client side Javascript code wherever possible (I don’t want to have to assume the availability of access to my own web server), so it made sense to look at Yahoo Pipes :-)

I’ve done the odd demo of how to use SPARQL in a Yahoo Pipe before (Last Week’s Football Reports from the Guardian Content Store API (with a little dash of SPARQL), which is not about the football, right?) but a tweet last week tipped me off to a potentially more abstracted way of writing SPARQL queries in a Pipes environment: Triplr’s SPARQL + YQL = SPARYQL.

Ooh…. the idea is that you can wrap a SPARQL query in a YQL query, which in turn suggests two other things…

Firstly, I now have a way I’m already familiar with of generating and debugging bookmarkable RESTful queries to SPARQL endpoints:

Secondly, the Yahoo Pipes YQL block provides a handy container for making the SPARYQL queries and pulling the results back into a pipes environment.

Here’s the pipework… (based on an original pipe by @hapdaniel)

SPARQL + YQL= SPARYQL pipe http://pipes.yahoo.com/ouseful/sparyql

So what can we do with it? Not being particularly fluent in SPARQL, I had a poke around for some examples I could cut, paste, hack and tinker with and found a few nice examples on the [n]^2 blog: SPARQLing data.gov.uk: Edubase Data

So here’s a quick demo – a pipe runs a query that looks for the 10 schools with the latest opening dates on data.gov.uk’s education datastore:

SELECT ?school ?name ?date ?easting ?northing WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name; sch-ont:openDate ?date ; sch-ont:easting ?easting; sch-ont:northing ?northing . } ORDER BY DESC(?date) LIMIT 10

In order to plot the schools on a map, it’s necessary to convert eastings and northings to latitude and longitude. A cry for help on twitter was quickly responded to by @kitwallace, who gave me a link to a service that did just the job… almost – for some reason, Pipes didn’t like the output, so I had to run the query through a YQL proxy:

(Note that Kit soon came up with a fix, so I could actually just call the service directly via a Data Source block using a call of the form http://www.cems.uwe.ac.uk/xmlwiki/geo/OS2latlong2.xq?easting=527085&northing=185400.)
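For reference, building that direct service call is just query-string assembly. A small Python helper, using the parameter names shown in the example URL above:

```python
from urllib.parse import urlencode

SERVICE = "http://www.cems.uwe.ac.uk/xmlwiki/geo/OS2latlong2.xq"

def os2latlong_uri(easting, northing):
    # Builds a call to the easting/northing conversion service using
    # the easting= and northing= parameters from the worked example.
    return SERVICE + "?" + urlencode({"easting": easting, "northing": northing})
```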

Here’s the result:

Note that the KML output from the pipe can be plotted directly in a Google map (simply paste the KML URL into Google Map search box and hit return.)

By writing two or three different queries, and pulling the data separately into a web page via the JSON feed, we can easily create a map that displays the schools that have opened and closed between 1/1/08 and 1/10/09:

If we take the CSV output of the pipe, we can also see how it’s possible to transport the content into a Google spreadsheet (once again thanks to @hapdaniel for pointing out that changing the output switch of a pipe’s RSS feed from rss to csv does the format conversion job nicely):

which gives:

(Note that the CSV import seems to require quite a flat data structure (it tries really hard with more hierarchical data – it just doesn’t quite manage to catch the data values), so some renaming within the pipe might be required to make sure that the child attributes of each feed item do not have any children of their own. Empty attributes also need pruning.)

PS I did try importing the XML output from a RESTful YQL query into a Google spreadsheet with an =importXML formula but it didn’t seem to work. Firstly, the RESTful URI was too long (easily solved by rewriting it as a shortened URI). Secondly, the Google spreadsheet didn’t seem to like the output XML :-(

So near, yet so far… but still, it poses the question: could we write containerised queries/topic specific APIs over data.gov.uk SPARQL endpoints that expose the results in a spreadsheet capable of importing XML?

Last Week’s Football Reports from the Guardian Content Store API (with a little dash of SPARQL)

A big :-) from me today – at last I think I’ve started to get my head round this mashup malarkey properly… forget the re-presentation stuff, the real power comes from using one information source to enrich another… but as map demos are the sine qua non of mashup demos, I’ll show you what I mean with a map demo…

So to start, here’s a simple query on the Guardian content store API for football match reports:

http://api.guardianapis.com/content/search?
filter=/football&filter=/global/matchreports&after=20090314&api_key=MYSECRETACTIVATEDKEY

It’s easy enough to construct the query URI using a relative date in the Yahoo pipe, so the query will always return the most recent match reports (in this case, match reports since “last saturday”):

It’s easy enough to use these results to generate an RSS feed of the most recent match reports:

Pulling the images in as Media RSS (eg media:group) elements means that things like the Google Ajax slide show control and the Pipes previewer can automatically generate a slideshow for you…

You can also get the straight feed of course:

A little bit of tinkering with the creation of the description element means we can bring the original byline and match score in to the description too:

Inspecting the API query results by eye, you might notice that a lot of the bylines have the form “John Doe at the Oojamaflip Stadium”:

Hmmm…

It’s easy enough to exploit this structural pattern to grab the stadium name using a regular expression or two:
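A hedged Python sketch of the sort of regular expression involved – it assumes the byline really does follow the “Reporter Name at the Stadium Name” convention, and will happily mis-fire on bylines of any other shape:

```python
import re

# Matches "... at [the] <stadium name>" at the end of a byline.
# The optional "the" and the end-anchored capture are assumptions
# based on the "John Doe at the Oojamaflip Stadium" pattern.
STADIUM_PATTERN = re.compile(r"\bat (?:the )?(.+)$")

def stadium_from_byline(byline):
    m = STADIUM_PATTERN.search(byline)
    return m.group(1).strip() if m else None
```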

I then did a little experiment running the names of the stadia, and the names of the stadia plus “football ground, UK”, through the Yahoo Location Extractor block to try to plot the stories on map locations corresponding to the football ground locations, but the results weren’t that good…

…so I tweeted:

And got a couple of responses…

The XQuery/DBpedia with SPARQL – Stadium locations link looked pretty interesting, so I tweaked the example query on that page to return a list of English football stadia and their locations:

PREFIX p: <http://dbpedia.org/property/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT * WHERE
{?ground skos:subject <http://dbpedia.org/resource/Category:Football_venues_in_England>.
?ground geo:long ?long.
?ground geo:lat ?lat.
?ground rdfs:label ?groundname.
FILTER (lang(?groundname) ='en').
}

and created a pipe to call DBpedia with that query (dbpedia example – English football stadium location lookup pipe):

Because I don’t know how to write SPARQL, I wasn’t sure how to tweak the query to just return the record for a given stadium name (feel free to comment telling me how ;-) – so instead I used a pipe Filter block to filter the results. (This combination of search and filter can be a very powerful one when you don’t know how to phrase a particular query, or when a query language doesn’t support a search limit you want.)
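The search-then-filter pattern is easy to mirror outside Pipes too: pull the whole result set, then match on the field you care about. A Python sketch, with two illustrative records standing in for the DBpedia result set:

```python
# Illustrative stand-ins for rows returned by the stadium query;
# the real result set would come back from the SPARQL endpoint.
grounds = [
    {"groundname": "Old Trafford", "lat": 53.4631, "long": -2.2913},
    {"groundname": "Anfield", "lat": 53.4308, "long": -2.9608},
]

def filter_ground(results, name):
    # Case-insensitive substring match, like a Pipes Filter block
    # set to "Permit items where groundname contains ...".
    return [r for r in results if name.lower() in r["groundname"].lower()]

matches = filter_ground(grounds, "Old Trafford")
```

The trade-off is obvious: filtering client-side fetches the whole list every time, where a FILTER clause in the query itself would let the endpoint do the work.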

It was now a simple matter to add this pipe in to geocode the locations of the appropriate stadium for each match report:

So let’s recap – we call the Guardian content API for match reports since “last saturday” and construct a nice RSS feed from it, with description text that includes the byline and match score, as well as the match report. Then we pull out the name of the stadium each match was played at (relying on the convention, which seems to work much of the time, that the byline records the stadium) and pass it through another pipe that asks DBpedia for a list of UK football stadium locations, and then filters out the one we want.

Tweak the location data to a form Yahoo Pipes likes (which means it will create a nice geoRSS or KML feed for us) and what do we get? Map based match reports:

As I’ve shown in this blog many times before, it’s easy enough to grab a KML feed from the More options pipe output and view the results elsewhere:

(Click on a marker on the google map and it will pop up the match report.)

So what do we learn from this? Hmmm – that I need to learn to speak SPARQL, maybe?!

PS @kitwallace has come up trumps with a tweak to the SPARQL query that will do the query by stadium name in one:
FILTER (lang(?groundname) ='en' && regex(?groundname,'Old Trafford')). Ta, muchly :-)