Getting Started with data.gov.uk, Triplr SPARYQL and Yahoo Pipes

RDF, SPARQL and the semantic web are too scary for mortals, right? So here’s a hopefully easier way in for those of us who are put off by the syntactic nightmare that defines the world of formal Linked Data: Yahoo Pipes :-)

A few weeks ago, I was fortunate enough to get one of the developer preview keys for data.gov.uk, the UK answer to Tim Berners-Lee’s call to open up public data.

The UK solution is currently a hybrid – a growing set of separate Linked Data stores (hosted on the Talis Platform, I think?) covering areas such as education, finance and transport; and a set of links to CSV and Excel spreadsheets available for download on a wide variety of Government department websites. (Some of them have nice URLs, some don’t; if you think they should, how does this sound? Designing URI Sets for the UK Public Sector: Machine- and human-readable formats. Most of the data is public too – it’s just the meta-site, data.gov.uk itself, that I think is under wraps at the moment?)

I’ve played with online CSV and Excel spreadsheets before (e.g. in the context of the Guardian Datastore), but I’ve always found SPARQL endpoints and RDF a little bit, err, terrifying, so last week I felt it was time to bite the bullet and spend an hour or two trying to do something – anything – with some hardcore Linked Data. Or at least, try to just do something – anything – with some data out of a single data store.

So where to start? Regular readers will know that I try to use free online apps and client side Javascript code wherever possible (I don’t want to have to assume the availability of access to my own web server), so it made sense to look to Yahoo Pipes :-)

I’ve done the odd demo of how to use SPARQL in a Yahoo Pipe before (Last Week’s Football Reports from the Guardian Content Store API (with a little dash of SPARQL), which is not about the football, right?) but a tweet last week tipped me off to a potentially more abstracted way of writing SPARQL queries in a Pipes environment: Triplr’s SPARQL + YQL = SPARYQL.

Ooh…. the idea is that you can wrap a SPARQL query in a YQL query, which in turn suggests two other things…

Firstly, I now have a way I’m already familiar with of generating and debugging bookmarkable RESTful queries to SPARQL endpoints:

Secondly, the Yahoo Pipes YQL block provides a handy container for making the SPARYQL queries and pulling the results back into a pipes environment.
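In code terms, the SPARYQL trick boils down to embedding a SPARQL query inside a YQL statement and URL-encoding the lot into a bookmarkable REST URI. Here’s a minimal Python sketch of that pattern – note that the `sparql.search` community table, its `query`/`service` keys and the education endpoint URI are my assumptions for illustration, not gospel:

```python
# Sketch: wrap a SPARQL query in a YQL statement, then URL-encode it all
# into a bookmarkable RESTful URI against the public YQL endpoint.
# The sparql.search table name, its query/service keys and the endpoint
# URI are assumptions here, following the SPARYQL pattern.
from urllib.parse import urlencode

YQL_ENDPOINT = "http://query.yahooapis.com/v1/public/yql"

def sparyql_uri(sparql, endpoint, fmt="json"):
    """Embed a SPARQL query in a YQL statement and build the REST URI."""
    yql = ('select * from sparql.search where query="%s" and service="%s"'
           % (sparql.replace('"', '\\"'), endpoint))
    return YQL_ENDPOINT + "?" + urlencode(
        {"q": yql, "format": fmt,
         "env": "store://datatables.org/alltableswithkeys"})

uri = sparyql_uri("SELECT * WHERE {?s ?p ?o} LIMIT 10",
                  "http://services.data.gov.uk/education/sparql")
```

Because the whole thing is just a GET URI, it can be pasted into a browser for debugging, bookmarked, or dropped into a Pipes YQL block.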

Here’s the pipework… (based on an original pipe by @hapdaniel)


So what can we do with it? Not being particularly fluent in SPARQL, I had a poke around for some examples I could cut, paste, hack and tinker with and found a few nice examples on the [n]^2 blog: SPARQLing Edubase Data

So here’s a quick demo – a pipe runs a query that looks for the 10 schools with the latest opening dates on data.gov.uk’s education datastore:

SELECT ?school ?name ?date ?easting ?northing WHERE {
?school a sch-ont:School;
sch-ont:establishmentName ?name;
sch-ont:openDate ?date;
sch-ont:easting ?easting;
sch-ont:northing ?northing.
} ORDER BY DESC(?date) LIMIT 10

In order to plot the schools on a map, it’s necessary to convert eastings and northings to latitude and longitude. A cry for help on Twitter was quickly responded to by @kitwallace, who gave me a link to a service that did just the job… almost – for some reason, Pipes didn’t like the output, so I had to run the query through a YQL proxy:

(Note that Kit soon came up with a fix, so I could actually just call the service directly via a Data Source block.)
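For what it’s worth, the grid conversion can also be done locally. Here’s a Python sketch of the standard Ordnance Survey inverse transverse Mercator formulae; note it returns OSGB36 latitude/longitude and skips the ~100m Helmert datum shift to WGS84, so treat the output as approximate for Google Maps purposes:

```python
# Sketch: OSGB36 easting/northing -> OSGB36 latitude/longitude (degrees),
# via the standard Ordnance Survey inverse transverse Mercator formulae.
# The ~100m Helmert shift to WGS84 is omitted, so results are approximate.
import math

def en_to_latlon(E, N):
    a, b = 6377563.396, 6356256.909           # Airy 1830 ellipsoid axes
    F0 = 0.9996012717                         # central meridian scale factor
    lat0, lon0 = math.radians(49), math.radians(-2)  # true origin
    N0, E0 = -100000.0, 400000.0              # northing/easting of true origin
    e2 = 1 - (b * b) / (a * a)                # eccentricity squared
    n = (a - b) / (a + b)

    # Iteratively find the latitude whose meridional arc M matches the northing
    lat, M = lat0, 0.0
    while abs(N - N0 - M) >= 0.00001:
        lat += (N - N0 - M) / (a * F0)
        dlat, slat = lat - lat0, lat + lat0
        M = b * F0 * ((1 + n + 1.25 * n**2 + 1.25 * n**3) * dlat
            - (3*n + 3*n**2 + (21/8) * n**3) * math.sin(dlat) * math.cos(slat)
            + (15/8) * (n**2 + n**3) * math.sin(2*dlat) * math.cos(2*slat)
            - (35/24) * n**3 * math.sin(3*dlat) * math.cos(3*slat))

    sin2 = math.sin(lat) ** 2
    nu = a * F0 / math.sqrt(1 - e2 * sin2)              # transverse radius
    rho = a * F0 * (1 - e2) * (1 - e2 * sin2) ** -1.5   # meridional radius
    eta2 = nu / rho - 1
    t, sec = math.tan(lat), 1 / math.cos(lat)
    dE = E - E0
    VII = t / (2 * rho * nu)
    VIII = t / (24 * rho * nu**3) * (5 + 3*t*t + eta2 - 9*t*t*eta2)
    IX = t / (720 * rho * nu**5) * (61 + 90*t*t + 45*t**4)
    X = sec / nu
    XI = sec / (6 * nu**3) * (nu / rho + 2*t*t)
    XII = sec / (120 * nu**5) * (5 + 28*t*t + 24*t**4)
    XIIA = sec / (5040 * nu**7) * (61 + 662*t*t + 1320*t**4 + 720*t**6)
    lat_out = lat - VII * dE**2 + VIII * dE**4 - IX * dE**6
    lon_out = lon0 + X * dE - XI * dE**3 + XII * dE**5 - XIIA * dE**7
    return math.degrees(lat_out), math.degrees(lon_out)

lat, lon = en_to_latlon(530000, 180000)   # roughly central London
```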

Here’s the result:

Note that the KML output from the pipe can be plotted directly in a Google map (simply paste the KML URL into the Google Maps search box and hit return).

By writing two or three different queries, and pulling the data separately into a web page via the JSON feed, we can easily create a map that displays the schools that have opened and closed between 1/1/08 and 1/10/09:

If we take the CSV output of the pipe, we can also see how it’s possible to transport the content into a Google spreadsheet (once again thanks to @hapdaniel for pointing out that changing the output switch of a pipe’s RSS feed from rss to csv does the format conversion job nicely):

which gives:

(Note that the CSV import seems to require quite a flat data structure (it tries really hard with the more hierarchical data – it just doesn’t quite manage to catch the data values), so some renaming within the pipe might be required to make sure that the child attributes of each feed item do not have any children of their own. Empty attributes also need pruning.)
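To give a flavour of what that flattening involves, here’s a Python sketch that collapses child attributes into dotted top-level names and prunes empty attributes; the item structure is made up for illustration:

```python
# Sketch: flatten a nested feed item into a single-level dict suitable
# for CSV export, pruning empty attributes along the way.
# The item structure below is hypothetical.
def flatten(item, prefix=""):
    flat = {}
    for key, value in item.items():
        name = prefix + "." + key if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))   # recurse into child attributes
        elif value not in (None, "", [], {}):   # prune empty attributes
            flat[name] = value
    return flat

row = flatten({"name": "Sample School",
               "location": {"lat": 51.5, "long": -0.13},
               "notes": ""})
# → {"name": "Sample School", "location.lat": 51.5, "location.long": -0.13}
```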

PS I did try importing the XML output from a RESTful YQL query into a Google spreadsheet with an =importXML formula, but it didn’t seem to work. Firstly, the RESTful URI was too long (easily solved by rewriting it as a shortened URI). Secondly, the Google spreadsheet didn’t seem to like the output XML :-(

So near, yet so far… but still, it poses the question: could we write containerised queries/topic specific APIs over SPARQL endpoints that expose the results in a spreadsheet capable of importing XML?

Mapping Recent School Openings and Closures

Just after I put together the pipework for Getting Started with data.gov.uk, Triplr SPARYQL and Yahoo Pipes, I also cut and pasted some of the code from a previous map based mashup to demo how to make a SPARQL call via a pipe that calls on the UK Gov education Linked Data datastore from within a web page, and then display the geocoded results on a map.

Here’s the demo – School openings and closures in the UK, 1/1/08-1/10/09

If you View Source, you’ll see the code boils down to:

//schools closed between 1/1/08 and 1/10/09
q="SELECT ?school ?name ?opendate ?closedate ?easting ?northing WHERE {?school a sch-ont:School;  sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Closed ; sch-ont:closeDate ?closedate ; sch-ont:openDate ?opendate . FILTER (?closedate > '2008-01-01'^^xsd:date && ?closedate < '2009-10-01'^^xsd:date)}"

getPipeGeoData(u, 'parseJSON_purple');

In all I make three calls to a pipe that calls on the education datastore, one for schools opened between 1/1/08 and 1/10/09:
SELECT ?school ?name ?opendate ?easting ?northing WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:openDate ?opendate . FILTER (?opendate > '2008-01-01'^^xsd:date && ?opendate < '2009-10-01'^^xsd:date)}

one for schools closed between 1/1/08 and 1/10/09:
SELECT ?school ?name ?opendate ?closedate ?easting ?northing WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Closed ; sch-ont:closeDate ?closedate ; sch-ont:openDate ?opendate . FILTER (?closedate > '2008-01-01'^^xsd:date && ?closedate < '2009-10-01'^^xsd:date)}

and one for schools proposed to close:
SELECT ?school ?name ?easting ?northing ?opendate WHERE {?school a sch-ont:School; sch-ont:establishmentName ?name;sch-ont:easting ?easting; sch-ont:northing ?northing ; sch-ont:establishmentStatus sch-ont:EstablishmentStatus_Open__but_proposed_to_close; sch-ont:openDate ?opendate . }

(I cribbed how to write these queries from a Talis blog: SPARQLing Edubase Data;-)

The results of each call are displayed using the different coloured markers.

(The rest of the code is really horrible. Note to self: get round to learning JQuery.)

Viewing SPARQLed Data in a Google Spreadsheet

This is a stub post as much as anything to help me keep tabs on a technique I’ve not had any time to properly play with, let alone document: consuming Linked Data in a Google spreadsheet.

First up – why might this be useful? In short, I think that many people who might want to make use of data are probably comfortable with spreadsheets but not with code, so giving them access to SPARQL and RDF is not necessarily useful.

So whilst the recipe shown here is a hacky one, at least it opens up the playground a little to explore what the issues might be – and what things might be facilitated by – providing fluid, live data routes from RDF triple stores into spreadsheets.

So here’s what I’m aiming for – some data from the education datastore in a spreadsheet:

And how did it get there? Via a CSV import using the =importData formula.

And where did the CSV come from? The sparqlproxy webservice:

As well as giving CSV output, the service can also generate HTML and a variety of JSON formats, including the MIT Simile and Google Viz API formats (which means it’s easy to plug the data into a wide variety of visualisation tools).

To get the data into a Google spreadsheet, simply copy the CSV URI into an =importData(“CSV_URI_HERE”) formula in a spreadsheet cell.
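For the record, here’s roughly what =importData() is doing on the spreadsheet side – fetching a CSV document and splitting it into a header row and data rows. In this sketch the CSV text is inlined rather than fetched from the SPARQLProxy URI:

```python
# Sketch: parse a CSV payload (as returned from a CSV_URI) into a header
# row plus data rows - roughly what =importData() does for us.
# The CSV content below is sample data, not real query output.
import csv
import io

csv_text = "school,name,date\nuri1,Sample School,2009-09-01\n"
reader = csv.reader(io.StringIO(csv_text))
header, *rows = list(reader)
```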

The sparqlproxy service can also pull in and transform queries that have been posted on the web:

So for example, in the above case the posted query looks like:

What this means is that someone else can write complex queries and mortals can then access the data and display it however they want. (What I’d really like to see is a social site that supports the sharing of endpoint/query pairs for particular queries (I could probably even hack something to do this using delicious?) ;-)

Once the data is in the spreadsheet, it can be played with in the normal way of course. So for example, I can query the spreadsheet using my old prototype Guardian datastore explorer:

In the above example, the route is then:

1) a SPARQL query onto the data.gov.uk education datastore is run through
2) the sparqlproxy service to produce a CSV output that is
3) pulled into a Google spreadsheet using an =importData() formula, and in turn
4) queried using my Google Datastore explorer using the Google visualisation API query language and then
5) rendered in a Google Visualisation table widget.

Lurvely… :-)

Sharing Linked Data Queries With Mortals…

So I know, I know, I b****y well know how important the whole RDF thing is for Linked Data, and I know we’re not there yet with respect to actual people using data pulled from data.gov.uk sources, and I’m starting to be persuaded that maybe data.gov.uk is only there to feed the growth of semantic web developer capacity, but ultimately, ultimately, it will probably be folk who can’t cope with anything other than a spreadsheet who are going to have to use this data…

…so the spirit in which this is offered is one of just trying to protect the interests of potential future end users while the geeky tech developer astronauts (astronauts being @iand’s term;-) do the groundwork’n’spadework and make design decisions whose full impact may not otherwise be realised for a little while yet…

So what am I offering…? A quick’n’dirty way of sharing bookmarks into the sparqlproxy Web Service that I posted about in Viewing SPARQLed Data in a Google Spreadsheet.

So how does it work? Like this…

The geeky tech SPARQL-speaking astronaut writes their SPARQL query and posts it into codepad:

They grab the link to the raw text and bookmark it in delicious; the SPARQL endpoint for the query is pasted into the description, and a brief description of the query into the title; the required output is identified using an output: machine tag (e.g. output:sparql, output:html, output:csv, output:xml, output:exhibit or output:gvds):

(An alternative might be to have the endpoint as the title and the description as the description, or a brief description as a title, a full description as a description, and an endpoint: “machine tag” for the endpoint, but this was just a proof of concept, right? ;-)

The following pipe constructs the SPARQLProxy query for each bookmark using the specified query, endpoint and output type (at the moment, the pipe also requires a sparqlproxy_demo tag to be present):
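The pipe’s logic can be sketched in a few lines of Python: pick out the machine tags, check that everything required is present, then build the proxy URI. The proxy base URL and its parameter names below are placeholder assumptions – the point is the machine-tag parsing pattern, not the exact API:

```python
# Sketch: turn a delicious bookmark's machine tags into a SPARQLProxy-style
# request URI. The base URL and parameter names (service-uri, query-uri,
# output) are assumptions for illustration, not documented fact.
from urllib.parse import urlencode

SPARQLPROXY = "http://example.org/sparqlproxy"   # stand-in for the real proxy

def proxy_uri_from_tags(tags):
    """Extract endpoint:, query: and output: machine tags and build the URI."""
    args = {}
    for tag in tags:
        key, _, value = tag.partition(":")
        if value and key in ("endpoint", "query", "output"):
            args[key] = value
    if not {"endpoint", "query", "output"} <= set(args):
        return None          # a required machine tag is missing
    return SPARQLPROXY + "?" + urlencode(
        {"service-uri": args["endpoint"],
         "query-uri": args["query"],
         "output": args["output"]})

uri = proxy_uri_from_tags(
    ["sparqlproxy_demo",
     "endpoint:http://services.data.gov.uk/education/sparql",
     "query:http://codepad.org/raw/example",
     "output:csv"])
```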

A link to the result of the query, suitably transformed, is then rewritten as the link in the output feed.

A bit of tidying up on the pipe lets you specify a delicious user and/or tag as the origin of the bookmarked links…

So there we have it, an easy way to share SPARQLed queries and get access to “human usable” outputs…

PS there’s no reason, in the recipe above, not to also use the sparql endpoint URI in a tag or machine tag too, to allow for queries run over the same bookmark to be collected together and pulled out of delicious by tag/endpoint…

Programming Pipes With Delicious and Sharing SPARQL Queries As A Result

In the post Sharing Linked Data Queries With Mortals… I described a rather clunky pattern for sharing SPARQL queries onto a sparql endpoint using delicious, along with a Yahoo pipe to generate a SPARQLProxy URI that identified a CSV formatted output from the query that could be consumed in something like Google spreadsheets (e.g. Viewing SPARQLed Data in a Google Spreadsheet).

In this post, I’ll refine that pattern a little more and show how to use delicious to bookmark a “processed” form of the output of the query, along with all the ingredients needed to generate that output. In a later post (hopefully before Christmas) I’ll try to show how the pattern can be used to share queries into other datastores, such as Google visualization API queries into a Google spreadsheet.

[The post describes two independent operations – firstly, how to generate a sparqlproxy uri from a set of delicious tags; secondly, how to generate a map from sparqlproxy output.]

In what follows, I’m using delicious as a database, and delicious tags as machine tags that can be used as arguments within a Yahoo pipe. The intention is not to suggest that this is even a good way of sharing SPARQL queries and demo outputs, but it does show an improvised way of sharing them. It also provides just enough raw material to allow UI designers to think about how we might capture, share and display sample use cases that go from SPARQL query to human meaningful output.

So here’s what a “finished” delicious bookmark looks like:

The three “machine tags”:
– endpoint:
– output:csv
– query:
are used as instructions in a Yahoo pipe that generates a query using SPARQLProxy:

A feed of bookmarked queries is pulled in from delicious, and checked to see that all the required programming arguments are there. (The sparqlproxy_demo tag requirement is just a convenience.)

If all the arguments are available, their values are captured and passed to a routine to construct the SPARQLProxy URIs:

The query bookmarked by the tags is a query onto the transport datastore that pulls out the locations of traffic monitoring points on the M5. The bookmarked URI is a demonstration of how to use the output of that query. In the current example, the bookmarked demo URI looks like this:

That is, it is a bookmark to a Google map that is displaying a KML feed pulled in from a Yahoo pipe.

This is the Google Map:

This is the Yahoo pipe the KML is pulled from:

And this is a peek inside that pipe:

The URI for the fetched CSV file is the one that was generated from the bookmarked query tags by the first pipe shown above.

So to recap – this bookmark:

links through to a Google map showing traffic monitoring locations on the M5. The map is an ‘end-user’ demo that shows how Government data may be displayed in a meaningful way. In addition, the tags in the bookmark carry enough information for the user to construct a SPARQL query that will generate the data that is displayed by the map. A utility Yahoo pipe (ID 22059ac6f967d7117b1262586c92b371) can take a delicious RSS feed containing the bookmark and generate a SPARQLProxy URI that calls the tagged endpoint with the tagged query and generates the tag-specified output. There is then a break in the chain. A utility pipe capable of handling SPARQLProxy generated CSV from transport data generates a KML version of the data, which is then passed to a Google map.

So what? So I don’t know what… but it has got me thinking that what might be useful is a quick way of sharing:
– info that is sufficient to generate and run a particular SPARQL query.
– along with a link to an ‘end user’ demo showing how that data might be used or displayed
– all in one handy package…

Which is what the above does… sort of.. though it’s still way too complicated for mortals…

PS It occurs to me that it might be possible to define a pipeline using delicious tags too? Maybe something like:

where the first step in the pipeline (2059ac6f967d7117b1262586c92b371?name=psychemedia&tag=transportDemo) says “run the delicious RSS feed from psychemedia/transportDemo (assuming I had recursively bookmarked that record with transportDemo) through the pipe with ID 2059ac6f967d7117b1262586c92b371 to generate the SPARQLProxy URI, then pass that to the pipe with ID 1c77faa95919df9dbea678a6bf4881c6 which should generate a KML output, which in turn should be sent to a Google map”?!

Bookmarking and Sharing Open Data Queries

Over the last few months, I’ve had several aborted attempts at trying to get to grips with SPARQL’n’RDF, two key ingredients in the Linked Data initiative. So as the sort of self-directed learner who often relies on learning by example, I’ve put together a Google form to collect together example SPARQL and Google Spreadsheet (aka Guardian Datastore) queries that I can remix and reuse for my own purposes.

Here’s an example of part of the form:

The form collects a description of the query, its endpoint, and ontologies used in the query, the query itself, and optionally a link to an example output from the query, as well as other bits of info (e.g. there’s a place for a link to a blog post describing the query).

Here are some of the bookmarked queries:

At the moment, the saved queries can only be viewed in the spreadsheet, but time allowing I hope to build a front end/explorer that will allow you to run the queries, see preview results of the queries, etc. (Note this is intended as a tool for !astronauts to get started with/teach about/learn about/explore various datasets – folk who would be put off by first being exposed to the RDF’n’SPARQL, rather than seeing the data in a table, plotted on a chart, etc. Remember, most people who see lat/long data in a table do not see in their mind’s eye a map with corresponding markers on it; they see a list of largely meaningless numbers…)

You can find the form here: QUERY Sharing Form and the results here: Example Queries. [UPDATE: to run a query, you need to find the correct endpoint and then use the PREFIX column entries, along with the common SPARQL SELECT boilerplate, to run the query.]

e.g. this should form the basis of a valid query:

PREFIX road: <>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX geo: <>
PREFIX wgs84: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?point ?lat ?long WHERE {
  ?x a road:Road.
  ?x road:number "M5"^^xsd:NCName.
  ?x geo:point ?point.
  ?point wgs84:lat ?lat.
  ?point wgs84:long ?long.
}

There aren’t many examples in there at the moment, but the form is an open one and you can use it if you like… I’m also taking suggestions for how to improve the form so that other folk might be tempted into using it… ;-) (Note that I intend to keep tweaking the spreadsheet as I use it in order to make it more useful for me, if no-one else…)

The spreadsheet that collects the results should also be open as a read only document (let me know if you try to use it and have any problems doing so), so feel free to browse through the examples. I intend to put a front end of sorts onto the spreadsheet at some point using the Google Visualisation API (cf. my original (and currently stalled) Guardian DataStore Explorer), but if you want to beat me to it, go for it :-)

Hackable SPARQL Queries: Parameter Spotting Tutorial

Whenever I come across a new website or search tool, one of the first things I do is have a look at the URIs of resource pages and search results to see: a) whether I can make sense of them (that is, are they in any sense human readable), and b) whether they are “hackable”, to the extent that I can change certain parts of the URI in particular way and have a pretty good idea what the resulting page will look like.

If the URI is hackable, then it often means that it can be parameterised, in the sense that I can construct valid URIs from some sort of template within which part of the URI path, or one of the URI arguments, is replaced using a variable that can be assigned a particular value as required.

So for example, a search for the term ouseful in Google delivers a results page with a URI that looks like:

Comparing the search term that I entered (ouseful) with the URI, it’s easy to see how the search term is used in order to create the results page URI:
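In code, that parameterisation is just template filling plus URL encoding – here’s a minimal Python sketch using the Google search example:

```python
# Sketch: a hackable URI as a template - substitute the search term into
# a query-string parameter to get a predictable results page URI.
from urllib.parse import urlencode

def google_search_uri(term):
    """Build a Google search results URI for the given search term."""
    return "http://www.google.com/search?" + urlencode({"q": term})

uri = google_search_uri("ouseful")
# → "http://www.google.com/search?q=ouseful"
```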

This technique applies equally to looking at SPARQL search queries, so here’s a worked through example that makes use of a query on the Talis n2 blog (I tend to use SparqlProxy for running SPARQL queries):
#List the uri, latitude and longitude for road traffic monitoring points on the M5
PREFIX road: <>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX geo: <>
PREFIX wgs84: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?point ?lat ?long WHERE {
?x a road:Road.
?x road:number "M5"^^xsd:NCName.
?x geo:point ?point.
?point wgs84:lat ?lat.
?point wgs84:long ?long.
}

Looking carefully at the descriptive comment:

#List the uri, latitude and longitude for road traffic monitoring points on the M5

and the query:

?x road:number "M5"^^xsd:NCName.

we see how it is possible to parameterise the query such that we can replace the “M5” string with a variable and use it to pass in the details of (presumably) any UK road number.

In Yahoo Pipes, here’s what the parameterisation looks like – we construct the query string and pass in a value for the desired road number from a user text input (split the query string after ?x road:number “):
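Outside of Pipes, the same parameterisation is just string templating – the road number is the only variable part of the query (prefix declarations omitted for brevity):

```python
# Sketch: the pipe's parameterisation as a string template - splice the
# desired road number into the otherwise fixed query text.
QUERY_TEMPLATE = """SELECT ?point ?lat ?long WHERE {
?x a road:Road.
?x road:number "%s"^^xsd:NCName.
?x geo:point ?point.
?point wgs84:lat ?lat.
?point wgs84:long ?long.
}"""

def road_query(road_number):
    """Return the traffic monitoring points query for a given road number."""
    return QUERY_TEMPLATE % road_number

q = road_query("A1")
```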

The rest of the pipe is built around the SPARYQL pattern that I have described before (e.g. Getting Started with data.gov.uk, Triplr SPARYQL and Yahoo Pipes):

By renaming the latitude and longitude value elements as y:location.lat and y:location.lon, the pipe infrastructure recognises them as geo data and provides us with a map based preview of the pipe output, as well as a KML output that can be viewed in Google Maps (simply paste the KML URI into the Google Maps search box and use it as the search term) or Google Earth, for example:

Inspection of the pipe’s KML output URL:

shows that it is also hackable. Can you see how to change it so that it will return the traffic monitoring points on the A1, bearing in mind it currently refers to the M5?

So there we have it – given an example SPARQL query for road traffic monitoring locations on the M5, we can parameterise the query by observation and construct a pipe that gives a map based preview, as well as a KML version of the output, all in less time than it takes to document how it was done… :-)

Here’s another example. This time the original query comes from @tommyh (geeky related stuff here;-); the query pulls a list of motorway service station locations from dbpedia:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dbpprop: <http://dbpedia.org/property/>
PREFIX yago-class: <http://dbpedia.org/class/yago/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?services ?label ?road ?lat ?long WHERE {
?services dbpprop:wikiPageUsesTemplate <> .
?services rdfs:label ?label .
?services dbpprop:road ?road .
?services dbpprop:lat ?lat .
?services dbpprop:long ?long .
FILTER (isIRI(?road)) .
}
ORDER BY ASC(?label)

The results look like:

So how can we tweak the original query to search for motorway services on the M1? By inspection of the query, we see the search is looking for services on any ?road (and more than that, on any isIRI(?road), whatever that means?!;-) Looking at the results, we see that the roads are identified in the form:

So we can tweak the query with an additional condition that requires a particular road. For example:

?services dbpprop:wikiPageUsesTemplate <> .
?services rdfs:label ?label .
?services dbpprop:road <> .
?services dbpprop:lat ?lat .
?services dbpprop:long ?long .

(I think we can drop the original FILTER too?)

To parameterise this query, we just need to feed in the desired road number here:


Alternatively, we can hack in a regular expression to filter the results by road number – e.g. using the M1 again:

?services dbpprop:wikiPageUsesTemplate <> .
?services rdfs:label ?label .
?services dbpprop:road ?road .
?services dbpprop:lat ?lat .
?services dbpprop:long ?long .
FILTER (isIRI(?road) && regex(?road,"M1_")) .

This time, the parametrisation would occur here:
FILTER (isIRI(?road) && regex(?road,"ROADNUMBER_"))

Note that if we just did the regular expression on “M1” rather than “M1_” we’d get back results for the M11 etc as well…
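A quick Python check of that gotcha, using two illustrative dbpedia-style resource URIs:

```python
# Sketch: why the trailing underscore matters - "M1" alone also matches
# the M11's resource URI, while "M1_" does not.
# The URIs below are illustrative dbpedia-style identifiers.
import re

roads = ["http://dbpedia.org/resource/M1_motorway",
         "http://dbpedia.org/resource/M11_motorway"]

loose = [r for r in roads if re.search("M1", r)]    # matches both roads
strict = [r for r in roads if re.search("M1_", r)]  # matches only the M1
```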

In the spirit of exploration, let’s see if we can guess at/pattern match towards a little bit more. (Note guessing may or may not work – but if it doesn’t, you won’t break anything!)

The line:
?services rdfs:label ?label
would seem to suggest that human readable labels corresponding to URI identifiers may be recorded using the rdfs:label relation. So let’s see:

Create a ?roadname variable in the query and see if ?road rdfs:label ?roadname manages to pull out a useful label:
SELECT ?services ?label ?roadname ?road ?lat ?long
?services dbpprop:wikiPageUsesTemplate <> .
?services rdfs:label ?label .
?services dbpprop:road ?road .
?road rdfs:label ?roadname .

Ooh… that seems to work (in this case, at least… maybe it’s a dbpedia convention, maybe it’s a general convention, who knows?!:-)

But it’s a little messy, with different language variants also listed. However, another trick in my toolbox is memory. I remember seeing a filter option in a query once before:
&& lang(?someLabel)='en'

Let’s try it – change the filter terms to:
FILTER (isIRI(?road) && regex(?road,"M1_") && lang(?roadname)='en') .
and see what happens:

So now I have a query that I can use to find motorway service station locations on a particular UK motorway, and get the name of the motorway back as part of the results. And all with only a modicum of knowledge/understanding of SPARQL… Instead, I relied on pattern matching, a memory of a fragment of a previous query and a bit of trial and error…

PS If you want to try out hacking around with a few other SPARQL queries, I’ve started collecting some likely candidates: Bookmarking and Sharing Open Data Queries

First Dabblings With Pipelinked Linked Data

One of the promises of the Linked Data lobby is the ability to combine data from different datasets that share common elements, although this ability is not limited to Linked Data (see, for example, Mash/Combining Data from Three Separate Sources Using Dabble DB). In this post, I’ll describe a quick experiment in using Yahoo Pipes to combine data from two different data sources and briefly consider the extent to which plug’n’play data can lower the barriers to entry for exploring the potential of Linked Data.

The datasets I’ll join are both data.gov.uk Linked Data datastores – the transport datastore and the Edubase/education datastore. The task I’ve set myself is to look for traffic monitoring points in the vicinity of one or more schools and to produce a map that looks something like this:

So to get started, let’s grab a list of schools… The Talis blog post SPARQLing Edubase Data contains several example queries over the education datastore. The query I’ll use is derived trivially from one of those examples; in particular, it grabs the name and location of the two newest schools in the UK:
prefix sch-ont: <>
prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?school ?name ?date ?lat ?long WHERE {
?school a sch-ont:School;
sch-ont:establishmentName ?name;
sch-ont:openDate ?date;
geo:lat ?lat;
geo:long ?long.
} ORDER BY DESC(?date) LIMIT 2
Pasting the query into the SPARYQL pipe map previewer shows a couple of points on a map, as expected.

So how can we look for traffic monitoring points located in the same area as a school? One of the big problems I have with Linked Data is finding out what the shared elements are between data sets (I don’t have a rule of thumb for doing this yet) so it’s time for some detective work – looking through example SPARQL queries on the two datasets, ploughing through the Google group, and so on. Searching based on lat/long location data, e.g. within a bounding box, is one possibility, but it’d be neater, to start with at least, to try to use a shared “area”, such as the same parish, or other common administrative area.

After some digging, here’s what I came up with: this snippet from a post to the Google group relating to the transport datastore:
#If you’re prepared to search by (local authority) area instead of by a bounding box,
geo:long ?long ;
<> <> ;
traffic:count ?count .

and this one from the aforementioned Talis Edubase post relating to the education datastore:
prefix sch-ont: <>
SELECT ?name ?lowage ?highage ?capacity ?ratio WHERE {
?school a sch-ont:School;
sch-ont:districtAdministrative <> .

The similar format of the area codes, and the similarity in language (“prepared to search by (local authority) area” and “id/local-authority-district/”), suggest to me that these two things actually refer to the same thing (I asked @jenit … it seems they do…)

So, here’s a recipe for searching for traffic monitoring locations in the same local authority district as a recently opened school. Firstly, modify the SPARQL query shown above so that it also returns the local authority area:

SELECT ?school ?name ?date ?district ?lat ?long WHERE {
?school a sch-ont:School;
sch-ont:establishmentName ?name;
sch-ont:openDate ?date;
sch-ont:districtAdministrative ?district;
geo:lat ?lat;
geo:long ?long.
}

The result looks something like this:

Secondly, construct a test query on the transport datastore to pull out traffic monitoring points, along with their locations, using a local area URI as the search key:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX traffic: <>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX area: <>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?point ?lat ?long WHERE
{ ?point a traffic:CountPoint ;
geo:lat ?lat ;
geo:long ?long ;
<> <> . }

We can create a pipe based around this query that takes an administrative area identifier, runs the query through a SPARYQL (SPARQL and YQL) pipe, and returns the traffic monitoring points in that area:

The regular expression block is a hack used to put the region identifier into the form that is required by the transport endpoint if it is passed in using the form required by the education datastore.

Now we’re going to take results from the recent schools query and then look up the traffic monitoring points in that area via the pipe shown above:

The SPARYQL query at the top of the pipe runs the Edubase query and is then split – the same items are passed into each of the two parts of the pipe, but they are processed differently. In the left hand branch, we treat the lat and long elements from the Edubase query in order to create y:location elements that the pipe knows how to process as geo elements (e.g. in the creation of a KML output from the pipe).

The right hand branch does something different: the loop block works through the list of recently opened schools one school at a time, and for each one looks up the region identifier and passes it to the traffic monitoring points by region pipe. The school item is then replaced by the list of traffic monitoring points in that region.
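Stripped of the pipework, that join logic is just a lookup keyed on the shared district identifier. Here’s a sketch with stubbed sample data standing in for the two SPARQL calls:

```python
# Sketch of the fork/merge join the pipe performs: for each school, look
# up the monitoring points that share its administrative district.
# Both datasets are stubbed sample data; in the pipe they come from
# SPARQL calls to the education and transport datastores.
schools = [{"name": "Sample School", "district": "district/00AA"}]
points_by_district = {
    "district/00AA": [{"point": "point/1", "lat": 51.5, "long": -0.1}],
}

def monitoring_points_near(school):
    """Return the traffic monitoring points in the school's district."""
    return points_by_district.get(school["district"], [])

results = [(s["name"], p["point"])
           for s in schools
           for p in monitoring_points_near(s)]
# → [("Sample School", "point/1")]
```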

You can try the pipe out here: traffic monitoring near most recently opened schools

So that’s one way of doing it. Another way is to take the lat/long of each school and pass that information to a pipe that looks up the traffic monitoring points within a bounding box centered on the original location co-ordinates. This gives us a little more control over the notion of ‘traffic monitoring points in the vicinity of a school’.
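The bounding box test itself is simple – a crude degrees-based box around each school’s location rather than a proper great-circle distance calculation:

```python
# Sketch: keep monitoring points within a half-width (in degrees) of a
# school's lat/long - a crude box test, not a true distance measure.
# The point data is illustrative sample data.
def in_bounding_box(point, lat, lon, half_width):
    return (abs(point["lat"] - lat) <= half_width
            and abs(point["long"] - lon) <= half_width)

points = [{"point": "p1", "lat": 51.51, "long": -0.12},
          {"point": "p2", "lat": 53.40, "long": -2.99}]
nearby = [p for p in points if in_bounding_box(p, 51.5, -0.13, 0.05)]
# → only p1 survives the filter
```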

Again we see a repeat of the fork and merge pattern used above, although this time the right hand branch is passed to a pipe that looks up points within a bounding box specified by the latitude and longitude of each school. A third parameter specifies the size of the bounding box:

Notice from the preview of the pipe output how we have details from the left hand branch – the recently opened schools – as well as the right hand branch – the neighbouring traffic monitoring points. Here’s the result again:

As with any map previewing pipe, a KML feed is available that allows the results to be displayed in a(n embeddable) Google map:

(Quick tip: if a Google map chokes on a Yahoo Pipes KML URI, use a URL shortener like TinyURL to get a shortened version of the Yahoo Pipes KML URL, and then post that into the Google maps search box:-)

So there we have it – my take on using Yahoo Pipes to “join” two, err, Linked Data datasets on data.gov.uk :-) I call it pipelinked data :-)

PS some readers may remember how services like Google Fusion Tables can also be used to “join” tabular datasets sharing common columns (e.g. Data Supported Decision Making – What Prospects Does Your University Offer). Well, it seems as if the Google folks have just opened up an API to Google Fusion Tables. Now it may well be that Linked Data is the one true path to enlightenment, but don’t forget that there are many more mortals than there are astronauts…

PPS for the promised bit on “lower[ing] the barriers to entry for exploring the potential of Linked Data”, that’ll have to wait for another post…

Visualising Traffic Count Data from data.gov.uk

In a couple of posts last year (Hackable SPARQL Queries: Parameter Spotting Tutorial and First Dabblings With Pipelinked Linked Data) I started to explore how it might be possible to use Yahoo Pipes as an environment for sharing – and chaining together (albeit inefficiently) – queries to the open data.gov.uk transport datastore.

Those posts concentrated on querying the datastore in order to find the location of traffic monitoring points according to various search criteria. In this post, I’ll show you one way of visualising traffic count data from a traffic count point using Many Eyes Wikified.

The first thing we need to do is come up with a query that will pull traffic monitoring data back from the transport datastore. My first port of call for finding a query to get me started is usually to search over the Google group archive in my mailbox. As ever, @jenit had posted a ‘getting started’ solution:-)
PREFIX rdf: <;
PREFIX traffic: <;
PREFIX geo: <;
PREFIX xsd: <;

SELECT ?direction (SUM(?value) AS ?total)
traffic:count ?count .
?count a traffic:Count ;
traffic:category <; ;
traffic:direction ?direction ;
traffic:hour ?hour ;
rdf:value ?value .
REGEX(str(?hour), "^2008-")
GROUP BY ?direction

I tweaked this a little (using guesswork and pattern matching, rather than understanding, Chinese Room style;-) to come up with a modified query that appears to pull out traffic count data for different categories of vehicle on a particular day from a particular monitoring point:

SELECT ?vehicle ?direction ?hour (SUM(?value) AS ?total)
traffic:count ?count .
?count a traffic:Count ;
traffic:category ?vehicle ;
traffic:direction ?direction ;
traffic:hour ?hour ;
rdf:value ?value .
REGEX(str(?hour), "^2008-05-02T") )
GROUP BY ?vehicle ?direction ?hour

I then turned this into a Yahoo Pipes query block, using the date and traffic monitoring point as input parameters:

Here’s how to do that:

Having got the query into the pipes ecosystem, I knew I should be able to get the data out of the pipe as JSON data, or as a CSV feed, which could then be wired into other web pages or web applications. However, to get the CSV output working, it seemed like I needed to force some column headings by defining attributes within each feed item:
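In spirit, the attribute-forcing step just makes sure every item carries the same set of keys, so the CSV renderer can derive stable column headings; a hypothetical sketch in plain JavaScript (attribute names are illustrative):

```javascript
// Sketch: make every feed item expose the same attributes so a CSV
// renderer can produce consistent column headings. Missing values
// become empty strings rather than missing columns.
function normaliseItems(items, columns) {
  return items.map(function (item) {
    var row = {};
    columns.forEach(function (col) {
      row[col] = (item[col] !== undefined) ? item[col] : '';
    });
    return row;
  });
}
```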

To tidy up the output a little bit more, we can sort it according to time and day and traffic count by vehicle type:

It’s then easy enough to grab the CSV output of the pipe (grab the RSS or JSON URI and just change the &_render=rss or &_render=json part of the URI to &_render=csv) and wire it into somewhere else – such as into Many Eyes Wikified:
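The render-swapping step is just a string substitution on the pipe URL; for example (the function name is mine):

```javascript
// Sketch: rewrite a Yahoo Pipes run URL to request CSV output instead
// of RSS or JSON, by swapping the _render parameter value.
function toCsvUrl(pipeUrl) {
  return pipeUrl.replace(/_render=(rss|json)/, '_render=csv');
}
```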

Doing a couple of quick views over the data in Many Eyes Wikified, it seemed as if there was some duplication of counting, in that the number of motor vehicles appeared to be the sum of a number of more specific vehicle types:

Data in Many Eyes Wikified

Looking at the car, van, bus, HGV and motorbike figures we get:

Summed totals

So I made the judgement call to remove the possibly duplicate motor-vehicle data from the data feed and reimported the data into Many Eyes Wikified (by adding some nonsense characters (e.g. &bleurghh) to the feed URI so that Many Eyes thought it was a new feed).
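The duplicate check and the filtering can be sketched like this – the category labels and totals are illustrative, not the datastore's actual values:

```javascript
// Illustrative rows: if the aggregate "motor-vehicle" total equals the
// sum of the specific categories, it is double counting and can be dropped.
var counts = [
  { vehicle: 'car', total: 120 },
  { vehicle: 'van', total: 30 },
  { vehicle: 'bus', total: 10 },
  { vehicle: 'HGV', total: 15 },
  { vehicle: 'motorbike', total: 5 },
  { vehicle: 'motor-vehicle', total: 180 } // suspected aggregate row
];

// Sum the specific vehicle categories, ignoring the aggregate row.
function specificSum(rows) {
  return rows
    .filter(function (r) { return r.vehicle !== 'motor-vehicle'; })
    .reduce(function (acc, r) { return acc + r.total; }, 0);
}

// Drop the aggregate row before handing the data to the visualiser.
function dropAggregate(rows) {
  return rows.filter(function (r) { return r.vehicle !== 'motor-vehicle'; });
}
```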

It was then easy enough to create some interactive visualisations around the traffic data point. So for example, here we have a bubble chart:

Do you spot anything about traffic flow going North versus South at 8am compared to 5pm?

Let’s explore that in a little more detail with a matrix chart:

Traffic data matrix chart

This shows us the relative counts for different vehicle types, again by time of day. Notice the different distribution of van traffic compared to car traffic over the course of the day.

A treemap gives a slightly different take on the same data – again, we can see how there is a difference between North and South flowing volumes at different times of day within each category:

One thing that jumps out at me from the treemap is how symmetrical everything seems to be at noon?!

All the above visualisations are interactive, so click through on any of the images to get to the interactive version (Java required).

As to how to find traffic monitoring point IDs – try this.

PS a disadvantage of the above recipe is that to generate a visualisation for a different traffic point, I’d need to use the desired parameters when grabbing the CSV feed from the pipe, and then create new Many Eyes Wikified data pages and visualisation pages. However, using nothing more than a couple of web tools, I have managed to prototype a working mockup of a visualisation dashboard for traffic count data that could be given to a developer as a reference specification for a “proper” application. And in the meantime, it’s still useful as a recipe… maybe?

PPS While I was playing with this over the weekend, it struck me that if school geography projects ever do traffic monitoring surveys, it’s now possible for them to get hold of “real” data. If there are any school geography teachers out there who’d like to bounce around ways of making this data useful in a school context, please get in touch via a comment below :-)

Using Data From Linked Data Datastores the Easy Way (i.e. in a spreadsheet, via a formula)

Disclaimer: before any Linked Data purists say I’m missing the point about what Linked Data is, does, or whatever, I don’t care, okay? I just don’t care. This is for me, and me is someone who can’t get to grips with writing SPARQL queries, can’t stand the sight of unreadable <rdf> <all:over the=”place”>, can’t even work out how to find out what things are queryable in a Linked Data triple store, let alone write queries that link data from one data store with data from another data store (or maybe SPARQL can’t do that yet? Not that I care about that either, because I can, in Yahoo Pipes, or Google Spreadsheets, and in a way that’s meaningful to me…)

In Mulling Over =datagovukLookup() in Google Spreadsheets, I started wondering about whether or not it would be useful to be able to write formulae to look up “live facts” in various datastores from within a spreadsheet (you know, that Office app that is used pretty much universally in the workplace whenever there is tabular data to hand. That or Access, of course…)

Anyway, I’ve started tinkering with how it might work, so now I can do things like this:

The formulae in columns G, H and I are defined according to a Google Apps Script that takes a school ID and then returns something linked to it in the education datastore, such as the name of the school, or its total capacity.

Formulae look like:

  • =datagovuk_education(A2,"name")
  • =datagovuk_education(A2,"opendate")
  • =datagovuk_education(A2,"totcapacity")

and are defined to return a single cell element. (I haven’t worked out how to return several cells worth of content from a Google Apps Script yet!)

At the moment, the script is a little messy, taking the form:

function datagovuk_education(id,typ) {
  var ret=""; var args="";
  switch (typ) {
    case 'totcapacity':
      args = _datagovuk_education_capacity_quri(id);
      break;
    //hack something here;
  }
  var x = UrlFetchApp.fetch('',{method: 'post', payload: args});
  ret = x.getContentText();
  var xmltest = Xml.parse(ret);

  return ret;
}

function _datagovuk_education_capacity_quri(id){
  return "{%0D%0A%3Fschool+a+sch-ont%3ASchool%3B%0D%0Asch-ont%3AuniqueReferenceNumber+"+id+"%3B%0D%0Asch-ont%3AschoolCapacity+%3FschoolCapacity.%0D%0A}+ORDER+BY+DESC%28%3Fdate%29+LIMIT+1&output=xml&callback=&tqx=&";
}

The datagovuk_education(id,typ) function takes the school ID and the requested property, uses the case statement to create an appropriate query string, and then fetches the data from the education datastore, returning the result in an XML format like this. The data is pulled from the datastore via Sparqlproxy, and the query string URIs generated (at the moment) by adding the school ID number into a query string generated by running the desired SPARQL query on Sparqlproxy and then grabbing the appropriate part of the URI. (It’s early days yet on this hack!;-)
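For the curious, the query-string generation boils down to URL-encoding a SPARQL template with the school ID spliced in. Here's a hypothetical simplification – the query text, variable names and parameter layout are illustrative, not the real education datastore query:

```javascript
// Sketch: splice a school ID into a SPARQL query template and
// URL-encode it for use as a POST payload to a SPARQL proxy.
// The query below is a simplified stand-in for the real one.
function capacityQueryArgs(id) {
  var query =
    'SELECT ?schoolCapacity WHERE { ' +
    '?school sch-ont:uniqueReferenceNumber ' + id + ' ; ' +
    'sch-ont:schoolCapacity ?schoolCapacity . } LIMIT 1';
  return 'query=' + encodeURIComponent(query) + '&output=xml';
}
```

Building the string programmatically like this is rather more maintainable than grabbing a pre-encoded query string off a Sparqlproxy URI by hand.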

By defining appropriate Apps script functions, I can also create formulae to call other datastores, run queries on Google spreadsheets (e.g. in the Guardian datastore) and so on. I assume similar sorts of functionality would be supported using VB Macros in Excel?

Anyway – this is my starter for ten on how to make live datastore data available to the masses. It’ll be interesting to see whether this approach (or one like it) is used in favour of getting temps to write SPARQL queries and RDF parsers… The obvious problem is that my approach can lead to an explosion in the number of formulae and parameters you need to learn; the upside is that I think these could be quite easily documented in a matrix/linked formulae chart. The approach also scales to pulling in data from CSV stores and other online spreadsheets, using spreadsheets as a database via the =QUERY() formula (e.g. Using Google Spreadsheets Like a Database – The QUERY Formula), and so on. There might also be a market for selling prepackaged or custom formulae as script bundles via a script store within a larger Google Apps app store.

PS I’m trying to collect example SPARQL queries that run over the various endpoints because: a) I’m really struggling to get my head round writing my own, not least because I struggle to make sense of the ontologies, if I can find them, and write valid queries off the back of them; even (in fact, especially) really simple training/example queries will do! b) coming up with queries that either have interesting/informative/useful results, or clearly demonstrate an important ‘teaching point’ about the construction of SPARQL queries, is something I haven’t yet got a feel for. If you’ve written any, and you’re willing to share, please post a gist to github and add a link in a comment here.

PPS utility bits, so I don’t lose track of them:
– education datastore ontology
– Apps script XML Element class

PPPS Here’s how to dump a 2D CSV table into a range of cells: Writing 2D Data Arrays to a Google Spreadsheet from Google Apps Script, and Making an HTTP POST Request for CSV Data.
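As a plain JavaScript sketch of the first of those: fetched CSV text needs turning into a row-by-row 2D array, which is the shape that Apps Script's Range.setValues() expects to write in one call. The parser below is deliberately naive (it ignores quoted fields containing commas):

```javascript
// Sketch: convert CSV text into a 2D array (array of row arrays),
// suitable for writing to a spreadsheet range in a single call.
// Naive split - does not handle quoted fields with embedded commas.
function csvTo2dArray(csvText) {
  return csvText.trim().split('\n').map(function (line) {
    return line.split(',');
  });
}
```

In an Apps Script context, the result could then be written with something like `sheet.getRange(1, 1, data.length, data[0].length).setValues(data)`.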