Dashboard Views as Data Source Directories: Open Data Communities

Publishing open data is one thing, reusing it quite another. Firstly, you’re faced with a discovery problem – finding a reliable source of the data you need. Secondly, you need a way of getting a copy of that data into the application or tool you want to use it with. Whilst playing around with the Open Data Communities Local Authority Dashboard, a recently launched user-facing view over a wealth of Linked Data published by the Department for Communities and Local Government (DCLG) on the OpenDataCommunities website (New PublishMyData Features: Parameterised and Named Queries), I noticed that they provide a link to the data source for each “fact” on the dashboard:

One of the ideas I keep returning to is that it should be possible to “View Source” on a chart or data report to see the route back, via a query, to the dataset from whence it came:

So it’s great to see the Local Authority Dashboard doing just this by exposing the SPARQL query used to return the data from the Open Data Communities datastore:

You can also run the query to preview its output:

Conveniently, a permalink is also provided to the query:


This is actually an example of a “Named Query” that the platform provides in the form of a parameterised ‘shortcut’ URL – changing the authority name and/or service code allows you to use the same base URL pattern to get back finance data (in this case) relating to other authorities and/or service codes as required.

The query view is also editable, which means you can use the exposed query as a basis for writing your own queries. Once customised, queries can be called programmatically via a GET request of the form:


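As a rough sketch of what such a programmatic call might look like in Python (the host path, query name, and parameter names below are illustrative assumptions, not the actual OpenDataCommunities routes):

```python
from urllib.parse import urlencode

def named_query_url(base, query_name, fmt=None, **params):
    """Assemble a named-query GET URL, optionally with a format suffix.

    NOTE: the "/queries/" path segment, the query name and the parameter
    names are made-up placeholders for illustration only.
    """
    suffix = "." + fmt if fmt else ""
    qs = "?" + urlencode(params) if params else ""
    return "{0}/queries/{1}{2}{3}".format(base, query_name, suffix, qs)

url = named_query_url(
    "http://opendatacommunities.org",  # assumed host
    "spend-per-household",             # hypothetical query name
    fmt="json",
    authority="Isle of Wight",         # illustrative parameter values
    service_code="290",
)
print(url)
```

A simple HTTP GET against a URL of this shape would then return the query results in the requested format.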
Custom queries can also support user defined parameter values by including %{tokens} in the original SPARQL queries, and providing values for the tokens on the URL query string:


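Server side, the substitution idea is straightforward: each %{token} in the stored query is replaced by the corresponding value taken from the query string. A toy sketch of that idea (the SPARQL template here is made up for illustration, not DCLG’s actual query):

```python
import re

# A made-up SPARQL template containing a %{authority} token.
template = """SELECT ?spend WHERE {
  ?obs <http://example.org/authority> "%{authority}" ;
       <http://example.org/spend> ?spend .
}"""

def fill_tokens(query, values):
    """Replace each %{token} in the query with its supplied value."""
    return re.sub(r"%\{(\w+)\}", lambda m: values[m.group(1)], query)

filled = fill_tokens(template, {"authority": "Isle of Wight"})
print(filled)
```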
As well as previewing the output of a query, we can generate a variety of output formats with a tweak to the URL (adding a format suffix before the ?), including JSON:

  "head": {
    "vars": [ "spend_per_household" ]
  } ,
  "results": {
    "bindings": [
        "spend_per_household": { "datatype": "http://www.w3.org/2001/XMLSchema#decimal" , "type": "typed-literal" , "value": "115.838709677419354838709677" }


<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
    <variable name="spend_per_household"/>
      <binding name="spend_per_household">
        <literal datatype="http://www.w3.org/2001/XMLSchema#decimal">115.838709677419354838709677</literal>

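The XML serialisation is the standard SPARQL Query Results XML Format, which parses easily with ElementTree (note the results namespace):

```python
import xml.etree.ElementTree as ET

# SPARQL results XML (reproduced from above).
xml_doc = """<?xml version="1.0"?>
<sparql xmlns="http://www.w3.org/2005/sparql-results#">
  <head><variable name="spend_per_household"/></head>
  <results>
    <result>
      <binding name="spend_per_household">
        <literal datatype="http://www.w3.org/2001/XMLSchema#decimal">115.838709677419354838709677</literal>
      </binding>
    </result>
  </results>
</sparql>"""

NS = {"s": "http://www.w3.org/2005/sparql-results#"}
root = ET.fromstring(xml_doc)
# Each result holds one binding per variable; print name/value pairs.
for binding in root.findall(".//s:binding", NS):
    print(binding.get("name"), "=", binding.find("s:literal", NS).text)
```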
and CSV:


Having access to the data in this form means we can then pull it into something like a Google Spreadsheet. For example, we can use the =importData(URL) formula to pull in CSV data from the linked query URL:

And here’s the result:

Note: it might be quite handy to be able to suppress the header in the returned CSV so that we could use the =importData() formula directly to pull actual values into particular cells, as described for example in Viewing SPARQLed data.gov.uk Data in a Google Spreadsheet and Using Data From Linked Data Datastores the Easy Way (i.e. in a spreadsheet, via a formula). This loss of metadata in the query response is potentially risky, although I would argue the loss of context about what the data relates to is mitigated by being able to see the “unpacked” named query (i.e. the SPARQL query it aliases) and the returned data together, as a single unit.
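Pending such a header-suppression option, the header row can of course be dropped client side. A sketch, assuming the CSV serialisation is a header row followed by data rows (the CSV text below is reconstructed from the values shown earlier):

```python
import csv
import io

# Assumed CSV output: a header row followed by one value row.
csv_text = "spend_per_household\r\n115.838709677419354838709677\r\n"

rows = list(csv.reader(io.StringIO(csv_text)))
header, values = rows[0], rows[1:]
print(header)
print(values)
```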

This ability to see the data, then get the data (or “See the data in context – then get the data you need”) is really powerful I think, and offers a way of providing direct access to data via a contextualised view fed from a trusted source.

Author: Tony Hirst

I'm a lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...