OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Open Data Processes: the Open Metadata Laundry

Another quick note from yesterday’s mini-mash at Cambridge, hosted by Ed Chamberlain, with participation from consultant Owen Stephens, Lincoln’s Paul Stainthorp and his decentralised developers, and Sussex’s Chris Keene. The idea came out of the Lincoln Jerome project (I’m not sure if it has been blogged on the Jerome project blog?) and provides a way of scrubbing MARC-based records to free the metadata from license restrictions.

The recipe goes along the lines of reconciling the record for each item with openly licensed equivalents, and creating a new record for each item whose data fields are populated with content that is known to be openly licensed. In part, this relies on having a common identifier. One approach that was discussed was generating hashes based on titles with punctuation removed. This feels a bit arbitrary to me…? At the very least I’d reduce all the letters to the same case in an attempt to normalise the strings we might be trying to hash.
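By way of illustration, here’s a minimal sketch (in Python) of the sort of normalisation I have in mind; the particular steps (accent stripping, lowercasing, punctuation removal, whitespace collapsing) are my own guesses at a sensible pipeline, not what Jerome actually does:

import hashlib
import re
import unicodedata

def title_hash(title):
    """Derive a crude matching key from a title string (a sketch)."""
    # Decompose accented characters, then drop the combining marks
    t = unicodedata.normalize("NFKD", title)
    t = "".join(c for c in t if not unicodedata.combining(c))
    # Lowercase and keep only letters, digits and spaces
    t = re.sub(r"[^a-z0-9 ]+", " ", t.lower())
    # Collapse runs of whitespace so spacing differences don't matter
    t = " ".join(t.split())
    return hashlib.md5(t.encode("utf-8")).hexdigest()

print(title_hash("The Origin of Species"))
print(title_hash("THE  Origin of Species!"))  # same hash as above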

I wonder if Ed’s mapping of metadata ownership might also have a role to play in developing a robust laundry service? (e.g. “Ownership” of MARC-21 records and Where exactly DOES a record come from?).

We also discussed recipes where different libraries, each with their own MARC record for a work, might compare those records field by field to identify differences in the way similar items are catalogued. As well as flagging records that may contain errors, this approach might also enhance discovery, for example by widening a set of keywords or classification indices.
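A toy sketch of that field-by-field comparison, with records modelled as simple tag-to-value dicts rather than real MARC structures (parsing real records is left as an exercise):

def diff_records(rec_a, rec_b):
    """Compare two catalogue records field by field.

    Records are modelled here as simple {MARC tag: value} dicts --
    a stand-in for however the real records get parsed.
    Returns tags whose values differ, plus tags unique to each record.
    """
    shared = rec_a.keys() & rec_b.keys()
    differing = {t: (rec_a[t], rec_b[t]) for t in shared if rec_a[t] != rec_b[t]}
    only_a = {t: rec_a[t] for t in rec_a.keys() - rec_b.keys()}
    only_b = {t: rec_b[t] for t in rec_b.keys() - rec_a.keys()}
    return differing, only_a, only_b

# Two libraries' records for (notionally) the same work
lib1 = {"245": "On the origin of species", "650": "Evolution"}
lib2 = {"245": "On the Origin of Species", "650": "Evolution (Biology)", "082": "576.82"}
print(diff_records(lib1, lib2))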

One of the issues we keep returning to is why it might be interesting to release lots of open data in a given context. Being able to pivot from a resource in one context to a resource in another is a general (if weak) answer, but here are a few more specific issues that came up in conversation:

1) having unique identifiers is key; they become useful when people use the same identifier, or same-as’d identifiers, to refer to the same thing;

2) we need tool support to encourage people creating metadata to start linking into recognised/shared identifier spaces. I wonder if there might be value in institutions starting to publish reconciliation services that can be addressed from tools like Google Refine (for example, How to use OpenCorporates to match companies in Google Refine or Google Refine Reconciliation Service API; there’s a minimal sketch of such a service after this list). Note that it might make sense for reconciliation services to employ various string similarity heuristics as part of the service.

3) we still don’t have enough compelling use cases demonstrating the benefits of linked IDs, or tools that show why they’re powerful. (I think of linked identifier spaces that are rich enough to offer benefits as if they were (super)saturated solutions, where it’s easy to crystallise out interesting things…) One example I like is how OpenCorporates use reconciliation to let you map company names in local council accounts to specific corporate entities. In time, one can imagine mapping company directors and local council councillors onto person entities and then starting to map out these councillor-corporate-contract networks…;-)
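For what it’s worth, here’s a minimal sketch of what such a reconciliation service might look like as a Flask app. The candidate list, the /reconcile path and the example.org identifier/schema spaces are all placeholders, and the request/response shapes follow my reading of the Refine Reconciliation Service API, so treat the details as assumptions to check:

import json
from difflib import SequenceMatcher
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for an institutional identifier space: id -> preferred label
CANDIDATES = {
    "work:001": "On the Origin of Species",
    "work:002": "The Voyage of the Beagle",
}

def similarity(a, b):
    # Simple string-similarity heuristic; swap in fuzzier matching as needed
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

@app.route("/reconcile", methods=["GET", "POST"])
def reconcile():
    queries = request.values.get("queries")
    if not queries:
        # No queries: return the service metadata Refine asks for on registration
        return jsonify({"name": "Demo catalogue reconciliation service",
                        "identifierSpace": "http://example.org/id/",
                        "schemaSpace": "http://example.org/schema/"})
    results = {}
    for key, q in json.loads(queries).items():
        scored = [(similarity(q["query"], label), cid, label)
                  for cid, label in CANDIDATES.items()]
        scored.sort(reverse=True)
        results[key] = {"result": [{"id": cid, "name": label, "type": ["work"],
                                    "score": s, "match": s > 0.9}
                                   for s, cid, label in scored]}
    return jsonify(results)

if __name__ == "__main__":
    app.run(port=5000)

In principle you’d then register the /reconcile URL as a standard reconciliation service in Refine and let it post query batches at the endpoint.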

Finally, something Owen mentioned that resonates with some of my thinking on List Intelligence: Superduping/Work Superclusters, in which we take an ISBN, look up its equivalents using ThingISBN or xISBN, and then for each of those alternatives look up their ThingISBN/xISBN alternatives, repeating until we reach a limit set. (Cf. my approach of looking at the lists a Twitter UserID is included on, looking at the other members of the same lists, then finding the other lists they are mentioned on, etc. Note that in the case of Twitter lists, this doesn’t necessarily hit a limit without the use of thresholding!)
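Here’s a rough sketch of that superclustering loop against ThingISBN; the URL pattern is as LibraryThing document it (worth double-checking against the current API docs), and the round cap is just a politeness/safety valve:

import xml.etree.ElementTree as ET
from urllib.request import urlopen

def thingisbn(isbn):
    """Fetch the set of ISBNs ThingISBN treats as the same work."""
    url = "http://www.librarything.com/api/thingISBN/" + isbn
    tree = ET.parse(urlopen(url))
    return {el.text for el in tree.iter("isbn")}

def supercluster(seed_isbn, max_rounds=5):
    """Expand ThingISBN equivalents to a fixed point (the limit set)."""
    seen = set()
    frontier = {seed_isbn}
    for _ in range(max_rounds):  # cap the rounds as a politeness/safety valve
        next_frontier = set()
        for isbn in frontier:
            seen.add(isbn)
            next_frontier |= thingisbn(isbn)
        frontier = next_frontier - seen  # only chase ISBNs we've not yet seen
        if not frontier:  # nothing new turned up: limit set reached
            break
    return seen

print(supercluster("0141439513"))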

Written by Tony Hirst

August 9, 2011 at 12:19 pm

3 Responses

  1. […] Open Data Processes: the Open Metadata Laundry (N.B. this one relates specifically to Jerome – in particular, our notion of ‘scrubbing’ dodgy MARC records by taking only the identifiers plus the bare citation-only fields, and using that minimal set to grab additional free and Open data from the web, automatically creating new full versions of records that are inherently Open. ‘Metadata laundry’, me like.) […]

  2. […] originally in the context of scrubbing rights-tainted records from library catalogue metadata: http://blog.ouseful.info/2011/08/09/open-data-processes-the-open-metadata-laundry/ […]

  3. […] The sense in which I first came across the term was whilst discussing a data laundry process that could replace metadata records, or fields within them, in library catalogues that are tainted with commercial license restrictions, with data of equivalent or higher quality, known provenance and open license terms (Open Data Processes: the Open Metadata Laundry). […]

