A Month or Two of New Horizons – Arcadia Fellowship

When I first started blogging, the content was dominated by posts about library hacks and info skills related musings, and for the next ten weeks or so that theme is going to be uppermost in my mind as I work as an Arcadia Fellow with the Cambridge University Library.

The arcadia@cambridge project is “a three-year programme funded by a generous grant from the Arcadia Fund to Cambridge University Library … to explore the role of academic libraries in a digital age” and I’m sincerely grateful for the opportunity to be able to contribute to this activity.

So I’ve spent the last two days in Cambridge, based in Wolfson College, and have already benefited from the Twitter Effect in getting coffee meetups sorted:-) (I’ll work on the Cambridge Twitter network diagrams when I get a chance ;-)

I was intending to blog a lot of my project related activity here, but I’ve also set up another blog on blogspot to act as a repository for quick hacks that make use of HTML forms, simple javascript, and all those sorts of embed code that WordPress.com strips out – you can find it here: Arcadia Mashups Blog.

I’ll also be posting to the official Arcadia Project Blog.

If you subscribe to the full-fat Feedburner feed from this blog, I'll pop links to my posts on those other blogs in my delicious feedthru bookmarks, and maybe also put together an occasional roundup post. So for example, today I posted:

For project related posts here on OUseful.info, I’ll be adding them to the Arcadia category, so an Arcadia feed will be available from here too. I’m also using the arcadia tag on delicious, the #arcadia hashtag on twitter, and I’ve set up an Arcadia set on my flickr account for project related screenshots. Anything that makes it to Youtube will also get an appropriate tag…
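
If you'd rather follow all of that in one place, the various feeds can be merged with a few lines of Python. Here's a minimal sketch using the feedparser library; the feed URLs are assumptions on my part (the WordPress category feed pattern is standard, but check the delicious one and substitute the right username), so treat it as illustrative rather than definitive:

import feedparser  # pip install feedparser

# Assumed feed URLs - check these before relying on them
FEEDS = [
    "https://blog.ouseful.info/category/arcadia/feed/",    # WordPress 'Arcadia' category feed
    "http://feeds.delicious.com/v2/rss/USERNAME/arcadia",  # delicious 'arcadia' tag feed (substitute username)
]

items = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        items.append((entry.get("published", ""), entry.get("title", ""), entry.get("link", "")))

# Crude reverse-chronological listing of everything tagged 'arcadia'
for published, title, link in sorted(items, reverse=True):
    print(published, title, link)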

CamSIS Codes…

So, as one of the things on my Arcadia project to-do list, I've started looking for consistent identifiers that might act as useful pivot points between various bits of Cambridge's online offerings (the public stuff on http://www.cam.ac.uk, as well as the Raven-authenticated, password-protected stuff on the de facto VLE, Camtools).

Ideally, I’d like to find some Crown Jewels, something like OU course codes, for example, but I fear that is not going to be possible…

Anyway, it’s still early days yet, so as we have a meeting with the Management Information Services Division (MISD) tomorrow, to see whether or not they have data that we might use to generate affinity strings for users of the Newton Library Catalogue, et al., I thought I’d have a look at whether different bits of their Student Administration and Records: CamSIS Coding Manual link together at all:

[Figure: graph showing how the CamSIS coding manual tables link together]

The diagram was created by cut’n’pasting data from the coding scheme web pages, then using Graphviz to chart the links.

For what it’s worth, here’s the dot file:
graph G {

// CamSIS coding manual tables, picked out in red; each is then linked to the
// coding scheme it identifies and to the fields it contains

"A01" [fontcolor = red];
"B01" [fontcolor = red];
"B03" [fontcolor = red];
"D01" [fontcolor = red];
"D03" [fontcolor = red];
"D05" [fontcolor = red];
"D06" [fontcolor = red];
"D07" [fontcolor = red];
"D08" [fontcolor = red];
"F01" [fontcolor = red];
"F02" [fontcolor = red];
"H01" [fontcolor = red];
"H03" [fontcolor = red];
"J01" [fontcolor = red];
"J02" [fontcolor = red];
"K01" [fontcolor = red];
"M01" [fontcolor = red];
"S01" [fontcolor = red];
"Z02" [fontcolor = red];
"Z04" [fontcolor = red];
"Z05" [fontcolor = red];
"Z06" [fontcolor = red];
"Z07" [fontcolor = red];

"A01" -- "Cambridge Colleges";
"B01" -- "County Codes";
"B03" -- "Country Codes";
"D01" -- "Current UCAS Courses";
"D03" -- "Current/Archived UCAS Courses";
"D05" -- "Academic Careers";
"D06" -- "Academic Programs";
"D07" -- "Academic Plan Types";
"D08" -- "Academic Plans";
"F01" -- "Awarding Bodies";
"F02" -- "GCSE Subject Codes";
"H01" -- "Subject (Tripos) Codes";
"H03" -- "Examination Paper Codes";
"J01" -- "Grading Codes";
"J02" -- "Further to Class Codes";
"K01" -- "Degrees";
"M01" -- "Faculties and Departments";
"S01" -- "Source of Fees";
"Z02" -- "Ethnicity Indicators";
"Z04" -- "Disability Indicators";
"Z05" -- "Program Status Codes";
"Z06" -- "Program Action Codes";
"Z07" -- "Program Reason Codes";
"A01" -- "College Code";
"A01" -- "College Description";
"B01" -- "County Code";
"B01" -- "County Description";
"B03" -- "Country Code";
"B03" -- "Country Description";
"D01" -- "Course Code";
"D01" -- "Course Description";
"D03" -- "Course Code";
"D03" -- "Course Description";
"D05" -- "Academic Careers Code";
"D05" -- "Academic Careers Description";
"D06" -- "Academic Program";
"D06" -- "Academic Program Description";
"D06" -- "Academic Careers Code";
"D07" -- "Academic Plan Type";
"D07" -- "Academic Plan Type Description";
"D08" -- "Academic Plan";
"D08" -- "Academic Plan Description";
"D08" -- "Academic Plan Type";
"F01" -- "Awarding Body Year";
"F01" -- "Awarding Body Sitting";
"F01" -- "Awarding Body";
"F01" -- "Awarding Body Description";
"F02" -- "GCSE Subject Code";
"F02" -- "GCSE Subject Code Description";
"F02" -- "EBL Subject Code";
"H01" -- "Subject Code";
"H01" -- "Department Name";
"H01" -- "Department Code";
"H03" -- "Subject Code";
"H03" -- "Exam Catalogue Number";
"H03" -- "Exam Title";
"J01" -- "Grading Scheme";
"J01" -- "Grading Basis";
"J01" -- "Grading Code";
"J01" -- "Grading Description";
"J02" -- "Subject Code";
"J02" -- "Further to Class Code";
"J02" -- "Further Class Description";
"K01" -- "Degree Code";
"K01" -- "Degree Code Description";
"K01" -- "Degree Short Description";
"M01" -- "Department Code";
"M01" -- "Department name";
"S01" -- "Fees Source Code";
"S01" -- "Fees Source Description";
"Z02" -- "Ethnicity Code";
"Z02" -- "Ethnicity Description";
"Z04" -- "Disability Code";
"Z04" -- "Disability Description";
"Z05" -- "Status Code";
"Z05" -- "Status Description";
"Z06" -- "Programme Action Code";
"Z06" -- "Programme Action Description";
"Z07" -- "Programme Action Code";
"Z07" -- "Programme Action Reason";
"Z07" -- "Programme Action Reason Description";
}
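
(If you want to regenerate the diagram yourself, here's a minimal sketch, assuming Graphviz is installed and the dot source above is saved as camsis.dot; neato or fdp may give a nicer layout than dot for an undirected graph like this one.)

import subprocess

# Render the dot file above to a PNG using the Graphviz command line tools
subprocess.run(["neato", "-Tpng", "camsis.dot", "-o", "camsis.png"], check=True)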

The next step is to see what else we can link into this, and maybe also draw boundaries around various clumps according to which unit owns those particular sets of data (MISD, the Computing Service, the Library, Caret/Camtools, the Departments, Cambridge University Press etc etc.). After all, even if we can find one data set that does manage to key into another, political or data protection boundaries may make it…. difficult to link those data sets and get the data flowing…
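
By way of illustration of the “boundaries” idea, Graphviz subgraph clusters can be used to draw a box around the nodes owned by a particular unit. The ownership assignments below are entirely made up for the purposes of the example; here's a sketch that generates the corresponding dot fragment from Python (paste the node and edge definitions from the file above inside the outer braces):

# Hypothetical ownership map - the assignments are illustrative only
ownership = {
    "MISD": ["A01", "D05", "D06", "D07", "D08", "Z05", "Z06", "Z07"],
    "Departments": ["H01", "H03", "M01"],
}

lines = ["graph G {"]
for unit, codes in ownership.items():
    # Graphviz treats a subgraph as a drawable cluster if its name starts with 'cluster'
    lines.append("  subgraph cluster_%s {" % unit)
    lines.append('    label = "%s";' % unit)
    for code in codes:
        lines.append('    "%s";' % code)
    lines.append("  }")
lines.append("}")

print("\n".join(lines))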

Free Association Around Ranganathan’s Five Laws of Library Science

Picking up briefly on Peter Murray-Rust’s exhortation to the keynote attendees at ILI2009 that libraries must rediscover Ranganathan’s Five Laws of Library Science, and take them to heart, if they are to survive:

[Embedded video no longer available.]

I thought I’d post some free-association thoughts on what the five laws say to me. Note that I’m not a librarian, have never studied library science, and don’t normally work for the library, though I am currently on an Arcadia Fellowship with the Cambridge University Library. Which is to say, my interpretation may not be the conventional, or accepted, one…

So here we go:

Books are for use.
Hmmm… Books are for use… they are there to be used… they exist to be read… they exist to impart knowledge, information, emotion. They exist to communicate. As such, maybe they are social objects? But maybe also, they contain information or knowledge that enables things to be done, ideas to be understood? Maybe they are the next step in helping us do something, achieve something?

In a 2003 blog post outlining ideas for what was to become The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture, John Battelle describes Google’s search operation as a database of intentions:

The Database of Intentions is simply this: The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. It lives in many places, but three or four places in particular hold a massive amount of this data (ie MSN, Google, and Yahoo). This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture.

That is, every search we make is an expression of some sort of intention. There is a point to every search.

So maybe in the same way, a book might be able to satisfy some intention? Or maybe I’m getting ahead of myself, because second up we have:

Every reader his [or her] book.
So at any point in time, there is a book that I need, that will somehow “help”? This ties back to a book that can satisfy an intention I have, perhaps? My current problem, or situation, is unlikely to be one that has never been met before, never been addressed by someone, somewhere, in some particular book?

Every book its reader.
And conversely, at any point in time, for every book there is someone who would benefit from reading that book? The book is a satisfaction of some intention? There is someone who would benefit from being recommended that book, maybe? (The ideal search engine would be an answer engine, would return only the single answer you need for a particular query, maybe?)

Save the time of the User.
Which means what? Give them the book that they need, in a timely fashion? Make it easy for them to discover the right book, or the right part of the book, that they need, with the minimum of fuss, or noise in the recommendations? Give them full text search, extended indexes in the form of semantic tags and on-demand access, maybe?!;-)

The library is a growing organism.
The library is a living thing. As a living thing, it must adapt to survive. As a living thing, it inhabits an ecosystem, a network, a network characterised by the making and breaking of new and old connections, by the flow of resources across those connections.

Hmmm… so how are these laws actually interpreted by the Library Science community, I wonder? And to what extent do they apply in the context of search engine queries, results and the resources pointed to by those results? Would it be fair to say that it is Google, rather than the Library, that has taken these laws to its heart? Would it be fair to say that several of the laws at least hint at making effective recommendations to users, as Lorcan Dempsey suggests in Recommendation and Ranganathan?

Ramblings on SciComm

Although I’m now halfway through my Arcadia Fellowship (sigh… :-(), it wasn’t until last weekend that I spent my first weekend in Cambridge, and finally got around to doing some culture stuff (a couple of galleries, a recital, an excellent lunch in Michaelhouse – thanks for the tip, Huw :-) – and so on…

It also made me realise how I haven’t really got into the swing of making the most of my time here, so over the next few weeks I intend to check out the various Cambridge events calendars (of which there are several – more about that in an Arcadia post somewhen…) and start getting some events in…

In fact, I’ve already started, writing this as I am, having just got back from a talk tonight by science communicator (and presenter of Material World, Thursdays, 4.30 pm, BBC Radio 4, also on podcast ;-), Quentin Cooper.

This (public) talk, on public perceptions of scientists, was one in a series arranged by CSAR, the Cambridge Society for the Application of Research (events listing), and just one of many dozens of public talks listed on the talks.cam website (again, I’ll write more about that in a forthcoming Arcadia post).

Ever an entertaining speaker, Quentin described the various stereotyped views of “scientists” (lab coat, mad hair, glasses, a crazy smile, and bubbling test tubes and bunsen burners everywhere), as well as suggesting a little experiment for us all to try at home: search for the word scientist in Google image search…

(Turning Safe Search on or off seems to have very little effect (on the front page of results, at least…). Trying the same thing in locale-specific versions of Google image search, using the local word for scientist, is apparently also illuminating…!)

You can also try it with “face search” switched on:

(Just by the by, here are image searches for engineer (face search), technologist (face search).)

Another interesting observation came from a BA web survey that had asked people to name their favourite on-screen scientists. The ambiguity in the question, unsuspected when it was first posted, led to the majority of answers relating to fictional scientists rather than science/scientist presenters (Why Dr Who beats Einstein these days).

How scientists portray their own work was also on the agenda – and as I’ve long believed, sometimes a little help from the arts goes a long way. One particular set of examples came from the Cape Farewell project, a “cultural response to climate change”, in which various cohorts of (notable) scientists, artists and musicians went off to see the effects of glacial melting for themselves. Sometimes it’s the most obvious things that catch you completely by surprise – like the observation that as glaciers retreat, they might uncover islands that have previously been unmapped, an idea picked up by artist Alex Hartley in his piece Nowhere Island.

Anyway, here are a couple of random thoughts I came away from the event with…

When’s someone going to write a drama like This Life or The Office (or pushing it, No Angels, Teachers, Party Animals, A Very Peculiar Practice etc etc) based in a lab/hi-tech factory, where a bunch of “scientists” (as in sci/tech/eng/maths) folk just get on with the everydayness of their working life in a home and work context? Or has there been one and I’ve missed it?

Folk attending the talk were given the option of taking away an attendance certificate for CPD purposes. If I was an informal learner, could I use such an attendance certificate in partial fulfilment of a more formal academic something?

All in all, a good night out; and another one upcoming tomorrow [i.e. on Tues Nov 3rd 2009]: Thinking Like a Dandelion: Cory Doctorow on copyright, Creative Commons and creativity.

Meanwhile, Over on the Arcadia Blog(s)…

So it feels as if I haven’t been posting that much on this blog over the last few weeks, but I have been blogging elsewhere, 2-3 times a week, in fact, on:

– the Arcadia Project Blog;

– the Arcadia Mashups Blog.

Here’s a quick round up of some of the more notable posts that you can find over there that I would, in the normal course of events, have probably posted here on OUseful.info:

They’re all Library related, so if that’s your thang, maybe worth a read…?

Create Your Own Google Custom News Sections

For many years now, it’s been possible to subscribe to persistent (“saved”) Google News searches and so build up your own custom dashboard views of news… Indeed, it was over three years ago now that I hacked together a demo news feed roller (Persistent News Search OPML Feed Roller) that let users bundle up a roll of feeds in an OPML file (sort of!) for easy viewing elsewhere.

[Screenshot: Persistent News Search OPML Feed Roller]
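
The basic idea is easy enough to knock up yourself: take a list of persistent search terms, turn each one into a Google News RSS search URL, and bundle the lot into an OPML file. Here's a minimal sketch; the Google News feed URL pattern is an assumption (it has changed over the years), so check it against whatever the service currently expects:

import urllib.parse
import xml.etree.ElementTree as ET

searches = ["open university", "cambridge university library"]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Persistent news searches"
body = ET.SubElement(opml, "body")

for term in searches:
    # Assumed Google News RSS search URL pattern - verify before use
    feed_url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(term)
    ET.SubElement(body, "outline", type="rss", text=term, xmlUrl=feed_url)

ET.ElementTree(opml).write("newsroll.opml", encoding="utf-8", xml_declaration=True)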

And if OPML isn’t your thing, then services like Netvibes or Pageflakes let you easily wire up your own news dashboard:

But we all know in our heart of hearts that RSS and Atom feed subscriptions are just not that popular or widespread as a consumer technology. Folk aren’t knowingly using feeds, and they’re not unknowingly using them directly either. (But feeds are being used as wiring/plumbing behind the scenes, so RSS is not dead yet, okay?!;-)

(In the Library world, as well as the wider news reading world, this failure to engage with feed subscriptions can be seen (in part) by the lack of significant uptake of RSS alerts.)

So when Google announced last week that you can now Create and Share custom News sections, it struck me that they were getting round the exposed plumbing problem that subscribing to a feed implies, and instead making it easy to create a custom view (the output of which can also be subscribed to) without the appearance of having to do any plumbing at all – How to Create Your Own Google Custom News Section (Tutorial):

You can search the directory of already created news sections – as well as find a link to a page that lets you create your own news sections, here: Google News: Custom sections directory.

So for example, here are a few I have already made:
UK Higher Education News
Isle of Wight News
UK Broadcasting News
Formula One News

The extent to which you can create a finely tuned view of the news is, admittedly, limited. You can’t, for example, limit the search to specified publications (which you can do in a Google News advanced search) – filtering is limited to keywords and locale (I’m not sure of the extent to which the order in which you enter the keywords affects things?). But if you already know how to create that sort of filtered search, you probably also know how to set up a new search alert, wire up a feed-powered dashboard of your own, and so on. And if the Google Custom News sections editor was any more complicated, I dare say it would put off the users I imagine Google are reaching out to…

Under the Radar…

Here’s a quick post from under the radar… Apparently, folk from Cam Libraries get together every so often for an informal but issues related brown bag lunch somewhere… It seems like the where and whenabouts of these events is a closely guarded secret.

I think I’m ‘presenting’ at a brown bag lunch session next week, Nov 27th, but I don’t have access to the mailing list the announcement went out on so don’t know any more details than that.

I did, however, manage to grab a bootleg of a trailer for what may or may not be this event, based on what I think I said I could talk about if I managed to get an invite:

If the event is on, I guess I’ll be told immediately before the event and taken to the location blindfolded (presumably using a brown paper bag?)

Just in case, best keep this hush hush, okay? ;-)

Using JISCPress/Digress.it for Reading List Publication

One of the things I’ve been doodling with, but not managing to progress much thinking-wise (not enough dog walking time lately!), is how we might be able to use the digress.it WordPress theme to support various course-related functions in ways that exploit the disaggregating features of the theme.

Chatting with Huw Jones last week about his upcoming Arcadia seminar on “The Problem of Reading Lists” (this coming Tuesday, Nov 24th – all welcome;-) I started thinking again about the potential for using digress.it as a means of publishing, and collecting comments on, reading lists.

So for example, over on the doodlings WriteToReply site I’ve posted an example of how a reading list posted under the theme is automatically disaggregated into separate, uniquely identified references:

The reading list was generated simply by copying and pasting a PDF based reading list into a WordPress blog post. Looking at the format of the list, one could imagine adding further comments or notes relating to each reference using a blog comment. Given that the basis of each paragraph is a citation to a particular work, it might be possible to parse out enough information to generate a link to a search on the University OPAC for the corresponding work (and if so, pull back an indication of the availability of the book as, for example, my Library Traveler script used to do for books viewed on Amazon).
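
As a very rough sketch of the sort of thing I mean, something like the following could pull a title-ish string out of each reference and turn it into an OPAC keyword search link. The search URL is a placeholder (I haven't checked what the Newton catalogue actually expects), and the "parser" is no more than a crude heuristic:

import re
import urllib.parse

# Placeholder URL pattern - substitute whatever the real OPAC search expects
OPAC_SEARCH = "http://library.example.ac.uk/search?keywords={terms}"

def opac_link(reference):
    """Crude heuristic: use the longest comma-separated chunk of the
    reference (often the title) as the keyword search terms."""
    chunks = [c.strip() for c in reference.split(",")]
    terms = re.sub(r"[^\w\s]", " ", max(chunks, key=len)).strip()
    return OPAC_SEARCH.format(terms=urllib.parse.quote_plus(terms))

print(opac_link("Ranganathan, S.R., The Five Laws of Library Science, Madras Library Association, 1931"))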

Under the current in-testing digress.it theme, each paragraph on the page can be made available as a separate item in an RSS feed; that is, as well as the standard ‘single item’ RSS page feed that WordPress generates automatically, we can get an N-item feed from the page for the N paragraphs contained on that page.

Which in turn means that to generate an itemised RSS feed version of a reading list, all I need to do is paste the reading list – with each reference in a separate paragraph – into a single blog post. (The same is true for disaggregating/feed-itemising previous exam papers, for example, or, I guess, video links in order to generate a DeliTV programme bundle…?!)
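
Once the paragraph-level feed is there, consuming it is trivial; each reference could then be passed through something like the OPAC link heuristic sketched above. The feed URL below is just a stand-in – I'd need to check what URL pattern the current digress.it build actually exposes for the per-paragraph feed:

import feedparser  # pip install feedparser

# Stand-in URL - the actual per-paragraph feed location depends on the digress.it setup
feed = feedparser.parse("http://example.writetoreply.org/readinglist/feed/paragraphs/")

# One entry per reference paragraph on the reading list page
for i, entry in enumerate(feed.entries, start=1):
    print(i, entry.get("title", ""), entry.get("link", ""))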

(For more details of the various ways in which digress.it can automatically disaggregate/atomise a document, see Open Data: What Have We Got?.)

PS just a reminder again – Huw’s Reading List project talk, which is about far more than just reading lists, is on Tuesday in the Old Combination Room, Wolfson College, Cambridge, at 6pm.

Google/Feedburner Link Pollution

Just a quick observation…

If you run a blog (or any other) RSS feed through Feedburner, the title links in the feed point to a Feedburner proxy for the link.

If you use Google Reader, and send a post to delicious:

the Feedburner proxy link is the link that you’ll bookmark:

(Hmmmm, methinks it would be handy if Delicious gave you the option to bookmark the ‘terminal’ URI rather than a proxied or short URI? Maybe by getting Google proxied links into Delicious, Google is amassing data about social bookmarking behaviour from RSS feeds on Delicious? So how about this for a scenario: you wake up tomorrow to find the Goog has bought Delicious off Yahoo, and all your bookmarked links are suddenly rewritten in the form: http://deliproxy.google.com/~r/gamesetwatch/~3/Yci8wJb49yk/fighting_fantasy_flowcharts.php)

If you click on the link to go through to the actual linked page, and look at the actual page URI, you may well see something like this:

http://www.gamesetwatch.com/2009/11/fighting_fantasy_flowcharts.php?
utm_source=feedburner&utm_medium=feed
&utm_campaign=Feed%3A+gamesetwatch+%28GameSetWatch%29

That is, a URI with Google Analytics tracking info attached automagically by Feedburner (see Google Analytics, Feedburner and Google Reader for more on this).
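
If you wanted to clean such links up before bookmarking them, a few lines of Python will do it: follow any proxy redirects to get at the “terminal” URL, then drop the utm_* tracking parameters. (I’m using the requests library here for convenience; some servers don’t respond helpfully to HEAD requests, so treat this as a sketch.)

import requests  # pip install requests
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def depollute(url):
    """Follow any proxy redirects, then strip utm_* tracking parameters."""
    final = requests.head(url, allow_redirects=True, timeout=10).url
    parts = urlparse(final)
    clean = [(k, v) for k, v in parse_qsl(parts.query) if not k.startswith("utm_")]
    return urlunparse(parts._replace(query=urlencode(clean)))

print(depollute("http://www.gamesetwatch.com/2009/11/fighting_fantasy_flowcharts.php"
                "?utm_source=feedburner&utm_medium=feed"))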

Here, then, are a couple of good examples of why you might not want to use (Google) Feedburner for your RSS feeds:

1) it can pollute your links, first by appending them with Google Analytics tracking codes, then by rewriting the link as a proxied link;
2) you have no idea what future ‘innovations’ the Goog will introduce to pollute your feed even further.

(Bear in mind that Google Feedburner also allows you to inject ads into a feed you have burned using AdSense for Feeds.)

Time for a University Prepress?

When I first joined the OU as a lecturer, I was self-motivated, research active, publishing to peer reviewed academic conferences outside of the context of a formal research group. That didn’t last more than a couple of years, though… In that context, and at that time, one of the things that struck me about the OU was that research active academics were expected to produce written work for publication in two ways: for research, through academic conferences and journals; and for teaching, via OU course materials.

The internal course material production route was, and still is, managed through a process of course team review at the authoring stage, and is then supported by editors, artists and picture researchers for publication, although I don’t remember so much involvement from media project managers ten years or so ago, if they even existed then? Pagination and layout were managed elsewhere, and for authors who struggled to use the provided document templates, the editor was at hand for technical review, typos, grammar and reference checking, and a course secretary could be brought in to style the document appropriately. Third party rights were handled by the course manager, and so on.

In contrast, researchers had to research and write their papers, produce images, charts and tables as required, and style the document as a camera-ready document using a provided style sheet. In addition, published researchers would also review (and essentially help edit) works submitted to other journals and conferences. The publisher contributed nothing except perhaps project management and the production and distribution of the actual print material (though I seem to remember getting offprints, receiving requests for them, and mailing them out with an OU stamp on an OU envelope).

Although I haven’t published research formally for some time, I suspect the same is still largely true nowadays…

Given that the OU is a publication house, publishing research and teaching materials as a way of generating income, I wonder if there is an opportunity for the Library to support the research publication process by providing specialist support for research authors, including optimising their outputs for discovery!

At the current time, many academic libraries host their institution’s repository, providing a central location within which are lodged copies of academic research publications produced by members of that institution. Some academic publishers even offer an ‘added value’ service in their publication route whereby a published article, as written, corrected, laid out, paginated, rights cleared, and rights waived by the author (and reviewed for free by one or more of their peers) will be submitted back to the institution’s repository.

[Cue bad Catherine Tate impression]: what a f*****g liberty… [!]

So as the year ends, here’s a thought I’ve ranted to several people over the year: academic libraries should seize the initiative from the academic publishers, adopt the view that the content being produced by the academy is valuable to publishers as well as academics, that the reputation of journals is in part built on the reputation of the institutions and academics responsible for producing the research papers, and set up a system in which:

– academics submit articles to the repository using an institutional XML template (no more faffing around with different style sheets from different publishers), at which point they are released using a preview stylesheet as a preprint (a rough sketch of that step follows this list);

– journals to which articles are to be submitted are required to collect the articles from the repository. Layout and pagination is for them to do, before getting it signed off by the author;

– optionally, journal editors might be invited to bid for the right to publish an article formally. The benefit of formal publication for the publisher is that when a work is cited, the journal gets the credit for having published the work.
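
By way of a concrete (if entirely hypothetical) illustration of the preview step in the first item above, here's a minimal sketch that applies an institutional preview stylesheet to a submitted article using lxml; both filenames, and the existence of such a stylesheet, are assumptions for the sake of the example:

from lxml import etree  # pip install lxml

# Hypothetical files: an article marked up in the institutional XML template,
# and the institution's XSLT preview stylesheet
article = etree.parse("article.xml")
preview = etree.XSLT(etree.parse("preview.xsl"))

with open("preprint.html", "wb") as f:
    f.write(etree.tostring(preview(article), pretty_print=True))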

That is all… ;-)

PS RAE/REF style accounting could also be used in part to set journal pricing and payments. Crap journals that no-one cites content in would get nothing. Well cited journals would be recompensed more generously… There would of course be opportunities for gaming the system, but addressing this would be similar in kind to implementing measures that search engines based on PageRank-style algorithms take against link farms, etc.