OU on the Telly…

Ever since the Open University was founded, a relationship with the BBC has provided the OU with a route to broadcast through both television and radio. Some time ago, I posted a recipe for generating a page that showed current OU programmes on iPlayer (all rotted now…). Chatting to Liam last night, I started wondering about resurrecting this service, as well as pondering how I could easily begin to build up an archive of programme IDs for OU/BBC co-pros, so that whenever the fancy took me I could go to a current and comprehensive “OU on iPlayer” page and see what OU co-pro’d content was currently available to watch again.

Unfortunately, there doesn’t seem to be an obvious feed anywhere that gives access to this information, nor a simple directory page listing OU co-pros with links even to the parent series page or series identifier on the BBC site. (This would be lovely data to have in the OU’s open linked data store;-)

OU on the telly...

What caught my attention about this feed is that it’s focussed on growing an audience around live broadcasts. This is fine if you’re tweeting added value* along with the live transmission and turning the programme into an event, but in general terms? I rarely watch live television any more, but I do watch a lot of iPlayer…

(* the Twitter commentary feed can then also be turned into expert commentary subtitles/captions, of course, using Martin Hawksey’s Twitter-powered iPlayer subtitles recipe…)

There is also a “what’s on” feed available from OpenLearn (via a link – autodiscovery doesn’t seem to be enabled?), but it is rather horrible, and it doesn’t contain BBC programme/series IDs (and I’m not sure the linked-to pages necessarily do, either?).

OU openlearn whats on feed (broken)

So – what to do? In the short term, as far as my tinkering goes, nothing (holidays…:-) But I think with a nice feed available, we could make quite a nice little view over OU co-pro’d content currently on iPlayer, and also start to have a think about linking in expert commentary, as well as linking out to additional resources…

See also:
Augmenting OU/BBC Co-Pro Programme Data With Semantic Tags
Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer [this actually provides a crude recipe for getting access to OU/BBC programmes by bookmarking co-pro’d series pages on delicious…]

PS from @liamgh: “Just noticed that Wikipedia lists both BBC & OU as production co e.g. en.wikipedia.org/wiki/The_Virtu… RH Panel readable with dbpedia.” Interesting… so we should be able to pull down some OU/BBC co-pros by a query onto DBPedia…
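Just to sketch the sort of query I have in mind (untested, and with the DBpedia property and resource names guessed at rather than checked against an actual co-pro page), here’s an Apps Script fragment that asks the DBpedia SPARQL endpoint for programmes listing both the BBC and the Open University as production companies:

function ouBBCCoProsFromDBpedia() {
  // NB the dbpedia.org/ontology/company property is a guess at how the
  // "Production company" infobox panel data gets exposed - it needs checking!
  var query = "SELECT DISTINCT ?show WHERE {"
    + " ?show <http://dbpedia.org/ontology/company> <http://dbpedia.org/resource/BBC> ."
    + " ?show <http://dbpedia.org/ontology/company> <http://dbpedia.org/resource/Open_University> . }";
  var url = "http://dbpedia.org/sparql?format=json&query=" + encodeURIComponent(query);
  var result = UrlFetchApp.fetch(url);
  var o = Utilities.jsonParse(result.getContentText());
  // Standard SPARQL JSON results layout: results.bindings[i].show.value
  for (var i = 0; i < o.results.bindings.length; i++) {
    Logger.log(o.results.bindings[i].show.value);
  }
}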

PPS also from Liam – a handy recipe for generating an HTML5 leanback UI for video content identified via a SPARQL query: An HTML5 Leanback TV webapp that brings SPARQL to your living room

Fragments: Accessing YouTube Account Data in Google Spreadsheets via OAuth

If you’re running a YouTube account, how might you collect Insight data for all your videos as spreadsheet entries that can be used in the preparation of reports about your social media effectiveness?

One way might be to go to each video in turn and download the separate CSV data files created for each video. Alternatively, you can grab the data via the YouTube/GData API (http://code.google.com/apis/youtube/2.0/developers_guide_protocol_insight.html).

I haven’t actually got round to getting any data out of my YouTube account and into a Google spreadsheet yet, but I have done the first step, which is to set up the authentication using OAuth. Here’s the Google Apps Script I used…

function youtube(){
  // Setup OAuthServiceConfig
  var oAuthConfig = UrlFetchApp.addOAuthService("youtube");
  oAuthConfig.setAccessTokenUrl("https://www.google.com/accounts/OAuthGetAccessToken");
  oAuthConfig.setRequestTokenUrl("https://www.google.com/accounts/OAuthGetRequestToken?scope=http%3A%2F%2Fgdata.youtube.com%2F");
  oAuthConfig.setAuthorizationUrl("https://www.google.com/accounts/OAuthAuthorizeToken");
  oAuthConfig.setConsumerKey("anonymous");
  oAuthConfig.setConsumerSecret("anonymous");

  // Setup optional parameters to point request at OAuthConfigService.  The "youtube"
  // value matches the argument to "addOAuthService" above.
  var options =
    {
      "oAuthServiceName" : "youtube",
      "oAuthUseToken" : "always"
    };

  var result = UrlFetchApp.fetch("http://gdata.youtube.com/feeds/api/users/default/favorites?v=2&alt=json", options);
  var o  = Utilities.jsonParse(result.getContentText());
  Logger.log(o)
}

[Gist here: https://gist.github.com/1067283]

The first time you run the script, it should request access from your YouTube account…

The next step is to work out what to pull from Youtube, and how to actually store it in the spreadsheet…
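When I do get round to it, I imagine the spreadsheet end of things will look something like the following sketch (untested, and note that the path into the response data – feed.entry, title.$t – is guessed from the usual GData JSON conventions):

function youtubeFavouritesToSheet(){
  // Assumes the "youtube" OAuth service has already been configured via
  // UrlFetchApp.addOAuthService(), as in the function above
  var options = {
    "oAuthServiceName" : "youtube",
    "oAuthUseToken" : "always"
  };
  var result = UrlFetchApp.fetch("http://gdata.youtube.com/feeds/api/users/default/favorites?v=2&alt=json", options);
  var o = Utilities.jsonParse(result.getContentText());
  // GData JSON puts items in feed.entry, with text values in $t elements
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var entries = o.feed.entry;
  for (var i = 0; i < entries.length; i++) {
    // One row per video - title only for now; Insight stats would go here too
    sheet.appendRow([entries[i].title.$t]);
  }
}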

PS a couple more Youtube snippets of interest:
YouTube documentation wizard: customise your YouTube API documentation view
interactive YouTube API explorer

Marussia Virgin Racing F1 Factory Visit

Yesterday, I had the good fortune to visit the Marussia Virgin Racing F1 factory at Dinnington, near Sheffield, as a result of “winning” a lucky dip competition run via GoMotorSport (part of a series of National Motorsport Week promotions being run by the F1 teams based in the UK).

Marussia Virgin F1 Factory
[Thanks to @markhendy for the pic…]

Thanks to Finance Director Mark Hendy and engineer Shakey for the insight into the team’s operations:-)

Over the next few days and weeks, I’ll try to pick up on a few of the things I learned from the tour on the F1DataJunkie blog, tying them in to the corresponding technical regulations and other bits and pieces, but for now, here are some of the noticings I came away with…

– the engines aren’t that big, weighing 90kg or so and looking smaller than the engine in my own car…

– wheels are slotted onto the axles using a three pin mount on the front and a six(?) pin mount on the rear. (The engines are held on using a six(?) point fixing.)

– the drivers aren’t that heavy either, weight-wise (not that we met either of the drivers: neither Timo Glock nor Jerome D’Ambrosio are frequent visitors to the Dinnington factory, where the team’s cars are prepared before, and overhauled after, each race…): 70kg or so. With cars prepared to meet racing weight regulations to a tolerance of 0.5kg or so, a large mixed grill and a couple of pints can make a big difference… (Hmm, I guess it would be easy enough to calculate the “big dinner weight effect” penalty on laptime?!)
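(As a back of the envelope guess: if the oft-quoted rule of thumb that 10kg of extra weight costs something like 0.3s per lap is anywhere near right, a 2kg blow-out works out at roughly 0.06s/lap – three or four seconds over a race distance…)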

– I’m not sure if this was a “right-handed vs left-handed spanner” remark, but a comment was also made that adhesive sponsor stickers can have a noticeable effect on the car’s aerodynamics as the corners become unstuck and start to flap. (Which made me wonder, if that is the case, is the shape of the stickers taken into account? Is a leading edge on a label with a pointed/right-angled corner rather than a smooth curve likely to come unstuck more easily, for example?!) Cars also need repainting every few races (stripping back to the carbon, and repainting afresh) because of pitting, chipping and other minor damage that can affect smooth airflow.

– side impact tubes are an integral part of the safety related design of the car:

– to track the usage of tyres during a race weekend, an FIA official scans a barcode on each tyre as it is used on the car:

The data junkie in me in part wonders whether this data could be made available in a timely fashion via the Pirelli website (or a Pirelli gadget on each team’s website) – or would that be giving away too much race intelligence to the other teams? That way, we could get an insight into tyre usage over the course of the race weekend…

– IT plays an increasingly important part in the pit garage setup; local area networks (cabled and wifi?) are set up by each team for the weekend, with the data engineers sitting behind the screen and viewing area in the garage (rather than having a fixed set-up in one of the 5(?) trucks that attend each race).

– the cars are rigged up with 60 or so sensors; there is only redundancy on the throttle and clutch sensors. Data analysis is in part provided through engineers supplied by parts suppliers: McLaren Electronics, who supply the car’s ECU (and telemetry box(?)), provide a dedicated person(?) to support the team; data analysis is, in part, carried out using the ATLAS (9?) (Advanced Telemetry Linked Acquisition System) software from McLaren Electronic Systems. Data collected during a stint is transmitted under encryption back to the pits, as well as being logged on the car itself. A full data dump is available to the team and the FIA scrutineers via an umbilical/wired connection when the car is pitted.

UST Global, one of the team’s partners, also provide 3(?) data analysts to support the team during a race (presumably using UST Global’s “Race Management System”?).

– for design and testing, weekly reporting is required that conforms to a trade-off between the number of hours per week that each team can spend on wind tunnel testing (60 hours per week) and the amount of CFD (“can’t find downforce”;-) simulation (40 teraflops per week). My first impression there was that efficient code could effectively mean more simulation testing?! (CFD via CSC? CSC expands relationship with Marussia Virgin Racing, doubling computing power for the team’s 2011 Formula 1 season – or are things set to change with the replacement of Nick Wirth by Pat Symonds…?)

– the resource restriction agreement also limits the number of people who can work on the chassis. For a race weekend, teams are limited to 50 (47?) people. We were given a quick run-down of at least 8(?) engineer roles assigned to each car, but I forget them…

So – that’s a quick summary of some of the things I can remember off the top of my head…

…but here are a couple of other things to note that may be of interest…

Marussia Virgin are making the most of their Virgin partnership over the Silverstone race weekend with a camping party/Virgin Experience at Stowe School (Silverstone Weekend) and a hook-up with Joe Saward’s “An Audience With Joe“… (If you don’t listen to @sidepodcast’s An Aside With Joe podcast series, you should…;-)

The team has also got an education thing going, with race ticket sweeteners for folk signing up to the course: Motorsport Management Online Course.

I can’t help thinking there may be a market for a “hardcore fans” course on F1 that could run over a race season as an informal, open online course… I still don’t really know how a car works, for example ;-)

Anyway – that’s by the by: thanks again to GoMotorsport and the Marussia Virgin Racing team (esp. Mark Hendy and Shakey) for a great day out :-)

PS I think the @marussiavirgin team are trying to build up their social media presence too… to see who they’re listening to, here’s how their friends connect:

How friends of @marussiavirgin connect

;-)

News, Analysis, Academia and Demand Education

Some threads that I can see tangling:

  • as Google starts to fight back against content farms such as Demand Media (e.g. New York Times on Google’s War on Nonsense), the Digger seems keen to get into education: Murdoch signals push into education;
  • for a long time I’ve imagined some sort of sensemaking spectrum that leads from news stories, through analysis and feature articles, to a more academic take on a subject (if I can get my act together, I’d like to try to pull a workshop together in the Autumn between media and education folk to look at this…); I’m not necessarily suggesting a bigger role for “celebrity academics”, more a consideration of how academics can make content available to the media to add depth and deepened engagement to a story, and how the media can provide timeliness and news hooks to education as a way of adding contextual relevance. Here are two short (2 minute) takes on it, one from Martin Bean, the OU VC, in his ALT-C 2010 keynote, and the other from Guardian editor, Alan Rusbridger, on the Radio 4 Media Show:
  • the OU starts a new sort of campaign: Youtube learning campaigns, such as this one on The History of English

So where’s all this going? And what role might openly licensed content created by academics as part of their daily duties have to play in it?

Open Book Talk

“A booktalk in the broadest terms is what is spoken with the intent to convince someone to read a book.” Wikipedia

Whilst putting together yesterday’s post about personal art collections online (for a wider take on this, see Mia Ridge’s The rise of the non-museum (and death by aggregation), which offers all manner of food for thought around personal collection building…), I started thinking again about how we might use recorded discussions or book talks focussing on particular books as a component in the “content scaffolding” around works that might be used as resources in an informal learning context.

(For an earlier foray into the book talk world, see my post on BBC “In Our Time” Reading List using Linked Data.)

So the (really simple and obvious) idea is this (and I fully appreciate other sites out there may already exist that do this: if so, please let me know in the comments): how about we build a lookup service that allows you to search by author, book title or ISBN (or cross-ISBN), and have it return details for the book as well as links to audio or video recordings of book talks around that book.
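For the sake of argument, the sort of record I imagine such a service returning might look something like this (the field names are entirely made up):

var bookTalkRecord = {
  "isbn" : "...",            // plus cross-referenced ISBNs for other editions?
  "author" : "...",
  "title" : "...",
  "booktalks" : [
    { "programme" : "...",   // the radio show or podcast series
      "url" : "...",         // link to the audio or video recording
      "broadcast" : "..." }  // date of broadcast/publication
  ]
};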

I’ve started trying to cobble together a few resources around this, setting up (a not yet complete set of) scrapers (in various states of disrepair) on Scraperwiki to collate books and book talk audio links from a number of programme sites, including BBC programme pages and the IT Conversations/Tech Nation archives.

It might also be appropriate to try to pull in “quality” book reviews* to annotate book listings, given that part of my idea at least is to find ways of enriching book references with discussion around them that can help folk make sense of the big ideas contained within the book, as well as maybe encouraging them to buy it (the ever-necessary sustainability model: in this case, Amazon referral fees!). Note that several of the sites use Amazon referrals as part of their own sustainability model, so it would only be fair to use their affiliate codes at least part of the time if their playable audio content was embedded on the site, even if that content is openly licensed… Share and share alike, right?! That is, trickle back a portion of any income you do make off the work of others, even if it is openly licensed for commercial use;-)

Another strand to all of this, of course, is sensemaking annotations around books pulled from “OERs” (what is it about education that makes the sector want its content to be somehow regarded as “special” and deserving of all sorts of qualification?!;-)

*Maybe the Guardian Platform API or one of the New York Times APIs could play a role here?

So, as ever, I’ve made a start, and as ever, that’ll probably be the end of it…. Sigh… Nice thought while it lasted though…

PS if I were to take next steps, it would probably be to take the scraped data and try to normalise it in some ad hoc way in a triple store, maybe on the Talis platform? Note that in the current incarnation, some of the scraped BBC data contains multiple book references in a single record, and these should be separated out; also note that a lot of book references are informal (author/title), though I did manage to grab ISBNs (I think?!) from the IT Conversations/Tech Nation pages.

PPS In passing, I note that some of the older archived episodes of A Good Read have been split into chapters covering the different books reviewed in the programme. Was this some sort of experimental enrichment, or just the start of a more general roll-out of chapterisation…?

Confused About Scope: Art Online

A few months ago, the art discovery website Artfinder appeared on the scene, providing a place to go to view art (online) from galleries around the world, build your own collections, receive recommendations about other artworks you might like to see (and maybe go and visit for real) and so on. A “Magic Tour” feature allows you to select three art works you like from sets of four, and then view a personalised art collection based on recommendations derived from your selection. Where quality prints of a work are available, there is an option to buy the print (for example, via MemoryPrints).

A couple of other related things that have crossed my radar over the course of the year include the Google Art Project, which offers very high definition reproductions of artworks from galleries around the world, and the JISC funded OpenART project, “a partnership between the University of York, the Tate and technical partners, Acuity Unlimited, will design and expose linked open data for an important research dataset entitled ‘The London Art World 1660-1735′”.

Today, I noticed the launch of a new BBC site, Your Paintings [announcement], which offers you the ability to create art collections, locate artworks by physical gallery location and so on… Hmmmm… (As yet, the URLs don’t seem to support content negotiation by adding a .json or .xml suffix to picture or gallery pages; that is, as yet, the service doesn’t appear to be offering linkable data (hyperdata?) views over the content.)

There was a time when Microsoft used to be charged with unfairly influencing the market, announcing it was about to release some feature or product that a rival was trying to market, effectively stifling competition through brand and market dominance. If you read the tech blogs, Google, Facebook, Apple, Twitter, et al. now regularly find themselves in a situation where the services, applications or features they release are heralded as likely to wipe out competition in a niche discovered, created, or developed by a startup elsewhere (only in many cases it doesn’t quite work out that way… Bit.ly survived Twitter’s shortener, Google Buzz threatened no-one, Facebook Places and Google Latitude haven’t squashed Foursquare, etc.).

The BBC has itself faced challenges regarding “anticompetitive”/fair trading behaviour, for example in local online news (local news video), catch-up services/internet TV (Canvas) and online learning for schools (BBC Jam).

Now I’m generally a fan of the BBC, but I do wonder what additional value Your Paintings brings, especially given that it’s not apparently being launched with any additional technical capacity building features (i.e. it’s not (yet?) making metadata freely available for others to build on, though a couple of recent tweets suggest this may be on the timeline…)?

Having come across aNobii today (via @maireadoconnor), a service that offers “an online reading community built by readers for readers allowing you to shelve, find and share books”, I wonder: is this another area where the BBC could just “step in”, presumably as a way of building community around the wide variety of programming it offers that has good hooks into books?

[Disclaimer: I’ve ranted before about the BBC not making more use of structured markup around book identifiers, but if they were to get into reading groups, this would presumably provide the technical underpinnings…? (e.g. BBC “In Our Time” Reading List using Linked Data.) So maybe I should be careful what I wish for…]

So the point of this post? Just to note my confusion about what it is the BBC actually does, and how it does it… I know that it’s not just about the telly and the radio, but I’m not sure what it is about when it comes to the web?

And it’s not just confusion about the BBC’s role. It also extends to the public facing role of the OU, which I personally view as having more of a “public service education” remit than the rest of the UK HE sector (whether this is a view that can survive the increasingly businesslike culture of higher education, I don’t know…). In other words: to what extent should the OU be doing more in the way of education-related online public service broadcasting?

PS so I wonder: how much does the BBC spend on AdWords? And how much has it allocated to the opening salvo of a Your Paintings AdWords campaign…?

Filter Bubbles, Google Ground Truth and Twitter EchoChambers

As the focus for this week’s episode [airs live Tues 21/6/11 at 19.32 UK time, or catch it via the podcast] in the OU co-produced season of programmes on openness with Click (radio), the BBC World Service radio programme formerly known as Digital Planet, we’re looking at one or two notions of diversity.

If you’re a follower of pop technology, over the last week or two you will probably have already come across Eli Pariser’s new book, The Filter Bubble: What The Internet Is Hiding From You, or his TED Talk on the subject:


Eli Pariser, “The Filter Bubble”, TED Talks

It could be argued that this is the Filter Bubble in action… how likely is it, for example, that a randomly selected person on the street would have heard of this book?

To support the programme, presenter Gareth Mitchell has been running an informal experiment on the programme’s Facebook page: Help us with our web personalisation experiment!! The idea? To see what effect changing personalisation settings on Google has on a Google search for the word “Platform”. (You can see results of the experiment from Click listeners around the world on the Facebook group wall… Maybe you’d like to contribute too?)

It might surprise you to learn that Google results pages – even for the same search word – do not necessarily always give the same results, something I’ve jokingly referred to previously as “the end of Google Ground Truth”, but is there maybe a benefit to having very specifically focussed web searches (that is, very specific filter bubbles)? I think in certain circumstances there may well be…

Take education, or research, for example. Sometimes, we want to get the right answer to a particular question. In times gone by, we might have asked a librarian for help, if not to suggest a particular book or reference source, at least to help us find one that might be appropriate for our needs. Nowadays, it’s often easier to turn to a web search engine than it is to find a librarian, but there are risks in doing that: after all, no-one really knows what secret sauce is used in the Google search ranking algorithm that determines which results get placed where in response to a particular search request. The results we get may be diverse in the sense that they are ranked in part by the behaviour of millions of other search engine users, but from that diversity do we just get – noise?

As part of the web personalisation/search experiment, we found that for many people, changing the personalisation settings had no noticeable effect on the first page of results returned for a search on the word “platform”. But for some people, there were differences… From my own experience of making dozens of technology (and Formula One!) related searches a day, the results I get back for those topics when I’m logged in to Google are very different to when I have disabled the personalised results. As far as my job goes, I have a supercharged version of Google that is tuned to return particular sorts of results – code snippets, results from sources I trust, and so on. In certain respects, the filter bubble is akin to my own personal librarian. In this particular case, the filter bubble (I believe) works to my benefit.

Indeed, I’ve even wondered before whether a “trained” Google account might actually be a valuable commodity: Could Librarians Be Influential Friends? And Who Owns Your Search Persona?. Being an effective searcher requires several skills, including the phrasing of the search query itself, the ability to skim results and look for signals that suggest a result is reliable, and the ability to refine queries. (For a quick – and free – mini-course on how to improve your searching, check out the OU Library’s Safari course.) But I think effective searching will increasingly rely on personalisation features… which means you need to have some idea about how the personalisation works in order to make the most of its benefits and mitigate the risks.

To take a silly example: if Google search results are in part influenced by the links you or your friends share on Twitter, and you follow hundreds of spam accounts, you might rightly expect your Google results to be filled with spam (because your friends have recommended them, and you trust your friends, right? That’s one of the key principles of why social search is deemed to be attractive.)

As well as the content we discover through search engines, content discovered through social networks is becoming of increasing importance. Something I’ve been looking at for some time is the structure of social networks on Twitter, in part as a “self-reflection” tool to help us see where we might be situated in a professional social sense based on the people we follow and who follow us. Of course, this can sometimes lead to incestuous behaviour, where the only people talking about a subject are people who know each other.

For example, when I looked at the connections between people chatting on Twitter about Adam Curtis’ All Watched Over By Machines of Loving Grace documentary, I was surprised to see it defined a large part of the UK “technology scene” that I am familiar with from my own echochamber…

#awobmolg echochamber

So what do I mean by echochamber? In the case of Twitter, I take it to refer to a group of people chatting around a topic (as for example, identified by a hashtag) who are tightly connected in a social sense because they all follow one another anyway… (To see an example of this, for a previous OU/Click episode, I posted a simple application (it’s still there), to show the extent to which people who had recently used the #bbcClickRadio hashtag on Twitter were connected.)
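For what it’s worth, the guts of that sort of application are straightforward enough. Here’s a rough sketch of the lookup (untested, and it completely ignores the Twitter API rate limits, which bite very quickly with this pairwise approach – a real version would need to cache and throttle):

function hashtagEchochamber(){
  // Who's been using the hashtag recently? (Twitter search API)
  var result = UrlFetchApp.fetch("http://search.twitter.com/search.json?q=%23bbcClickRadio&rpp=100");
  var tweets = Utilities.jsonParse(result.getContentText()).results;
  var users = [];
  for (var i = 0; i < tweets.length; i++) {
    if (users.indexOf(tweets[i].from_user) < 0) users.push(tweets[i].from_user);
  }
  // Check each pair of hashtaggers for follow links in either direction
  for (var a = 0; a < users.length; a++) {
    for (var b = a + 1; b < users.length; b++) {
      var f = UrlFetchApp.fetch("http://api.twitter.com/1/friendships/show.json?source_screen_name=" + users[a] + "&target_screen_name=" + users[b]);
      var rel = Utilities.jsonParse(f.getContentText()).relationship;
      if (rel.source.following) Logger.log(users[a] + " follows " + users[b]);
      if (rel.source.followed_by) Logger.log(users[b] + " follows " + users[a]);
    }
  }
}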

As far as diversity goes, if you follow people who only follow each other, then it might be that the only ideas you come across are ideas that keep getting recycled by the same few people… Or it might be the case that a highly connected group of people shows a well defined special interest group on a particular topic….

To get a feel for what we can learn about our own filter bubbles in Twitterspace, I had a quick look at Gareth Mitchell’s context (@garethm on Twitter). One of the dangers of using public apps is that anyone can do this sort of analysis, of course, but the ethics around my using Gareth as a guinea pig in this example are maybe the topic of another programme…!

So, to start with, let’s see how tightly connected Gareth’s Twitter friends are (that is, to what extent do the people Gareth follows on Twitter follow each other?):

The social graph showing how @garethm’s friends follow each other

The nodes represent people Gareth follows, and they have been organised into coloured groups based on a social network analysis measure that tries to identify groups of tightly interconnected individuals. The nodes are sized according to a metric known as “Authority”, which reflects the extent to which people are followed by other members of the network.

A crude first glance at the graph suggests a technology (purple) and science (fluorine-y yellowy green) cluster to me, but Gareth might be able to label those groups differently.

Something else I’ve started to explore is the extent to which other people might see us on Twitter. One way of doing this is to look at who follows you; another is to have a peek at what lists you’ve been included on, along with who else is on those lists. Here’s a snapshot of some of the lists (that actually have subscribers!) that Gareth is listed on:

@garethm listspace

The flowers are separate lists. People who are on several lists are caught on the spiderweb threads connecting the list flowers… In a sense, the lists are filter bubbles defined by other people into which Gareth has been placed. To the left in the image above, we see there are a few lists that appear to share quite a few members: convergent filters?!

In order to try looking outside these filter bubbles, we can get an overview of the people that Gareth’s friends follow that Gareth doesn’t follow (these are the people Gareth is likely to encounter via retweets from his friends):

Who @garethm’s friends follow that @garethm doesn’t follow…

My original inspiration for this was to see whether or not this group of people would make sense as recommendations for who to follow, but if we look at the most highly followed people, we see this may not actually make sense (unless you want to follow celebrities!;-)

Popular friends of @garethm’s friends that he doesn’t follow…

By way of a passing observation, it’s also worth noting that the approach I have taken to constructing the “my friends’ friends who aren’t my friends” graph tends to place “me” at the centre of the universe, surrounded by folk who are just a friend of a friend away…

For extended interviews and additional material relating to the OU/Click series on openness, make sure you visit Click (#bbcClickRadio) on OpenLearn.

BBC Click Radio – Openness Special on “Privacy”: Jeff Jarvis vs. Andrew Keen

This week saw the latest episode in the OU/BBC World Service Click (radio) co-produced season on openness, with a focus this week on privacy… You can hear an extended version of the discussion between entrepreneurial journalism and openness advocate Jeff Jarvis and professional contrarian Andrew Keen: Privacy in a connected world

Unfortunately, the episode aired just too early to pick up on this week’s “Who needs privacy?!” news, and in particular the iPhone’s “secret” location logging behaviour: iPhone keeps record of everywhere you go (find out how to see where your iPhone thinks you’ve been here: Got an iPhone or 3G iPad? Apple is recording your moves); but the discussion is a great one, so I encourage you to listen to it… (I’ll be asking questions later!;-)

The programme also saw the launch of its new hashtag: #bbcClickRadio

Whilst the Click Twitter audience is still dwarfed by the Digital Planet Listeners’ Facebook group, I’m keen to see if we can try to grow it… One way might be to show who’s recently been tweeting about the programme, and encourage people to start following each other and chatting about the issues raised in the programme a little bit more – something Gareth Mitchell (@garethm) can now pick up on, at least on the first airing, as Click now goes out live… So to that end, I’m going to try to work up a special version of my Twitter friendviz application that shows connections between folk who’ve recently tweeted a particular term – in this case, the #bbcClickRadio hashtag. To see the map, visit http://bit.ly/bbcclickradiocommunity.

As a tease, here’s a rather more polished version of a map I grabbed recently…

Snapshot of #bbcClickRadioCommunity - http://bit.ly/bbcclickradiocommunity

(Unfortunately, the live one is unlikely to ever look like this!)

PS I wonder if the investigation into the iPhone tracking was inspired by the recent story about German politician Malte Spitz who managed to obtain a copy of the data his phone provider had stored about his location… Zeit Online: Tell-all telephone (If you want to play with the data, it’s available from there…)

BBC Click Radio – SXSW Interview With Andrew Keen

Tomorrow (today??? Err, Tuesday…) sees (hears?! Err… airs) the next in the OU/BBC Click radio (ex-Digital Planet) co-produced season on “openness”.

Click (radio) (err, as was Digital Planet) now airs live and direct, comin’ atcha on Tuesdays at, err, it’s not easy to find out from the programme page, is it??? Err, 19.32 (UK time???) on Tuesday (the science slot on World Service). (See the full upcoming schedule.)

Anyway, given all that confusion, why not take a break, sit back, and have a listen to this exclusive interview between Click’s Gareth Mitchell and Andrew Keen on “The Squeezed Midlist“.

And for more exclusive and extended interviews, check out the Information and Communication Technologies area on OpenLearn…

PS and for listeners of Click, the BBC World Service radio programme formerly known as Digital Planet, who are on Twitter: the hashtag is now #bbcClickRadio

BBC “In Our Time” Reading List using Linked Data

If you’re a regular listener of BBC Radio 4, you will almost certainly have come across In Our Time, a weekly, single topic discussion programme (with a longstanding archive of listen again material) hosted by Melvyn Bragg on matters scientific, philosophical, historical and cultural. In certain respects, In Our Time may be thought of as a discussion-based audio encyclopedia. The format sees a panel of three experts (made up of academics, commentators and critics knowledgeable on the topic for that week) teaching the host about the topic. A diligent student, he will of course have done some background reading, and posted links to the references consulted on the programme’s web page.

I’ve already had a quick play with the In Our Time data, looking to see how easy it is to relate programmes to expert academics from various UK universities (Visualising OU Academic Participation with the BBC’s “In Our Time”), but I also wondered whether it would be possible to do anything with the book references, such as using them to identify courses that may be related to a particular programme (this is reminiscent of a couple of MOSAIC competition entries that looked at ways of recommending books based on courses, and courses based on books, using @daveyp’s data from Huddersfield University library that associated course codes with the books borrowed by students taking those courses).

Being a lazy sort, I posted an idea to the OKF Ideas Incubator suggesting that it might be worth considering extracting references from In Our Time programme pages and then reconciling them with Linked Data representations of the corresponding book data.

And then, as if by magic, a solution appeared, from Orangeaurochs: “In Our Time” booklist which describes a method for parsing out the book data and then getting a Linked Data resource reference back from Bibliographica.

The original recipe suggested screenscraping the raw book references from the page HTML, but I posted a comment (at the time of writing, still in the moderation queue) which suggests:

Hi
Great to see you taking this challenge on. Re your step 2 – obtaining the reading list – a possibly more structured way of doing this is to get the appropriate section out of the xml or json representation of the programme page (eg http://www.bbc.co.uk/programmes/b00xhz8d.xml or http://www.bbc.co.uk/programmes/b00xhz8d.json).

I wonder if the BBC will start to structure the data even more – for example by adding explicitly marked up biblio data to book references?

Anyway, you can see an example of the results at pages with URLs of the form http://www.aurochs.org/inourtime_booklist/inourtime_booklist_v1.php?http://www.bbc.co.uk/programmes/b00xhz8d – just add the appropriate IOT programme page URL to extract the data from it.

There are a few hits and misses, but it’s a great start, and something that can be used as a starting point for thinking about how to annotate programme-related booklists with structured bibliographic data, and for exploring what that might mean in a world of linked educational resources that can also reference linked BBC content… :-)
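By way of illustration of the more structured route, here’s the sort of fragment I had in mind (a sketch only: long_synopsis is my guess at where the book references actually live in the programme JSON, and the individual references would still need parsing out of it):

function iotBookData(){
  // Grab the JSON representation of an In Our Time programme page
  var result = UrlFetchApp.fetch("http://www.bbc.co.uk/programmes/b00xhz8d.json");
  var o = Utilities.jsonParse(result.getContentText());
  // The reading list appears as part of the programme synopsis data(?)
  Logger.log(o.programme.long_synopsis);
}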

PS Hmmm, I wonder what other programmes are associated with books? A Good Read and Desert Island Discs certainly…