Tinkering With Scraperwiki – The Bottom Line, OpenCorporates Reconciliation and the Google Viz API

Having got to grips with adding a basic sortable table to a Scraperwiki view using the Google Chart Tools (Exporting and Displaying Scraperwiki Datasets Using the Google Visualisation API), I thought I’d have a look at wiring in an interactive dashboard control.

You can see the result at BBC Bottom Line programme explorer:

The page loads in the contents of a source Scraperwiki database (so only good for smallish datasets in this version) and pops them into a table. The searchbox is bound to the Synopsis column and allows you to search for terms or phrases within the Synopsis cells, returning rows for which there is a hit.

Here’s the function that I used to set up the table and search control, bind them together and render them:

    google.load('visualization', '1.1', {packages:['controls']});

    google.setOnLoadCallback(drawTable);

    function drawTable() {

      // %(json)s is filled in server-side by the Scraperwiki view template
      var json_data = new google.visualization.DataTable(%(json)s, 0.6);

      var json_table = new google.visualization.ChartWrapper({'chartType': 'Table', 'containerId': 'table_div_json', 'options': {allowHtml: true}});
      //i expected this limit on the view to work?
      //json_table.setColumns([0,1,2,3,4,5,6,7])
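      //(Untested guess: ChartWrapper doesn't expose setColumns directly;
      //wrapping the data in a google.visualization.DataView, or passing a
      //'view' option, e.g. 'view': {'columns': [0,1,2,3,4,5,6,7]}, to the
      //ChartWrapper constructor, might be the way to do it.)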

      var formatter = new google.visualization.PatternFormat('<a href="http://www.bbc.co.uk/programmes/{0}">{0}</a>');
      formatter.format(json_data, [1]); // Linkify the programme ID in column 1 as a BBC /programmes URL

      formatter = new google.visualization.PatternFormat('<a href="{1}">{0}</a>');
      formatter.format(json_data, [7,8]); // Use the URL in column 8 as the link target for the text in column 7

      var stringFilter = new google.visualization.ControlWrapper({
        'controlType': 'StringFilter',
        'containerId': 'control1',
        'options': {
          'filterColumnLabel': 'Synopsis',
          'matchType': 'any'
        }
      });

      var dashboard = new google.visualization.Dashboard(document.getElementById('dashboard')).bind(stringFilter, json_table).draw(json_data);

    }

The formatter is used to linkify the two URLs. However, I couldn’t get the table to hide the final column (the OpenCorporates URI) in the displayed view. (Doing something wrong, somewhere… though as the comment in the code above suggests, wrapping the data in a DataView or passing the ChartWrapper a 'view' option looks like the place to start.) You can find the full code for the Scraperwiki view here.

Now you may (or may not) be wondering where the OpenCorporates ID came from. The data used to populate the table is scraped from the JSON version of the BBC programme pages for the OU co-produced business programme The Bottom Line (Bottom Line scraper). (I’ve been pondering for some time whether there is enough content there to try to build something that might usefully support or help promote OUBS/OU business courses, or link across to free OU business courses on OpenLearn…) Supplementary content items for each programme identify the name of each contributor and the company they represent in a conventional way. (Their role is also described in what looks to be a conventionally constructed text string, though I didn’t try to extract this explicitly – yet. (I’m guessing the Reuters OpenCalais API would also make light work of that?))
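For anyone wanting to replicate the scrape, here’s a minimal sketch of the sort of lookup involved, assuming the .json representation that BBC /programmes pages expose (the PID in the usage comment is a placeholder – substitute a real one from the series page):

import urllib
import simplejson

def programme_json(pid):
    # BBC /programmes pages also serve a JSON representation of the
    # programme metadata at the same URL with a .json suffix
    return simplejson.load(urllib.urlopen('http://www.bbc.co.uk/programmes/%s.json' % pid))

# e.g. programme_json('b00xxxxx')  # placeholder PID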

Having got access to the company name, I thought it might be interesting to try to get a corporate identifier back for each one using the OpenCorporates (Google Refine) Reconciliation API (Google Refine reconciliation service documentation).

Here’s a fragment from the scraper showing how to lookup a company name using the OpenCorporates reconciliation API and get the data back:

import urllib
import simplejson

# record is the programme/contributor dict being populated by the scraper.
# Look up the company name via the OpenCorporates reconciliation API,
# crudely sanitising it by dropping any non-ASCII characters first
ocrecURL = 'http://opencorporates.com/reconcile?query=' + urllib.quote_plus("".join(i for i in record['company'] if ord(i) < 128))
try:
    recData = simplejson.load(urllib.urlopen(ocrecURL))
except:
    # Catch-all hack: on any lookup/parse failure, fall back to an empty result set
    recData = {'result': []}
print ocrecURL, [recData]
if len(recData['result']) > 0:
    # Results appear to come back in descending score order, so check the top hit
    if recData['result'][0]['score'] >= 0.7:
        record['ocData'] = recData['result'][0]
        record['ocID'] = recData['result'][0]['uri']
        record['ocName'] = recData['result'][0]['name']

The ocrecURL is constructed from the company name, sanitised in hacky fashion (non-ASCII characters are simply dropped). If we get any results back, we check the (relevance) score of the first one. (The results seem to be ordered in descending score order; I didn’t check whether this ordering is defined or just a convention.) If it seems relevant, we go with it. From a quick skim of company reconciliations, I noticed at least one false positive – Reed – but on the whole it seemed to work fairly well. (If we look up more details about the company from OpenCorporates, and get back the company URL, for example, we might be able to compare its domain with the domain given in the link on the Bottom Line page. A match would suggest quite strongly that we have got the right company…)
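For what it’s worth, here’s a minimal sketch of the sort of domain check I have in mind – hypothetical, in that it assumes we’ve already pulled a company homepage URL back from OpenCorporates and grabbed the contributor’s link from the programme page:

from urlparse import urlparse

def same_domain(url1, url2):
    # Crude test: do two URLs share the same host, ignoring any www. prefix?
    def domain(url):
        netloc = urlparse(url).netloc.lower()
        return netloc[4:] if netloc.startswith('www.') else netloc
    return domain(url1) != '' and domain(url1) == domain(url2)

print same_domain('http://www.example.com/about', 'http://example.com/')  # True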

As @stuartbrown suggested in a tweet, a possible next step is to link the name of each guest to a Linked Data identifier for them, for example using DBpedia (although I wonder – is @opencorporates also minting IDs for company directors?). I also need to find some way of pulling out some proper, detailed subject tags for each episode that could be used to populate a drop down list filter control…

PS for more Google Dashboard controls, check out the Google interactive playground…

PPS see also: OpenLearn Glossary Search and OpenLearn Learning Outcomes Search

OU Social Media Strategy is a Blast to the Past?!

Readers over a certain age, ex-pats included, will probably remember (hopefully with fondness) a time when the only TV programmes on air in the early hours or on weekend mornings were OU broadcast items on the BBC:

From time to time (eg OERs: Public Service Education and Open Production), I’ve thought that was the actual heyday of OU broadcasting in terms of getting “authentic” Higher Education level teaching content to large audiences, notwithstanding the popularity of some of the more recent flagship co-produced programming the OU has worked on with the BBC. (For a view of OU/BBC co-produced content currently on iPlayer, see OU/BBC co-pros currently on iPlayer; and for clips from co-pro programmes: clips from OU/BBC co-pros currently on iPlayer.)

As well as the BBC content, there’s also a wealth of OU video material on both YouTube and iTunesU. A great way into this content is through some of the OU’s YouTube playlists, such as 60 Second Adventures in Thought or Seven Wonders of the Microbe World. (See also this full list of OU Learn playlists on YouTube.)

Anyway, one thing that seems (to me at least) to be lacking is a social media strategy (on Twitter, at least) relating to broadcast events – academic commentaries or OpenLearn links being tweeted alongside a live OU/BBC co-pro broadcast, for example – that could be used to help drive a second screen experience or community.

But then I realised I was looking in the wrong place – or at least, the wrong time… because it seems the lessons from the past are being heeded… and the @OUpahParr account is actually tweeting out links to OU content to a variety of hashtag streams throughout the early hours, picking up not only the global audience but also the UK’s insomniacs and shift workers. It seems that as well as what are presumably scheduled tweets to content, there’s also someone from the comms team (^AF) staffing the account for anybody who wants to chat, or learn more…

Good stuff ;-)

OU/BBC Co-Pros Currently on iPlayer, via ScraperWiki

A quick update to yesterday’s post on OU/BBC Co-Pros Currently on iPlayer: I’ve popped the first draft of a daily scraper onto Scraperwiki that looks at my delicious bookmark list of OU/BBC series co-pros and tries to find corresponding programmes that are currently available on iPlayer: OU BBC Co-pros on iPlayer Scraperwiki

This is probably not the most efficient solution, but at least it provides some sort of API to at least some relevant iPlayer data.
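For the record, here’s a minimal sketch of the sort of availability lookup at the heart of the scraper – assuming the per-series episodes/player.json listing (the same one the HTML demo in the next post pulls via YQL):

import urllib
import simplejson

def episodes_on_iplayer(seriesID):
    # Episodes from a given series that are currently playable on iPlayer
    url = 'http://www.bbc.co.uk/programmes/%s/episodes/player.json' % seriesID
    try:
        data = simplejson.load(urllib.urlopen(url))
    except IOError:
        # No playable episodes (or no such listing) - treat as empty
        return []
    return data.get('episodes', [])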

I’ve also popped up a quick Scraperwiki view over the data OU BBC Co-pros on iPlayer (Scraperwiki HTML View); note that this data is unsorted (I need to think about how best to do that?)
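(One quick fix might be to sort at query time – the Scraperwiki datastore can be queried with arbitrary SQL via the external API. A sketch, in which the API details are as I remember them, and the scraper shortname and sort column are made-up placeholders:)

import urllib
import simplejson

# 'swdata' is the default ScraperWiki table name; substitute a real scraper
# shortname and column name below
url = 'http://api.scraperwiki.com/api/1.0/datastore/sqlite?' + urllib.urlencode(
    {'format': 'jsondict',
     'name': 'ou_bbc_copros_on_iplayer',
     'query': 'select * from swdata order by title'})
data = simplejson.load(urllib.urlopen(url))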

[I’ve added a couple more columns since that screenshot was grabbed; please feel free to work on the scraper, or the view, to improve them further; if you grab a copy of the view to work on your own, please add a link back to it in the comments below, along with a brief description of what you’re trying to achieve with your view…]

PS hmm, maybe I should pop the academics on In Our Time code onto Scraperwiki too?

PPS for a more recent view, see: OU/BBC co-pros – bootstrap experiment

OU/BBC Co-Pros Currently on iPlayer

Given the continued state of presentational disrepair of the OpenLearn What’s On feed, I assume I’m the only person who subscribes to it?

Despite its looks, though, I have to say I find it *really useful* for keeping up with OU/BBC co-pros.

The feed displays links to OpenLearn pages relating to programmes that are scheduled for broadcast in the next 24 hours or so (I think?). This includes programmes that are being repeated, as well as first broadcasts. However, clicking through some of the links to the supporting programme pages on OpenLearn, I notice a couple of things:

Firstly, the post is timestamped around the time of the original broadcast. This approach is fine if you want to root a post in time, but it makes the page look out-of-date if I stumble onto it either from a What’s On feed link or from a link on the corresponding BBC /programmes page. I think canonical programme pages for individual programmes have listings of when the programme was broadcast, so it should also be possible to display this information?

Secondly, as a piece of static, “archived” content, there is not necessarily any way of knowing that the programme is currently available. I grabbed the above screenshot because it doesn’t even appear to provide a link to the BBC programme page for the series, let alone actively promote the fact that the programme itself, or at least other programmes from the same series, are currently: 1) upcoming for broadcast; 2) already, or about to be, available on iPlayer. Note that as well as full broadcasts, many programmes also have clips available on BBC iPlayer. Even if the full programmes aren’t embeddable within the OpenLearn programme pages (for rights reasons, presumably, rather than technical ones?), might we be able to get the clips locally viewable? Or do we need to distinguish between BBC “official” clips, and the extra clips the OU sometimes gets for local embedding as part of the co-pro package?

If the OU is to make the most of repeat broadcasts of OU-BBC co-pros, then I think OpenLearn could do a couple of things in the short term, such as creating a carousel of images on the homepage that link through to “timeless” series or episode programme support pages. The programme support pages should also have a very clearly labelled, dynamically generated “Now Available on iPlayer” link for programmes that are currently available, along with other available programmes from the same series. The next step would be to find some way of making more of persistent clips on iPlayer?

Anyway – enough of the griping. To provide some raw materials for anyone who would like to have a play around this idea, or maybe come up with a Twitter Bootstrap page that promotes OU/BBC co-pro programmes currently on iPlayer, here’s a (very) raw example: a simple HTML web page that grabs a list of OU/BBC co-pro series pages I’ve been on-and-off maintaining on delicious for some time now (if there are any omissions, please let me know;-), extracts the series IDs, pulls down the corresponding list of series episodes currently on iPlayer via a YQL JSON-P proxy, and then displays a simple list of currently available programmes:

Here’s the code:

<html><head>
<title></title>

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js">
</script>

<script type="text/javascript">
//Routine to display programmes currently available on iPlayer given series ID
// The output is attached to a uniquely identified HTML item

// The BBC /programmes series ID (a default; reassigned for each series found in the delicious feed)
var seriesID = 'b01dl8gl';

//The id of the HTML element you want to contain the displayed feed
var containerID="test";

//------------------------------------------------------

function cross_domain_JSON_call(seriesID){
  // BBC json does not support callbacks, so use YQL as a JSON-P proxy.
  // The URL-encoded YQL query below reads:
  //   select * from json
  //   where url="http://www.bbc.co.uk/programmes/<seriesID>/episodes/player.json"
  //   and itemPath = "json.episodes"
  var url = 'http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20json%20where%20url%3D%22http%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2F' + seriesID + '%2Fepisodes%2Fplayer.json%22%20and%20itemPath%20%3D%20%22json.episodes%22&format=json&callback=?';

  // fetch the feed from the address specified in 'url',
  // then call "myCallbackFunction" with the resulting feed items
  $.getJSON(
    url,
    function(data) { myCallbackFunction(data.query.results); }
  );
}

// A simple utility function to display the title of the feed items
function displayOutput(txt){
  $('#'+containerID).append('<div>'+txt+'</div>');
}

function myCallbackFunction(items){
  console.log(items.episodes);
  items = items.episodes;
  // Run through each episode and print its image, series title, linked episode
  // title, short synopsis and availability
  for (var prog in items){
    displayOutput('<img src="http://static.bbc.co.uk/programmeimages/272x153/episode/' + items[prog].programme.pid + '.jpg"/>' + items[prog].programme.programme.title + ': <a href="http://www.bbc.co.uk/programmes/' + items[prog].programme.pid + '">' + items[prog].programme.title + '</a> (' + items[prog].programme.short_synopsis + ', ' + items[prog].programme.media.availability + ')');
  }
}

function parseSeriesFeed(items){
  // Each delicious bookmark URL ('u') is a BBC series page of the form
  // http://www.bbc.co.uk/programmes/<seriesID>, so the ID is the fourth path element
  for (var i in items) {
    seriesID = items[i].u.split('/')[4];
    console.log(seriesID);
    if (seriesID != '')
      cross_domain_JSON_call(seriesID);
  }
}

function getSeriesList(){
  var seriesFeed = 'http://feeds.delicious.com/v2/json/psychemedia/oubbccopro?count=100&callback=?'
  $.getJSON(
   seriesFeed,
   function(data) { parseSeriesFeed(data); }
 )
}

// Tell jQuery to call the feed loader when the page is all loaded
// (pass the function itself, rather than calling it, so it actually runs on DOM ready)
//$(document).ready(function() { cross_domain_JSON_call(seriesID); });
$(document).ready(getSeriesList);
</script>

</head>

<body>
<div id="test"></div>
</body>

</html>

If you copy the (raw) code to a file and save it as an .html file, you should be able to preview it in your own browser.

I’ll try to make any updated versions of the code available on github: iplayerSeriesCurrProgTest.html

If you have a play with it, and maybe knock up a demo, please let me know via a comment;-)

PS seems I should have dug around the OpenLearn website a bit more – there is a What’s on this week page, linked to from the front page, that lists upcoming transmissions/broadcasts:

I’m guessing this is done as a Saturday-Friday weekly schedule, in line with TV listings magazines, but needless to say I have a few issues with this approach!;-)

For example, the focus is on linear schedules of upcoming broadcast content in the next 0-7 days, depending when the updated list is posted. But why not have a rolling “coming up over the next seven days” schedule, as well as a “catch-up” service linking to content currently on iPlayer from programmes that were broadcast maybe last Thursday, or even longer ago?

The broadcast schedule is still a handy thing for viewers who don’t have access to digital on-demand services, but it also provides a focus for “event telly” for folk who do typically watch on-demand content. I’m not sure any OU-BBC co-pro programmes have made a point of running an online, realtime social media engagement exercise around a scheduled broadcast (and I think second screen experiments have only been run as pilots?), but again, it’s an opportunity that doesn’t seem to be reflected anywhere?

Guardian Telly on Google TV… Is the OU There, Yet?

A handful of posts across several Guardian blogs brought my attention to the Guardian’s new Google TV app (eg Guardian app for Google TV: an introduction (announcement), Developing the Google TV app in Beta (developer notes), The Guardian GoogleTV project, innovation & hacking (developer reflection)). Launched for the US, initially, “[i]t’s a new way to view [the Guardian’s] latest videos, headlines and photo galleries on a TV.”

The OU has had a demo Google TV app for several months now, courtesy of ex-of-the-OU, now of MetaBroadcast, Liam Green-Hughes – An HTML5 Leanback TV webapp that brings SPARQL to your living room:

@liamgh's OU leanback TV app demo

[Try the demo here: OU Google TV App [ demo ]]

Liam’s app is interesting for a couple of reasons: first, it demonstrates how to access data – and then content – from the OU’s open Linked Data store (in a similar way, the Guardian app draws on the Guardian Platform API, I think?); secondly, it demonstrates how to use the Google TV templates to put a TV app together.

(It’s maybe also worth noting that the Google TV wasn’t Liam’s first crack at OU-TV – he also put together a Boxee app way back when: Rising to the Boxee developer challenge with an Open University app.)

As well as video and audio based course materials, seminar/lecture recordings, and video shorts (such as The History of the English Language in Ten Animated Minutes series (I couldn’t quickly find a good OU link?)), the OU also co-produces broadcast video with both the BBC (now under the OU-BBC “sixth agreement”) and Channel 4 (eg The Secret Life of Buildings was an OU co-pro).

Many of the OU/BBC co-pro programmes have video clips available on BBC iPlayer via the corresponding BBC programmes sites (I generate a quite possibly incomplete list through this hack – Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer (here’s the current clips feed – I really should redo this script in something like Scraperwiki…); as far as I know, there’s no easy way of getting any sort of list of series codes/programme codes for OU/BBC co-pros, let alone an authoritative and complete one). The OU also gets access to extra clips, which appear on programme related pages on one of the OpenLearn branded sites (OpenLearn), but again, there’s no easy way of navigating these clips, and, erm, no TV app to showcase them.

Admittedly, Google TV enabled TVs are still in the minority and internet TV is still to prove itself with large audiences. I’m not sure what the KPIs are around OU/BBC co-pros (or how much the OU gives the BBC each year in broadcast related activity?), but I can’t for the life of me understand why we aren’t engaging more actively in beta styled initiatives around second screen in particular, but also things like Google TV. (If you think of apps on internet TV platforms such as Google TV or Boxee as channels that you can programme linearly or as on-demand services, might it change folks’ attitude towards them?)

Note that I’m not thinking of apps for course delivery, necessarily… I’m thinking more of ways of making more of the broadcast spend, increasing its surface area/exposure, and (particularly in the case of second screen) enriching broadcast materials and providing additional academic/learning journey value. Second screen activity might also contribute to community development and brand enhancement through online social media engagement in an OU-owned and branded space parallel to the BBC space. Or it might not, of course…;-)

Of course, you might argue that this is all off-topic for the OU… but it isn’t if your focus is the OU’s broadcast activities, rather than formal education. If a fraction of the SocialLearn spend had gone on thinking about second screen applications, and maybe keeping Boxee/Google TV app development ticking over to see what insights it might bring about increasing engagement with broadcast materials, I wonder whether we might have started to think our way round to how second screen and leanback apps could also be used to support actual course delivery and drive innovation in that area?

PS two more things about the Guardian TV app announcement; firstly, it was brought to my attention through several different vectors (different blog subscriptions, Twitter); secondly, it introduced me to the Guardian beta minisite, which acts as an umbrella over/container for several of the Guardian blogs I follow… Now, where was the OU bloggers aggregated feed again? Planet OU wasn’t it? Another @liamgh initiative, I seem to remember…

PPS via a tweet from @barnstormed, I am reminded of something I keep meaning to blog about – OU Playlists on YouTube. For example, Digital Nepal or 60 Second Adventures in Thought, as well as The History of English in Ten Minutes. Given those playlists, one question might be: how might you build an app round them?!

PPPS via @paulbradshaw, it seems that the Guardian is increasingly into the content business, rather than just the news business: Guardian announces multimedia partnerships with prestigious arts institutions [doh! of course it is….!] In this case, “partnering with Glyndebourne, the Royal Opera House, The Young Vic, Art Angel and the Roundhouse the Guardian [to] offer all more arts multimedia content than ever before”. “Summits” such as the recent Changing Media Summit are also candidate content factory events (eg in the same way that TED, O’Reilly conference and music festival events generate content…)

Media, Erm, Studies?

Over the weekend, I noticed an advert in the Guardian Review for a course on creative writing operated by the Guardian but accredited by the UEA: UEA-Guardian Masterclasses. A little dig around and I see the Guardian are actually offering a whole host of masterclasses in a variety of subjects: Guardian Masterclasses. They are also offering their first (more to come?) masterclass with General Assembly (“a campus for technology, design, and entrepreneurship based in New York City”) on Understanding the Digital Economy; of note here is the additional comment that “General Assembly will be opening a campus in London at the end of 2012.” Campus; not hackspace or officespace, or workspace (though that may well be what it actually is): but campus.

[Update: via @jukesie, I’m also reminded of the Guardian’s teacher resources site, learnthings/learn.co.uk; for completeness, maybe also worth mentioning other innovations the Guardian is up to publishing-wise, eg wrt ebooks: second half of A Tinkerer’s Toolbox….]

Alongside this, we have Condé Nast announcing a College of Fashion and Design to start from 2013 (as described in If Courses are About Content, We Have Competition…) and accredited by, erm, Vogue.

Educators in the area of IT will be well aware of the preponderance of vendor certification, where (arguably justifiably) vendors create a training curriculum that covers the key principles relating to one or more of their products. Institutions renowned for their training in certain areas have also been known to make their content available, as for example via the BBC College of Journalism.

In the OU, we’ve had a couple of rapidly produced courses* that wrap a pre-existing vendor qualification with an academic wrapper and academic assessment, and then provide the student an opportunity to earn both a vendor certificate and formal academic credit using the same vehicle. (See also: Towards Vendor Certification on the Open Web? Google Training Resources and Due Out Soon – The Google “Qualified Developer Program”.)

*For example, CCNA/Cisco Networking; T155 Linux: An Introduction provides a route to CompTIA accreditation, and T189 Digital Photography is “recognised by The Royal Photographic Society (RPS) as suitable preparatory work and a foundation for a Licentiateship Distinction (LRPS) in still photography”. And if you want badges, then try iSpot…;-)

The OU has also, in the past, produced short courses around broadcast television programmes co-produced with the BBC: S180 Life in the Oceans around Blue Planet, for example (was S198 Exploring Mars tied to a TV series? or A178 Perspectives on Leonardo da Vinci?). I’m not sure about the extent to which the OU is allowed to make use of BBC archive footage (could someone let me have a peek at the Sixth Agreement? Discretion assured/NDA signed if required; or is it FOIable?!;-) but I keep on wondering about how we might be able to make more of co-pro’d content, especially content that had courses developed around it (and which may or may not already be on OpenLearn?) (NB it’s worth noting that OU strategy appears at the moment to be focussed on competing for full-time, younger students with other HEI entrants into the distance learning market, and moving away from shorter “leisure learning” courses, a market that the media appear to be encroaching on. I can’t help wondering what might have happened if the OU had hooked up with the Guardian two or three years ago…[Disclaimer: this post barely represents my own beliefs, let alone those of my employer… etc etc…])

And finally, in Learning around F1…?!;-), I commented on how private equity owned learndirect are sponsoring a Formula One motor racing team; and so it goes…

Something is happening; but even if we can’t figure out what, at the very least we need to identify where higher education is placed in it all and what value it adds and what unique service(s) it offers… (See also: So What Do Universities Sell?, incl. comments.)

PS I think I need to read the Innovator’s Dilemma, and subsequent books, again; wasn’t one of the claims that new entrants could pick some of the low hanging fruit (short courses, leisure learning, partnered accreditation and accreditation scheme/trust development) and then slowly build up capacity to take on the incumbents (longer form courses; credit + experience equivalents)?

PPS In passing, I notice that the Economist offers a suite of courses: Economist Education: Courses. The FT suggests ways of “Enhanc[ing] your curriculum with the Financial Times”, as well as branding a series of Pearson published textbooks (FT Publishing). Publishers such as O’Reilly are big in the conference organisation area (O’Reilly Conferences), and the Guardian (again) has also made in-roads into this area of content and buzz generation through things like the Activate Summit or the (CPD Certified) Higher Education Summit (note to self: does anyone else use the word summit for this sort of offering?)

A Tinkerer’s Toolbox…

A couple of days ago, I ran a sort of repeated, 3 hour, Digital Sandbox workshop session to students on the Goldsmiths’ MA/MSc in Creating Social Media (thanks to @danmcquillan for the invite and the #castlondon students for being so tolerant and engaged ;-)

I guess the main theme was how messy tinkering can be, and how simple ideas often don’t work as you expect them to, often requiring hacks, workarounds and alternative approaches to get things working at all, even if not reliably (which is to say: some of the demos borked;-)

Anyway… the topics covered were broadly:

1) getting data into a form where we can make it flow, as demonstrated by “my hit”, which shows how to screenscrape tabular data from a Wikipedia page using Google spreadsheets, republish it as CSV (eventually!), pull it into a Yahoo pipe and geocode it, then publish it as a KML feed that can be rendered in a Google map and embedded in an arbitrary web page.

2) getting started with Gephi as a tool for visualising, and interactively having a conversation with, a data set represented as a network.

To support post hoc activities, I had a play with a Delicious stack as a way of aggregating a set of tutorial-like blog posts I had lying around that were related to each of the activities:

Delicious stack

I’d been quite dismissive of Delicious stacks when they first launched (see, for example, Rediscovering playlists), but I’m starting to see how they might actually be quite handy as a way of bootstrapping my way into a set of uncourses and/or ebooks around particular apps and technologies. There’s nothing particularly new about being able to build ordered sets of resources, of course, but the interesting thing for me is that even if I don’t get as far as editing a set of posts into a coherent mini-guide, a well ordered stack may itself provide a useful guide to a particular application, tool, set of techniques or topic.

As to why a literal repackaging of blog posts around a particular tool or technology as an ebook may not be such a good idea in and of itself, see Martin Belam’s posts describing his experiences editing a couple of Guardian Shorts*: “Who’s Who: The Resurrection of the Doctor”: Doctor Who ebook confidential and Editing the Guardian’s Facebook ebook

* One of the things I’ve been tracking lately is engagement by the news media in alternative ways of trying to sell their content. A good example of this is the Guardian, who have been repackaging edited collections of (medium and long form) articles on a particular theme as “Guardian Shorts“. So for example, there are e-book article collection wrappers around the breaking of the phone hacking story, or investigating last year’s UK riots. If you want a quick guide to jazz or an overview of the Guardian datastore approach to data journalism, they have those too. (Did I get enough affiliate links in there, do you think?!;-)

This rethinking of how to aggregate, reorder and repackage content into saleable items is something that may benefit content producing universities. This is particularly true in the case of the OU, of course, where we have been producing content for years, and recently making it publicly available through a variety of channels, such as OpenLearn, or, err, the other OpenLearn, via iTunesU, or YouTube, OU/BBC co-productions and so on. It’s also interesting to note how the OU is also providing content (under some sort of commercial agreement…?) to other publishers/publications, such as the New Scientist:

OU YouTube ads in a New Scientist context

There are other opportunities too, of course, such as Martin Weller’s suggestion that it’s time for the rebirth of the university press, or, from another of Martin’s posts, the creation of “special issue open access journal collections” (Launching Meta EdTech Journal), as well as things like The University Expert Press Room which provides a channel for thematic content around a news area and which complements very well, in legacy terms, the sort of model being pursued via Guardian Shorts?

Tinkering with the Guardian Platform API – Tag Signals

Given a company or personal name, what’s a quick way of generating meaningful tags around what it’s publicly known for, or associated with?

Over the last couple of weeks or so, I’ve been doodling around a few ideas with Miguel Andres-Clavera from the JWT (London) Innovation Lab looking for novel ways of working out how brands and companies seem to be positioned by virtue of their social media followers, as well as their press mentions.

Here’s a quick review of one of those doodles: looking up tags associated with Guardian news articles that mention a particular search term (such as a company, or personal name) as a way of getting a crude snapshot of how the Guardian ‘positions’ that referent in its news articles.

It’s been some time since I played with the Guardian Platform API, but the API explorer makes it pretty easy to automatically generate some example calls (the Python library for the Guardian Platform API appears to have rotted somewhat with various updates made to the API after its initial public testing period).
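For reference, here’s a rough sketch of the kind of query involved – a sketch only, with the content search endpoint and its show-tags parameter written from memory, and a placeholder API key:

import urllib
import simplejson

def guardian_article_tags(searchterm, apikey, pagesize=50):
    # Search for articles mentioning the term, asking for each article's
    # keyword tags to be returned alongside the headline
    url = 'http://content.guardianapis.com/search?' + urllib.urlencode(
        {'q': searchterm, 'show-tags': 'keyword', 'format': 'json',
         'page-size': pagesize, 'api-key': apikey})
    results = simplejson.load(urllib.urlopen(url))['response']['results']
    # One (article title, [tag names]) pair per article, ready for graphing
    return [(r['webTitle'], [t['webTitle'] for t in r.get('tags', [])])
            for r in results]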

Guardian OpenPlatform API

Here’s a snapshot over recent articles mentioning “The Open University” (bipartite article-tag graph):

Open University – article-tag graph

Here’s a view of the co-occurrence tag graph:

Open University – co-occurrence tag graph
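(The co-occurrence view simply counts how often pairs of tags turn up on the same article; here’s a sketch, reusing the hypothetical guardian_article_tags helper above:)

from itertools import combinations
from collections import Counter

cooccur = Counter()
for title, tags in guardian_article_tags('"The Open University"', 'YOUR-API-KEY'):
    # Each unordered pair of tags on the same article counts as one co-occurrence
    for pair in combinations(sorted(set(tags)), 2):
        cooccur[pair] += 1

print cooccur.most_common(10)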

The code is available as a Gist: Guardian Platform API Tag Grapher

As with many of my OUseful tools and techniques, this view over the data is intended to be used as a sensemaking tool as much as anything. In this case, the aim is to help folk get an idea of how, for example, “The Open University” is emergently positioned in the context of Guardian articles. As with the other ‘discovering social media positioning’ techniques I’m working on, I see the approach as useful not so much for reporting, but more as a way of helping us understand how communities position brands/companies etc relative to each other, or relative to particular ideas/concepts.

It’s maybe also worth noting that the Guardian Platform article tag positioning view described above makes use of curated metadata published by the Guardian as the basis of the map. (I also tried running full text articles through the Reuters OpenCalais service, and extracting entity data (‘implicit metadata’) that way, but the results were generally a bit cluttered. (I think I’d need to clean the article text a little first before passing it to the OpenCalais service.)) That is, we draw on the ‘expert’ tagging applied to the articles, and whatever sense is made of the article during the tagging process, to construct our own sensemaking view over a wider set of articles that all refer to the topic of interest.

PS would anyone from the Guardian care to comment on the process by which tags are applied to articles?

PPS a couple more… here’s how the Guardian has positioned JISC recently…

JISC Positioning... Guardian

And here’s how “student fees” has recently been positioned:

In the context of tuition fees - openplatform tag-tag graph

Hmmm…

OERs: Public Service Education and Open Production

I suspect that most people over a certain age have some vague memory of OU programmes broadcast in support of OU courses taking over BBC2 at various “off-peak” hours of the day (including Saturday mornings, if I recall correctly…)

These broadcasts formed an important part of the corresponding OU courses, and were also freely available to anyone who wanted to watch them. In certain respects, they allowed the OU to operate as a public service educator, bringing ideas from higher education to a wider audience. (A lot has been said about the role of the UK’s personal computer culture in the days of the ZX Spectrum and the BBC Micro in bootstrapping software skills development, and in particular the UK computer games industry; but we don’t hear much about the role the OU played in raising aspiration and introducing the very idea of what might be involved in higher education through free-to-air broadcasts of OU course materials, which I’m convinced it must have played. I certainly remember watching OU maths and physics programmes as a child, and wanting to know more about “that stuff” even if I couldn’t properly follow it at the time.)

The OU’s broadcast strategy has evolved since then, of course, moving into prime time broadcasts (Child of Our Time, Coast, various outings with James May, The Money Programme, and so on) as well as “online media”: podcasts on iTunes and video content on YouTube, for example.

The original OpenLearn experiment, which saw 10-20hr extracts of OU course material being released for free, continues, but as I understand it is now thought of in the context of a wider OpenLearn engagement strategy that will aggregate all the OU’s public output (from open courseware and OU podcasts to support for OU/BBC co-produced content) under a single banner: OpenLearn

I suspect there will continue to be forays into the world of “social media”, too:

A great benefit of the early days of OU programming on the BBC was that you couldn’t help but stumble across it. You can still stumble across OU co-produced broadcasts on the BBC now, of course, but they don’t fulfil the same role: they aren’t produced as academic programming designed to support particular learning outcomes and aren’t delivered in a particularly academic way. They’re more about entertainment. (This isn’t necessarily a bad thing, but I think it does influence the stance you take towards viewing the material.)

If we think of the originally produced TV programmes as “OERs”, open educational resources, what might we say about them?

– they were publicly available;
– they were authentic, relating to the delivery of actual OU courses;
– the material was viewed by OU students enrolled on the associated course, as well as viewers following a particular series out of general interest, and those who just happened to stumble by the programme;
– they provided pacing, and the opportunity for a continued level of engagement over a period of weeks, on a single academic topic;
– they provided a way of delivering lifelong higher education as part of the national conversation, albeit in the background. But it was always there…

In a sense, the broadcasts offered a way for the world to “follow along” parts of a higher education as it was being delivered.

In many ways, the “Massive Open Online Courses” (MOOCs), in which a for-credit course is also opened up to informal participants, and the various Stanford open courses that are about to start (Free computer science courses, new teaching technology reinvent online education), use a similar approach.

I generally see this as a Good Thing, with universities engaging in public service education whilst at the same time delivering additional support, resources, feedback, assessment and credit to students formally enrolled on the course.

What I’m not sure about is whether initiatives like OpenLearn succeed in the “public service education” role, in part because of the discovery problem: you couldn’t help but stumble across OU broadcasts on BBC Two at certain times of the day. Nowadays, I’d be surprised if you ever stumbled across OpenLearn content while searching the web…

A recent JISC report on OER Impact focussed on the (re)use of OERs in higher education, identifying a major use case of OERs as enhancing teaching practice.

(NB I would have embedded the OER Impact project video here, but WordPress.com doesn’t seem to support embeds from Blip…; openness is not just about the licensing, it’s also about the practical ease of (re)use;-)

However, from my quick reading of the OER impact report, it doesn’t really seem to consider the “open course” use case demonstrated by MOOCs, the Stanford courses, or mid-70s OU course broadcasts. (Maybe this was out of scope…!;-)

Nor does it consider the production of OERs (I think that was definitely out of scope).

For the JISC OER3 funding call, I was hoping to put in a bid for a project based around an open “production-in-presentation” model of resource development targeted at a specific community. For a variety of reasons (not least, I suspect, my lack of project management skills…) that’s unlikely to be submitted in time, so I thought I’d post the main chunk of the bid here as a way of trying to open up the debate a little more widely about the role of OERs, the utility of open production models, and the extent to which they can be used to support cross-sector curriculum innovation/discovery as well as co-creation of resources and resource reuse (both within HE and into a target user community).

Outline
Rapid Resource Discovery and Development via Open Production Pair Teaching (ReDOPT) seeks to draft a set of openly licensed resources for potential (re)use in courses in two different institutions … through the real-time production and delivery of an open online short-course in the area of data handling and visualisation. This approach subverts the more traditional technique of developing materials for a course and then retrospectively making them open, by creating the materials in public and in an openly licensed way, in a way that makes them immediately available for informal study as well as open web discovery, embedding them in a target community, and then bringing them back into the closed setting for formal (re)use. The course will be promoted to the data journalism and open data communities as a free “MOOC” (Massive Online Open Course)/P2PU style course, with a view to establishing an immediate direct use by a practitioner community.

The project will proceed as follows: over a 10-12 week period, the core project team will use a variant of the Pair Teaching approach to develop and publish an informal open, online course hosted on an .ac.uk domain via a set of narrative linked resources (each one about the length of a blog post and representing 10 minutes to 1 hour of learner activity) mapping out the project team’s own exploration/learning journey through the topic area. The course scope will be guided by a skeleton curriculum determined in advance from a review of current literature, informal interviews/questionnaires and perceived skills and knowledge gaps in the area. The created resources will contain openly licensed custom written/bespoke material, embedded third party content (audio, video, graphical, data), and selected links to relevant third party material. A public custom search engine in the topic area will also be curated during the course. Additional resources created by course participants (some of whom may themselves be part of the project team) will be integrated into the core course and added to the custom search engine by the project team. Part-time, hourly paid staff will also be funded to contribute additional resources into the evolving course.

A second phase of the project will embed the resources as learning resources in the target community through the delivery of workshops based around and referring out to the created resources, as well as community building around the resources. Because of the timescales involved, this proposal is limited to the production of the draft materials and embedding them as valuable and appropriate resources in the target community, and does not extend as far as the reuse/first formal use case. Success metrics will therefore be limited to impact evaluation, volume and reach of resources produced, community engagement with the live production of the materials, and the extent to which project team members intend to directly reuse the materials produced as a result.

The Proposal
1. The aim of the project is to produce a set of educational resources in a practical topic area (data handling and visualisation), that are reusable by both teachers (as teaching resources) and independent learners (as learning resources), through the development of an openly produced online course in the style of an uncourse created in real time using a Pair Teaching approach as opposed to a traditional sole author or OU style course team production process, and to establish those materials as core reusable educational resources in the target community.

3. … : Extend OER through collaborations beyond HE: the proposal represents a collaboration between two HEIs in the production and anticipated formal (re)use of the materials created, as well as directly serving the needs of the fledgling data-driven journalism community and the open public data communities.

4. … : Addressing sector challenges (ii Involving academics on part-time, hourly-paid contracts): the open production model will seek to engage part-time, hourly-paid staff in creating additional resources around the course themes that they can contribute back to the course under an open license, and that cover a specific issue identified by the course lead or that the part-time staff themselves believe will add value to the course. (Note that the course model will also encourage participants in the course to create and share relevant resources without any financial recompense.) Paying hourly rate staff for the creation of additional resources (which may include quizzes or other informal assessment/feedback related resources), or in the role of editors of community produced resources, represents a middle ground between the centrally produced core resources and any freely submitted resources from the community. Incorporating the hourly paid contributor role is based on the assumption that payment may be appropriate for sourcing course enhancing contributions that are of a higher quality (and may take longer to produce) than community sourced contributions, as well as requiring the open licensing of materials so produced. The approach also explores a model under which hourly paid staff can contribute to the shaping of the course on an ad hoc basis if they see opportunities to do so.

5. … Enhancing the student experience (ii Drawing on student-produced materials): The open production model will seek to engage with the community following the course and encourage them to develop and contribute resources back into the community under an open license. For example, the use of problem based exercises and activities will result in the production of resources that can be (re)used within the context of the uncourse itself as an output of the actual exercise or activity.

6. … The project seeks to explore practical solutions to two issues relating to the wider adoption of OERs by producers and consumers, and provide a case study that other projects may draw on. In the first case, how to improve the discoverability and direct use of resources on the web by “learners” who do not know they are looking for OERs, or even what OERs are, through creating resources that are published as contributions to the development and support of a particular community and as such are likely to benefit from “implicit” search engine optimisation (SEO) resulting from this approach. In the second case, to explore a mechanism that identifies what resources a community might find useful through curriculum negotiation during presentation, and the extent to which “draft” resources might actually encourage reuse and revision.

7. Rather than publishing an open version of a predetermined, fixed set of resources that have already been produced as part of a closed process and then delivered in a formal setting, the intention is thus to develop an openly licensed set of “draft” resources through the “production in presentation” delivery of an informal open “uncourse” (in-project scope), and at a later date reuse those resources in a formally offered closed/for-credit course (out-of-project scope). The uncourse will not incorporate assessment elements, although community engagement and feedback in that context will be in scope. The uncourse approach draws on the idea of “teacher as learner”, with the “teacher” capturing and reflecting on meaningful learning episodes as they explore a topic area and then communicate these through the development of materials that others can learn from, as well as demonstrating authentic problem solving and self-directed learning behaviours that model the independent learning behaviours we are trying to develop in our students.

8. The quality of the resources will be assured at least to the level of fit-for-purpose at the time of release by combining the uncourse production style with a Pair Teaching approach. A quality improvement process will also operate through responding to any issues identified via the community based peer-review and developmental testing process that results from developing the materials in public.

9. The topic area was chosen based on several factors: a) the experience and expertise of the project team; b) the observation that there are no public education programmes around the increasing amounts of open public data; c) the observation that very few journalism academics have expertise in data journalism; d) the observation that practitioners engaged in data journalism do not have the time or interest to become academics, but do appear willing to share their knowledge.

10. The first uncourse will run over a 6-8 week period and result in the central/core development of circa 5 to 10 blog post styled resources a week, each requiring 20-45 minutes of “student” activity (approx. 2-6 hours study time per week equivalent), plus additional directed reading/media consumption time (ideally referencing free and openly licensed content). A second presentation of the uncourse will reuse and extend materials produced during the first presentation, as well as integrating resources, where possible, developed by the community in the first phase and monitoring the amount of time taken to revise/reversion them, as required, compared to the time taken to prepare resources from scratch centrally. Examples of real-time, interactive and graphical representations of data will be recorded as video screencasts and made available online. Participants will be encouraged to consider the information design merits of comparative visualisation methods for publication on different media platforms: print, video, interactive and mobile. In all, we hope to deliver up to 50 hours of centrally produced, openly licensed materials by the end of the course. The uncourse will also develop a custom search engine offering coverage of openly licensed and freely accessible resources related to the course topic area.

11. The course approach is inspired to a certain extent by the Massive Online Open Course (MOOC) style courses pioneered by George Siemens, Stephen Downes, Dave Cormier, Jim Groom et al. The MOOC approach encourages learners to explore a given topic space with the help of some wayfinders. Much of the benefit is derived from the connections participants make between each other and the content by sharing, reflecting, and building on the contributions of others across different media spaces, like blogs, Twitter, forums, YouTube, etc.

12. The course model also draws upon the idea of an uncourse, as demonstrated by Hirst in the creation of the Digital Worlds game development blog [ http://digitalworlds.wordpress.com ], which produced a series of resources as part of an openly blogged learning journey that have since been reused directly in an OU course (T151 Digital Worlds); and the Visual Gadgets blog ( http://visualgadgets.blogspot.com ), which drafted materials that later came to be reused in the OU course T215 Communication and information technologies, and then made available under open license as the OpenLearn unit Visualisation: Visual representations of data and information [ http://openlearn.open.ac.uk/course/view.php?id=4442 ]

13. A second phase of the project will explore ways of improving the discovery of resources in an online context, as well as establishing them as important and relevant resources within the target community. We will run a series of face-to-face workshops and hack days at community events that draw on and extend the activities developed during the initial uncourse, and refer participants to the materials. A second presentation of the uncourse will be offered as a way of testing and demonstrating reuse of the resources, as well as providing an exit path from workshop activities. One possible exit path from the uncourse would be entry into formal academic courses.

14. Establishing the resources within the target community is an important aspect of the project. Participation in community events plays an important role in this, and also helps to prove the resources produced. Attendance at events such as the Open Government Data camp will allow us to promote the availability of the resources to the appropriate European community, further identify community needs, and also provide a backdrop for the development of a promotional video with vox pops from the community hopefully expressing support for the resources being produced. The extent to which materials do become adopted and used within the community will form an important part of the project evaluation.

15. … By embedding resources in the target community, we aim to enhance the practical utility of the resources within that community as well as providing an academic consideration of the issues involved. A key part of the evaluation workpackage, …, will be to rate the quality of the materials produced and the level of engagement with and reuse of them by both educators and members of the target community.

Note that I am still keen on working this bid up a bit more for submission somewhere else…;-)

[Note that the opinions expressed herein are very much my own personal ones…]

PS see also COL-UNESCO consultation: Guidelines for OER in Higher Education – Request for comments: OER Guidelines for Higher Education Stakeholders

OU Badged BBC Class Clips?

I’m on holiday, but I can’t stop pondering (again) how to make more of an OU flavoured collection of content currently on BBC iPlayer… Whilst bookmarking a few more BBC/OU co-pro series pages just now, I spotted that one series at least has had clips posted into the (new to me) BBC Learning Zone Class Clips:

BBC Class clips

Which got me wondering: if the OU does fully funded co-pros of content that ends up in the Learning Zone Class Clips area, wouldn’t it be good if the clips listings also displayed the OU logo…?

Or maybe if the OU got a mention on the actual clips pages?

BBC class clips

After all, the OU gets a mention, and a link, on the original programme page:

OU BBC prog page link

And arguably, we could do more to support learning journey related actions and resources at the more detailed, class clips level?

PS Hmmm, I wonder how things like Class Clips fit into OER space???

PPS Quick note re: bookmarked series pages; there are also occasions when the OU co-pros an occasional episode in a longer running series, as in the case of BBC Click Radio (World Service), which runs weekly but only has occasional OU co-pro’d episodes. From a series page linking to episode pages, how would I identify the OU co-pro’d programme pages? Or would I have to ignore series pages and just bookmark/index actual co-pro’d episode pages (if they exist?)?

PPPS Ah – this looks interesting (BBC prototype): The Programme List (“Add entire shows, series or just episodes; see which of your programmes are available today”). So I should be able to add in lists of OU/BBC co-pros, and see a view over episodes that are currently available on iPlayer. Which makes me think: could something like The Programme List also be used to publish and view 3rd party curated collection lists, opening up “scheduling” of BBC content to all-comers?