Open Educational Resources and the University Library Website

Being a Bear of Very Little Brain, I find it convenient to think of the users of academic library websites as falling into one of three ‘deliberate’ categories and one ‘by chance’ category:

– students (i.e. people taking a course);
– lecturers (i.e. people creating or supporting a course);
– researchers;
– folk off the web (i.e. people who Googled in who are none of the above).

The following Library website homepage (in this case, from Leicester) is typical:

…and the following options on the Library catalogue are also typical:

So what’s missing…?

How about a link to “Teaching materials”, or “open educational resources”?

After all, if you’re a lecturer looking to pull a new course together, or a student who’s struggling to make head or tail of the way one of your lecturers is approaching a particular topic, or a researcher who needs a crash course in a particular method or technique, maybe some lecture notes or course materials are exactly the sort of resource you need?

Trying to kickstart the uptake of open educational materials has not been as easy as might be imagined (e.g. On the Lack of Reuse of OER), but maybe this is because OERs aren’t as ‘legitimately discoverable’ as other academic resources.

If anyone using an academic library website can’t easily search educational resources in that context, what does that say about the status of those resources in the eyes of the Library?

Bearing in mind my crude list of user classes, and comparing them to the sorts of resources that academic libraries do try to support the discovery of, what do we find?

– the library catalogue returns information about books (though full text search is not available) and the titles of journals; it might also tap into course reading lists.
– the e-resources search provides full text search over e-book and journal content.

One of the nice features of the OU website search (not working for me at the moment: “Our servers are busy”, apparently…) is that it is possible to search OU course materials for the course you are currently on (if you’re a student) or across all courses if you are staff. A search over OpenLearn materials is also provided. However, I don’t think these course material searches are available from the Library website?

So here’s a suggestion for the #UKOER folk – see if you can persuade your library to start offering a search over OERs from their website (Scott Wilson at CETIS is building an OER aggregator that might help in this respect, and there are also initiatives like OER Commons).

And, err, as a tip: when they say they already do, a link to the OER Commons site on a page full of links to random resources, buried somewhere deep within the browsable bowels of the library website doesn’t count. It has to be at least as obvious(?!), easy to use(?!) and prominent(?!?) as the current Library catalogue and journal/database searches…

Handling Yahoo Pipes Serialised PHP Output

One of the output formats supported by Yahoo Pipes is a PHP style array. In this post, which describes a way of seeing how well connected a particular Twitter user is to other Twitterers who have recently used a particular hashtag, I’ll show you how it can be used.

The following snippet, (cribbed from Coding Forums) shows how to handle this array:

//Declare the required pipe, specifying the php output
$req = "http://pipes.yahoo.com/ouseful/hashtagtwitterers?_render=php&min=3&n=100&q=%23jiscri";

// Make the request
$phpserialized = file_get_contents($req);

// Parse the serialized response
$phparray = unserialize($phpserialized);

//Here's the raw contents of the array
print_r($phparray);

//Here's how to parse it
foreach ($phparray['value']['items'] AS $key => $val)
	printf("<div><p><a href=\"%s\">%s</a></p><p>%s</p>\n", $val['link'], $val['title'], $val['description']);

The pipe used in the above snippet (http://pipes.yahoo.com/ouseful/hashtagtwitterers) displays a list of people who have recently used a particular hashtag on Twitter a minimum specified number of times.

It’s easy enough to parse out the Twitter ID of each individual, and then, for a particular named individual, see which of those hashtagging Twitterers they are following, and which are following them. (Why’s this interesting? Well, for any given hashtag community, it can show you how well connected you are with that community.)

So let’s see how to do it. First, parse out the Twitter ID:

//Collect the screennames into an array
$idList = array();
foreach ($phparray['value']['items'] AS $key => $val) {
	$id=preg_replace("/@([^\s]*)\s.*/", "$1", $val['title']);
	$idList[] = $id;
}

We have the Twitter screennames, but now we want the actual Twitter user IDs. There are several PHP libraries for accessing the Twitter API. The following relies on an old, rejigged version of the library available from http://github.com/jdp/twitterlibphp/tree/master/twitter.lib.php (the code may need tweaking to work with the current version…), and is really kludged together… (Note to self – tidy this up one day!)

The algorithm is basically as follows, and generates a GraphViz .dot file that will plot the connections a particular user has with the members of a particular hashtagging community:

  • get the list of hashtagger Twitter usernames (as above);
  • for each username, call the Twitter API to get the corresponding Twitter ID, and print out a label that maps each ID to a username;
  • for the user we want to investigate, pull down the list of people who follow them from the Twitter API; for each follower, if the follower is in the hashtaggers set, print out that relationship;
  • for the user we want to investigate, pull down the list of people they follow (i.e. their ‘friends’) from the Twitter API; for each friend, if the friend is in the hashtaggers set, print out that relationship.

Here’s the code:

$Twitter = new Twitter($myTwitterID, $myTwitterPwd);

//Get the Twitter ID for each user identified by the hashtagger pipe
foreach ($idList as $user) {
	$user_det=$Twitter->showUser($user, 'xml');
 	$p = xml_parser_create();
	xml_parse_into_struct($p,$user_det,$results,$index);
	xml_parser_free($p);
	$id=$results[$index['ID'][0]]['value'];
	$userID[$user]=$id;
	//print out labels in the Graphviz .dot format
	echo $id."[label=\"".$user."\"];\r";
}

//$userfocus is the Twitter screenname of the person we want to examine
$currUser=$userID[$userfocus];
 
//So who in the hashtagger list is following them?
$follower_det=$Twitter->getFollowers($userfocus, 'xml');
$p = xml_parser_create();
xml_parse_into_struct($p,$follower_det,$results,$index);
xml_parser_free($p);
foreach ($index['ID'] as $item){
	$follower=$results[$item]['value'];
	//print out edges in the Graphviz .dot format
	if (in_array($follower,$userID)) echo $follower."->".$currUser.";\r";
}

//And who in the hashtagger list are they following?
$friends_det=$Twitter->getFriends($userfocus, 'xml');
$p = xml_parser_create();
xml_parse_into_struct($p,$friends_det,$results,$index);
xml_parser_free($p);
foreach ($index['ID'] as $item){
	$followed=$results[$item]['value'];
	//print out edges in the Graphviz .dot format
	if (in_array($followed,$userID)) echo $currUser."->".$followed.";\r";
}

For completeness, here are the Twitter object methods and their associated Twitter API calls that were used in the above code:

function showUser($id,$format){
	$api_call=sprintf("http://twitter.com/users/show/%s.%s",$id,$format);
  	return $this->APICall($api_call, false);
}

function getFollowers($id,$format){
  	$api_call=sprintf("http://twitter.com/followers/ids/%s.%s",$id,$format);
 	return $this->APICall($api_call, false);
}
  
function getFriends($id,$format){
  	$api_call=sprintf("http://twitter.com/friends/ids/%s.%s",$id,$format);
 	return $this->APICall($api_call, false);
}

Running the code uses N+2 Twitter API calls, where N is the number of different users identified by the hashtagger pipe.

The output of the script is almost a minimal Graphviz .dot file. All that’s missing is the wrapper, e.g. something like: digraph twitterNet { … }. Here’s what a valid file looks like:
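Something along these lines, for example (a minimal sketch rather than actual script output – the IDs and screennames here are made up):

digraph twitterNet {
	12345 [label="exampleUserA"];
	67890 [label="exampleUserB"];
	13579 [label="exampleUserC"];
	12345 -> 67890;
	67890 -> 12345;
	13579 -> 12345;
}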

(The labels can appear either before or after the edges – it makes no difference as far as GraphViz is concerned.)

Plotting the graph will show you who the individual of interest is connected to, and how, in the particular hashtag community.

So for example, in the recent #ukoer community, here’s how LornaMCampbell is connected. First a ‘circular’ view:

[Image: ukoerInternalNetLMC2 – circular GraphViz layout]

The arrow direction goes FROM one person TO a person they are following. In the circular diagram, it can be quite hard to see whether a connection is reciprocated or one way.

The Graphviz network diagram uses a separate edge for each connection and makes it easier to spot reciprocated links:

[Image: ukoerInternalNetLMC – GraphViz network diagram]

So, there we have it. Another way of looking at Twitter hashtag networks to go along with Preliminary Thoughts on Visualising the OpenEd09 Twitter Network, A Quick Peek at the IWMW2009 Twitter Network and More Thinkses Around Twitter Hashtag Networks: #JISCRI.

Open Training Resources

Some disconnected thoughts about who gives a whatever about OERs, brought on in part by @liamgh’s Why remix an Open Educational Resource? (see also this 2 year old post: So What Exactly Is An OpenLearn Content Remix?). A couple of other bits of context too, to situate HE in a wider context of educational broadcasting:

Trust partially upholds fair trading complaints against the BBC: “BESA appealed to the Trust regarding three of the BBC’s formal learning offerings on bbc.co.uk between 1997 and 2009. … the Trust considers it is necessary for the Trust to conduct an assessment of the potential competitive impacts of Bitesize, Learning Zone Broadband and the Learning Portal, covering developments to these offerings since June 2007, and the way in which they deliver against the BBC’s Public Purposes. This will enable the Trust to determine whether the BBC Executive’s failure to conduct its own competitive impact assessment since 2007 had any substantive effect. … No further increases in investment levels for Bitesize, Learning Zone Broadband and the Learning Portal will be considered until the Trust has completed its competitive impact assessment on developments since 2007”.

Getting nearer day by day: “We launched a BBC College of Journalism intranet site back in January 2007 … aimed at the 7,500 journalists in the BBC … A handful of us put together about 1200 pages of learning – guides, tips, advice – and about 250 bits of video; a blog, podcasts, interactive tests and quizzes and built the tools to deliver them. A lot of late nights and a lot of really satisfying work. Satisfying, too, because we put into effect some really cool ideas about informal learning and were able to find out how early and mid career journalists learn best. … The plan always was to share this content with the people who’d paid for it – UK licence fee payers. And to make it available for BBC journalists to work on at home or in parts of the world where a www connection was more reliable than an intranet link. Which is where we more or less are now.” [my emphasis; see also BBC Training and Development]

And this: Towards Vendor Certification on the Open Web? Google Training Resources

So why my jaded attitude? Because I wonder (again) what it is we actually expect to happen to these OERs. How many OER projects re-use other people’s bids to get funding? How many reuse each other’s ‘what are OERs’ stuff? How many OER projects ever demonstrate a remix of their content, or a compelling reuse of it? How many publish their sites as a wiki so other people can correct errors? How many are open to public comments, ffs? How many give a worked example of any of the twenty items on Liam’s list with their content, and how many of them mix in other people’s OER content if they ever do so? How many attempt to publish running stats on how their content is being reused, and how many demonstrate showcase examples of content remix and reuse?

That said, there are signs of some sort of use: ‘Self-learners’ creating university of online; maybe the open courseware is providing a discovery context for learners looking for specific learning aids (or educators looking for specific teaching aids)? That is, while use might be most likely at the disaggregated level, discovery will be mediated through course level aggregations (the wider course context providing the SEO, or discovery metadata, that leads to particular items being discovered? Maybe Google turns up the course, and local navigation helps (expert) users browse to the resource they were hoping to discover?)

Early days yet, I know, but how much of the #ukoer content currently being produced will be remixed with, or reused alongside, content from other parts of that project as part of end-of-project demos? (Of course, if reuse/remix isn’t really what you expect, then fine… and, err, what are you claiming, exactly? Simple consumption? That’s fine, but say it; limit yourself to that…)

Ok, rant part over. Deep breath. Here comes another… as academics, we like to think we do the education thing, not the training thing. But for those of you who do learn new stuff, maybe every day, what do you find most useful to support that presumably self-motivated learning? For my own part, I tend to search for tutorials, and maybe even use How Do I?. That is, I look for training materials. A need or a question frames the search, and then being able to do something, make something, get my head round something enough to be able to make use of it, or teach it on, frames the admittedly utilitarian goal. Maybe that ability to look for those materials is a graduate level information skill, so it’s something we teach, right…? (Err… but that would be training…?!)

So here’s where I’m at – OERs are probably [possibly?] not that useful. But open training materials potentially are. (Or maybe not..?;-) Here are some more: UNESCO Training Platform

And so is open documentation.

They probably all could come under the banner of open information resources, but they are differently useful, and differently likely to be reused/reusable, remixed/remixable, maintained/maintainable or repurposed/repurposable. Of them all, I suspect that the opencourseware subset of OERs is the least re* of them all.

That is all…

Discuss…

Drafting a Bid Proposal – Comments?

[Note that I might treat this post a bit like a wiki page… Note to self: sort out a personal wiki]

The call is JISC OER3 – here’s the starter for ten (comments appreciated, both positive and negative; letters of support/expressions of interest welcome; comments relating to possible content/themes, declarations of interest in taking the course, etc etc also welcome, though I will be soliciting these more specifically at some point).

Rapid Resource Discovery and Development via Open Production Pair Teaching (ReDOPT) seeks to draft a set of openly licensed resources for potential (re)use in courses in two different institutions through the real-time production and delivery of an open online short-course in the area of data handling and visualisation. This approach subverts the more traditional technique of developing materials for a course and then retrospectively making them open, by creating the materials in public, in an openly licensed way that makes them immediately available for “study” as well as open web discovery, and then bringing them back into the closed setting for (re)use. The course will be promoted to the data journalism and open data communities as a free “MOOC” (Massive Online Open Course)/P2PU style course, with a view to establishing an immediate direct use by a practitioner community. The project will proceed as follows: over a 10-12 week period, the core project team will use a variant of the Pair Teaching approach to develop and publish an informal open, online course hosted on an .ac.uk domain via a set of narrative linked resources (each one about the length of a blog post and representing 10 minutes to 1 hour of learner activity) mapping out the project team’s own learning journey through the topic area. The course scope will be guided by a skeletal curriculum determined in advance from a review of current literature, informal interviews/questionnaires and perceived skills and knowledge gaps in the area. The created resources will contain openly licensed custom written/bespoke material, embedded third party content (audio, video, graphical, data), and selected links to relevant third party material. A public custom search engine in the topic area will also be curated during the course. Additional resources created by course participants (some of whom may themselves be part of the project team) will be integrated into the core course and added to the custom search engine by the project team. Part-time, hourly paid staff will be funded to contribute additional resources into the evolving course. Because of the timescales involved, this proposal is limited to the production of the draft materials, and does not extend as far as the reuse/first formal use case. Success metrics will therefore be limited to volume and reach of resources produced, community engagement with the live production of the materials, and the extent to which project team members intend to directly reuse the materials produced as a result.

OERs: Public Service Education and Open Production

I suspect that most people over a certain age have some vague memory of OU programmes broadcast in support of OU courses taking over BBC2 at various “off-peak” hours of the day (including Saturday mornings, if I recall correctly…)

These programmes formed an important part of OU courses, and were also freely available to anyone who wanted to watch them. In certain respects, they allowed the OU to operate as a public service educator, bringing ideas from higher education to a wider audience. (A lot has been said about the role of the UK’s personal computer culture in the days of the ZX Spectrum and the BBC Micro in bootstrapping software skills development, and in particular the UK computer games industry; but we don’t hear much about the role the OU played in raising aspiration and introducing the very idea of what might be involved in higher education through free-to-air broadcasts of OU course materials, which I’m convinced it must have played. I certainly remember watching OU maths and physics programmes as a child, and wanting to know more about “that stuff” even if I couldn’t properly follow it at the time.)

The OU’s broadcast strategy has evolved since then, of course, moving into prime time broadcasts (Child of Our Time, Coast, various outings with James May, The Money Programme, and so on) as well as “online media”: podcasts on iTunes and video content on Youtube, for example.

The original OpenLearn experiment, which saw 10-20hr extracts of OU course material being released for free, continues; but as I understand it, it is now thought of in the context of a wider OpenLearn engagement strategy that will aggregate all the OU’s public output (from open courseware and OU podcasts to support for OU/BBC co-produced content) under a single banner: OpenLearn.

I suspect there will continue to be forays into the world of “social media”, too.

A great benefit of the early days of OU programming on the BBC was that you couldn’t help but stumble across it. You can still stumble across OU co-produced broadcasts on the BBC now, of course, but they don’t fulfil the same role: they aren’t produced as academic programming designed to support particular learning outcomes and aren’t delivered in a particularly academic way. They’re more about entertainment. (This isn’t necessarily a bad thing, but I think it does influence the stance you take towards viewing the material.)

If we think of the originally produced TV programmes as “OERs”, open educational resources, what might we say about them?

– they were publicly available;
– they were authentic, relating to the delivery of actual OU courses;
– the material was viewed by OU students enrolled on the associated course, as well as viewers following a particular series out of general interest, and those who just happened to stumble by the programme;
– they provided pacing, and the opportunity for a continued level of engagement over a period of weeks, on a single academic topic;
– they provided a way of delivering lifelong higher education as part of the national conversation, albeit in the background. But it was always there…

In a sense, the broadcasts offered a way for the world to “follow along” parts of a higher education as it was being delivered.

In many ways, the “Massive Open Online Courses” (MOOCs), in which a for-credit course is also opened up to informal participants, and the various Stanford open courses that are about to start (Free computer science courses, new teaching technology reinvent online education), use a similar approach.

I generally see this as a Good Thing, with universities engaging in public service education whilst at the same time delivering additional support, resources, feedback, assessment and credit to students formally enrolled on the course.

What I’m not sure about is whether initiatives like OpenLearn succeed in the “public service education” role, in part because of the discovery problem: you couldn’t help but stumble across OU/BBC Two broadcasts at certain times of the day. Nowadays, I’d be surprised if you ever stumbled across OpenLearn content while searching the web…

A recent JISC report on OER Impact focussed on the (re)use of OERs in higher education, identifying a major use case of OERs as enhancing teaching practice.

(NB I would have embedded the OER Impact project video here, but WordPress.com doesn’t seem to support embeds from Blip…; openness is not just about the licensing, it’s also about the practical ease of (re)use;-)

However, from my quick reading of the OER impact report, it doesn’t really seem to consider the “open course” use case demonstrated by MOOCs, the Stanford courses, or mid-70s OU course broadcasts. (Maybe this was out of scope…!;-)

Nor does it consider the production of OERs (I think that was definitely out of scope).

For the JISC OER3 funding call, I was hoping to put in a bid for a project based around an open “production-in-presentation” model of resource development targeted to a specific community. For a variety of reasons, (not least, I suspect, my lack of project management skills…) that’s unlikely to be submitted in time, so I thought I’d post the main chunk of the bid here as a way of trying to open up the debate a little more widely about the role of OERs, the utility of open production models, and the extent to which they can be used to support cross-sector curriculum innovation/discovery as well as co-creation of resources and resource reuse (both within HE and into a target user community).

Outline
Rapid Resource Discovery and Development via Open Production Pair Teaching (ReDOPT) seeks to draft a set of openly licensed resources for potential (re)use in courses in two different institutions … through the real-time production and delivery of an open online short-course in the area of data handling and visualisation. This approach subverts the more traditional technique of developing materials for a course and then retrospectively making them open, by creating the materials in public, in an openly licensed way that makes them immediately available for informal study as well as open web discovery, embedding them in a target community, and then bringing them back into the closed setting for formal (re)use. The course will be promoted to the data journalism and open data communities as a free “MOOC” (Massive Online Open Course)/P2PU style course, with a view to establishing an immediate direct use by a practitioner community. The project will proceed as follows: over a 10-12 week period, the core project team will use a variant of the Pair Teaching approach to develop and publish an informal open, online course hosted on an .ac.uk domain via a set of narrative linked resources (each one about the length of a blog post and representing 10 minutes to 1 hour of learner activity) mapping out the project team’s own exploration/learning journey through the topic area. The course scope will be guided by a skeleton curriculum determined in advance from a review of current literature, informal interviews/questionnaires and perceived skills and knowledge gaps in the area. The created resources will contain openly licensed custom written/bespoke material, embedded third party content (audio, video, graphical, data), and selected links to relevant third party material. A public custom search engine in the topic area will also be curated during the course. Additional resources created by course participants (some of whom may themselves be part of the project team) will be integrated into the core course and added to the custom search engine by the project team. Part-time, hourly paid staff will also be funded to contribute additional resources into the evolving course. A second phase of the project will embed the resources as learning resources in the target community through the delivery of workshops based around and referring out to the created resources, as well as community building around the resources. Because of the timescales involved, this proposal is limited to the production of the draft materials and embedding them as valuable and appropriate resources in the target community, and does not extend as far as the reuse/first formal use case. Success metrics will therefore be limited to impact evaluation, volume and reach of resources produced, community engagement with the live production of the materials, and the extent to which project team members intend to directly reuse the materials produced as a result.

The Proposal
1. The aim of the project is to produce a set of educational resources in a practical topic area (data handling and visualisation) that are reusable by both teachers (as teaching resources) and independent learners (as learning resources), through the development of an openly produced online course in the style of an uncourse, created in real time using a Pair Teaching approach as opposed to a traditional sole author or OU style course team production process, and to establish those materials as core reusable educational resources in the target community.

3. … : Extend OER through collaborations beyond HE: the proposal represents a collaboration between two HEIs in the production and anticipated formal (re)use of the materials created, as well as directly serving the needs of the fledgling data-driven journalism community and the open public data communities.

4. … : Addressing sector challenges (ii Involving academics on part-time, hourly-paid contracts): the open production model will seek to engage part-time, hourly paid staff in creating additional resources around the course themes that they can contribute back to the course under an open license and that cover a specific issue identified by the course lead or that the part-time staff themselves believe will add value to the course. (Note that the course model will also encourage participants in the course to create and share relevant resources without any financial recompense.) Paying hourly rate staff for the creation of additional resources (which may include quizzes or other informal assessment/feedback related resources), or in the role of editors of community produced resources, represents a middle ground between the centrally produced core resources and any freely submitted resources from the community. Incorporating the hourly paid contributor role is based on the assumption that payment may be appropriate for sourcing course enhancing contributions that are of a higher quality (and may take longer to produce) than community sourced contributions, as well as requiring the open licensing of materials so produced. It also explores a model under which hourly staff can contribute to the shaping of the course on an ad hoc basis if they see opportunities to do so.

5. … Enhancing the student experience (ii Drawing on student-produced materials): The open production model will seek to engage with the community following the course and encourage them to develop and contribute resources back into the community under an open license. For example, the use of problem based exercises and activities will result in the production of resources that can be (re)used within the context of the uncourse itself as an output of the actual exercise or activity.

6. … The project seeks to explore practical solutions to two issues relating to the wider adoption of OERs by producers and consumers, and provide a case study that other projects may draw on. In the first case, how to improve the discoverability and direct use of resources on the web by “learners” who do not know they are looking for OERs, or even what OERs are, through creating resources that are published as contributions to the development and support of a particular community and as such are likely to benefit from “implicit” search engine optimisation (SEO) resulting from this approach. In the second case, to explore a mechanism that identifies what resources a community might find useful through curriculum negotiation during presentation, and the extent to which “draft” resources might actually encourage reuse and revision.

7. Rather than publishing an open version of a predetermined, fixed set of resources that have already been produced as part of a closed process and then delivered in a formal setting, the intention is thus to develop an openly licensed set of “draft” resources through the “production in presentation” delivery of an informal open “uncourse” (in-project scope), and at a later date reuse those resources in a formally offered closed/for-credit course (out-of-project scope). The uncourse will not incorporate assessment elements, although community engagement and feedback in that context will be in scope. The uncourse approach draws on the idea of “teacher as learner”, with the “teacher” capturing and reflecting on meaningful learning episodes as they explore a topic area and then communicate these through the development of materials that others can learn from, as well as demonstrating authentic problem solving and self-directed learning behaviours that model the independent learning behaviours we are trying to develop in our students.

8. The quality of the resources will be assured at least to the level of fit-for-purpose at the time of release by combining the uncourse production style with a Pair Teaching approach. A quality improvement process will also operate through responding to any issues identified via the community based peer-review and developmental testing process that results from developing the materials in public.

9. The topic area was chosen based on several factors: a) the experience and expertise of the project team; b) the observation that there are no public education programmes around the increasing amounts of open public data; c) the observation that very few journalism academics have expertise in data journalism; d) the observation that practitioners engaged in data journalism do not have the time or interest to become academics, but do appear willing to share their knowledge.

10. The first uncourse will run over a 6-8 week period and result in the central/core development of circa 5 to 10 blog-post styled resources a week, each requiring 20-45 minutes of “student” activity (approx. 2-6 hours study time per week equivalent), plus additional directed reading/media consumption time (ideally referencing free and openly licensed content). A second presentation of the uncourse will reuse and extend materials produced during the first presentation, as well as integrating resources, where possible, developed by the community in the first phase and monitoring the amount of time taken to revise/reversion them, as required, compared to the time taken to prepare resources from scratch centrally. Examples of real-time, interactive and graphical representations of data will be recorded as video screencasts and made available online. Participants will be encouraged to consider the information design merits of comparative visualisation methods for publication on different media platforms: print, video, interactive and mobile. In all, we hope to deliver up to 50 hours of centrally produced, openly licensed materials by the end of the course. The uncourse will also develop a custom search engine offering coverage of openly licensed and freely accessible resources related to the course topic area.

11. The course approach is inspired to a certain extent by the Massive Online Open Course (MOOC) style courses pioneered by George Siemens, Stephen Downes, Dave Cormier, Jim Groom et al. The MOOC approach encourages learners to explore a given topic space with the help of some wayfinders. Much of the benefit is derived from the connections participants make between each other and the content by sharing, reflecting, and building on the contributions of others across different media spaces, like blogs, Twitter, forums, YouTube, etc.

12. The course model also draws upon the idea of an uncourse, as demonstrated by Hirst in the creation of the Digital Worlds game development blog [ http://digitalworlds.wordpress.com ], which produced a series of resources as part of an openly blogged learning journey that have since been reused directly in an OU course (T151 Digital Worlds); and the Visual Gadgets blog ( http://visualgadgets.blogspot.com ) that drafted materials that later came to be reused in the OU course T215 Communication and information technologies, and then made available under open license as the OpenLearn unit Visualisation: Visual representations of data and information [ http://openlearn.open.ac.uk/course/view.php?id=4442 ].

13. A second phase of the project will explore ways of improving the discovery of resources in an online context, as well as establishing them as important and relevant resources within the target community. We will run a series of face-to-face workshops and hack days at community events that draw on and extend the activities developed during the initial uncourse, and refer participants to the materials. A second presentation of the uncourse will be offered as a way of testing and demonstrating reuse of the resources, as well as providing an exit path from workshop activities. One possible exit path from the uncourse would be entry into formal academic courses.

14. Establishing the resources within the target community is an important aspect of the project. Participation in community events plays an important role in this, and also helps to prove the resources produced. Attendance at events such as the Open Government Data Camp will allow us to promote the availability of the resources to the appropriate European community, further identify community needs, and also provide a backdrop for the development of a promotional video with vox pops from the community hopefully expressing support for the resources being produced. The extent to which materials do become adopted and used within the community will form an important part of the project evaluation.

15. … By embedding resources in the target community, we aim to enhance the practical utility of the resources within that community as well as providing an academic consideration of the issues involved. A key part of the evaluation workpackage, …, will be to rate the quality of the materials produced and the level of engagement with and reuse of them by both educators and members of the target community.

Note that I am still keen on working this bid up a bit more for submission somewhere else…;-)

[Note that the opinions expressed herein are very much my own personal ones…]

PS see also COL-UNESCO consultation: Guidelines for OER in Higher Education – Request for comments: OER Guidelines for Higher Education Stakeholders

Tune Your Feeds…

I’m so glad we’re at year’s end: I’m completely bored of the web, my feeds contain little of interest, I’m drastically in need of a personal reboot, and I’m starting to find myself stuck in a “seen-it-all-before” rut…

Take the “new” Google Circles volume slider, for example… Ooh.. shiny… ooh, new feature…

Yawn… Slider widgets have been around for ages, of course (e.g. Slider Widgets Around the Web) and didn’t Facebook allow you to do the volume control thing on your Facebook news feeds way back when, when Facebook’s feeds were themselves news (Facebook News Mixing Desk)?

[Image: Facebook news feed mixing desk]

Does Facebook still offer this service I wonder?

On the other hand, there is the new Google Zeitgeist Scrapbook… I’m still trying to decide whether this is interesting or not… The premise is a series of half-completed straplines that you can fill in with subheadings that interest you, revealing a short info paragraph as a result.

[Image: Google Zeitgeist Scrapbook]

The finished thing is part scrapbook, part sticker book.

[Image: a completed Google scrapbook page]

The reason I can’t decide whether this is interesting or not is that it may actually hint at a mechanic for customising your own newspaper out of content from your favoured news provider. For example, what would it look like if we tried to build something similar around content from the Guardian Platform API? Might different tag combinations be dragged into the story panels to hook up a feed from that tag or section of the “paper”? And once we’ve acted as editor of our own newspaper, might advanced users then make use of mixing desk sliders to tune the volume of content in each section?

This builds on the idea that newspapers provide you with content and story types you wouldn’t necessarily see, whilst still allowing you some degree of control over how weighted the “paper” is to different news sections (something we always had some element of control over before, though at a different level of granularity; for example, by choosing to buy newspapers only on certain days because they came with a supplement you were interested in, though you were also happy to read the rest of the paper since you had it…)

(It also reminds me that I never could decide about Google’s Living Stories either…)

PS in other news, MIT hints at an innovation in the open educational field, in particular with respect to certification… It seems you may soon be able to claim some sort of academic credit, for a fee, if you’ve been tracked through an MITx open course (MIT’s new online courses target students worldwide). Here’s the original news release: MIT launches online learning initiative and FAQ.

So I wonder: a “proven” online strategy is to grab as big an audience as you can as quickly as you can, then worry about how to make the money back. Could MIT’s large online course offerings from earlier this year be seen in retrospect as MIT testing the waters to see whether or not they could grow an audience around online courses quickly?

I just wonder what would have happened if we’d managed to convert a Relevant Knowledge course to an open course accreditation container for a start date earlier this year, and used it to offer credit around the MIT courses ourselves?!;-) As to what other innovations there might be around open online education: I suspect the OU still has high hopes for SocialLearn… but I’m still of the mind that there’s far more interesting stuff to be done in the area of open course production.

Do We Need an OpenLearn Content Liberation Front?

For me, one of the defining attributes of openness relates to accessibility of the machine kind: if I can’t write a script to handle the repetitive stuff for me, or can’t automate the embedding of image and/or video resources, then whatever the content is, it’s not open enough in a practical sense for me to do what I want with it.

So here’s an, erm, how can I put this politely, little niggle I have with OpenLearn XML. (For those of you not keeping up, one of the many OpenLearn sites is the OU’s open course materials site; the materials published on the site as course unit HTML content pages are also available as structured XML documents. (When I say “structured”, I mean that certain elements of the materials are marked up in a semantically meaningful way; lots of elements aren’t, but we have to start somewhere ;-))

The context is this: following on from my presentation on Making More of Structured Course Materials at the eSTeEM conference last week, I left a chat with Jonathan Fine with the intention of seeing what sorts of secondary products I could easily generate from the OpenLearn content. I’m in the middle of building a scraper and structured content extractor at the moment, grabbing things like learning outcomes, glossary items, references and images, but I almost immediately hit a couple of problems, first in actually locating the OU XML docs, and secondly in locating the images…

Getting hold of a machine readable list of OpenLearn units is easy enough via the OpenLearn OPML feed (much easier to work with than the “all units” HTML index page). Units are organised by topic and are listed using the following format:

<outline type="rss" text="Unit content for Water use and the water cycle" htmlUrl="http://openlearn.open.ac.uk/course/view.php?name=S278_12" xmlUrl="http://openlearn.open.ac.uk/rss/file.php/stdfeed/4307/S278_12_rss.xml"/>

URLs of the form http://openlearn.open.ac.uk/course/view.php?name=S278_12 link to a “homepage” for each unit, which then links to the first page of actual content, content which is also available in XML form. The content page URLs have the form http://openlearn.open.ac.uk/mod/oucontent/view.php?id=398820&direct=1, where the ID is uniquely (one-one) mapped to the course name identifier. The XML version of the page can then be accessed by changing direct=1 in the URL to content=1. Only, we don’t know the mapping from course unit name to page id. The easiest way I’ve found of doing that is to load in the RSS feed for each unit and grab the first link URL, which points to the first HTML content page of the unit.
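So, assuming the first item link in a unit’s RSS feed does point at the first content page, the lookup from feed to XML URL is little more than a query string tweak. A minimal sketch (the feed URL here is the S278_12 one from the OPML example above):

from lxml import etree

#Parse the unit RSS feed and grab the first item link
rssurl = 'http://openlearn.open.ac.uk/rss/file.php/stdfeed/4307/S278_12_rss.xml'
rss = etree.parse(rssurl)
contenturl = rss.getroot().find('./channel/item/link').text
#e.g. http://openlearn.open.ac.uk/mod/oucontent/view.php?id=398820&direct=1

#The XML version of the page is the same URL with direct=1 swapped for content=1
xmlurl = contenturl.replace('direct=1', 'content=1')
print xmlurl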

I’ve popped a scraper up on Scraperwiki to build the lookup for XML URLs for OpenLearn units – OpenLearn XML Processor:

import scraperwiki

from lxml import etree

#===
#via http://stackoverflow.com/questions/5757201/help-or-advice-me-get-started-with-lxml/5899005#5899005
def flatten(el):           
    result = [ (el.text or "") ]
    for sel in el:
        result.append(flatten(sel))
        result.append(sel.tail or "")
    return "".join(result)
#===

def getcontenturl(srcUrl):
    rss= etree.parse(srcUrl)
    rssroot=rss.getroot()
    try:
        contenturl= flatten(rssroot.find('./channel/item/link'))
    except:
        contenturl=''
    return contenturl

def getUnitLocations():
    #The OPML file lists all OpenLearn units by topic area
    srcUrl='http://openlearn.open.ac.uk/rss/file.php/stdfeed/1/full_opml.xml'
    tree = etree.parse(srcUrl)
    root = tree.getroot()
    topics=root.findall('.//body/outline')
    #Handle each topic area separately?
    for topic in topics:
        tt = topic.get('text')
        print tt
        for item in topic.findall('./outline'):
            it=item.get('text')
            if it.startswith('Unit content for'):
                it=it.replace('Unit content for','')
                url=item.get('htmlUrl')
                rssurl=item.get('xmlUrl')
                ccu=url.split('=')[1]
                cctmp=ccu.split('_')
                cc=cctmp[0]
                if len(cctmp)>1: ccpart=cctmp[1]
                else: ccpart=1
                slug=rssurl.replace('http://openlearn.open.ac.uk/rss/file.php/stdfeed/','')
                slug=slug.split('/')[0]
                contenturl=getcontenturl(rssurl)
                print tt,it,slug,ccu,cc,ccpart,url,contenturl
                scraperwiki.sqlite.save(unique_keys=['ccu'], table_name='unitsHome', data={'ccu':ccu, 'uname':it,'topic':tt,'slug':slug,'cc':cc,'ccpart':ccpart,'url':url,'rssurl':rssurl,'ccurl':contenturl})

getUnitLocations()

The next step in the plan (because I usually do have a plan; it’s hard to play effectively without some sort of direction in mind…) as far as images go was to grab the figure elements out of the XML documents and generate an image gallery that allows you to search through OpenLearn images by title/caption and/or description, and preview them. Getting the caption and description from the XML is easy enough, but getting the image URLs is not…

Here’s an example of a figure element from an OpenLearn XML document:

<Figure id="fig001">
<Image src="\\DCTM_FSS\content\Teaching and curriculum\Modules\Shared Resources\OpenLearn\S278_5\1.0\s278_5_f001hi.jpg" height="" webthumbnail="false" x_imagesrc="s278_5_f001hi.jpg" x_imagewidth="478" x_imageheight="522"/>
<Caption>Figure 1 The geothermal gradient beneath a continent, showing how temperature increases more rapidly with depth in the lithosphere than it does in the deep mantle.</Caption>
<Alternative>Figure 1</Alternative>
<Description>Figure 1</Description>
</Figure>
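Grabbing the captions and image filenames from elements like this is straightforward enough. Here’s a minimal lxml sketch (assuming you’ve already saved a unit’s XML document locally – the filename is illustrative); the catch, as we’ll see, is the src attribute:

from lxml import etree

#Parse a previously downloaded OU XML document
doc = etree.parse('s278_5.xml')

for fig in doc.findall('.//Figure'):
    img = fig.find('Image')
    caption = fig.findtext('Caption')
    #The src attribute is an internal \\DCTM_FSS path, so the bare filename
    #in x_imagesrc is the best the XML alone can give us...
    if img is not None:
        print img.get('x_imagesrc'), '::', caption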

Looking at the HTML page for the corresponding unit on OpenLearn, we see it points to the image resource file at http://openlearn.open.ac.uk/file.php/4178/!via/oucontent/course/476/s278_5_f001hi.jpg:

So how can we generate that image URL from the resource link in the XML document? The filename is the same, but how can we generate what are presumably contextually relevant path elements (http://openlearn.open.ac.uk/file.php/4178/!via/oucontent/course/476/)?

If we look at the OpenLearn OPML file that lists all current OpenLearn units, we can find the first identifier in the path to the RSS file:

<outline type="rss" text="Unit content for Energy resources: Geothermal energy" htmlUrl="http://openlearn.open.ac.uk/course/view.php?name=S278_5" xmlUrl="http://openlearn.open.ac.uk/rss/file.php/stdfeed/4178/S278_5_rss.xml"/>

But I can’t seem to find a crib for the second identifier – 476 – anywhere? Which means I can’t mechanise the creation of links to actual OpenLearn image assets from the XML source. Also note that there are no credits, acknowledgements or license conditions associated with the image contained within the figure description. Which also makes it hard to reuse the image in a legal, rights recognising sense.

[Doh – I can surely just look at URL for an image in an OpenLearn unit RSS feed and pick the path up from there, can’t I? Only I can’t, because the image links in the RSS feeds are: a) relative links, without path information, and b) broken as a result…]

Reusing images on the basis of the OpenLearn XML “sourcecode” document is therefore: NOT OBVIOUSLY POSSIBLE.

What this suggests to me is that if you release “source code” documents, they may actually need some processing in terms of asset resolution that generates publicly resolvable locators to assets if they are encoded within the source code document as “private” assets/non-resolvable identifiers.

Where necessary, acknowledgements/credits are provided in the backmatter using elements of the form:

<Paragraph>Figure 7 Willes-Richards, J., et al. (1990) ‘HDR Resource/Economics’ in Baria, R. (ed.) <i>Hot Dry Rock Geothermal Energy</i>, Copyright CSM Associates Limited</Paragraph>

Whilst OU-XML does support the ability to make a meaningful link to a resource within the XML document, using an element of the form:

<CrossRef idref="fig007">Figure 7</CrossRef>

(which presumably uses the Alternative label as the cross-referenced identifier, although not the figure element id (eg fig007) which is presumably unique within any particular XML document?), this identifier is not used to link the informally stated figure credit back to the uniquely identified figure element?

If the same image asset is used in several course units, there is presumably no way of telling from the element data (or even, necessarily, the credit data?) whether the images are in fact one and the same. That is, we can’t audit the OpenLearn materials in a mechanised way to see whether or not particular images are reused across two or more OpenLearn units.
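The nearest we can get is a rough heuristic: scan the XML documents for a set of units (a sketch, assuming local copies – the unit names and filenames below are illustrative) and flag repeated x_imagesrc filenames as candidate reuses:

from collections import defaultdict
from lxml import etree

#Map each image filename to the set of units it appears in
imageUse = defaultdict(set)
for unit, fn in [('S278_5', 's278_5.xml'), ('S278_12', 's278_12.xml')]:
    doc = etree.parse(fn)
    for img in doc.findall('.//Figure/Image'):
        imageUse[img.get('x_imagesrc')].add(unit)

for filename, units in imageUse.items():
    #A shared filename is only a candidate reuse: identical filenames
    #needn't mean identical images, and the same image may hide behind
    #different filenames
    if len(units) > 1:
        print filename, 'appears in', ', '.join(sorted(units))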

Just in passing, it’s maybe also worth noting that in the above case at least, a description for the image is missing. In actual OU course materials, the description element is used to capture a textual description of the image that explicates the image in the context of the surrounding text. This partially fulfils accessibility requirements surrounding images and represents, if not best practice, then at least effective practice.

Where else might content need liberating within OpenLearn content? At the end of the course unit XML documents, in the “backmatter” element, there is often a list of references. References have the form:

<Reference>Sheldon, P. (2005) Earth’s Physical Resources: An Introduction (Book 1 of S278 Earth’s Physical Resources: Origin, Use and Environmental Impact), The Open University, Milton Keynes</Reference>

Hmmm… no structure there… so how easy would it be to reliably generate a link to an authoritative record for that item? (Note that other records occasionally use presentational markup such as italics (or emphasis) tags to style certain parts of some references (confusing presentation with semantics…).)
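By way of illustration, here’s the sort of fragile heuristic (a sketch, nothing more) that the lack of structure forces on anyone trying to pull an author/year/title guess out of a flat Reference string:

import re

ref = "Sheldon, P. (2005) Earth's Physical Resources: An Introduction (Book 1 of S278 Earth's Physical Resources: Origin, Use and Environmental Impact), The Open University, Milton Keynes"

#Guess: author(s) sit before the bracketed year; the title runs from the
#year to the next opening bracket
m = re.match(r"(?P<author>.*?)\s*\((?P<year>\d{4})\)\s*(?P<title>[^(]*)", ref)
if m:
    print m.group('author'), '|', m.group('year'), '|', m.group('title').strip()
#...which breaks as soon as a reference deviates from the pattern - hence
#the case for marking references up structurally in the first place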

Finally, just a quick note on why I’m blogging this publicly rather than raising it, erm, quietly within the OU. My reasoning is similar to the reasoning we use when we tell students to not be afraid of asking questions, because it’s likely that others will also have the same question… I’m asking a question about the structure of an open educational resource, because I don’t quite understand it; by asking the question in public, it may be the case that others can use the same questioning strategy to review the way they present their materials, so when I find those, I don’t have to ask similar sorts of question again;-)

PS sort of related to this, see TechDis’ Terry McAndrew’s Accessible courses need an accessibility-friendly schema standard.

PPS see also another take on ways of trying to reduce cognitive waste – Joss Winn’s latest bid in progress, which will examine how the OAuth 2.0 specification can be integrated into a single sign on environment alongside Microsoft’s Unified Access Gateway. If that’s an issue or matter of interest in your institution, why not fork the bid and work it up yourself, or maybe even fork it and contribute elements back?;-) (Hmm, if several institutions submitted what was essentially the same bid, how would the funders cope during the marking process?!;-)