Drafting a Bid Proposal – Comments?

[Note that I might treat this post a bit like a wiki page… Note to self: sort out a personal wiki]

Call is JISC OER3 – here’s the starter for ten (comments appreciated, both positive and negative; letters of support/expressions of interest welcome; comments relating to possible content/themes, declarations of interest in taking the course, etc etc also welcome, though I will be soliciting these more specifically at some point)

Rapid Resource Discovery and Development via Open Production Pair Teaching (ReDOPT) seeks to draft a set of openly licensed resources for potential (re)use in courses in two different institutions through the real-time production and delivery of an open online short course in the area of data handling and visualisation. This approach subverts the more traditional technique of developing materials for a course and then retrospectively making them open: instead, the materials are created in public under an open licence, making them immediately available for “study” as well as open web discovery, and then brought back into the closed setting for (re)use. The course will be promoted to the data journalism and open data communities as a free “MOOC” (Massive Open Online Course)/P2PU style course, with a view to establishing immediate direct use by a practitioner community. The project will proceed as follows: over a 10-12 week period, the core project team will use a variant of the Pair Teaching approach to develop and publish an informal, open online course hosted on an .ac.uk domain via a set of narrative linked resources (each one about the length of a blog post and representing 10 minutes to 1 hour of learner activity) mapping out the project team’s own learning journey through the topic area. The course scope will be guided by a skeletal curriculum determined in advance from a review of current literature, informal interviews/questionnaires, and perceived skills and knowledge gaps in the area. The created resources will contain openly licensed custom written/bespoke material, embedded third party content (audio, video, graphical, data), and selected links to relevant third party material. A public custom search engine in the topic area will also be curated during the course.
Additional resources created by course participants (some of whom may themselves be part of the project team) will be integrated into the core course and added to the custom search engine by the project team. Part-time, hourly paid staff will be funded to contribute additional resources to the evolving course. Because of the timescales involved, this proposal is limited to the production of the draft materials, and does not extend as far as the reuse/first formal use case. Success metrics will therefore be limited to the volume and reach of resources produced, community engagement with the live production of the materials, and the extent to which project team members intend to directly reuse the materials produced as a result.

Open Training Resources

Some disconnected thoughts about who gives a whatever about OERs, brought on in part by @liamgh’s Why remix an Open Educational Resource? (see also this 2 year old post: So What Exactly Is An OpenLearn Content Remix?). A couple of other bits of context too, to situate HE in a wider context of educational broadcasting:

Trust partially upholds fair trading complaints against the BBC: “BESA appealed to the Trust regarding three of the BBC’s formal learning offerings on bbc.co.uk between 1997 and 2009. … the Trust considers it is necessary for the Trust to conduct an assessment of the potential competitive impacts of Bitesize, Learning Zone Broadband and the Learning Portal, covering developments to these offerings since June 2007, and the way in which they deliver against the BBC’s Public Purposes. This will enable the Trust to determine whether the BBC Executive’s failure to conduct its own competitive impact assessment since 2007 had any substantive effect. … No further increases in investment levels for Bitesize, Learning Zone Broadband and the Learning Portal will be considered until the Trust has completed its competitive impact assessment on developments since 2007.”

Getting nearer day by day: “We launched a BBC College of Journalism intranet site back in January 2007 … aimed at the 7,500 journalists in the BBC … A handful of us put together about 1200 pages of learning – guides, tips, advice – and about 250 bits of video; a blog, podcasts, interactive tests and quizzes and built the tools to deliver them. A lot of late nights and a lot of really satisfying work. Satisfying, too, because we put into effect some really cool ideas about informal learning and were able to find out how early and mid career journalists learn best. … The plan always was to share this content with the people who’d paid for it – UK licence fee payers. And to make it available for BBC journalists to work on at home or in parts of the world where a www connection was more reliable than an intranet link. Which is where we more or less are now.” [my emphasis; see also BBC Training and Development]

And this: Towards Vendor Certification on the Open Web? Google Training Resources

So why my jaded attitude? Because I wonder (again) what it is we actually expect to happen to these OERs. How many OER projects re-use other people’s bids to get funding? How many reuse each other’s ‘what are OERs’ stuff? How many OER projects ever demonstrate a remix of their content, or a compelling reuse of it? How many publish their sites as a wiki so other people can correct errors? How many are open to public comments, ffs? How many give a worked example of any of the twenty items on Liam’s list with their content, and how many of them mix in other people’s OER content if they ever do so? How many attempt to publish running stats on how their content is being reused, and how many demonstrate showcase examples of content remix and reuse?

That said, there are signs of some sort of use: ‘Self-learners’ creating university of online; maybe the open courseware is providing a discovery context for learners looking for specific learning aids (or educators looking for specific teaching aids)? That is, while use might be most likely at the disaggregated level, discovery will be mediated through course level aggregations, the wider course context providing the SEO, or discovery metadata, that leads to particular items being discovered. (Maybe Google turns up the course, and local navigation helps (expert) users browse to the resource they were hoping to find?)

Early days yet, I know, but how much of the #ukoer content currently being produced will be remixed with, or reused alongside, content from other parts of that project as part of end-of-project demos? (Of course, if reuse/remix isn’t really what you expect, then fine… and, err, what are you claiming, exactly? Simple consumption? That’s fine, but say it; limit yourself to that…)

Ok, rant part over. Deep breath. Here comes another… as academics, we like to think we do the education thing, not the training thing. But for those of you who do learn new stuff, maybe every day, what do you find most useful to support that presumably self-motivated learning? For my own part, I tend to search for tutorials, and maybe even use How Do I?. That is, I look for training materials. A need or a question frames the search, and then being able to do something, make something, get my head round something enough to be able to make use of it, or teach it on, frames the admittedly utilitarian goal. Maybe that ability to look for those materials is a graduate level information skill, so it’s something we teach, right…? (Err… but that would be training…?!)

So here’s where I’m at – OERs are probably [possibly?] not that useful. But open training materials potentially are. (Or maybe not..?;-) Here are some more: UNESCO Training Platform

And so is open documentation.

They probably all could come under the banner of open information resources, but they are differently useful, and differently likely to be reused/reusable, remixed/remixable, maintained/maintainable or repurposed/repurposable. I suspect that the opencourseware subset of OERs is the least re* of them all.

That is all…

Discuss…

Handling Yahoo Pipes Serialised PHP Output

One of the output formats supported by Yahoo Pipes is a PHP style array. In this post, which describes a way of seeing how well connected a particular Twitter user is to other Twitterers who have recently used a particular hashtag, I’ll show you how it can be used.

The following snippet (cribbed from Coding Forums) shows how to handle this array:

//Declare the required pipe, specifying the php output
$req = "http://pipes.yahoo.com/ouseful/hashtagtwitterers?_render=php&min=3&n=100&q=%23jiscri";

// Make the request
$phpserialized = file_get_contents($req);

// Parse the serialized response
$phparray = unserialize($phpserialized);

//Here's the raw contents of the array
print_r($phparray);

//Here's how to parse it
foreach ($phparray['value']['items'] AS $key => $val)
	printf("<div><p><a href=\"%s\">%s</a></p><p>%s</p></div>\n", $val['link'], $val['title'], $val['description']);
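The snippet above assumes the request succeeds. As a sketch of slightly more defensive handling – with the pipe response simulated here by a locally serialized array of the same shape, so it runs without a network connection – something like the following might be used:

```php
// Simulate the Pipes PHP output locally: a serialized array with the
// same ['value']['items'] shape as the real response.
$phpserialized = serialize(array(
    'value' => array(
        'items' => array(
            array('link' => 'http://example.com/1', 'title' => '@alice 5', 'description' => 'First item'),
            array('link' => 'http://example.com/2', 'title' => '@bob 3', 'description' => 'Second item'),
        ),
    ),
));

// In the live case this would be: $phpserialized = file_get_contents($req);
// Guard against a failed fetch or a malformed response
if ($phpserialized === false || ($phparray = @unserialize($phpserialized)) === false) {
    die("Could not fetch or parse the pipe output\n");
}

// Build the HTML, then output it
$html = '';
foreach ($phparray['value']['items'] as $val) {
    $html .= sprintf("<div><p><a href=\"%s\">%s</a></p><p>%s</p></div>\n",
        $val['link'], $val['title'], $val['description']);
}
echo $html;
```

(The URLs and titles are invented for illustration; `file_get_contents` returns `false` on failure, and `unserialize` returns `false` if the string can’t be parsed, which is what the guard checks for.)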

The pipe used in the above snippet (http://pipes.yahoo.com/ouseful/hashtagtwitterers) displays a list of people who have recently used a particular hashtag on Twitter a minimum specified number of times.

It’s easy enough to parse out the Twitter ID of each individual, and then, for a particular named individual, see which of those hashtagging Twitterers they are following, and which are following them. (Why’s this interesting? Well, for any given hashtag community, it can show you how well connected you are with that community.)

So let’s see how to do it. First, parse out the Twitter ID:

foreach ($phparray['value']['items'] AS $key => $val) {
	$id=preg_replace("/@([^\s]*)\s.*/", "$1", $val['title']);
	$idList[] = $id; 
}
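To see what the regex is doing, assume (an assumption on my part about the pipe’s output) that each item title takes the form “@screenname” followed by whitespace and the rest of the title; the capture group then picks out just the screen name:

```php
// Hypothetical item titles in the form the regex expects
$titles = array('@psychemedia 7 uses', '@LornaMCampbell 3 uses');

$idList = array();
foreach ($titles as $title) {
    // Capture everything between the leading @ and the first whitespace
    $idList[] = preg_replace("/@([^\s]*)\s.*/", "$1", $title);
}

print_r($idList);
```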

We have the Twitter screennames, but now we want the actual Twitter user IDs. There are several PHP libraries for accessing the Twitter API. The following relies on an old, rejigged version of the library available from http://github.com/jdp/twitterlibphp/tree/master/twitter.lib.php (the code may need tweaking to work with the current version…), and is really kludged together… (Note to self – tidy this up one day!)

The algorithm is basically as follows, and generates a GraphViz .dot file that will plot the connections a particular user has with the members of a particular hashtagging community:

  • get the list of hashtagger Twitter usernames (as above);
  • for each username, call the Twitter API to get the corresponding Twitter ID, and print out a label that maps each ID to a username;
  • for the user we want to investigate, pull down the list of people who follow them from the Twitter API; for each follower, if the follower is in the hashtaggers set, print out that relationship;
  • for the user we want to investigate, pull down the list of people who they follow (i.e. their ‘friends’) from the Twitter API; for each friend, if the friend is in the hashtaggers set, print out that relationship;

$Twitter = new Twitter($myTwitterID, $myTwitterPwd);

//Get the Twitter ID for each user identified by the hashtagger pipe
foreach ($idList as $user) {
	$user_det=$Twitter->showUser($user, 'xml');
 	$p = xml_parser_create();
	xml_parse_into_struct($p,$user_det,$results,$index);
	xml_parser_free($p);
	$id=$results[$index['ID'][0]]['value'];
	$userID[$user]=$id;
	//print out labels in the Graphviz .dot format
	echo $id."[label=\"".$user."\"];\r";
}

//$userfocus is the Twitter screenname of the person we want to examine
$currUser=$userID[$userfocus];
 
//So who in the hashtagger list is following them?
$follower_det=$Twitter->getFollowers($userfocus, 'xml');
$p = xml_parser_create();
xml_parse_into_struct($p,$follower_det,$results,$index);
xml_parser_free($p);
foreach ($index['ID'] as $item){
	$follower=$results[$item]['value'];
	//print out edges in the Graphviz .dot format
	if (in_array($follower,$userID)) echo $follower."->".$currUser.";\r";
}

//And who in the hashtagger list are they following?
$friends_det=$Twitter->getFriends($userfocus, 'xml');
$p = xml_parser_create();
xml_parse_into_struct($p,$friends_det,$results,$index);
xml_parser_free($p);
foreach ($index['ID'] as $item){
	$followed=$results[$item]['value'];
	//print out edges in the Graphviz .dot format
	if (in_array($followed,$userID)) echo $currUser."->".$followed.";\r";
}

For completeness, here are the Twitter object methods and their associated Twitter API calls that were used in the above code:

function showUser($id,$format){
	$api_call=sprintf("http://twitter.com/users/show/%s.%s",$id,$format);
  	return $this->APICall($api_call, false);
}

function getFollowers($id,$format){
  	$api_call=sprintf("http://twitter.com/followers/ids/%s.%s",$id,$format);
 	return $this->APICall($api_call, false);
}
  
function getFriends($id,$format){
  	$api_call=sprintf("http://twitter.com/friends/ids/%s.%s",$id,$format);
 	return $this->APICall($api_call, false);
}

Running the code uses N+2 Twitter API calls, where N is the number of different users identified by the hashtagger pipe.

The output of the script is almost a minimal Graphviz .dot file. All that’s missing is the wrapper, e.g. something like: digraph twitterNet { … }. Here’s what a valid file looks like:
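A minimal sketch of such a file (the node IDs and screennames here are invented for illustration) might be:

```
digraph twitterNet {
	1234[label="exampleUserA"];
	5678[label="exampleUserB"];
	9012[label="exampleUserC"];
	1234->5678;
	5678->1234;
	9012->1234;
}
```

A file like this can then be rendered with the Graphviz command line tools, e.g. dot -Tpng twitterNet.dot -o twitterNet.png, or with circo for a circular layout.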

(The labels can appear either before or after the edges – it makes no difference as far as GraphViz is concerned.)

Plotting the graph will show you who the individual of interest is connected to, and how, in the particular hashtag community.

So for example, in the recent #ukoer community, here’s how LornaMCampbell is connected. First a ‘circular’ view:

[Image: ukoerInternalNetLMC2]

The arrow direction goes FROM one person TO a person they are following. In the circular diagram, it can be quite hard to see whether a connection is reciprocated or one way.

The Graphviz network diagram uses a separate edge for each connection and makes it easier to spot reciprocated links:

[Image: ukoerInternalNetLMC]

So, there we have it. Another way of looking at Twitter hashtag networks to go along with Preliminary Thoughts on Visualising the OpenEd09 Twitter Network, A Quick Peek at the IWMW2009 Twitter Network and More Thinkses Around Twitter Hashtag Networks: #JISCRI.

Open Educational Resources and the University Library Website

Being a Bear of Very Little Brain, I find it convenient to think of the users of academic library websites as falling into one of three ‘deliberate’ categories and one ‘by chance’ category:

– students (i.e. people taking a course);
– lecturers (i.e. people creating or supporting a course);
– researchers;
– folk off the web (i.e. people who Googled in who are none of the above).

The following Library website homepage (in this case, from Leicester) is typical:

…and the following options on the Library catalogue are also typical:

So what’s missing…?

How about a link to “Teaching materials”, or “open educational resources”?

After all, if you’re a lecturer looking to pull a new course together, or a student who’s struggling to make head or tail of the way one of your particular lecturers is approaching a particular topic, or a researcher who needs a crash course in a particular method or technique, maybe some lecture notes or course materials are exactly the sort of resource you need?

Trying to kickstart the uptake of open educational materials has not been as easy as might be imagined (e.g. On the Lack of Reuse of OER), but maybe this is because OERs aren’t as ‘legitimately discoverable’ as other academic resources.

If anyone using an academic library website can’t easily search educational resources in that context, what does that say about the status of those resources in the eyes of the Library?

Bearing in mind my crude list of user classes, and comparing them to the sorts of resources that academic libraries do try to support the discovery of, what do we find?

– the library catalogue returns information about books (though full text search is not available) and the titles of journals; it might also tap into course reading lists.
– the e-resources search provides full text search over e-book and journal content.

One of the nice features of the OU website search (not working for me at the moment: “Our servers are busy”, apparently…) is that it is possible to search OU course materials for the course you are currently on (if you’re a student) or across all courses if you are staff. A search over OpenLearn materials is also provided. However, I don’t think these course material searches are available from the Library website?

So here’s a suggestion for the #UKOER folk – see if you can persuade your library to start offering a search over OERs from their website (Scott Wilson at CETIS is building an OER aggregator that might help in this respect, and there are also initiatives like OER Commons).

And, err, as a tip: when they say they already do, a link to the OER Commons site on a page full of links to random resources, buried somewhere deep within the browsable bowels of the library website, doesn’t count. It has to be at least as obvious(?!), easy to use(?!) and prominent(?!?) as the current Library catalogue and journal/database searches…