WP_LE

And so it came to pass that the campus was divided.

The LMS had given way to the VLE and some little control was given over to the instructors that they might upload some of their own content to the VLE, yet woe betide any who tried to add their own embed codes or script tags, for verily it is evil and the devil’s own work…

And in the dark recesses of the campus, the student masses were mocked with paltry trifles thrown to them in the form of a simple blogging engine, that they might chat amongst each other and feel as if their voice was being heard…

But over time, the blogging engine did grow in stature until such a day that it was revealed in its fullest glory, and verily did the VLE cower beneath the great majesty of that which came to be known as the WP_LE…

…or something like that…

Three posts, from three players, who just cobbled together something that could well work at institutional scale…

  1. New digs for UMW Blogs, or the anatomy of a redesign: an “anatomy of the redesign of UMW Blogs” (WordPress MU), describing sitewide aggregation, tagclouds and all sorts of groovy stuff on the homepage, along with courses, support and contact pages;
  2. Reuse, resources, re-whatever…: showing how Mediawiki can now be used in all sorts of ways to feed wiki content into WordPress… (just think about it: this is the bliki concept working for real on two best-of-breed, open source platforms…);
  3. Batch adding users to a WordPress site: “import users into a site. All you need to provide is a username and email address for each student and it will create the account, generate a password, assign the specified user Role, and send an email to the student so they can login”…

So what do we have here? WordPress MU and Mediawiki working together to provide a sitewide, integrated publishing platform. The multi-user import “doesn’t create blogs for each student” but I think that’s something that could be fixed easily enough, if required…

Thus far, we’ve been pretty quiet here at the OU on the WordPress and Mediawiki front, although both platforms are used internally… but just before the summer, as one of the final OpenLearn projects, we got the folks over at Isotoma to put together a couple of WordPress and WordPress MU widgets.

Hopefully we’ll be making them available soon, along with some demo sites, but for now, here’s a tease of what we’ve pulled together.

Now you may or may not remember the Reverend’s edupunkery that resulted in Proud Spammer of Open University Courses, a demo of how to import an OpenLearn unit content RSS feed into a WordPress blog…?

Well we’ve run with that idea – and generalised it a little – so that you can take any of the OpenLearn topic/subject area feeds (that list a set of units in a particular topic) and set up each of the courses itemised in the list with its own WordPress MU blog. Automatically. At the click of a button. What this means is that if you want to create a collection of course unit blogs using OpenLearn units, you can do it in one go…
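In pseudo-recipe terms, the widget’s logic goes something like the sketch below (the topic feed URL and the create_unit_blog() helper are placeholder stand-ins for whatever the actual widget calls into WordPress MU, not the shipped code):

```python
# Sketch of the bulk course-blog creation idea: walk an OpenLearn topic feed
# and set up one WPMU blog per course unit. TOPIC_FEED_URL and
# create_unit_blog() are illustrative placeholders, not the real code.
import feedparser

def create_unit_blog(slug, title, unit_feed_url):
    """Hypothetical WPMU helper: create a blog for one course unit and
    import the unit's content RSS feed into it."""
    print("Would create blog '%s' (%s) fed from %s" % (slug, title, unit_feed_url))

topic_feed = feedparser.parse("TOPIC_FEED_URL")  # an OpenLearn topic/subject feed
for entry in topic_feed.entries:
    slug = entry.link.rstrip("/").split("/")[-1]  # crude blog slug from the unit URL
    create_unit_blog(slug, entry.title, entry.link)
```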

Now there are a few issues with some of the links that are pulled into the blogs from the OpenLearn feeds, and there’s some dodgy bits of script that need thinking about, but at the very least we now have a bulk spamming of OpenLearn courses tool… And if we can get a fix going with the imported, internal unit blog links, and maybe some automated blog tagging and categorising done at import time, then there is plenty of scope for emergent uncourse link mapping across and between OpenLearn WP MU course units…

Using separate WordPress MU blogs to publish unchanging “static” courses is one thing of course – the blog environment makes it easy to comment and publicly annotate each separate unit page. But compare these fixed, unchanging blog courses with how you might consume a blogged (un)course the first time it was presented… Assuming that pages were posted as they were written over the life of the course, you get each new section as a new post in your feed reader every day or two…

So step in an old favourite of mine – daily feeds. (Anyone remember the OpenLearn_daily experiment that would deliver an OpenLearn unit via a feed over several days, relative to the day you first subscribed to it?) Our second offering is a daily feeds widget for WordPress. Subscribe to a daily feed, and you’ll get one item a day from a static course unit blog in your feed reader, starting with the first item in the course unit on the first day.
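The logic of the daily feed is simple enough to sketch – the only state you need is the date each subscriber first turned up (the function and feed names below are illustrative, not the widget’s actual code):

```python
# Sketch of the daily feed logic: serve one item per day from a static course
# unit feed, relative to the subscriber's start date.
import feedparser
from datetime import date

def daily_items(feed_url, subscribed_on, today=None):
    today = today or date.today()
    days_elapsed = (today - subscribed_on).days
    entries = feedparser.parse(feed_url).entries
    entries.reverse()  # feeds list newest first; we want the course start first
    return entries[: days_elapsed + 1]  # day 0 gets just the first item

# Three days in, a subscriber sees the first four items of the unit:
# daily_items("http://example.wordpress.com/feed/", date(2008, 9, 1), date(2008, 9, 4))
```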

Taking the two widgets together, we can effectively create a version of OpenLearn in which each OpenLearn unit will be delivered via its own WP MU blog, and each unit capable of being consumed via a daily feed…

A couple of people have been trying out the widgets already, and if anyone else would like a “private release” copy of the code to play with before we post it openly, please get in touch….

Joining the Flow – Invisible Library Tech Support

Idling some thoughts about what to talk about in a session the OU Library* is running with some folks from Cambridge University Library services as part of an Arcadia Trust funded project there (blog), I started wondering about how info professionals in an organisation might provide invisible support to their patrons by joining in the conversation…

*err – oops; I mentioned the OU Library without clearing the text first; was I supposed to submit this post for censor approval before publishing it? ;-)

One way to do this is to comment on blog posts, as our own Tim Wales does on OUseful.info pages from time to time (when I don’t reply, Tim, it’s because I can’t add any more… but I’ll be looking out for your comments with an eagle eye from now on… ;-) [I also get delicious links for:d to me by Keren – who’s also on Twitter – and emailed links and news stories from Juanita on the TU120 course team.]

Another way is to join the twitterati…

“Ah”, you might say, “I can see how that would work. We set up @OULibrary, then our users subscribe to us and then when they want help they can send us a message, and we can get back to them… Cool… :-)”

Err… no.

The way I’d see it working would be for @OULibrary, for example, to subscribe to the OU twitterati and then help out when they can; “legitimate, peripheral, participatory support” would be one way of thinking about it…

Now of course, it may be that @OULibrary doesn’t want to be part of the whole conversation (at least, not at first…), but just the question asking parts…

In which case, part of the recipe might go something like this: use the advanced search form to find out the pattern for the cool URI that lets you search for “question-like” things from a particular user:

(Other queries I’ve found work well are searches for: ?, how OR when OR ? , etc.)

http://search.twitter.com/search?q=%22how%22+from%3Apsychemedia

The query gives you something like the above, including a link to an RSS feed for the search:

http://search.twitter.com/search.atom?q=how+%3F+from%3Apsychemedia
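Generating one of those search feed URLs per user is then just string fiddling – a sketch (bearing in mind search.twitter.com is the API of its day):

```python
# Build "question-like tweets" search feed URLs for a list of usernames,
# following the search.twitter.com pattern shown above.
from urllib.parse import quote

def question_feed_url(username, query="how OR when OR ?"):
    return "http://search.twitter.com/search.atom?q=" + quote(
        "%s from:%s" % (query, username))

for user in ["psychemedia"]:  # extend with the rest of the OU twitterati
    print(question_feed_url(user))
```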

So now what do we do? We set up a script that takes a list of the twitter usernames of OU folks – you know how to find that list, right? I took the easy way ;-)

Liam’s suggestion links to an XML stream of status messages from people who follow PlanetOU, so the set might be leaky and/or tainted, right, and include people who have nothing to do with the OU… but am I bovvered? ;-)

(You can see a list of the followers names here, if you log in:
http://twitter.com/planetou/followers)

Hmmm… a list of status messages from people who may have something to do with the OU… Okay, dump the search thing, how about this…

The XML feed of friends’ statuses appears to be open (at the moment), so just filter the status messages of friends of PlanetOU and hope that OU folks have declared themselves to PlanetOU? (Which I haven’t… ;-)

Subscribe to this and you’ll have a stream of questions from OU folks who you can choose to help out, if you want…

A couple of alternatives would be to take a list of OU folks’ Twitter names, and either follow them and filter your own friends stream for query terms, or generate search feed URLs for all of them (my original thought, above) and roll those feeds into a single stream…
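Rolling those per-user search feeds into a single stream might look something like this (a sketch; it assumes each item carries a parsed publication date):

```python
# Merge several per-user search feeds into one newest-first stream.
# Assumes entries expose a parsed date as published_parsed.
import feedparser
from time import mktime

def merged_stream(feed_urls):
    entries = []
    for url in feed_urls:
        entries.extend(feedparser.parse(url).entries)
    entries.sort(key=lambda e: mktime(e.published_parsed) if e.get("published_parsed") else 0,
                 reverse=True)
    return entries
```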

In each case, you have set up a channel through which the Library is invisibly asking “can I help you?”

Now you might think that libraries in general don’t work that way, that they’re “go to” services who help “lean forward” users, rather than offering help to “lean back” users who didn’t think to ask the library in the first place (err…..?), but I couldn’t possibly comment…

PS More links in to OU communities…

which leads to:

PPS (March 2011) Seems like the web has caught up: InboxQ

ORO Goes Naked With New ePrints Server

A few weeks ago, the OU Open Repository Online (“ORO”) had an upgrade to the new eprints server (breaking the screen scraping Visualising CoAuthors in Open Repository Online Papers demos I’d put together, sigh…).

I had a quick look at the time, and was pleased to see quite a bit of RSS support, as the FAQ describes:

Can I set up RSS feeds from ORO?
RSS feeds can be generated using search results.

To create a feed using a search on ORO:

Enter the search terms and click search. RSS icons will be displayed at the top of the search results. Right click the icon and click on Copy Shortcut. You can then paste the string into your RSS reader.

It is also possible to set up three types of RSS feed: by OU author, by department, and by the latest 20 additions to ORO.

To create a feed by OU author start with the following URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=person&value=author&output=RSS

Please note the capital “RSS” at the end of the string

Substitute the author’s OUCU for “author” and paste the new string into your RSS reader.

To create a feed by department start with this URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=faculty&value=math-math&output=RSS

Please note the capital “RSS” at the end of the string

This displays all research that relates to Maths (represented by the code “math-math”). To extract the other department codes used by ORO, go to the following URL:
http://oro.open.ac.uk/view/faculty_dept/faculty_dept.html
locate your department and note the URL (this will appear in the bottom left corner of the screen when you hover over the link). The departmental code is situated between “http://oro.open.ac.uk/view/faculty_dept/” and “.html”, e.g. “cobe”, “arts-musi”, etc. Copy the department code into the relevant part of the string and paste the string into an RSS reader.

To create a feed of the latest 20 additions to ORO use this URL:
http://oro.open.ac.uk/cgi/latest_tool?output=RSS

This feed can also be generated by right clicking on the RSS icons in the top right corner of the screen and choosing copy shortcut
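For convenience, the quoted URL patterns reduce to a couple of one-liners (the OUCU and department code values below are placeholders):

```python
# Helpers wrapping the ORO feed URL patterns quoted from the FAQ above.
BASE = "http://oro.open.ac.uk/cgi/latest_tool"

def author_feed(oucu):
    return "%s?mode=person&value=%s&output=RSS" % (BASE, oucu)

def department_feed(dept_code):
    return "%s?mode=faculty&value=%s&output=RSS" % (BASE, dept_code)

print(author_feed("abc123"))         # swap in a real author's OUCU
print(department_feed("math-math"))  # or another department code, e.g. "arts-musi"
```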

The previous version of eprints offered an OAI-PMH endpoint, which I haven’t found on the new setup, but there is lots of export and XML goodness for each resource lodged with the repository – at last, it’s gettin’ nekkid with its data, as a quick View Source of the HTML splash page for a resource shows:

Output formats include ASCII, BibTeX, EndNote, Refer, Reference Manager and HTML citations; a Dublin Core description of the resource; an EP3 XML format; METS and MODS (whatever they are?!); and an OpenURL ContextObject description.

The URLs to each export format are regularly defined and keyed by the numerical resource identifier (which also keys the URL to the resource’s HTML splash page).

The splash page also embeds resource description metadata in the head (although the HTML display elements in the body of the page don’t appear to be marked up with microformats, formal or ad hoc).
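For example, grabbing that head metadata is a few lines of scraping (a sketch – which meta tag names are actually present is an assumption here):

```python
# Sketch: scrape the resource description out of the <meta> tags in the head
# of an ORO splash page. The particular tag names present are an assumption.
from urllib.request import urlopen
from bs4 import BeautifulSoup

def splash_metadata(url):
    soup = BeautifulSoup(urlopen(url).read(), "html.parser")
    meta = {}
    for tag in soup.find_all("meta"):
        name, content = tag.get("name"), tag.get("content")
        if name and content:
            meta.setdefault(name, []).append(content)
    return meta
```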

This metadata availability makes it easy to create a page scraping Yahoo SearchMonkey app, as I’ll show in a later post…

ORO Results in Yahoo SearchMonkey

It’s been a long – and enjoyable – day today (err, yesterday – I forgot to post this last night!), so just a quick placeholder post that I’ll maybe elaborate on with techie details at a later date, to show one way of making some use of the metadata that appears in the ORO/eprints resource splash pages (as described in ORO Goes Naked With New ePrints Server): a Yahoo SearchMonkey ORO augmented search result – ORO Reference Details (OUseful).

The SearchMonkey extension, when “installed” in your Yahoo profile, will augment ORO results in organic Yahoo search listings with details about the publication the reference appears in, the full title (or at least, the first few characters of the title!), the keywords used to describe the reference, and the first author, along with links to a BibTeX reference and the document download. (I guess I could also add a link in there to a full HTML reference?)

The SearchMonkey script comes in two parts – a “service” that scrapes the page linked to from the results listing:

And a “presentation” part, that draws on the service to augment the results:

It’s late – I’m tired – so no more for now; if you’re interested, check out the Yahoo SearchMonkey documentation, or Build your own SearchMonkey app.

Figure:Ground – Mashing Up the PLE (MUPPLE’08) Links

After a nightmare journey, and a “no room at the inn, so walk round Maastricht all night looking for coffee shops” adventure, I fumbled and raced through a version of Figure:Ground – PLEs and the Flexible Learning Environment at MUPPLE’08 Workshop on Mash-Up Personal Learning Environments yesterday, and closed with a promise to post the presentation (such as it is) and some relevant links…

So here are the slides, (although I didn’t get round to annotating them, so they’re unlikely to make a lot of sense!):

And here are some links:

“Vision of a PLE” – a couple of people picked up on the “my PLE” image I used that included offline media and social context alongside the typical web app offerings; you can find the original here: Mohamed Amine Chatti: “My PLE/PKM”.

The OpenU’s OpenLearn open content site can be found at http://openlearn.open.ac.uk. Unlike many other open content sites, the content is published in the context of a Moodle online learning environment that users can join for free. As well as providing a user environment, OpenLearn also makes the content available in a variety of convenient packaging formats (print, Moodle export format, IMS packages, RSS, HTML pages) that allow the content to be taken away and reused elsewhere.

OpenLearnigg is a corank (Digg clone) site that pulls OpenLearn course unit URLs in via OpenLearn course listing RSS feeds, and then embeds the OpenLearn content within auto-generated course pages using a Grazr widget fed by OpenLearn unit full content feeds. OpenLearnigg thus uses OpenLearn syndication tools to mirror the content offerings of the OpenLearn site within a third party environment.

Something I didn’t mention was a pattern we’re developing for republishing with a click the OpenLearn content in WordPress environment (WP_LE). One of the widgets we have developed allows users to subscribe to “fixed” (i.e. unchanging) blog feeds and receive one item per day from the day they subscribe (which provides some all-important pacing for the user).

The MIT courseware refactoring as syndication feeds is described in An MIT OpenCourseWare Course via an OPML Feed and Disaggregating an MIT OpenCourseware Course into Separate RSS Feeds, where I show how the feeds can be used in a Grazr widget to provide a presentation environment for an MIT OER course. I seem to remember the feeds were all handcrafted… You can also find links to the demos from those posts.

The Yale opencourseware feedification story is briefly covered in Yale OpenCourseware Feeds, along with links to each level of the nested Yahoo pipes that do the scraping. RSS Feed Demo from Yale Open Courseware gives a quick review of how one of the pipes works.

The UC Berkeley Youtube video feeds/video courseware search are described in UCBerkeley Youtube Playlist Course Browser & Video Lecture Search and UC Berkeley Lectures on Youtube, via Grazr (the search part).

One of the aims of the MIT/Yale OPML feed doodles was roundtripping – taking an OER course site, generating feeds from it, and then recreating the site, but powered by the feeds. Getting a feel for the different sorts of feed that could be bundled together to give a ‘course experience’, by reverse engineering courses, is a stepping stone towards automatically generating some of those feeds using contextual searches, for example.

The Digital Worlds uncourse blog experiment explores using a hosted WordPress blog as a course authoring environment, and the appropriate use of tag and content feeds as delivery channels (the Visual gadgets uncourse blog does a similar thing using Blogger/Blogspot). Some of my reflections on the Digital Worlds creation process are in part captured in the weekly round-up posts that can be found here: OUseful 1.0 blog archive: Teaching and Learning posts. There’s also a presentation on the topic I gave to the OU CAL research group conference earlier this year: Digital Worlds presentation.

Stringle is my string’n’glue learning environment, as described in Stringle – Towards a String’n’Glue Learning Environment
(the URL structure is described here: StrinGLE URL “API”). Martin Weller also had a go at describing it: Stringle – almost a web 2.0 PLE?.

And the final link was to http://ouseful.info, which currently resolves here, at the OUseful.info blog: https://ouseful.wordpress.com.

PS The whole “figure:ground” thing comes from psychology/studies on visual perception, though it turns out that Marshall McLuhan also started using the phrase to capture a distinction between communication technologies (the “medium”, viewed as the figure) and the context they operate in (the ground). I keep dipping into odd bits of McLuhan’s writing (and some of them are very odd!) and this medium/context distinction is probably worth thinking through in a lot more detail with respect to “PLEs”.

What Google Thinks of the OU…

More and more search boxes now try to help the user out by making search completion recommendations if you pause awhile when typing query terms into a search box.

So here’s how you get helped out on Youtube:

And here’s what Google Suggest is offering on a default (not signed in) Google personal page:

Here’s Yahoo:

Google Insights for Search also provides some food for thought – a free tool you can run against any search terms that get searched on enough. So here, for example, is the worldwide report for searches on open university over the last 90 days:

Tunneling down to look at searches for open university from the UK, I notice quite a lot were actually looking for information about university open days… Hmmm… do we have a permanent “open day”-like web page up onsite anywhere, I wonder?

Let’s see – after all, the OU search engine never fails…

… to provide amusement…

Google comes up with:

Would it make sense, I wonder, to try to capitalise on the name of the university and pull traffic in to a landing page specifically designed to siphon off Google search traffic from students looking for open days at other universities? ;-)

“The Open University: where every day is a university open day. From Newcastle to Bristol, London to Leeds, Oxford to Cambridge, Birmingham to Edinburgh, Cardiff to Nottingham, why not pop in to your local regional Open University centre to see what Open University courses might be right for you?”, or somesuch?! Heh heh… :-)

Time to Build Trust With an “Open Achievements API”?

I had a couple of long dog walks today, trying to clear a head full of cold, and as I was wandering I started pondering a qualifications equivalent of something like the Google Health Data API; that is, a “Qualifications Data API” that could be used to share information about the qualifications you have achieved.

A little dig around turned up the Schools Interoperability Framework, a bloated affair that tries to capture all manner of schools related personal data, although that’s not to say that a subset of the SIF wouldn’t be appropriate for capturing and sharing qualifications. And all the security gubbins covered in the spec might provide a useful guide as to what could be expected when trying to actually build the API for real (the Google Health Data API also covers security and privacy issues).

I also came across an old mapping between various UK educational levels of attainment frameworks (UK Educational Levels (UKEL)), which I put to one side much as one might put aside a particularly distinctive jigsaw piece (under the assumption that any formal qualifications described in a qualifications data API could probably be usefully mapped to an appropriate, standardised attainment level); a similar thing at a European level (Bologna Process – Qualifications Framework and ECTS), which got me wondering whether the European Credit Transfer System (ECTS) has a standard XML format for recording qualifications attained by an individual; and a simple XML format for Uploading Qualifications and Statements of Attainment to the CQR from some Australian Training Agency or other:

Field Name          Description
RTONationalID       The Registered Training Organisation National Code. Either this National Code or the State Code below must be present and valid.
RTOStateID          The Registered Training Organisation State Code. Either this State Code or the National Code above must be present and valid.
CourseNationalID    The National Course Code for the Course the Student completed. Either this National Code or the State Code below must be present and valid.
CourseStateID       The State Course Code for the Course the Student completed. Either this State Code or the National Code above must be present and valid.
StudentID           The Student’s identity number / code. Optional.
StudentFirstName    The Student’s First Name. Required.
StudentLastName     The Student’s Last Name. Required.
StudentMiddleName   The Student’s Middle Name. Optional.
StudentDOB          The Student’s Date of Birth. Optional. Format is: DD-MMM-YYYY, e.g. 03-JAN-1976.
ContractID          ID for the Student’s Contract if apprentice or trainee. Optional. Format is: 9999/99.
ParchmentNo         A unique number / code that appears on the Parchment / Certificate. Optional.
IssueDate           Date the Qualification was Issued. Required. Format is: DD-MMM-YYYY, e.g. 27-MAR-2004.
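By way of illustration, a single record in that format might be built up as follows (the element names come from the field list above; the “Qualification” wrapper and overall document structure are my guesses, not part of the published spec):

```python
# Illustrative record in the CQR upload format. Field names are from the
# table above; the "Qualification" wrapper element is a guess.
import xml.etree.ElementTree as ET

record = ET.Element("Qualification")
for field, value in [
    ("RTONationalID", "90001"),
    ("CourseNationalID", "BSB40101"),
    ("StudentFirstName", "Jo"),
    ("StudentLastName", "Bloggs"),
    ("StudentDOB", "03-JAN-1976"),
    ("IssueDate", "27-MAR-2004"),
]:
    ET.SubElement(record, field).text = value

print(ET.tostring(record, encoding="unicode"))
```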

(This sort of thing would also naturally benefit from an association with details about a particular course pulled in from an XCRI course description…)

Looking at the above format, it struck me that a far more general “Open Achievements API” might actually be something quite useful. As well as describing formal awards, it could also optionally refer to informal achievements, or “trust measures” such as eBay seller rating, Amazon reviewer rank, World of Warcraft level or Grockit experience points.

In a sense, an Open Achievements API could complement the Google Open Social API with a range of claims a person might choose to make about themselves that could be verified to a greater or lesser degree. The Open Achievements API would therefore have to associate with each claimed achievement a “provenance”, which could range from “personal claim” through to some sort of identifier for securing an “official”, SSL-transported verification from the body that presumably awarded the claimed achievement (such as a particular formal qualification, for example).
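To make that concrete, a single claim might carry something like the following (every field name here is illustrative – there is no published spec, that being rather the point):

```python
# What one claim in a hypothetical Open Achievements API might carry.
# All field names and values are illustrative.
claim = {
    "holder": "urn:example:person:12345",
    "achievement": "eBay seller rating: 99.8% positive",
    "awarded_by": "ebay.com",
    "claimed_on": "2008-11-01",
    "provenance": {
        "level": "personal claim",  # through to "official", verified by the awarding body
        "verify_url": None,         # e.g. an SSL-served verification endpoint, if any
    },
}
```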

By complementing Open Social, the Open Achievements API would provide a transport mechanism for associating CV information within a particular profile, as well as personal and social information. If it was supported by informal learning environments, such as the School of Everything, OpenLearn, or SocialLearn, it would allow informal learners to badge themselves with a portable record of their learning achievements (much as OU students can do with the Course Profiles Facebook Application).

Qualification(s), Recognition and Credible, Personal Vouchsafes

Via Downes, today, a link to a Chronicle of Higher Ed story asking: “When Professors Print Their Own Diplomas, Who Needs Universities?”, which reports on the distribution of ‘personally guaranteed’ certificates by open educator David Wiley to participants who were not formally enrolled in, but were allowed to participate in (and were ‘unofficially’ graded on), an open course that ran last year.

Hopefully I’ll get a chance to ask David about that tomorrow, because I think this sort of ‘personal vouchsafe from a credible source’ could be a powerful ingredient in an “Open Achievements API”.

The post goes on:

But plenty of folks outside of higher education might jump in. Imagine the hosts of the TV show Myth Busters offering a course on the scientific method delivered via the Discovery Channel’s Web site. Or Malcolm Gladwell, author of the best-selling Tipping Point, teaching an online business course on The New Yorker’s site. Or a retired Nobel Prize winner teaching via a makeshift virtual classroom set up on her personal blog.

By developing credibility or ‘authority metrics’ (“AuthorityRank”?!) that reflect the extent to which there are legitimate grounds for an agent to ‘bestow an award’ on an individual on the grounds that the individual has demonstrated some competency or understanding in a particular area, we might be able to build a trust based framework for ‘qualifying’ an individual’s capabilities in a particular area with a given degree of confidence.

An Open Achievements API would provide a structure for declaring such achievements, and different ‘qualification platforms’ could compete on the efficacy of their authority ranking mechanisms in terms of positioning themselves as ‘high worth’ qualifying engines (cf. “good universities”).

It’s late, I’m tired, and I have no idea if this will make any sense to me in the morning…

eduTwitterin’

Jane’s list of “100+ (E-)Learning Professionals to follow on Twitter” (which includes yours truly, Martin and Grainne from the OpenU :-) has been doing the rounds today, so in partial response to Tony Karrer asking “is there an equivalent to OPML import for twitter for those of us who don’t want to go through the list and add people one at a time?”, I took an alternative route to achieving a similar effect (tracking those 100+ e-learning professionals’ tweets) and put together a Yahoo pipe to produce an aggregated feed – Jane’s edutwitterers pipe.

Scrape the page and create a semblance of a feed of the edutwitterers:

Tidy the feed up a bit and make sure we only include items that link to valid twitter RSS feed URLs (note that the title could do with a little more tidying up…) – the regular expression for the link creates the feed URL for each edutwitterer:

Replace each item in the edutwitterers feed with the tweets from that person:

From the pipe, subscribe to the aggregated edutwitterers’ feed.

Note, however, that the aggregated feed is a bit slow – it takes time to pull out tweets for each edutwitterer, and there is the potential for feeds being cached all over the place (by Yahoo Pipes, by your browser, or by whatever else you happen to view the pipe’s output feed in, and so on).

A more efficient route might be to produce an OPML feed containing links to each edutwitterer’s RSS feed, and then view this as a stream in a Grazr widget.

Creating the OPML file is left as an exercise for the reader (!) – if you do create one, please post a link as a comment or trackback… ;-) Here are three ways I can think of for creating such a file:

  1. add the feed URL for each edutwitterer as a separate feed in a Grazr reading list (How to create a Grazr (OPML) reading list). If you don’t like/trust Grazr, try OPML Manager;
  2. build a screenscraper to scrape the usernames and then create an output OPML file automatically (see the sketch after this list);
  3. view source of Jane’s original edutwitterers page, cut out the table that lists the edutwitterers, paste the text into a text editor and work some regular expression ‘search and replace’ magic (if you do this, how about posting your recipe/regular expressions somewhere?! ;-)
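As a starter for option 2, here’s a minimal sketch (the per-user RSS URL pattern is the one twitter.com exposed at the time):

```python
# Minimal OPML generator: one rss outline per edutwitterer, pointing at the
# per-user RSS feed twitter.com exposed at the time.
import xml.etree.ElementTree as ET

def edutwitterers_opml(usernames):
    opml = ET.Element("opml", version="1.0")
    ET.SubElement(ET.SubElement(opml, "head"), "title").text = "edutwitterers"
    body = ET.SubElement(opml, "body")
    for user in usernames:
        ET.SubElement(body, "outline", type="rss", text=user,
                      xmlUrl="http://twitter.com/statuses/user_timeline/%s.rss" % user)
    return ET.tostring(opml, encoding="unicode")

print(edutwitterers_opml(["psychemedia", "mweller", "gconole"]))
```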

Enough – time to start reading Presentation Zen

iTunes in Your Pocket… Almost…

Having been tipped off about a Netvibes page that the Library folks are pulling together about how to discover video resources (Finding and reusing video – 21st century librarianship in action, methinks? ;-), I thought I’d have a look at pulling together an OU iTunes OPML bundle that could be used to provide access to OU iTunes content in a Grazr widget (or my old RadiOBU OpenU ‘broadcast’ widget ;-), and maybe also act as a nice little container for viewing/listening to iTunes content on an iPhone/iPod Touch.

To find the RSS feed for a particular content area in iTunesU, navigate to the appropriate page (one with lists of actual downloadable content showing in the bottom panel), make sure you have the right tab selected, then right click on the “Subscribe” button and copy the feed/subscription URL (or is there an easier way? I’m not much of an iTunes user?):

You’ll notice in the above case that as well as the iPod video (mp4v format?), there is a straight video option (.mov???) and a transcript. I haven’t started to think about how to make hackable use of the transcripts yet, but in my dreams I’d imagine something like these Visual Interfaces for Audio/Visual Transcripts! ;-) In addition, some of the OU iTunesU content areas offer straight audio content.

Because finding the feeds is quite a chore (at least in the way I’ve described it above), I’ve put together an OU on iTunesU OPML file that bundles together all the separate RSS feeds from the OU on iTunesU area (to view this file in an OPML widget, try here: OU iTunesU content in a Grazr widget).

The Grazr widget lets you browse through all the feeds, and if you click on an actual content item link, it should launch a player (most likely QuickTime). Although the Grazr widget has a nice embedded player for MP3 files, it doesn’t seem to offer an embedded player for iTunes content (or maybe I’m missing something?).

You can listen to the audio tracks well enough in an iPod Touch (so the same is presumably true for an iPhone?) using the Grazr iphone widget – but for some reason I can’t get the iPod videos to play? I’m wondering if this might be a mime-type issue? or maybe there’s some other reason?

(By the by, it looks like the content is being served from an Amazon S3 server… so has the OU bought into using S3 I wonder? :-)

For completeness, I also started to produce a handcrafted OPML bundle of OU Learn Youtube playlists, but then discovered I’d put together a little script ages ago that will create one of these automatically, and route each playlist feed through a feed augmentation pipe that adds a link to each video as a video enclosure:

http://ouseful.open.ac.uk/xmltools/youtubeUserPlaylistsOPML.php?user=oulearn
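In outline, that script does something like this (a sketch: it reads the old YouTube GData playlists feed for a user and wraps each playlist feed URL in the augmentation pipe; PIPE_URL is a placeholder, not the real pipe address):

```python
# Sketch of the playlist-OPML script: list a user's playlists via the (now
# long gone) YouTube GData feed and emit one OPML outline per playlist,
# routing each playlist feed through the enclosure-adding pipe (placeholder).
import feedparser
import xml.etree.ElementTree as ET

PIPE_URL = "PIPE_URL_GOES_HERE?feed=%s"  # the feed augmentation pipe

def youtube_playlists_opml(user):
    feed = feedparser.parse(
        "http://gdata.youtube.com/feeds/api/users/%s/playlists" % user)
    opml = ET.Element("opml", version="1.0")
    body = ET.SubElement(opml, "body")
    for playlist in feed.entries:
        ET.SubElement(body, "outline", type="rss", text=playlist.title,
                      xmlUrl=PIPE_URL % playlist.link)
    return ET.tostring(opml, encoding="unicode")
```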

Why would you want to do this? Because if there’s a video payload as an enclosure, Grazr will provide an embedded player for you… as you can see in this screenshot of Portable OUlearn Youtube playlists widget (click through the image to play with the actual widget):

These videos will play in an iPod Touch, although the interaction is a bit clunky; it’s actually slightly cleaner using the handcrafted OPML: OUlearn youtube widget for iphone.

PS it’s also worth remembering that Grazr can embed Slideshare presentations, though I’m pretty sure these won’t work on the iPhone…