eduTwitterin’

Jane’s list of “100+ (E-)Learning Professionals to follow on Twitter” (which includes yours truly, Martin and Grainne from the OpenU :-) has been doing the rounds today, so in partial response to Tony Karrer asking “is there an equivalent to OPML import for twitter for those of us who don’t want to go through the list and add people one at a time?”, I took an alternative route to achieving a similar effect (tracking those 100+ e-learning professionals’ tweets) and put together a Yahoo pipe to produce an aggregated feed – Jane’s edutwitterers pipe

Scrape the page and create a semblance of a feed of the edutwitterers:

Tidy the feed up a bit and make sure we only include items that link to valid twitter RSS feed URLs (note that the title could do with a little more tidying up…) – the regular expression for the link creates the feed URL for each edutwitterer:

Replace each item in the edutwitterers feed with the tweets from that person:

From the pipe, subscribe to the aggregated edutwitters’ feed.
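The Pipes screenshots don't really reproduce here, but the scrape-and-rewrite steps above can be sketched in ordinary Python; the old-style per-user twitter RSS URL pattern is an assumption on my part, as is the sample page snippet:

```python
import re

def twitter_feed_urls(html):
    """Find twitter profile links in a page and rewrite each one as
    the (old-style) RSS feed URL for that user's timeline - roughly
    what the regex step in the pipe does."""
    usernames = re.findall(r'twitter\.com/(\w+)', html)
    # De-duplicate while preserving page order
    seen, feeds = set(), []
    for name in usernames:
        if name.lower() not in seen:
            seen.add(name.lower())
            feeds.append(
                "http://twitter.com/statuses/user_timeline/%s.rss" % name
            )
    return feeds

# A made-up fragment standing in for Jane's page
page = ('<a href="http://twitter.com/psychemedia">Tony</a> '
        '<a href="http://twitter.com/mweller">Martin</a>')
print(twitter_feed_urls(page))
```

The aggregation step would then just be a matter of fetching each of those feed URLs and merging the items by date.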

Note, however, that the aggregated feed is a bit slow – it takes time to pull out the tweets for each edutwitterer, and there is the potential for feeds being cached all over the place (by Yahoo Pipes, by your browser, or by whatever you happen to use to view the pipe's output feed, and so on).

A more efficient route might be to produce an OPML feed containing links to each edutwitterer’s RSS feed, and then view this as a stream in a Grazr widget.

Creating the OPML file is left as an exercise for the reader (!) – if you do create one, please post a link as a comment or trackback… ;-) Here are three ways I can think of for creating such a file:

  1. add the feed URL for each edutwitterer as a separate feed in a Grazr reading list (How to create a Grazr (OPML) reading list). If you don’t like/trust Grazr, try OPML Manager;
  2. build a screenscraper to scrape the usernames and then create an output OPML file automatically;
  3. view source of Jane’s original edutwitterers page, cut out the table that lists the edutwitterers, paste the text into a text editor and work some regular expression ‘search and replace’ magic; (if you do this, how about posting your recipe/regular expressions somewhere?!;-)
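For what it's worth, option 2 might be sketched along these lines (the usernames and reading-list title are illustrative, and again the old twitter RSS URL pattern is assumed):

```python
def opml_from_usernames(usernames, title="edutwitterers"):
    """Build a minimal OPML reading list pointing at each user's
    (old-style) twitter RSS feed."""
    outlines = "\n".join(
        '    <outline type="rss" text="%s" '
        'xmlUrl="http://twitter.com/statuses/user_timeline/%s.rss" />'
        % (u, u)
        for u in usernames
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<opml version="1.1">\n'
        '  <head><title>%s</title></head>\n'
        '  <body>\n%s\n  </body>\n'
        '</opml>' % (title, outlines)
    )

print(opml_from_usernames(["janeknight", "psychemedia"]))
```

The resulting file is exactly the sort of thing a Grazr widget can consume as a reading list.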

Enough – time to start reading Presentation Zen

Qualification(s), Recognition and Credible, Personal Vouchsafes

Via Downes, today, a link to a Chronicle of Higher Ed story asking: “When Professors Print Their Own Diplomas, Who Needs Universities?”, which reports on the distribution of ‘personally guaranteed’ certificates by open educator David Wiley to participants who were not formally enrolled in, but were allowed to participate in (and were ‘unofficially’ graded on), an open course that ran last year.

Hopefully I’ll get a chance to ask David about that tomorrow, because I think this sort of ‘personal vouchsafe from a credible source’ could be a powerful ingredient in an “Open Achievements API”.

The post goes on:

But plenty of folks outside of higher education might jump in. Imagine the hosts of the TV show Myth Busters offering a course on the scientific method delivered via the Discovery Channel’s Web site. Or Malcolm Gladwell, author of the best-selling Tipping Point, teaching an online business course on The New Yorker’s site. Or a retired Nobel Prize winner teaching via a makeshift virtual classroom set up on her personal blog.

By developing credibility or ‘authority metrics’ (“AuthorityRank”?!) that reflect the extent to which an agent has legitimate grounds to ‘bestow an award’ on an individual who has demonstrated some competency or understanding in a particular area, we might be able to build a trust-based framework for ‘qualifying’ an individual’s capabilities in that area with a given degree of confidence.

An Open Achievements API would provide a structure for declaring such achievements, and different ‘qualification platforms’ could compete on the efficacy of their authority ranking mechanisms in terms of positioning themselves as ‘high worth’ qualifying engines (cf. “good universities”).

It’s late, I’m tired, and I have no idea if this will make any sense to me in the morning…

Time to Build Trust With an “Open Achievements API”?

I had a couple of long dog walks today, trying to clear a head full of cold, and as I was wandering I started pondering a qualifications equivalent of something like the Google Health Data API; that is, a “Qualifications Data API” that could be used to share information about the qualifications you have achieved.

A little dig around turned up the Schools Interoperability Framework, a bloated affair that tries to capture all manner of schools-related personal data, although that’s not to say that a subset of the SIF wouldn’t be appropriate for capturing and sharing qualifications. And all the security gubbins covered in the spec might provide a useful guide as to what to expect when trying to actually build the API for real (the Google Health Data API also covers security and privacy issues).

I also came across an old mapping between various UK educational levels of attainment frameworks (UK Educational Levels (UKEL)), which I put to one side much as one might put aside a particularly distinctive jigsaw piece, on the assumption that any formal qualifications described in a qualifications data API could probably be usefully mapped to an appropriate, standardised attainment level. There is a similar thing at a European level (Bologna Process – Qualifications Framework and ECTS), which got me wondering whether the European Credit Transfer System (ECTS) has a standard XML format for recording the qualifications attained by an individual. And I found a simple XML format for Uploading Qualifications and Statements of Attainment to the CQR from some Australian training agency or other:

Field Name – Description

RTONationalID – The Registered Training Organisation National Code. Either this National Code or the State Code below must be present and valid.
RTOStateID – The Registered Training Organisation State Code. Either this State Code or the National Code above must be present and valid.
CourseNationalID – The National Course Code for the Course the Student completed. Either this National Code or the State Code below must be present and valid.
CourseStateID – The State Course Code for the Course the Student completed. Either this State Code or the National Code above must be present and valid.
StudentID – The Student’s identity number / code. Optional.
StudentFirstName – The Student’s First Name. Required.
StudentLastName – The Student’s Last Name. Required.
StudentMiddleName – The Student’s Middle Name. Optional.
StudentDOB – The Student’s Date of Birth. Optional. Format is: DD-MMM-YYYY, e.g. 03-JAN-1976.
ContractID – ID for the Student’s Contract if apprentice or trainee. Optional. Format is: 9999/99.
ParchmentNo – A unique number / code that appears on the Parchment / Certificate. Optional.
IssueDate – Date the Qualification was Issued. Required. Format is: DD-MMM-YYYY, e.g. 27-MAR-2004.

(This sort of thing would also naturally benefit from an association with details about a particular course pulled in from an XCRI course description…)
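To make that concrete, here is a rough sketch of serialising one such record as XML. The field names are taken from the table above, but the `<Qualification>` wrapper element is my own guess (the spec excerpt doesn't show the enclosing schema), and the example codes are purely illustrative:

```python
import xml.etree.ElementTree as ET

def qualification_record(fields):
    """Serialise one qualification as XML using the CQR field names.
    The <Qualification> wrapper element name is assumed - the actual
    schema isn't reproduced in the excerpt above."""
    rec = ET.Element("Qualification")
    for name, value in fields.items():
        ET.SubElement(rec, name).text = value
    return ET.tostring(rec, encoding="unicode")

record_xml = qualification_record({
    "RTONationalID": "90001",        # illustrative code only
    "CourseNationalID": "BSB51107",  # illustrative code only
    "StudentFirstName": "Jane",
    "StudentLastName": "Doe",
    "IssueDate": "27-MAR-2004",      # DD-MMM-YYYY, as required
})
print(record_xml)
```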

Looking at the above format, it struck me that a far more general “Open Achievements API” might actually be something quite useful. As well as describing formal awards, it could also optionally refer to informal achievements, or “trust measures” such as eBay seller rating, Amazon reviewer rank, World of Warcraft level or Grockit experience points.

In a sense, an Open Achievements API could complement the Google Open Social API with a range of claims a person might choose to make about themselves that could be verified to a greater or lesser degree. The Open Achievements API would therefore have to associate a “provenance” with each claimed achievement, ranging from “personal claim” through to some sort of identifier for securing an “official”, SSL-transported verification from the body that presumably awarded the claimed achievement (such as a particular formal qualification, for example).
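A single claim in such an API might look something like the following sketch; every field name here is made up for illustration (no such schema exists), but it shows the claim-plus-provenance shape I have in mind:

```python
import json

# A sketch of one claim in a hypothetical "Open Achievements API";
# all field names and values are invented for illustration.
achievement = {
    "claim": "BSc (Hons) Computing",
    "awarded_by": "The Open University",
    "date": "2008-06-30",
    "provenance": {
        # from "personal claim" up to an officially verified record
        "level": "official",
        "verify_url": "https://example.org/verify/abc123",  # hypothetical
    },
}

print(json.dumps(achievement, indent=2))
```

A "qualifying engine" would then compete on how trustworthy its verification of that provenance block is.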

By complementing Open Social, the Open Achievements API would provide a transport mechanism for associating CV information within a particular profile, as well as personal and social information. If it was supported by informal learning environments, such as the School of Everything, OpenLearn, or SocialLearn, it would allow informal learners to badge themselves with a portable record of their learning achievements (much as OU students can do with the Course Profiles Facebook Application).

What Google Thinks of the OU…

More and more search boxes now try to help the user out by making search completion recommendations if you pause awhile when typing query terms into a search box.

So here’s how you get helped out on Youtube:

And here’s what Google suggest is offering on a default (not signed in) Google personal page:

Here’s Yahoo:

Google Insights for Search, a free tool you can run against any search terms that get searched on enough, also provides some food for thought. So here for example is the worldwide report for searches on open university over the last 90 days:

Tunneling down to look at searches for open university from the UK, I notice quite a lot were actually looking for information about university open days… Hmmm… do we have a permanent “open day”-like web page up on the site anywhere, I wonder?

Let’s see – after all, the OU search engine never fails…

… to provide amusement…

Google comes up with:

Would it make sense, I wonder, to try to capitalise on the name of the university and pull traffic in to a landing page specifically designed to siphon off Google search traffic from students looking for open days at other universities? ;-)

“The Open University: where every day is a university open day. From Newcastle to Bristol, London to Leeds, Oxford to Cambridge, Birmingham to Edinburgh, Cardiff to Nottingham, why not pop in to your local regional Open University centre to see what Open University courses might be right for you?”, or somesuch?! Heh heh… :-)

Figure:Ground – Mashing Up the PLE (MUPPLE’08) Links

After a nightmare journey, and a “no room at the inn, so walk round Maastricht all night looking for coffee shops” adventure, I fumbled and raced through a version of Figure:Ground – PLEs and the Flexible Learning Environment at MUPPLE’08 Workshop on Mash-Up Personal Learning Environments yesterday, and closed with a promise to post the presentation (such as it is) and some relevant links…

So here are the slides, (although I didn’t get round to annotating them, so they’re unlikely to make a lot of sense!):

And here are some links:

“Vision of a PLE” – a couple of people picked up on the “my PLE” image I used that included offline media and social context alongside the typical web app offerings; you can find the original here: Mohamed Amine Chatti: “My PLE/PKM”.

The OpenU’s OpenLearn open content site can be found at http://openlearn.open.ac.uk. Unlike many other open content sites, the content is published in the context of a Moodle online learning environment that users can join for free. As well as providing a user environment, OpenLearn also makes the content available in a variety of convenient packaging formats (print, Moodle export format, IMS packages, RSS, HTML pages) that allow the content to be taken away and reused elsewhere.

OpenLearnigg is a corank (Digg clone) site that pulls OpenLearn course unit URLs in via the OpenLearn course listing RSS feeds, and then embeds the OpenLearn content within auto-generated course pages using a Grazr widget fed by the OpenLearn unit full content feeds. In this way, OpenLearnigg uses the OpenLearn syndication tools to mirror the content offerings of the OpenLearn site within a third party environment.

Something I didn’t mention was a pattern we’re developing for republishing OpenLearn content, at the click of a button, in a WordPress environment (WP_LE). One of the widgets we have developed allows users to subscribe to “fixed” (i.e. unchanging) blog feeds and receive one item per day from the day they subscribe (which provides some all-important pacing for the user).
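The pacing logic behind that widget can be sketched very simply; this isn't the actual WP_LE code, just the idea of serving item N of a fixed feed on day N of a subscription:

```python
from datetime import date

def todays_item(items, subscribed_on, today):
    """Serve one item per day from a fixed feed, starting from the
    subscriber's own start date - a sketch of the pacing idea, not
    the actual WP_LE widget code."""
    day = (today - subscribed_on).days
    if 0 <= day < len(items):
        return items[day]
    return None  # before subscription started, or feed exhausted

units = ["Unit 1", "Unit 2", "Unit 3"]
print(todays_item(units, date(2008, 9, 1), date(2008, 9, 2)))  # prints "Unit 2"
```

The nice property is that two subscribers who start a week apart each get the same sequence, each at their own pace.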

The MIT courseware refactoring as syndication feeds is described in An MIT OpenCourseWare Course via an OPML Feed and Disaggregating an MIT OpenCourseware Course into Separate RSS Feeds, where I show how the feeds can be used in a Grazr widget to provide a presentation environment for an MIT OER course. I seem to remember the feeds were all handcrafted… You can also find links to the demos from those posts.

The Yale opencourseware feedification story is briefly covered in Yale OpenCourseware Feeds, along with links to each level of the nested Yahoo pipes that do the scraping. RSS Feed Demo from Yale Open Courseware gives a quick review of how one of the pipes works.

The UC Berkeley Youtube video feeds/video courseware search are described in UCBerkeley Youtube Playlist Course Browser & Video Lecture Search and UC Berkeley Lectures on Youtube, via Grazr (the search part).

One of the aims of the MIT/Yale OPML feed doodles was roundtripping – taking an OER course site, generating feeds from it, and then recreating the site, but powered by the feeds. Getting a feel for the different sorts of feed that could be bundled together to give a ‘course experience’, by reverse engineering courses in this way, is a stepping stone towards automatically generating some of those feeds using contextual searches, for example.

The Digital Worlds uncourse blog experiment explores using a hosted WordPress blog as a course authoring environment, and the appropriate use of tag and content feeds as delivery channels (the Visual gadgets uncourse blog does a similar thing using Blogger/Blogspot). Some of my reflections on the Digital Worlds creation process are in part captured in the weekly round-up posts that can be found here: OUseful 1.0 blog archive: Teaching and Learning posts. There’s also a presentation on the topic I gave to the OU CAL research group conference earlier this year: Digital Worlds presentation.

Stringle is my string’n’glue learning environment, as described in Stringle – Towards a String’n’Glue Learning Environment
(the URL structure is described here: StrinGLE URL “API”). Martin Weller also had a go at describing it: Stringle – almost a web 2.0 PLE?.

And the final link was to http://ouseful.info, which currently resolves here, at the OUseful.info blog: https://ouseful.wordpress.com.

PS The whole “figure:ground” thing comes from psychology/studies of visual perception, though it turns out that Marshall McLuhan also started using the phrase to capture a distinction between communication technologies (the “medium”, viewed as the figure) and the context they operate in (the ground). I keep dipping in to odd bits of McLuhan’s writing (and some of them are very odd!), and this medium/context distinction is probably worth thinking through in a lot more detail with respect to “PLEs”.

ORO Results in Yahoo SearchMonkey

It’s been a long – and enjoyable – day today (err, yesterday, I forgot to post this last night!), so just a quick placeholder post, that I’ll maybe elaborate on with techie details at a later date, to show one way of making some use of the metadata that appears in the ORO/eprints resource splash pages (as described in ORO Goes Naked With New ePrints Server): a Yahoo SearchMonkey ORO augmented search result – ORO Reference Details (OUseful).

The SearchMonkey extension, when “installed” in your Yahoo profile, will augment ORO results in organic Yahoo search listings with details about the publication the reference appears in, the full title (or at least, the first few characters of the title!), the keywords used to describe the reference, and the first author, along with links to a BibTeX reference and the document download (I guess I could also add a link in there to a full HTML reference?)

The SearchMonkey script comes in two parts – a “service” that scrapes the page linked to from the results listing:

And a “presentation” part, that draws on the service to augment the results:

It’s late – I’m tired – so no more for now; if you’re interested, check out the Yahoo SearchMonkey documentation, or Build your own SearchMonkey app.

ORO Goes Naked With New ePrints Server

A few weeks ago, the OU Open Repository Online (“ORO”) had an upgrade to the new eprints server (breaking the screen scraping Visualising CoAuthors in Open Repository Online Papers demos I’d put together, sigh…).

I had a quick look at the time, and was pleased to see quite a bit of RSS support, as the FAQ describes:

Can I set up RSS feeds from ORO?
RSS feeds can be generated using search results.

To create a feed using a search on ORO:

Enter the search terms and click search. RSS icons will be displayed at the top of the search results. Right click the icon and click on Copy Shortcut. You can then paste the string into your RSS reader.

It is also possible to set up three types of RSS feed, by OU author, by department and by the latest 20 additions to ORO.

To create a feed by OU author start with the following URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=person&value=author&output=RSS

Please note the capital “RSS” at the end of the string

Substitute the author’s OUCU for author and paste the new string into your RSS reader.

To create a feed by department start with this URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=faculty&value=math-math&output=RSS

Please note the capital “RSS” at the end of the string

This displays all research that relates to Maths (represented by the code “math-math”). To extract the other department codes used by ORO, go to the following URL:
http://oro.open.ac.uk/view/faculty_dept/faculty_dept.html
locate your department and note the URL (this will appear in the bottom left corner of the screen when you hover over the link). The departmental code is situated between “http://oro.open.ac.uk/view/faculty_dept/” and “.html”, e.g. “cobe”, “arts-musi”, etc. Copy the department code into the relevant part of the string and paste the string into an RSS reader.

To create a feed of the latest 20 additions to ORO use this URL:
http://oro.open.ac.uk/cgi/latest_tool&output=RSS

This feed can also be generated by right clicking on the RSS icons in the top right corner of the screen and choosing copy shortcut
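The URL patterns quoted in the FAQ are regular enough to generate programmatically; a trivial sketch (the example OUCU is made up, and note the latest-additions URL uses “&” rather than “?”, exactly as given in the FAQ):

```python
BASE = "http://oro.open.ac.uk/cgi/latest_tool"

def author_feed(oucu):
    # substitute the author's OUCU into the value parameter
    return "%s?mode=person&value=%s&output=RSS" % (BASE, oucu)

def department_feed(dept_code):
    # department codes as used by ORO, e.g. "math-math", "cobe", "arts-musi"
    return "%s?mode=faculty&value=%s&output=RSS" % (BASE, dept_code)

# Latest 20 additions, copied verbatim from the FAQ above
LATEST_FEED = BASE + "&output=RSS"

print(department_feed("math-math"))
```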

The previous version of eprints offered an OAI-PMH endpoint, which I haven’t found on the new setup, but there is lots of export and XML goodness for each resource lodged with the repository – at last, it’s gettin’ nekkid with its data, as a quick View Source of the HTML splash page for a resource shows:

Output formats include ASCII, BibTeX, EndNote, Refer, Reference Manager and HTML citations; a Dublin Core description of the resource; an EP3 XML format; METS and MODS (whatever they are?!); and an OpenURL ContextObject description.

The URLs to each export format follow a regular pattern and are keyed by the numerical resource identifier (which also keys the URL of the resource’s HTML splash page).

The splash page also embeds resource description metadata in the head (although the HTML display elements in the body of the page don’t appear to be marked up with microformats, formal or ad hoc).

This metadata availability makes it easy to create a page scraping Yahoo SearchMonkey app, as I’ll show in a later post…
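In the meantime, here's a minimal sketch of pulling that head metadata out of a splash page with Python's standard library parser; the example head fragment and the Dublin Core field names in it are made up, but eprints splash pages carry metadata of just this shape:

```python
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from a page's
    <head> - the same data a SearchMonkey 'service' would scrape."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                # a name may repeat (e.g. multiple authors), so keep a list
                self.meta.setdefault(d["name"], []).append(d["content"])

# A cut-down, invented example head with eprints-style metadata
page = '''<head>
<meta name="DC.title" content="An Example Paper" />
<meta name="DC.creator" content="A. Author" />
</head>'''

scraper = MetaScraper()
scraper.feed(page)
print(scraper.meta)
```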