WP_LE

And so it came to pass that the campus was divided.

The LMS had given way to the VLE and some little control was given over to the instructors that they might upload some of their own content to the VLE, yet woe betide any who tried to add their own embed codes or script tags, for verily it is evil and the devil’s own work…

And in the dark recesses of the campus, the student masses were mocked with paltry trifles thrown to them in the form of a simple blogging engine, that they might chat amongst each other and feel as if their voice was being heard…

But over time, the blogging engine did grow in stature until such a day that it was revealed in its fullest glory, and verily did the VLE cower beneath the great majesty of that which came to be known as the WP_LE…

…or something like that…

Three posts, from three players, who just cobbled together something that could well work at institutional scale…

  1. New digs for UMW Blogs, or the anatomy of a redesign: an “anatomy of the redesign of UMW Blogs” (WordPress MU), describing sitewide aggregation, tagclouds and all sorts of groovy stuff on the homepage, along with courses, support and contact pages;
  2. Reuse, resources, re-whatever…: showing how Mediawiki can now be used in all sorts of ways to feed wiki content into WordPress… (just think about it: this is the bliki concept working for real on two best-of-breed, open source platforms…);
  3. Batch adding users to a WordPress site: “import users into a site. All you need to provide is a username and email address for each student and it will create the account, generate a password, assign the specified user Role, and send an email to the student so they can login”…

So what do we have here? WordPress MU and Mediawiki working together to provide a sitewide, integrated publishing platform. The multi-user import “doesn’t create blogs for each student” but I think that’s something that could be fixed easily enough, if required…

Thus far, we’ve been pretty quiet here at the OU on the WordPress and Mediawiki front, although both platforms are used internally… but just before the summer, as one of the final OpenLearn projects, we got the folks over at Isotoma to put together a couple of WordPress and WordPress MU widgets.

Hopefully we’ll be making them available soon, along with some demo sites, but for now, here’s a tease of what we’ve pulled together.

Now you may or may not remember the Reverend’s edupunkery that resulted in Proud Spammer of Open University Courses, a demo of how to import an OpenLearn unit content RSS feed into a WordPress blog…?

Well we’ve run with that idea – and generalised it a little – so that you can take any of the OpenLearn topic/subject area feeds (that list a set of units in a particular topic) and set up each of the courses itemised in the list with its own WordPress MU blog. Automatically. At the click of a button. What this means is that if you want to create a collection of course unit blogs using OpenLearn units, you can do it in one go…
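(For the curious, here’s a rough sketch of that workflow in Python – not the plugin code itself, which lives inside WordPress MU as PHP; feedparser is real, but create_course_blog() is a purely hypothetical stand-in for whatever actually registers a new WPMU blog.)

```python
# Illustrative sketch only: walk an OpenLearn topic/subject feed and set up
# one blog per course unit, populated from that unit's full content feed.
import feedparser

def create_course_blog(slug, title):
    """Hypothetical helper standing in for WPMU blog creation."""
    print(f"Would create blog '{slug}' titled '{title}'")
    return []  # pretend this is the new blog's post store

def republish_topic(topic_feed_url):
    topic = feedparser.parse(topic_feed_url)
    for unit in topic.entries:
        # Assumption: each entry links to (or lets us derive) the unit's
        # own full content RSS feed.
        posts = create_course_blog(slug=unit.get("id", unit.title),
                                   title=unit.title)
        for item in feedparser.parse(unit.link).entries:
            posts.append({"title": item.title,
                          "content": item.get("summary", "")})
```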

Now there are a few issues with some of the links that are pulled into the blogs from the OpenLearn feeds, and there are some dodgy bits of script that need thinking about, but at the very least we now have a bulk spamming of OpenLearn courses tool… And if we can get a fix going with the imported, internal unit blog links, and maybe some automated blog tagging and categorising done at import time, then there is plenty of scope for emergent uncourse link mapping across and between OpenLearn WP MU course units…

Using separate WordPress MU blogs to publish unchanging “static” courses is one thing of course – the blog environment makes it easy to comment and publicly annotate each separate unit page. But compare these fixed, unchanging blog courses with how you might consume a blogged (un)course the first time it was presented… Assuming that pages were posted as they were written over the life of the course, you get each new section as a new post in your feed reader every day or two…

So step in an old favourite of mine – daily feeds. (Anyone remember the OpenLearn_daily experiment that would deliver an OpenLearn unit via a feed over several days, relative to the day you first subscribed to it?) Our second offering is a daily feeds widget for WordPress. Subscribe to a daily feed, and you’ll get one item a day from a static course unit blog in your feed reader, starting with the first item in the course unit on the first day.
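(The “relative to the day you subscribed” behaviour is really just an offset calculation; here’s a minimal sketch of the idea, not the widget code itself.)

```python
# Minimal sketch of the "daily feed" idea: serve one item per day from a
# fixed list of posts, starting from the day the reader subscribed.
from datetime import date

def items_for_today(posts, subscribed_on, today=None):
    """posts: course unit items in order (first item first).
    Returns the items released so far for this subscriber."""
    today = today or date.today()
    days_in = (today - subscribed_on).days  # 0 on the day you subscribe
    released = max(0, min(days_in + 1, len(posts)))
    return posts[:released]

# e.g. on day three of a subscription you'd get the first three items
print(items_for_today(["Intro", "Section 1", "Section 2", "Section 3"],
                      subscribed_on=date(2008, 9, 1),
                      today=date(2008, 9, 3)))
```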

Taking the two widgets together, we can effectively create a version of OpenLearn in which each OpenLearn unit will be delivered via its own WP MU blog, and each unit capable of being consumed via a daily feed…

A couple of people have been trying out the widgets already, and if anyone else would like a “private release” copy of the code to play with before we post it openly, please get in touch….

ORO Goes Naked With New ePrints Server

A few weeks ago, the OU Open Repository Online (“ORO”) had an upgrade to the new eprints server (breaking the screen scraping Visualising CoAuthors in Open Repository Online Papers demos I’d put together, sigh…).

I had a quick look at the time, and was pleased to see quite a bit of RSS support, as the FAQ describes:

Can I set up RSS feeds from ORO?
RSS feeds can be generated using search results.

To create a feed using a search on ORO:

Enter the search terms and click search. RSS icons will be displayed at the top of the search results. Right click the icon and click on Copy Shortcut. You can then paste the string into your RSS reader.

It is also possible to set up three types of RSS feed, by OU author, by department and by the latest 20 additions to ORO.

To create a feed by OU author start with the following URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=person&value=author&output=RSS

Please note the capital “RSS” at the end of the string

Substitute author for the author’s OUCU and paste the new string into your RSS reader.

To create a feed by department start with this URL:

http://oro.open.ac.uk/cgi/latest_tool?mode=faculty&value=math-math&output=RSS

Please note the capital “RSS” at the end of the string

This displays all research that relates to Maths (represented by the code “math-math”). To extract the other department codes used by ORO, go to the following URL:
http://oro.open.ac.uk/view/faculty_dept/faculty_dept.html
Locate your department and note the URL (this will appear in the bottom left corner of the screen when you hover over the link). The departmental code is situated between “http://oro.open.ac.uk/view/faculty_dept/” and “.html”, e.g. “cobe”, “arts-musi”, etc. Copy the department code into the relevant part of the string and paste the string into an RSS reader.

To create a feed of the latest 20 additions to ORO use this URL:
http://oro.open.ac.uk/cgi/latest_tool&output=RSS

This feed can also be generated by right clicking on the RSS icons in the top right corner of the screen and choosing copy shortcut
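Taken together, those FAQ recipes are just string templates, so the feed URLs are easy enough to build programmatically; a quick sketch (parameter names exactly as given in the FAQ, the example OUCU is made up):

```python
# Build ORO RSS feed URLs following the patterns quoted in the FAQ above.
ORO_BASE = "http://oro.open.ac.uk/cgi/latest_tool"

def author_feed(oucu):
    # Feed of deposits by a given OU author, keyed by their OUCU
    return f"{ORO_BASE}?mode=person&value={oucu}&output=RSS"

def department_feed(dept_code):
    # dept_code comes from the faculty_dept listing page, e.g. "math-math",
    # "cobe", "arts-musi"
    return f"{ORO_BASE}?mode=faculty&value={dept_code}&output=RSS"

print(author_feed("xyz123"))        # hypothetical OUCU
print(department_feed("math-math"))
```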

The previous version of eprints offered an OAI-PMH endpoint, which I haven’t found on the new setup, but there is lots of export and XML goodness for each resource lodged with the repository – at last, it’s gettin’ nekkid with its data, as a quick View Source of the HTML splash page for a resource shows:

Output formats include ASCII, BibTeX, EndNote, Refer, Reference Manager and HTML citations; a Dublin Core description of the resource; an EP3 XML format; METS and MODS (whatever they are?!); and an OpenURL ContextObject description.

The URLs to each export format are regularly defined and keyed by the numerical resource identifier (which also keys the URL to the resource’s HTML splash page).

The splash page also embeds resource description metadata in the head (although the HTML display elements in the body of the page don’t appear to be marked up with microformats, formal or ad hoc).

This meta data availability makes it easy to create a page scraping Yahoo Searchmonkey app, as I’ll show in a later post…

ORO Results in Yahoo SearchMonkey

It’s been a long – and enjoyable – day today (err, yesterday, I forgot to post this last night!), so just a quick placeholder post, that I’ll maybe elaborate on with techie details at a later date, to show one way of making some use of the metadata that appears in the ORO/eprints resource splash pages (as described in ORO Goes Naked With New ePrints Server): a Yahoo SearchMonkey ORO augmented search result – ORO Reference Details (OUseful).

The SearchMonkey extension, when “installed” in your Yahoo profile, will augment ORO results in organic Yahoo search listings with details about the publication the reference appears in, the full title (or at least, the first few characters of the title!), the keywords used to describe the reference and the first author, along with links to a BibTeX reference and the document download (I guess I could also add a link in there to a full HTML reference?).

The SearchMonkey script comes in two parts – a “service” that scrapes the page linked to from the results listing, and a “presentation” part that draws on the service to augment the results.
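(The service code itself appeared as a screenshot in the original post; by way of illustration, here’s a rough Python equivalent of the extraction step it performs, assuming Dublin Core style meta tags in the splash page head – the exact tag names depend on how the eprints install is configured.)

```python
# Rough stand-in for the "service" step: pull resource description metadata
# out of the <head> of an ORO/eprints splash page. Tag names are an
# assumption; treat this as a sketch, not the SearchMonkey service itself.
import requests
from bs4 import BeautifulSoup

def scrape_splash_metadata(splash_url):
    soup = BeautifulSoup(requests.get(splash_url).text, "html.parser")
    meta = {}
    for tag in soup.find_all("meta"):
        name, content = tag.get("name"), tag.get("content")
        if name and content and name.lower().startswith(("dc.", "eprints.")):
            meta.setdefault(name, []).append(content)
    return meta  # e.g. DC.title, DC.creator, DC.date ... (if present)
```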

It’s late – I’m tired – so no more for now; if you’re interested, check out the Yahoo SearchMonkey documentation, or Build your own SearchMonkey app.

OpenLearn ebooks, for free, courtesy of OpenLearn RSS and Feedbooks…

A couple of weeks ago, I popped the Stanza ebook reader application on my iPod Touch (it’s been getting some good reviews, too: Phone Steals Lead Over Kindle). I didn’t add any ebooks to it, but it did come with a free sample book, so when I was waiting for a boat on my way home last week, I had a little play and came away convinced that I would actually be able to read a long text on it.

So of course, of course, the next step was to have a go at converting OpenLearn courses to an ebook format and see how well they turned out…

There are a few ebook converters out there, such as the Bookglutton API that will “accept HTML documents and generates EPUB files. Post a valid HTML file or zipped HTML archive to this url to get an EPUB version as the response” for example, so it is possible to upload a download(!) of an OpenLearn unit ‘print version’ (a single HTML page version of an OpenLearn unit) or upload the zipped HTML version of a unit (although in that case you have to meddle with the page names so they are used in the correct order when generating the ebook).
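(By way of a sketch, the upload step against an HTML-to-EPUB converter of that sort might look something like the following – the endpoint URL and form field name are placeholders, not the real Bookglutton API details, which its documentation supplies.)

```python
# Illustrative only: POST a single-page HTML "print version" of an OpenLearn
# unit to an HTML-to-EPUB conversion endpoint and save the response.
# EPUB_ENDPOINT is a placeholder -- substitute the real API URL.
import requests

EPUB_ENDPOINT = "https://example.com/api/epub"  # hypothetical

def html_to_epub(html_path, epub_path):
    with open(html_path, "rb") as f:
        resp = requests.post(EPUB_ENDPOINT, files={"file": f})
    resp.raise_for_status()
    with open(epub_path, "wb") as out:
        out.write(resp.content)  # response body is the generated EPUB

html_to_epub("openlearn_unit_print_version.html", "openlearn_unit.epub")
```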

The Stanza desktop app, free as a beta download at the moment but set to be (affordable) payware later this year, can also handle ePub generation (in fact, it will output an ebook in all manner of formats).

The easiest way I’ve found to generate ebooks though is, of course, feed powered :-) Sign up for an account with Feedbooks, click on the news icon (err…?!) and then add a feed (err…?!).

(Okay, so the interface is a little hard to navigate at times… No big obvious way to “Add feed here”, for example, that uses a version of the feed icon as a huge visual cue, but maybe that’ll come…)

Once the feed is added, it synchs and you have your ebook. So for example, here are a couple of Feedbooks powered by OpenLearn unit RSS feeds:

– RSS Feedbook ebook for the OpenLearn unit “Parliament and the law”: http://feedbooks.com/feed/6906.epub
– RSS Feedbook ebook for the OpenLearn unit “Introducing consciousness”: http://feedbooks.com/feed/6905.epub

Getting the ebook into Stanza on the iPod Touch/iPhone is also a little clunky at the moment, although once it’s there it works really well. Whilst there is a route directly to Feedbooks from the app (as well as feed powered ebooks, Feedbooks also acts as a repository for a reasonable selection of free ebooks that can be pulled into the iPhone Stanza app quite easily), the only way I could find to view my RSS powered feedbooks was to enter the URL; and on the iPod, the feedbook URLs were hard to find: logging in to my account on the Feedbooks site and clicking the ebook link just gave an error as the iPod tried to open a document format it couldn’t handle – and Safari wouldn’t show me the URL in the address bar (it redirected somewhere).

Anyway, user interface issues aside, the route to ebookdom for the OpenLearn materials is essentially a straightforward one – grab a unit content RSS feed, paste it into Feedbooks to generate an ePub book, and then view it in Stanza. The Feedbooks folks are working on extending their API too, so hopefully better integration within Stanza should be coming along shortly.

Once the feedbook has been synched to the Stanza iPhone app, it stays there – no further internet connection required. One neat feature of the app is that each book in your collection is bookmarked at the place you left off reading it, so you could have several OpenLearn units on the go at the same time, accessing them all offline, and being able to click back to exactly the point where you left it.

At the moment the ebooks that Feedbooks generates don’t contain images, so it might not be appropriate to try to read every OpenLearn unit as a Feedbooks ebook. There are also issues where units refer out to additional resources – external readings in the form of linked PDFs, or audio and video assets – but for simple, text-dominated units, the process works really well.

(I did wonder if Feedbooks replaced images from the OpenLearn units with their alt text, or transcluded any linked-to longdesc descriptions, but apparently not. No matter though, as it seems that many OpenLearn images aren’t annotated with description text…)

If you have an iPhone or iPod Touch, and do nothing else this week, get Stanza installed and have a play with Feedbooks…

Open Content Anecdotes

Reading Open Content is So, Like, Yesterday just now, the following bits jumped out at me:

Sometimes– maybe even most of the time– what I find myself needing is something as simple as a reading list, a single activity idea, a unit for enrichment. At those times, that often-disparaged content is pure gold. There’s a place for that lighter, shorter, smaller content… one place among many.

I absolutely agree that content is just one piece of the open education mosaic that is worth a lot less on its own than in concert with practices, context, artifacts of process, and actually– well, you know– teaching. Opening content up isn’t the sexiest activity. And there ain’t nothin’ Edupunk about it. But I would argue that in one way if it’s not the most important, it’s still to be ranked first among equals. Not just for reasons outlined above, but because for the most part educators have to create and re-create anew the learning context in their own environment. Artifacts from the processes of others– the context made visible– are powerful and useful additions that can invigorate one’s own practice, but I still have to create that context for myself, regardless of whether it is shared by others or not. Content, however, can be directly integrated and used as part of that necessary process. When all is said and done, neither content nor “context” stand on their own particularly well.

For a long time now, I’ve been confused about what ‘remixing’ and ‘reusing’ open educational content means in practical terms that will see widespread, hockey stick growth in the use of such material.

So here’s where I’m at… (err, maybe…?!)

Open educational content at the course level: I struggle to see the widespread reuse of courses as such – that is, one institution delivering another’s course; if someone from another institution wants to reuse our course materials (pedagogy built in!), we license it to them, for a fee. And maybe we also run the assessment, or validate it. It might be that some institutions direct their students to a pre-existing, open ed course produced by another institution where the former institution doesn’t offer the course; maybe several institutions will hook up together around specialist open courses so they can offer them to small numbers of their own students in a larger, distributed cohort, and as such gain some mutual benefit from bringing the cohort up to a size where it works as a community, or where it becomes financially viable to provide an instructor to lead students through the material.

For individuals working through a course on their own, it’s worth bearing in mind that most OERs released by “trad” HEIs are not designed as distance education materials, created with the explicit intention that they are studied by an individual at a remote location. The distance education materials we create at the OU often follow a “tutorial-in-print” model, with built-in pacing and “pedagogical scaffolding” in the form of exercises and self-assessment questions. Expecting widespread consumption of complete courses by individuals is, I think, unlikely. As with a distributed HEI cohort model, it may be that groups of individuals will come together around a complete course, and maybe even collectively recruit a “tutor”, but again, I think this could only ever be a niche play.

The next level of granularity down is what would probably have been termed a “learning object” not very long ago, and is probably called something like an ‘element’ or ‘item’ in a ‘learning design’, but which I shall call instead a teaching or learning anecdote (i.e. a TLA ;-); be it an exercise, a story, an explanation or an activity, it’s a narrative something that you can steal, reuse and repurpose in your own teaching or learning practice. And the open licensing means that you know you can reuse it in a fair way. You provide the context, and possibly some customisation, but the original narrative came from someone else.

And at the bottom is the media asset – an image, video, quote, or interactive that you can use in your own works, again in a fair way, without having to worry about rights clearance. It’s just stuff that you can use. (Hmmm, I wonder: if you think about a course as a graph, a TLA is a fragment of that graph (a set of nodes connected by edges), and a node (and maybe even an edge?) is an asset?)

The finer the granularity, the more likely it is that something can be reused. To reuse a whole course maybe requires that I invest hours of time on that single resource. To reuse a “teaching anecdote”, exercise or activity takes minutes. To drop a video or an image into my teaching means I can use it for a few seconds to illustrate a point, and then move on.

As educators, we like to put our own spin on the things we teach; as learners viewed from a constructivist or constructionist stance, we bring our own personal context to what we are learning about. The commitment required to teach, or follow, a whole course is a significant one. The risk associated with investing a large amount of attention in that resource is not trivial. But reusing an image, or quoting someone else’s trick or tip, that’s low risk… If it doesn’t work out, so what?

For widespread reuse of the smaller open ed fragments, then, we need to be able to find them quickly and easily. A major benefit of reuse is that a reused component allows you to construct your story quicker, because you can find readymade pieces to drop into it. But if the pieces are hard to find, then it becomes easier to create them yourself. The bargain is something like this:

if (quality of resource x fit with my story/time spent looking for that resource) < (quality of resource x fit with my story/time spent creating that resource), then I’m probably better off creating it myself…

(The “fit with my story” is the extent to which the resource moves my teaching or learning on in the direction I want it to go…)
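(Spelled out as a toy decision rule – purely to make the trade-off concrete, not a serious model:)

```python
# Toy formalisation of the find-vs-create bargain above: compare the
# "value per unit time" of reusing a found resource against making your own.
def better_to_create(quality_found, fit_found, search_time,
                     quality_made, fit_made, create_time):
    reuse_rate = (quality_found * fit_found) / search_time
    create_rate = (quality_made * fit_made) / create_time
    return create_rate > reuse_rate

# e.g. a perfect home-made diagram that takes an hour vs a good-enough one
# you expect to find in five minutes: reuse wins (prints False).
print(better_to_create(quality_found=0.7, fit_found=0.8, search_time=5,
                       quality_made=1.0, fit_made=1.0, create_time=60))
```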

And this is possibly where the “we need more OERs” argument comes in; we need to populate something – probably a search engine – with enough content so that when I make my poorly formed query, something reasonable comes back; and even if the results don’t turn up the goods with my first query, the ones that are returned should give me the clues – and the hope – that I will be able to find what I need with a refinement or two of my search query.

I’m not sure if there is a “flickr for diagrams” yet (other than flickr itself, of course), maybe something along the lines of O’Reilly’s image search, but I could see that being a useful tool. Similarly, a deep search tool into the slides on slideshare (or at least the ability to easily pull out single slides from appropriately licensed presentations).

Now it might be that any individual asset is only reused once or twice; and that any individual TLA is only used once or twice; and that any given course is only used once or twice; but there will be more assets than TLAs (because assets can be disaggregated from TLAs), and more TLAs than courses (because TLAs can be disaggregated from courses), so the “volume reuse” of assets summed over all assets might well generate a hockey stick growth curve?

In terms of attention – who knows? If a course consumes 100x as much attention as a TLA, and a TLA consumes 10x as much attention as an asset, maybe it will be the course level open content that gets the hockey stick in terms of “attention consumption”?
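(For what it’s worth, here’s a toy calculation – every number is invented – showing how the sums could play out across the three levels of granularity. The volume of reuse is dominated by assets almost by construction; whether total attention is too depends entirely on the numbers you assume.)

```python
# Toy numbers only: more items exist at finer granularity (courses
# disaggregate into TLAs, TLAs into assets), so with similar per-item reuse
# the total count of reuses is dominated by assets; total attention consumed
# can tip either way depending on the assumed ratios.
levels = {
    #            items, reuses per item, attention units per use
    "course": (    100,               2,                     100),
    "TLA":    (  1_000,               2,                      10),
    "asset":  ( 10_000,               2,                       1),
}
for name, (items, reuses, attention) in levels.items():
    print(name, "total reuses:", items * reuses,
          "total attention:", items * reuses * attention)
# courses:    200 reuses, 20,000 attention
# TLAs:     2,000 reuses, 20,000 attention
# assets:  20,000 reuses, 20,000 attention
# (with these particular toy numbers, total attention comes out equal)
```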

PS being able to unlock things at the “asset” level is one of the reasons why I don’t much like it when materials are released just as PDFs. For example, if a PDF is released as CC non-derivative, can I take a screenshot of a diagram contained within it and just reuse that? Or the working through of a particular mathematical proof?

PS see also “Misconceptions About Reuse”.

OpenLearn WordPress Plugins

Just before the summer break, I managed to persuade Patrick McAndrew to use some of his OpenLearn cash to get a WordPress MU plugin built that would allow anyone to republish OpenLearn materials across a set of WordPress Multi-User blogs. A second WordPress plugin was commissioned that would allow any learners happening by those blogs to subscribe to the courses using “Daily feeds” that deliver course material to them on a daily basis.

The plugins were coded by Greg Gaughan at Isotoma, and tested by Jim and D’Arcy, among others… (I haven’t acted on your feedback yet – sorry, guys… :-( ) For all manner of reasons, I didn’t post the plugins (partly because I wanted to do another pass on usability/pick up on feedback, but mainly because I wanted to set up a demo site first)… but I still haven’t done that… so here’s a link to the plugins anyway in case anyone fancies having a play over the next few weeks: OpenLearn WordPress Plugins.

I’ll keep coming back to this post – and the download page – to add in documentation and some of the thoughts and discussions we had about how to evolve the WPMU plugin workflow/UI etc, as well as the daily feeds widget functionality.

In the meantime, here’s the minimal info I gave the original testers:

The story is:
– one ‘openlearn republisher’ plugin, that will take the URL of an RSS feed describing OpenLearn courses (e.g. on the Modern Languages page, the RSS: Modern Languages feed), and suck those courses into WPMU, one WPMU blog per course, via the full content RSS feed for each course.

– one “daily feeds” widget; this can be added to any WP blog and should provide a ‘daily’ feed of the content from that blog, that sends e.g. one item per day to the subscriber from the day they subscribe. The idea here is if a WP blog is used as a content publishing sys for ‘static’, unchanging content (e.g. a course, or a book, where each chapter is a post, or a fixed length podcast series), users can still get it delivered in a paced/one-per-day fashion. This widget should work okay…

Here’s another link to the page where you can find the downloads: OpenLearn WordPress Plugins. Enjoy – all comments welcome. Please post a link back here if you set up any blogs using either of these two plugins.

Open University Podcasts on Your TV – Boxee App

Over the weekend, a submission went in from The Open University (in particular, from Liam Green-Hughes (dev), Dave Winter in Online Services (design), and some of the OU Comms team) to the Boxee application competition (UK’s Open University on boxee).

For those of you who haven’t come across Boxee, it’s an easy to use video on demand aggregator that turns your computer into a video appliance and lets you watch video content from a wide range of providers (including BBC iPlayer) on your TV. Liam’s been evangelising it for some time, as well as exploring how to get OU Podcasts into it via RSS’n’OPML feeds (An OU Podcast RSS feed for Boxee).

(For those of you who prefer to just stick with the Beeb, then the BBC iPlayer big screen version provides an interface optimised for use on your telly.)

As well as channeling online video services, and allowing users to wire in their own video and audio content via a feed, Boxee also provides a plugin architecture for adding additional services to your Boxee setup. The recent Boxee competition promoted this facility by encouraging developers to create new applications for it.

So what does the OU Podcasts Boxee app offer over and above a simple subscription to an OU podcasts feed?

A pleasing, branded experience, that’s what.

So for example, on installing the OU podcasts app (available from the Boxee App Box), an icon for it is added to your Internet Services applications.

Launching the application takes you to an OU podcasts browser that is organised along similar lines to the OU’s YouTube presence – that is, in terms of OU Learn, OU Research and OU Life content. The Featured content area also provides a mechanism for pushing editorially selected content to higher prominence. (Should this be the left-most, default option, I wonder, rather than the OU Learn channel?)

In the Research area, a single level of navigation exists, listing the various episodes available:

[Screenshot: OU Boxee app]

The more comprehensive Learn area organises content into topic-based themes/episode collections (listed in the right hand panel), with the episodes associated with a particular selected theme or collection displayed in the left hand panel. Selecting an episode in the left hand panel then reveals its description in the right hand panel (as in the screenshot above).

So for example, when we go to the OU Learn area, the Arts and Humanities episodes are listed in the left hand area (by default), and available collections in the right.

We can scroll down the collections and select one, Engineering for example:

Episodes in this collection are listed in the left hand panel, and further subcollections in the right hand panel (it all seems a little confusing to describe, but it actually seems to work okay… maybe?!;-)

Highlighting an actual episode then displays a description of it.

Selecting a program to play pops up a confirmation “play this” overlay, along with a link to further information for the episode:

Both audio and video content can be channeled to the service – selecting a video programme provides a full screen view of the episode, whilst audio is played within an audio player.

The “Read More” option provides a description of the episode, as well as social rating and recommendation options:

Finally, a search tool allows for content to be discovered using user-selected search terms.

If you search with an OU course code, and there is video on the OU podcasts site from the course, the search may turn that course related video up…

This wouldn’t be an OUseful post if I didn’t add my own 2p’s worth, of course, so what else would I have liked to have seen in this app? One thing that comes to mind is a seven day catch-up of OU co-pro content that has been broadcast on the BBC (or more generally, the ability to watch all OU co-pro content that is currently available on the BBC iPlayer). I developed a proof-of-concept demonstrator of how such a service might work on the web, or for the iPhone/iPod Touch (iPhone 7 Day OU Programme CatchUp, via BBC iPlayer), so under the assumption that the Boxee API can provide the hooks you need to be able to play iPlayer content, I’d guess adding this sort of functionality shouldn’t take Liam much more than half-an-hour?!;-)

I also wonder if the application can be used to preserve local state in the form of personalisation information? For example, could a user create their own saved searches – and by default, their own topic themed channels? Items in such a feed could also be nominally tagged with that search term back on a central server if, for example, a user watched an episode that had been retrieved using a particular search term all the way through?

To vote for the OU Boxee app, please go to: vote for your favorite apps, RSVP for the boxee event in SF.

PS the OU Podcasts app is not the only education related submission to the competition. There’s also OpenCourseWare on boxee, which provides a single point of entry to several video collections from some of the major US OCW projects.

PPS it also turns out that KMi have a developer who’s currently working on a range of mobile apps for the iPhone/iPod Touch, Android phones and so on. If any OU readers have ideas for compelling OU related mobile apps, you just may get lucky in getting it built, so post the idea as a comment to this post, or contact, err, erm, @stuartbrown, maybe?

PPPS Now I’m not sure how much time was spent on the app, but as the competition was only launched on May 5th, with a closing date of June 14th, it can’t have been that long, putting things like even the JISC Rapid Innovation (JISCRI) process to shame…?!;-)

PDFs Do Your Licensing For You…

PDF is not a portable DATA format

That is:

PDF, a digital form used to represent electronic documents, allows users to exchange and view the documents easily and reliably, independent of the environments in which they are created, viewed and printed, while preserving their content and visual appearance. [PDF Format Becomes ISO Standard]

No Derivative Works — You may not alter, transform, or build upon this work.

Open Educational Resources and the University Library Website

Being a Bear of Very Little Brain, I find it convenient to think of the users of academic library websites as falling into one of three ‘deliberate’ categories and one ‘by chance’ category:

– students (i.e. people taking a course);
– lecturers (i.e. people creating or supporting a course);
– researchers;
– folk off the web (i.e. people who Googled in who are none of the above).

The following Library website homepage (in this case, from Leicester) is typical:

…and the following options on the Library catalogue are also typical:

So what’s missing…?

How about a link to “Teaching materials”, or “open educational resources”?

After all, if you’re a lecturer looking to pull a new course together, or a student who’s struggling to make head or tail of the way one of your particular lecturers is approaching a particular topic, or a researcher who needs a crash course in a particular method or technique, maybe some lecture notes or course materials are exactly the sort of resource you need?

Trying to kickstart the uptake of open educational materials has not been as easy as might be imagined (e.g. On the Lack of Reuse of OER), but maybe this is because OERs aren’t as ‘legitimately discoverable’ as other academic resources.

If anyone using an academic library website can’t easily search educational resources in that context, what does that say about the status of those resources in the eyes of the Library?

Bearing in mind my crude list of user classes, and comparing them to the sorts of resources that academic libraries do try to support the discovery of, what do we find?

– the library catalogue returns information about books (though full text search is not available) and the titles of journals; it might also tap into course reading lists.
– the e-resources search provides full text search over e-book and journal content.

One of the nice features of the OU website search (not working for me at the moment: “Our servers are busy”, apparently…) is that it is possible to search OU course materials for the course you are currently on (if you’re a student) or across all courses if you are staff. A search over OpenLearn materials is also provided. However, I don’t think these course material searches are available from the Library website?

So here’s a suggestion for the #UKOER folk – see if you can persuade your library to start offering a search over OERs from their website (Scott Wilson at CETIS is building an OER aggregator that might help in this respect, and there are also initiatives like OER Commons).

And, err, as a tip: when they say they already do, a link to the OER Commons site on a page full of links to random resources, buried somewhere deep within the browsable bowels of the library website doesn’t count. It has to be at least as obvious(?!), easy to use(?!) and prominent(?!?) as the current Library catalogue and journal/database searches…