Open Course Production

Following a chat with Mark Surman of the Mozilla Foundation a week or two ago, I’ve been pondering a possible “flip” between:

a) the production of course materials as part of a (closed) internal process, primarily for use within a (closed) course in a particular institution, and then released under an open license (such as a Creative Commons license); and

b) the production of course materials in the open that are then:

i) pulled into the institution for use within a (closed) course; or

ii) used (or not) to support self-directed learning towards an assessment only award.

In the OU, the course production model can take a team of several academics (supported by a course manager, media project manager, editor, picture researcher, rights chasers, developers, artists, et al.) several years to produce a course that will then last for between five and ten years of presentation. In addition, handover of course materials may take place up to a year before the first presentation of the course. Course units are typically drafted by individual authors, and then passed for comment and critical reading to the rest of the course team. Typically, materials will pass through at least two drafts before final handover.

(After a little digging, and the help of @ostephens, I managed to track down some reports on how course production was managed in the early years of the OU: Course Production: Some Basic Problems, Course Production: Activities and Activity Networks, Course Production: Planning and Scheduling, Course Production: The Problem of Assessment, though I haven’t had a chance to read them yet…)

For the OU short course T151 Digital Worlds, the majority of the course team authored content was published as it was being written on a public WordPress blog (Digital Worlds Uncourse Blog); in the current version of the course, students are referred to that public content from within the VLE. (Note that the copyright and licensing of content on the public blog is left deliberately vague!)

Although the Digital Worlds content was written by a single author (me;-), the model was intended to support at the very least a team blog approach, or a distributed blog network authoring approach. Rather than authors writing large chunks of text and then passing them for comment to other course team members, the blogged approach encourages authors to: a) read along with what others are producing; b) create short chunks of material (500-800 words, typical blog post length) on a particular topic (probably linked to other posts on the topic) that are convenient to study in a single study session or interstitial learning break (cf. @lorcand on Interstitial reading); c) link out to related resources; d) act as a focus for trackbacks (passive related resource discovery) and comments that might influence the direction taken in future blog posts.

The use of WordPress as the blogging platform was deliberate, in part because of the wide support WordPress offers for RSS/Atom feed generation. By linking between posts, as well as tagging and categorising posts appropriately, a structure emerges that offers many different possible pathways through the content. RSS feeds with everything means that it’s then relatively straightforward to republish different pathways as apparently linear runs of content elsewhere, if required (e.g. as in an edufeedr environment, perhaps?)
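For example, because WordPress exposes feeds at predictable URLs, pulling out the posts along a tag or category defined pathway only takes a few lines of code. A minimal sketch, in Python with feedparser (the category slug here is made up):

```python
import feedparser

# WordPress exposes feeds at predictable URLs (/category/<slug>/feed/,
# /tag/<slug>/feed/), so every tag or category effectively defines its
# own subscribable pathway through the content.
BLOG = "https://digitalworlds.wordpress.com"

def pathway(kind, slug):
    """Fetch the posts along one tag- or category-defined pathway."""
    feed = feedparser.parse(f"{BLOG}/{kind}/{slug}/feed/")
    return [(entry.title, entry.link) for entry in feed.entries]

# e.g. republish everything filed under a (hypothetical) category:
for title, link in pathway("category", "game-design"):
    print(title, "->", link)
```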

Authoring content in a public forum – ideally under an open content license – means that content becomes available for re-use even as it is being drafted. By opening up comments, feedback can be solicited that allows content to be improved by updating blog posts, if necessary, as well as identifying topics or clarifications that can be addressed in separate backlinking blog posts. By opening up the production process, we make it far more likely that others will contribute to that process, helping shape and influence that content, than if we expect others to take openly licensed content as a large chunk and then produce openly licensed derived works as a result (i.e. forks?!)

In short: maybe we shouldn’t just be releasing content created in a closed process as Open Educational Resources (OERs); rather, we should be producing them in public using an open source production model?

As Cameron Neylon suggests in a critique of academic research publishing (It’s not information overload, nor is it filter failure: It’s a discovery deficit):

It is very easy to say there is too much academic literature – and I do. But the solution which seems to be becoming popular is to argue for an expansion of the traditional peer review process. To prevent stuff getting onto the web in the first place. This is misguided for two important reasons. Firstly it takes the highly inefficient and expensive process of manual curation and attempts to apply it to every piece of research output created. This doesn’t work today and won’t scale as the diversity and sheer number of research outputs increases tomorrow. Secondly it doesn’t take advantage of the nature of the web. The way to do this efficiently is to publish everything at the lowest cost possible, and then enhance the discoverability of work that you think is important. We don’t need publication filters, we need enhanced discovery engines. Publishing is cheap, curation is expensive whether it is applied to filtering or to markup and search enhancement.

Filtering before publication worked and was probably the most efficient place to apply the curation effort when the major bottleneck was publication. Value was extracted from the curation process of peer review by using it to reduce the costs of layout, editing, and printing through simply printing less. But it created new costs, and invisible opportunity costs where a key piece of information was not made available. Today the major bottleneck is discovery. …

The problem we have in scholarly publishing is an insistence on applying this print paradigm publication filtering to the web alongside an unhealthy obsession with a publication form, the paper, which is almost designed to make discovery difficult. If I want to understand the whole argument of a paper I need to read it. But if I just want one figure, one number, the details of the methodology then I don’t need to read it, but I still need to be able to find it, and to do so efficiently, and at the right time.

Currently scholarly publishers vie for the position of biggest barrier to communication. The stronger the filter the higher the notional quality. But being a pure filter play doesn’t add value because the costs of publication are now low. The value lies in presenting, enhancing, curating the material that is published.

And so on… (read the whole thing).

Maybe we need to think about educational materials in a similar way? By creating the materials in the open, we start to identify what the good stuff is, as well as being able to benefit from direct and relevant feedback from people who are interested in the topic because they discovered it by looking for it, or at least something like it. (For educators, if they think they are helping shape content, for example through commenting on it, they may be more likely to link back to it and direct their students to it because they have a stake in it, albeit weakly and possibly indirectly.)

In response to a call I put out on Twitter last night for links to work relating to the use of open source production models in course development, @mweller suggested that Andreas Meiszner‘s PhD work may be relevant here? “My PhD research is aimed at investigating the impact of the organizational structure and operational organization on ICT enriched education by conducting a comparative study between FLOSS (Free / Libre Open Source Software) communities and Higher Education Institutions (HEIs). This work will conduct a comparative study between FLOSS communities and HEIs. The primary unit of analysis is (i.) the organizational structure of FLOSS communities and HEIs, (ii.) the operational organization of FLOSS communities and HEIs and (iii.) the learning process, outcome and environment in FLOSS communities and HEIs.”

(These are also relevant, I think? OSS-Watch briefings on Community source vs open source and The community source development model.)

By placing content out in the open, we also provide a stepping stone towards producing “assessment only” courses. By decoupling the teaching/learning content from the assessment, we can offer assessment only products (such as derivatives of the OU’s APEL containers, maybe?) that assess students based on their informal study of our open materials. (I’m not sure if any courses are yet assessing students who have studied materials placed on OpenLearn?) Once mechanisms are in place for writing robust assessments under the assumption that students will have been drawing at least in part on the study of open OU materials, we can maybe start to be more flexible in assessing students who have made use of other OERs (or indeed, any resources that they have been able to use to further their understanding of a topic).

Just by the by, it’s also worth noting that decoupling of assessment from teaching at the degree level is in the air at the moment (e.g. New universities could teach but not test for degrees, says Vince Cable) …

Related: an old and confused post about what happens when content on the inside is opened up to the outside so that folk from the inside can work on it on the outside using all their skills from the inside but not having to adhere to any of its constraints… Innovating from the Inside, Outside

Confluence in My Feed Reader – The Side Effects of Presenting

Don’tcha just love it when complementary posts happen along within a day or two of each other? Earlier this week, Martin posted on the topic of Academic output as collateral damage, suggesting that “you can view higher education as a long tail content production system. And if you are producing this stuff as a by-product of what you do anyway then a host of new possibilities open up. You can embrace unpredictability”.

And then today, other Martin comes along with a post – Presentation: Twitter for in-class voting and more for ESTICT SIG – linking to a recording of a presentation he gave yesterday, one that includes Twitter backchannel captions from tweets broadcast by the presentation itself, as well as by the (potentially extended/remote) audience.

Brilliant… I love it… I’m pretty much lost for words…

Just… awesome…

What we have here, then, is the opening salvo in a presentation capture and amplification strategy where the side effects of the presentation create a legacy in several different dimensions – an audio-visual record, for after the fact; a presentation that announces its own state to a potentially remote Twitter audience, and that in turn can drive backchannel activity; a recording of the backchannel, overlaid as captions on the video recording; and a search index that provides timecoded results from a search based on the backchannel and the tweets broadcast by the presentation itself. (If nothing else, capturing just the tweets from the presentation provides a way of deep searching in time into the presentation).

Amazing… just amazing…
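To show how little machinery the captioning side of this needs, here’s a minimal sketch (in Python, with made-up tweets and timings) that replays timestamped backchannel tweets as SRT subtitles over a recording:

```python
from datetime import datetime, timedelta

# Align timestamped backchannel tweets with the start of the recording
# and emit them as SRT subtitles. The tweets below are illustrative.
start = datetime(2010, 2, 4, 14, 0, 0)  # assumed recording start time
tweets = [
    (datetime(2010, 2, 4, 14, 3, 12), "Slide 2: in-class voting via hashtags"),
    (datetime(2010, 2, 4, 14, 7, 45), "Q from the room: does this scale?"),
]

def srt_time(td: timedelta) -> str:
    s = int(td.total_seconds())
    return f"{s // 3600:02}:{s % 3600 // 60:02}:{s % 60:02},000"

for i, (when, text) in enumerate(tweets, 1):
    offset = when - start
    print(i)  # caption number
    print(srt_time(offset), "-->", srt_time(offset + timedelta(seconds=5)))
    print(text)  # show each tweet as a caption for five seconds
    print()
```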

Will Digital Scholarship be Reflected in the New World University Rankings?

In what looks like quite a lazily produced article (isn’t cut and paste wonderful?;-), Firm foundations for global comparisons, the THES has reported that Thomson Reuters are to start working on a new global database to underpin international university ranking tables.

An Open Letter to Administrators from Thomson Reuters states:

Our aim with the GLOBAL INSTITUTIONAL PROFILES PROJECT [no need to SHOUT;-) – Ed.] … is to develop a data source that provides the best informed and most effective resource to build profiles of universities and research-based institutions around the world.

As someone “quoted” in the THES article (or rather, in the Thomson Reuters press release it appears to draw on) put it:

“There is a need for robust, dynamic, and above all transparent and verifiable data on scholarly performance to reshape how administrators approach institutional comparisons.”

So, I wonder… are the new rankings going to include factors at an institutional level that reflect on the digital scholarly activity of a university, such as the blogging activity of its researchers (JISC projects increasingly expect project blogs to report regularly on project activity, for example), or things like traffic numbers for institutional Youtube or iTunes channels?

And will the rankings reflect teaching and student support activity, as well as research? Will having vibrant online communities or institution related Facebook apps with thousands of installs be recognised (not that anyone liked Course Profiles, though that’s presumably because we didn’t have a budget holder spend a huge amount on it and then have to chase internal glory payback to justify the expense…;-), or an institution’s engagement with publishing other forms of open educational resources (OERs)?

If you want to participate in the GIPP (isn’t that slang for vomit? Or maybe that’s gip..?;-), you may be able to find a way here: GIPP; after all, they do say:

Researcher engagement is critical to ensuring this new initiative delivers what the industry has long been asking for—an accurate representation of the institutional landscape, from the source. … [T]he need for researcher participation and completed surveys remains constant.

PS Martin – has any of your digital scholarship (prezi seasick…bleurghhh) work considered metrics at the institutional level, as well as the personal level?

Open Training Resources

Some disconnected thoughts about who gives a whatever about OERs, brought on in part by @liamgh’s Why remix an Open Educational Resource? (see also this 2 year old post: So What Exactly Is An OpenLearn Content Remix?). A couple of other bits of context too, to situate HE in the wider landscape of educational broadcasting:

Trust partially upholds fair trading complaints against the BBC: “BESA appealed to the Trust regarding three of the BBC’s formal learning offerings on bbc.co.uk between 1997 and 2009. … the Trust considers it is necessary for the Trust to conduct an assessment of the potential competitive impacts of Bitesize, Learning Zone Broadband and the Learning Portal, covering developments to these offerings since June 2007, and the way in which they deliver against the BBC’s Public Purposes. This will enable the Trust to determine whether the BBC Executive’s failure to conduct its own competitive impact assessment since 2007 had any substantive effect. … No further increases in investment levels for Bitesize, Learning Zone Broadband and the Learning Portal will be considered until the Trust has completed its competitive impact assessment on developments since 2007.”

Getting nearer day by day: “We launched a BBC College of Journalism intranet site back in January 2007 … aimed at the 7,500 journalists in the BBC … A handful of us put together about 1200 pages of learning – guides, tips, advice – and about 250 bits of video; a blog, podcasts, interactive tests and quizzes and built the tools to deliver them. A lot of late nights and a lot of really satisfying work. Satisfying, too, because we put into effect some really cool ideas about informal learning and were able to find out how early and mid career journalists learn best. … The plan always was to share this content with the people who’d paid for it – UK licence fee payers. And to make it available for BBC journalists to work on at home or in parts of the world where a www connection was more reliable than an intranet link. Which is where we more or less are now.” [my emphasis; see also BBC Training and Development]

And this: Towards Vendor Certification on the Open Web? Google Training Resources

So why my jaded attitude? Because I wonder (again) what it is we actually expect to happen to these OERs. (How many OER projects re-use other peoples’ bids to get funding? How many reuse each others’ ‘what are OERs’ stuff? How many OER projects ever demonstrate a remix of their content, or a compelling reuse of it? How many publish their sites as a wiki so other people can correct errors? How many are open to public comments, ffs? How many give a worked example of any of the twenty items on Liam’s list with their content, and how many of them mix in other people’s OER content if they ever do so? How many attempt to publish running stats on how their content is being reused, and how many demonstrate showcase examples of content remix and reuse?)

That said, there are signs of some sort of use: ‘Self-learners’ creating university of online; maybe the open courseware is providing a discovery context for learners looking for specific learning aids (or educators looking for specific teaching aids)? That is, while use might be most likely at the disaggregated level, discovery will be mediated through course level aggregations (the wider course context providing the SEO, or discovery metadata, that leads to particular items being discovered? Maybe Google turns up the course, and local navigation helps (expert) users browse to the resource they were hoping to discover?)

Early days yet, I know, but how much of the #ukoer content currently being produced will be remixed with, or reused alongside, content from other parts of that project as part of end-of-project demos? (Of course, if reuse/remix isn’t really what you expect, then fine… and, err, what are you claiming, exactly? Simple consumption? That’s fine, but say it; limit yourself to that…)

Ok, rant part over. Deep breath. Here comes another… as academics, we like to think we do the education thing, not the training thing. But for those of you who do learn new stuff, maybe every day, what do you find most useful to support that presumably self-motivated learning? For my own part, I tend to search for tutorials, and maybe even use How Do I?. That is, I look for training materials. A need or a question frames the search, and then being able to do something, make something, get my head round something enough to be able to make use of it, or teach it on, frames the admittedly utilitarian goal. Maybe that ability to look for those materials is a graduate level information skill, so it’s something we teach, right…? (Err… but that would be training…?!)

So here’s where I’m at – OERs are probably [possibly?] not that useful. But open training materials potentially are. (Or maybe not..?;-) Here are some more: UNESCO Training Platform

And so is open documentation.

They probably all could come under the banner of open information resources, but they are differently useful, and differently likely to be reused/reusable, remixed/remixable, maintained/maintainable or repurposed/repurposeable. Of them all, I suspect that the opencourseware subset of OERs is the least re*.

That is all…

Discuss…

Open Educational Resources and the University Library Website

Being a Bear of Very Little Brain, I find it convenient to think of the users of academic library websites falling into one of three ‘deliberate’ and one ‘by chance’ categories:

– students (i.e. people taking a course);
– lecturers (i.e. people creating or supporting a course);
– researchers;
– folk off the web (i.e. people who Googled in who are none of the above).

The following Library website homepage (in this case, from Leicester) is typical:

…and the following options on the Library catalogue are also typical:

So what’s missing…?

How about a link to “Teaching materials”, or “open educational resources”?

After all, if you’re a lecturer looking to pull a new course together, or a student who’s struggling to make head or tail of the way one of your particular lecturers is approaching a particular topic, or a researcher who needs a crash course in a particular method or technique, maybe some lecture notes or course materials are exactly the sort of resource you need?

Trying to kickstart the uptake of open educational materials has not been as easy as might be imagined (e.g. On the Lack of Reuse of OER), but maybe this is because OERs aren’t as ‘legitimately discoverable’ as other academic resources.

If anyone using an academic library website can’t easily search educational resources in that context, what does that say about the status of those resources in the eyes of the Library?

Bearing in mind my crude list of user classes, and comparing them to the sorts of resources that academic libraries do try to support the discovery of, what do we find?

– the library catalogue returns information about books (though full text search is not available) and the titles of journals; it might also tap into course reading lists.
– the e-resources search provides full text search over e-book and journal content.

One of the nice features of the OU website search (not working for me at the moment: “Our servers are busy”, apparently…) is that it is possible to search OU course materials for the course you are currently on (if you’re a student) or across all courses if you are staff. A search over OpenLearn materials is also provided. However, I don’t think these course material searches are available from the Library website?

So here’s a suggestion for the #UKOER folk – see if you can persuade your library to start offering a search over OERs from their website (Scott Wilson at CETIS is building an OER aggregator that might help in this respect, and there are also initiatives like OER Commons).

And, err, as a tip: when they say they already do, a link to the OER Commons site on a page full of links to random resources, buried somewhere deep within the browsable bowels of the library website doesn’t count. It has to be at least as obvious(?!), easy to use(?!) and prominent(?!?) as the current Library catalogue and journal/database searches…

Single Page RSS Feeds – So What? So this…

Having posted about Single Item RSS Feeds on WordPress blogs: RSS For the Content of This Page, it struck me that whilst this facility might be of interest to a very, very select few, most people would probably have the response: so what?

To answer that question, it might help if I let you into a little secret: I’m not really that into content, open educational or otherwise. What I am interested in is how content can flow around the web, and how it can be re-presented in different ways and different places around the web by different people, all pulling on the same source.

So if we consider single page RSS feeds, what this means is that I can re-present the content of any of my WordPress blogged posts anywhere that accepts RSS. So for example, I could view just that post as a Wordle generated word cloud, or subscribe to the RSS version of a single blog post on a Netvibes page (maybe along with other related posts):

and view the post in that location:

(At the moment not many other platforms appear to offer single page RSS feeds. I was hopeful that the Guardian might, because they have quite a well developed feed platform, but I couldn’t find a way to grab a single page feed trivially from a page URI :-( )

To see why that might be useful, you need to know another of my little secrets. I don’t really think of RSS feeds being used to transport new content, such as the latest posts from the many blogs I still subscribe to. For sure, they can be used for that purpose, and a great many RSS readers are set up to accommodate that sort of use (only showing you feed items you haven’t already read, for example), but that is a special case. The more general case is simply that feeds are used to transport content that has quite a simple structure around the web. And this content might be fixed, static, immutable. That is, the content of the feed might never change once the feed has been created, as in the case of OpenLearn course unit full content RSS feeds.

AS AN ASIDE… I generally think of RSS feeds as providing a way of transporting simple content “items” around where each item has a quite simple structure:

If you think of a blog post or news article as an item, the title is hopefully obvious (the title of the post/article), the description is the content “body” of the item (e.g. the text content of the news article) and the link is the URL of where that post or article can be found on the web. The other elements are optional: what I refer to as annotations correspond to things like latitude and longitude co-ordinates that can be used to add geographical information to the item so that it can be plotted on a map for example; and what I term a payload would be something like an audio file that gets delivered when you subscribe to an RSS podcast feed from somewhere like iTunes or IT Conversations.
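If you want to poke at that structure directly, a library like feedparser makes the mapping explicit. A minimal sketch (the geo keys will only be populated if the feed actually carries that annotation):

```python
import feedparser

# Inspect the simple item structure described above: title, link,
# description, plus optional annotations and payloads.
feed = feedparser.parse("https://ouseful.wordpress.com/feed/")
for item in feed.entries[:3]:
    print("title:      ", item.title)
    print("link:       ", item.link)
    print("description:", item.summary[:60], "…")
    # annotations, e.g. geo coordinates (None if the feed has none)
    print("geo:        ", item.get("geo_lat"), item.get("geo_long"))
    # payloads, e.g. podcast audio delivered as enclosures
    print("enclosures: ", [e.href for e in item.get("enclosures", [])])
```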

Once you start viewing RSS feeds as a general transport mechanism, then you start to see the world in a slightly different way… So for example: the Single Item RSS Feeds post (https://ouseful.wordpress.com/2009/07/08/single-item-rss-feeds-on-wordpress-blogs-rss-for-the-content-of-this-page/) reveals how to create single item RSS feeds from the URL of a blog post hosted on WordPress. Now if I bookmark a series of WordPress hosted blog posts to somewhere like the delicious.com social bookmarking site, and tag them all in the same way, I can get an RSS feed out that contains a list of posts that can be obtained in XML form (that is, as single item RSS feeds).

Hmmm….

So maybe if I find a series of posts from WordPress blogs all over the world on a particular topic, I can create my own custom RSS feed of those posts that I can use as the basis of a reading list, for example, or to feed a Netvibes page on a particular topic, or even to feed an RSS2PDF service*?

* these needn’t be really horrible and divisive… For example, the Feedjournal service will take in an RSS feed and produce a rather nice looking newspaper version of your feed… ;-)

Now it just so happens, I’ve prepared one of these earlier. In particular, I’ve posted a small collection of blog posts on the topic of WordPress from a variety of (WordPress) blogs at http://delicious.com/psychemedia/singlefeeddemo:

You’ll notice that I can get an RSS feed of this list out too: from http://delicious.com/rss/psychemedia/singlefeeddemo in fact.

Now the links I’ve bookmarked are links to the original HTML page version of each blog post; but all it takes is the simple matter of rewriting those URLs by adding ?feed=rss2&withoutcomments=1 on to the end of them to get the RSS version of each post.
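In code terms, that rewrite is all but a one-liner. A sketch (the helper name is mine):

```python
def single_item_feed(post_url: str) -> str:
    """Turn a WordPress post URL into its single-item RSS feed URL."""
    sep = "&" if "?" in post_url else "?"
    return post_url + sep + "feed=rss2&withoutcomments=1"

print(single_item_feed(
    "https://ouseful.wordpress.com/2009/07/08/"
    "single-item-rss-feeds-on-wordpress-blogs-rss-for-the-content-of-this-page/"
))
```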

Hmm… Yahoo Pipes, where are you? Let’s just pull in the RSS feed of those WordPress hosted blog post bookmarks, and rewrite the URLs to their single item RSS feed equivalent:

Now we can loop through each of those items, and replace it with the actual content of those single item RSS feeds:

The output of the pipe is then a real RSS feed that contains items that correspond to the content of WordPress blog posts that I have bookmarked on delicious.

Now just think about this for a moment: most RSS feeds are transitory – the content that appears in a blog’s feed is a reverse chronological list of the 10 or 20 most recent items on the blog (or in a particular category on a particular blog). The feed we are pulling in to this pipe may be fixed (e.g. if we create a list of bookmarks tagged in a particular way, and then don’t tag any more bookmarks in that way) and used to create a very specific list of blog posts from all over the web. By rewriting the URLs to get the RSS version of each bookmarked post, we can create our own full RSS feed of those list items. (Actually, that isn’t quite true – if the blog is configured to only emit partial RSS feeds, we’ll only get a partial version of a post, typically the first sentence or two.)

(Pipes’ homepages only show preview versions of a feed description, even if the full description is available.)

Just to recap, here’s the whole pipe:

We take in a list of bookmarked URLs that correspond to bookmarked WordPress blog posts, and generate the single item RSS feed URL for each post. We then use these URLs to pull in the content for each post, and this creates our own, full content custom RSS feed. The pipe itself emits RSS, so we can take the RSS feed from the pipe and feed it into any service that consumes RSS, such as Feedjournal:

Alternatively, I could subscribe to the pipe’s output feed in somewhere like Netvibes (or even a VLE) and then view the contents of my customised feed in that location. Or I could import that feed into a new WordPress blog. And so on…
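For anyone without Pipes to hand, a rough Python equivalent of the whole pipe might look something like this (a sketch, using feedparser, and making no claims to Pipes-like elegance):

```python
import feedparser

def single_item_feed(post_url: str) -> str:
    # as sketched above: WordPress's single-item feed trick
    sep = "&" if "?" in post_url else "?"
    return post_url + sep + "feed=rss2&withoutcomments=1"

def pipe(bookmarks_feed_url: str):
    """Fetch a social bookmarking feed, rewrite each bookmarked post URL
    to its single-item RSS feed, and merge the fetched posts into one
    full-content list of items."""
    items = []
    for bookmark in feedparser.parse(bookmarks_feed_url).entries:
        for post in feedparser.parse(single_item_feed(bookmark.link)).entries:
            items.append({"title": post.title,
                          "link": post.link,
                          # the full post body, unless the source blog
                          # is configured to emit partial feeds
                          "description": post.summary})
    return items

# e.g. the delicious bookmarks feed used in the demo above:
for item in pipe("http://delicious.com/rss/psychemedia/singlefeeddemo"):
    print(item["title"], "->", item["link"])
```

(Parameterising the bookmarks feed URL as a function argument is, as it happens, most of the ‘exercise for the interested reader’ below.)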

Now of course I appreciate that many people will still say: so what? But it’s a start… a small step towards a world in which I can declare an arbitrary list of links to content spread all over the web and then pull it into a single location where I can consume it, or process it further, such as converting it into a PDF (which is a preferred way of consuming large chunks of content for many people) or even delivering it in drip feed fashion over an extended period of time as a serialised RSS feed, for example.

An exercise for the interested reader: clone the pipe and modify it so that it will accept as user input an RSS URL so that the pipe can be used to consume any social bookmarking service RSS feed.

Note: as the pipe stands, the order of items in the feed will correspond to the order in which they were bookmarked. It is possible to tag each bookmark with its desired position in the RSS feed, but that is a rather more advanced topic. (See a soon to be(?!)* deprecated solution to that problem here: Ordered Lists of Links from delicious Using Yahoo Pipes.)

* If @hapdaniel hasn’t already published a more elegant solution to this problem using YQL Execute somewhere, I’ll try to do so when I get a chance…
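By way of illustration only – the tag convention below is made up, and @hapdaniel’s solution is doubtless neater – ordering bookmarks by a position tag might look something like this:

```python
import re

def position(entry):
    """Pull a desired position out of a tag like 'position:3' on a
    feedparser entry (an invented convention, for illustration only)."""
    for tag in entry.get("tags", []):
        m = re.match(r"position:(\d+)", tag.term)
        if m:
            return int(m.group(1))
    return float("inf")  # untagged bookmarks sink to the end

# e.g. sort a bookmarks feed into its intended reading order:
# ordered = sorted(feedparser.parse(BOOKMARKS_URL).entries, key=position)
```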

PS ho hum, maybe we don’t need RSS after all: Instapaper, Del.icio.us, Yahoo! Pipes and being Slack (via @mediaczar)

Open University Podcasts on Your TV – Boxee App

Over the weekend, a submission went in from The Open University (in particular, from Liam GreenHughes (dev), some of the OU Comms team, and Dave Winter in Online Services (design)) to the Boxee application competition (UK’s Open University on boxee).

For those of you who haven’t come across Boxee, it’s an easy to use video on demand aggregator that turns your computer into a video appliance and lets you watch video content from a wide range of providers (including BBC iPlayer) on your TV. Liam’s been evangelising it for some time, as well as exploring how to get OU Podcasts into it via RSS’n’OPML feeds (An OU Podcast RSS feed for Boxee).
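To give a flavour of the wiring involved, here’s a minimal OPML sketch of the sort of thing an aggregator like Boxee can subscribe to (generated in Python; the feed URLs are illustrative, not the real ones):

```python
import xml.etree.ElementTree as ET

# Build a minimal OPML outline of podcast feeds for a feed aggregator.
# The feed URLs below are assumptions, for illustration only.
opml = ET.Element("opml", version="2.0")
body = ET.SubElement(opml, "body")
for title, url in [
    ("OU Learn", "https://podcast.open.ac.uk/feeds/ou-learn.xml"),
    ("OU Research", "https://podcast.open.ac.uk/feeds/ou-research.xml"),
]:
    ET.SubElement(body, "outline", text=title, type="rss", xmlUrl=url)

ET.ElementTree(opml).write("ou_podcasts.opml", xml_declaration=True)
```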

(For those of you who prefer to just stick with the Beeb, then the BBC iPlayer big screen version provides an interface optimised for use on your telly.)

As well as channeling online video services, and allowing users to wire in their own video and audio content via a feed, Boxee also provides a plugin architecture for adding additional services to your Boxee setup. The recent Boxee competition promoted this facility by encouraging developers to create new applications for it.

So what does the OU Podcasts Boxee app offer over and above a simple subscription to an OU podcasts feed?

A pleasing, branded experience, that’s what.

So for example, on installing the OU podcasts app (available from the Boxee App Box), an icon for it is added to your Internet Services applications.

Launching the application takes you to an OU podcasts browser that is organised along similar lines to the OU’s Youtube presence, that is, in terms of OU Learn, OU Research and OU Life content. The Featured content area also provides a mechanism for pushing editorially selected content to higher prominence. (Should this be the left-most, default option, I wonder, rather than the OU Learn channel?)

In the Research area, a single level of navigation exists, listing the various episodes available:

OU Boxee app

The more comprehensive Learn area organises content into topic based themes/episode collections (listed in the right hand panel) with the episodes associated with a particular selected theme or collection displayed in the left hand panel. Selecting an episode in the left hand panel then reveals its description in the right hand panel (as in the screenshot above).

So for example, when we go to the OU Learn area, the Arts and Humanities episodes are listed in the left hand area (by default), and available collections in the right.

We can scroll down the collections and select one, Engineering for example:

Episodes in this collection are listed in the left hand panel, and further subcollections in the right hand panel (it all seems a little confusing to describe, but it actually seems to work okay… maybe?!;-)

Highlighting an actual episode then displays a description of it.

Selecting a program to play pops up a confirmation “play this” overlay, along with a link to further information for the episode:

Both audio and video content can be channeled to the service – selecting a video programme provides a full screen view of the episode, whilst audio is played within a player.

The “Read More” option provides a description of the episode, as well as social rating and recommendation options:

Finally, a search tool allows for content to be discovered using user selected search terms.

If you search with an OU course code, and there is video on the OU podcasts site from the course, the search may turn that course related video up…

This wouldn’t be an OUseful post if I didn’t add my own 2p’s worth, of course, so what else would I have liked to have seen in this app? One thing that comes to mind is a seven day catch-up of OU co-pro content that has been broadcast on the BBC (or more generally, the ability to watch all OU co-pro content that is currently available on the BBC iPlayer). I developed a proof-of-concept demonstrator of how such a service might work on the web, or for the iPhone/iPod Touch (iPhone 7 Day OU Programme CatchUp, via BBC iPlayer), so under the assumption that the Boxee API can provide the hooks you need to be able to play iPlayer content, I’d guess adding this sort of functionality shouldn’t take Liam much more than half-an-hour?!;-)

I also wonder if the application can be used to preserve local state in the form of personalisation information? For example, could a user create their own saved searches – and by default their own topic themed channels? Items in such a feed could also be nominally tagged with that search term back on a central server if, for example, a user watched an episode that had been retrieved using a particular search term all the way through?

To vote for the OU Boxee app, please go to: vote for your favorite apps, RSVP for the boxee event in SF.

PS the OU Podcasts app is not the only education related submission to the competition. There’s also OpenCourseWare on boxee, which provides a single point of entry to several video collections from some of the major US OCW projects.

PPS it also turns out that KMi have a developer who’s currently working on a range of mobile apps for the iPhone/iPod Touch, Android phones and so on. If any OU readers have ideas for compelling OU related mobile apps, you just may get lucky in getting it built, so post the idea as a comment to this post, or contact, err, erm, @stuartbrown, maybe?

PPPS Now I’m not sure how much time was spent on the app, but as the competition was only launched on May 5th, with a closing date of June 14th, it can’t have been that long, putting things like even the JISC Rapid Innovation (JISCRI) process to shame…?!;-)

Appropriating Technology

Watching Scott Leslie’s The Open Educator as DJ – Towards a Practice of Remix keynote from TTIX 2009, I tweeted*:

[* Note that I was watching a recording, but it would have been useful to be able to participate, at least in an asymmetric way (asymmetric participation?!), in the tweet stream by watching replayed tweets from the actual presentation (or other people’s recorded viewing ‘as live’ tweets). I’ve pondered this before, e.g. in the sense of Twitter subtitles or anytime commenting (will Wave be able to do that, I wonder?!;-)]

Anyway – back to the tweet, and on second thoughts I wonder whether appropriating technology might actually be a better phrase to riff on, in at least two senses:

Firstly, in the sense of us appropriating technologies that might have been designed for other purposes in order to use them in an educational context.

Secondly, in the sense of using appropriating technologies to sample, sequence and deliver education related performances, in the way Scott demonstrates as part of his ‘Educator as DJ’ workflow.