The Learning Journey Starts Here: Youtube.edu and OpenLearn Resource Linkage

A week or two ago, while I was mulling over the OU’s OULearn pages on YouTube, colleague Bernie Clark pointed out to me how the links from the OU clip descriptions could be rather hit or miss:

Via @lauradee, I see that the OU has a new offering on YouTube.com/edu that is far more supportive of links to related content, links that can represent the start of a learning journey through OU educational – and commentary – content on the OU website.

Here’s a way in to the first bit of OU content that seems to have appeared:

This links through to a playlist page with a couple of different sorts of opportunity for linking to resources collated at the “Course materials” or “Lecture materials” level:

(The language gives something away, I think, about the expectation of what sort of content is likely to be uploaded here…)

So here, for example, are links at the level of the course/playlist:

And here are links associated with each lecture, erm, clip:

In this first example, several types of content are being linked to, although it’s not immediately obvious from the link itself what sort of resource it points to. For example, some of the links lead through to course units on OpenLearn/Learning Zone:

Others link through to “articles” posted on the OpenLearn “news” site (I’m never really sure how to refer to that site, or to the content posts that appear on it?)

The placing of content links into the Assignments and Others tabs seems a little arbitrary to me from this single example, but I suspect that once a few more lists have been posted, some sort of feeling will emerge about what sorts of resources should go where (i.e. what folk might expect by “Assignment” or “Other” resource links). If there’s enough traffic generated through these links, a bit of A/B testing might even be in order relating to the positioning of links within tabs and the behaviour of students once they click through (assuming you can track which link they clicked through, of course…)?

The transcript link is unambiguous though! And, in this case at least, resolves to a PDF hosted somewhere on the OU podcasts/media filestore:

(I’m not sure if caption files are also available?)

Anyway – it’ll be interesting to hear back about whether this enriched linking experience drives more traffic to the OpenLearn resources, as well as whether the positioning of links in the different tab areas has any effect on engagement with materials following a click…

And as far as the linkage itself goes, I’m wondering: how are the links to OpenLearn course units and articles generated/identified, and are those links captured in one of the data.open.ac.uk stores? Or is the process that manages what resource links get associated with lists and list items on Youtube/edu one that doesn’t leave (or readily support the automated creation of) public data traces?

PS How much (if any) of the linked resource goodness is grabbable via the Youtube API, I wonder? If anyone finds out before me, please post details in the comments below :-)
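(By way of a placemarker, here’s the sort of thing I have in mind: a minimal Python sketch that uses the v2 GData API to pull back a clip’s description and scrape out any URLs. The video ID is just a placeholder, and the JSON field names are my best recollection of the v2 response shape, so treat both as assumptions:)

```python
import json
import re
from urllib.request import urlopen

VIDEO_ID = "F5uzuGZtK0w"  # hypothetical placeholder for an OULearn clip ID
url = "https://gdata.youtube.com/feeds/api/videos/%s?v=2&alt=json" % VIDEO_ID

# Field names are my recollection of the v2 GData JSON shape
entry = json.load(urlopen(url))["entry"]
description = entry["media$group"]["media$description"]["$t"]

# Scrape out anything that looks like a resource link in the description
links = re.findall(r"https?://\S+", description)
print(links)
```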

Confluence in My Feed Reader – The Side Effects of Presenting

Don’tcha just love it when complementary posts happen along within a day or two of each other? Earlier this week, Martin posted on the topic of Academic output as collateral damage, suggesting that “you can view higher education as a long tail content production system. And if you are producing this stuff as a by-product of what you do anyway then a host of new possibilities open up. You can embrace unpredictability”.

And then today, other Martin comes along with a post – Presentation: Twitter for in-class voting and more for ESTICT SIG – linking to a recording of a presentation he gave yesterday, one that includes Twitter backchannel captions built from tweets sent during the presentation, both by the presentation itself and by the (potentially extended/remote) audience.

Brilliant… I love it…I’m pretty much lost for words…

Just... awesome...

What we have here, then, is the opening salvo in a presentation capture and amplification strategy where the side effects of the presentation create a legacy in several different dimensions – an audio-visual record, for after the fact; a presentation that announces its own state to a potentially remote Twitter audience, and that in turn can drive backchannel activity; a recording of the backchannel, overlaid as captions on the video recording; and a search index that provides timecoded results from a search based on the backchannel and the tweets broadcast by the presentation itself. (If nothing else, capturing just the tweets from the presentation provides a way of deep searching in time into the presentation).

Amazing… just amazing…

Searching the Backchannel – Martin Bean, OU VC, Twitter Captioned at JISC10

Other Martin’s been at it again, this time posting JISC10 Conference Keynotes with Twitter Subtitles.

The OU’s VC, Martin Bean, gave the opening keynote, and I have to admit it really did make me feel that the OU is the best place for me to be working at the moment :-)

… though maybe after embedding that, my days are numbered…? Err…

Anyway, I feel like I’ve not really been keeping up with other Martin’s efforts, so here’s a quick hack, a placemarker/waypoint in one of the directions I think the captioning could go: deep search linking into video streams (where deep linking is possible).

Rather than search the content, we’re going to filter captions for a particular video, in this case the twitter caption file from Martin (other, other Martin?!) Bean’s #JISC10 opening keynote. The pipework is simple – grab the URL of the caption file and a “search” term, parse the captions into a feed with one item per caption, then filter on the caption content. I added a little Regular Expression block just to give a hint as to how you might generate a deeplink into content based around the start time of the caption:

Filter-based caption search

You can find the pipe here: Twitter caption search
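(For anyone who wants to see the logic outside of Pipes, here’s a rough Python equivalent of that pipework. It assumes a SubViewer-style .sub caption file, i.e. a timing line followed by the caption text, and the caption URL is just a placeholder:)

```python
import re
from urllib.request import urlopen

CAPTION_URL = "http://example.com/jisc10_keynote.sub"  # placeholder URL
SEARCH_TERM = "open"

text = urlopen(CAPTION_URL).read().decode("utf-8")

# SubViewer-style blocks: a "start,end" timing line, then the caption text
pattern = r"(\d+):(\d+):(\d+)\.\d+,\d+:\d+:\d+\.\d+\s*\n(.+)"
for h, m, s, caption in re.findall(pattern, text):
    if SEARCH_TERM.lower() in caption.lower():
        offset = int(h) * 3600 + int(m) * 60 + int(s)
        # A deeplink needs a player that supports time offsets, e.g. a
        # YouTube-style #t=123 fragment appended to the video URL
        print("#t=%d  %s" % (offset, caption.strip()))
```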

One thing to note is that it may take some time for someone to tweet what a speaker has said. If we had a transcript caption file (i.e. a timecoded transcript of the presentation) we might be able to work out the “mean time to tweet” for a particular event/twitterer, in which case we could backdate timestamps to guess the actual point in the video that a person was tweeting about. (I looked at using auto-generated transcript files from Youtube to trial this, but at the current time, they’re rubbish. That said, voice search on my phone was rubbish a year ago, but by Christmas it was working pretty well, so the Goog’s algorithms learn quickly, especially where error signals are available. So bear in mind that if you do post videos to Youtube, and you can upload a caption file, then as well as helping viewers, you’ll also be helping train Google’s auto-transcription service, because it’ll be able to compare the result of auto-transcription with your caption file… If you’re the Goog, there are machine learning/supervised learning cribs everywhere!)
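(A crude sketch of what that backdating correction might look like; the 15 second lag is a made-up illustrative figure, not a measured one:)

```python
# Shift each caption back by an estimated "mean time to tweet" so it
# lines up with the moment in the video actually being tweeted about
ESTIMATED_LAG = 15  # seconds; illustrative only

def backdate(captions, lag=ESTIMATED_LAG):
    """captions: list of (offset_seconds, text) pairs."""
    return [(max(0, t - lag), text) for (t, text) in captions]

print(backdate([(20, "Great point about openness"), (95, "lol")]))
```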

(Just by the by, I also wonder if we could colour-code captions, using a different colour for tweets that refer to the content of an earlier tweet/backchannel content rather than to the foreground content of the speaker?)

Unfortunately, caption files on Youtube, which does support deep time links into videos, only appear to be available to video owners (Youtube API: Captions), so I can’t do a demo with Youtube content… and I really should be doing other things, so I don’t have the time right now to look at what would be required for deeplinking elsewhere… :-(

PS The captioner tool can be found here: https://mashe.hawksey.info/ititle (also available at http://www.rsc-ne-scotland.org.uk/mashe/ititle/)

Martin Hawksey, whose work this is, has described the evolution of the app in a series of several posts here: http://www.rsc-ne-scotland.org.uk/mashe/?s=twitter+subtitles

Watching YouTube Videos on Boxee via DeliTV

One of the easiest ways to get started with DeliTV is to use it to watch video feed subscriptions from YouTube.

With DeliTV, you can bookmark the following sorts of Youtube content and then view it in a DeliTV Channel:

Bookmarked YouTube page → resulting DeliTV subscription:

- User homepage/channel (e.g. the Teachers’ TV channel, or the Guardian newspaper) → recently uploaded videos for that user
- Playlist page (e.g. the T151: 3D Geo-World Demos playlist) → playlist feed
- Video page (e.g. The Machine is Us/ing Us (Final Version)) → single video
- [NEW] Search results page (e.g. a search for “formula one”) → search results containing the 20 most relevant videos

Here is the example channel bookmarked to a demo DeliTV channel guide: delitv_ytdemo:

(You can of course grab a copy of any of these bookmarks into your own delicious account.)

We can now bookmark this channel guide so that it appears in a DeliTV multiplex. In the following example, I’m bookmarking it to my main delitv feed, and also to the boxeetest5 multiplex.

Here’s the result in my boxeetest5 feed:

DeliTV

And here’s a view of the delitv_ytdemo channel guide:

DeliTV channel guide

This is what the bookmarked user/channel produces – the recent uploads listing for that user/channel:

DeliTV - Youtube user/channel recent uploads

And here’s the playlist guide:

DeliTV - Youtube playlist feed

Remember, with DeliTV you don’t need to bookmark the actual Youtube feed – just bookmark the user/channel, playlist or video page to Delicious, and DeliTV will do the rest for you…
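(Out of interest, here’s a guess at the sort of bookmarked-page-to-feed mapping “the rest” boils down to, sketched in Python. The feed patterns are recalled from the v2 GData docs, so treat them as indicative rather than as DeliTV’s actual lookup table:)

```python
import re

BASE = "rss://gdata.youtube.com/feeds/base"

def youtube_page_to_feed(url):
    """Guess the GData feed behind a bookmarked YouTube page (v2 patterns)."""
    m = re.search(r"youtube\.com/user/([\w-]+)", url)
    if m:  # user homepage/channel: recent uploads feed
        return BASE + "/users/%s/uploads?alt=rss&v=2&orderby=published" % m.group(1)
    m = re.search(r"[?&](?:list|p)=([\w-]+)", url)
    if m:  # playlist page: playlist feed
        return BASE + "/playlists/%s?alt=rss&v=2" % m.group(1)
    m = re.search(r"[?&]v=([\w-]+)", url)
    if m:  # video page: single video entry
        return BASE + "/videos/%s?alt=rss&v=2" % m.group(1)
    m = re.search(r"[?&]search_query=([^&]+)", url)
    if m:  # search results page: 20 most relevant videos
        return BASE + "/videos?q=%s&max-results=20&alt=rss&v=2" % m.group(1)
    return None

print(youtube_page_to_feed("http://www.youtube.com/user/abertayTV"))
```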

To learn how to subscribe to your own DeliTV channel, see Deli TV – Personally Programmed Social Television Channels on Boxee: Prototype

PS A new feature, currently in testing, lets you bookmark a search results page. Whilst it is possible to generate searches for playlists or users/channels as well as videos, DeliTV currently returns just the 20 most relevant Youtube videos when a Youtube search results page is bookmarked.

UK HEI Boxee Channel

A week or so ago, Liz Azyan posted a list of UK HEI Youtube channels. Although the result is not quite as polished as @liamgh et al’s OU Boxee app, I picked up on a couple of suggestions Liam made over a pint last night about simply subscribing to an RSS feed in Boxee to roll my own UK HEI Youtube Boxee channel thing…

So here are the institutional channels:

and here’s a peek inside one of them:

This lets me watch the most recent uploads to all (?) the UK HEIs’ Youtube channels, organised by institution, via a lean-back TV interface.

(You might be able to submenu the institutional channels/streams according to playlists they have specified, as well as tidying up things like icons/logos, maybe, but this was a 10 minute hack, rather than a half hour hack, ok?!;-)

Here’s the recipe…

1. Grab the table from Liz’s web page and create a feed from it:

2. Generate the feed URIs for the most recent uploads to each channel (in the form required by Boxee – e.g. rss://gdata.youtube.com/feeds/base/users/abertayTV/uploads?alt=rss&v=2&orderby=published):

3. Filter out stuff that isn’t a feed and complete the pipe:

We can now grab the RSS feed from the pipe in the normal way and subscribe to it via a personal account on the Boxee website.
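(If you’d rather skip the Pipes UI altogether, the guts of steps 2 and 3 boil down to something as simple as this sketch, assuming you’ve already got the channel usernames out of Liz’s table; the names below are just examples:)

```python
# Stand-ins for the channel usernames scraped from Liz's table
channels = ["abertayTV", "OUlearn", "cambridgeuniversity"]

# Boxee-flavoured GData uploads feed for each channel
FEED = ("rss://gdata.youtube.com/feeds/base/users/%s/uploads"
        "?alt=rss&v=2&orderby=published")

for name in channels:
    print(FEED % name)
```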

If you now launch the Boxee app, select Video > Internet > Video Feeds (My Feeds) > the UK HEI Youtube Videos Channel.

And from there, you should be able to browse – and play – the recent uploads to all the UK HEI Youtube channels that Liz has listed.

Note that I had a niggle with my Boxee player – I could hear the audio but not see the video for any of the Youtube videos when I tried to play them. If anyone else tries out this channel and gets the same problem, please let me know and I’ll see if it’s a feed problem. Otherwise, I’ll assume it’s a local glitch…

Here’s the RSS feed URI again: “UK HEI Youtube Channels on Boxee” RSS feed

PS Out of interest, if I had bid to do this as a #jiscri project, how much should I have asked for?
planning: 10 mins chatting with Liam over a pint yesterday;
design: <5 mins looking up Youtube API/URI patterns;
implementation: <5 mins creating the Yahoo pipe;
configuration: <5 mins subscribing to the pipe feed in Boxee;
testing: <5 mins seeing if it worked in Boxee (which it doesn’t, properly, but I’m blaming that on a local problem and trusting that it does actually work… err…?!;-)
Okay, so all told it was maybe a sub-20 minute hack rather than a 5 minute one?
documentation (i.e. this blog post): 30-45 mins, incl. grabbing screenshots.

And I’m on holiday today…

Guardian Game Reviews – with Video Trailers

In what I intend to be the last post in the series for a while – maybe – here’s a quick hack around the Guardian Open Platform Content API that shows how to “annotate” a particular set of articles (specifically, recent video game reviews) with video trailers for the corresponding game, pulled in from Youtube, and then render the whole caboodle in a Grazr widget.

So let’s begin…

Take one Guardian API query – http://api.guardianapis.com/content/search?filter=/global/reviews&filter=/technology/games&api_key=MYSECRETACTIVATEDAPIKEY – construct the URI and call the webservice:

Here’s a typical result:

As with the football results map pipe, the link text looks like a good candidate for search query text. So let’s tweak it for use as a search on somewhere like Youtube:

I want to be able to let the user choose trailer or review videos, which is why I cleared the search string of the word “review” first, before adding the user preference back into the string.

Now run a Youtube GData/API search using an old pipe block I found lying around (Youtube video search pipe) and grab the top result:
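(To make the pipework concrete, here’s a rough Python rendering of the tweak-and-search steps. The headlines are made-up stand-ins for ones pulled back from the Guardian query above, and the GData search feed and JSON field names are recalled from the era’s docs, so treat them as assumptions:)

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Stand-ins for review headlines from the Guardian content search above
headlines = ["Fallout 3 review", "LittleBigPlanet review"]

VIDEO_TYPE = "trailer"  # user preference: "trailer" or "review"

for title in headlines:
    # Clear "review" from the search string first, then add the user
    # preference back in (as described above)
    query = title.lower().replace("review", "").strip() + " " + VIDEO_TYPE
    # Old GData search feed; grab just the top result
    url = ("https://gdata.youtube.com/feeds/api/videos?alt=json"
           "&max-results=1&q=" + quote(query))
    entry = json.load(urlopen(url))["feed"]["entry"][0]
    print(title, "->", entry["media$group"]["media$player"][0]["url"])
```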

Now I happen to know that if you give a Grazr widget a feed with enclosure.url attributes that point to a flash file, it will embed the flash resource for you:

So now we can take the RSS output of the pipe and pop it into a Guardian game review with video previews Grazr widget:

(The widget layout is customised as described in part in Setting the 3 Pane divider positions.)

If you want to grab the widget and embed it in your own webpages, it’s easy enough to do so (although not on hosted WordPress blogs). Simply click on “Share” and select “Customize” from the widget menu bar:

Then you can customise the widget and grab an embed code, or a link to a full screen view:

Good, eh? ;-)

PS Grazr (which is actually an OPML viewer as well as an RSS viewer) embeds other stuff too. For example, here it is as a slideshare viewer; and here it is showing how to automatically generate embedded players for Youtube videos and MP3 audio files using the delicious Feed Enhancer – Auto-Enclosures pipe. (If you’re into a bit of hackery, it’ll carry Scribd iPaper too: Embed Scribd iPaper in Grazr Demo. I’m guessing you should be able to get it to embed arbitrary flash games as well?)

Easier Twitter Powered Subtitles for Youtube Movies

Pretty much all the sentiment I’ve picked up from my post on Twitter Powered Subtitles for Conference Audio/Videos on Youtube is that the process for generating the subtitles is too complicated, so I’ve had a go at simplifying it:

The workflow is now as follows. Suppose you have a recording of an event that people were tweeting through using a particular hashtag, and you want to annotate the recording using the tweets made at the time as subtitles.

1. Go to Twitter advanced search and search for the particular hashtag;
2. Tweak the number of results on the page and the date setting (if necessary);
3. If you only want tweets FROM a particular person, limit the search that way too;
4. If the results you want to convert to subtitles are on “older” search results pages, navigate to the required results page;
5. When you have a results page containing the tweets you want to convert to subtitles, grab the URL of that results page and copy it into the subtitler form at http://ouseful.open.ac.uk/twitterSubtitles.php;
6. Optionally, if you want to specify the tweet that you want to be the first subtitle, copy its URL (that is, the URL that is pointed to by the View tweet link for that tweet);
7. Optionally again, if you want to specify the tweet on the results page that you want to be the last subtitle, grab its URL and paste it into the form;
8. Generate the subtitles;
9. Save the page as a text file with the suffix .sub;
10. You can now upload the .sub subtitle file to Youtube.

So hopefully, that’s a little easier? (Note that there is also a bookmarklet on the subtitler page that will create the subtitle file directly from a Twitter advanced search results page. There’s also a rough sketch below of what the generation step boils down to.)
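(The live tool does this in Javascript in the browser, but here’s a back-of-envelope Python equivalent, assuming we already have (timestamp, text) pairs for the tweets and writing them out as a SubViewer-style .sub file; the five second display time is an arbitrary choice:)

```python
from datetime import datetime, timedelta

# Hypothetical tweets as (time, text) pairs, stand-ins for ones scraped
# from a Twitter search results page
tweets = [
    (datetime(2009, 6, 1, 10, 0, 5), "Session starting #example"),
    (datetime(2009, 6, 1, 10, 1, 42), "Good point about captions #example"),
]

def fmt(td):
    """Format a timedelta as a SubViewer-style HH:MM:SS.00 timestamp."""
    s = int(td.total_seconds())
    return "%02d:%02d:%02d.00" % (s // 3600, (s % 3600) // 60, s % 60)

start = tweets[0][0]
blocks = []
for when, text in tweets:
    begin = when - start
    end = begin + timedelta(seconds=5)  # arbitrary 5s display time
    blocks.append("%s,%s\n%s\n" % (fmt(begin), fmt(end), text))

open("tweets.sub", "w").write("\n".join(blocks))
```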

PS Here are some more thoughts about ways in which the subtitler might develop: Twitter Powered Youtube Subtitles, Reprise: Anytime Commenting.

PPS If anyone fancies converting the Javascript that generates the subtitles in the browser to PHP that will do the processing on the server, please feel free to post the code back here as a comment ;-)

PPPS Wouldn’t it be good if CoverItLive offered an exportable subtitle file from previous events? In the meantime, does anyone know if it’s possible to get an RSS feed of posts from previous CoverItLive event commentaries?

PPPPS See also: Twitterprompter?, discussing several possible use cases for Twitter in a live presentation environment.

Twitter Powered Youtube Subtitles, Reprise: Anytime Commenting

One of the things that attracts me to serialised feeds (as well as confusing the hell out of me) is the possibility of letting people subscribe to, and add, comments in “relative time”…

… that is, as well as viewing the content via a serialised feed, the comments feed should also be serialised (with timestamps for each comment calculated relative to the time at which the person commenting started receiving the serialised feed).

Applying this to the idea of tweeted Youtube movie subtitles (Twitter Powered Subtitles for Conference Audio/Videos on Youtube), in which every tweet made during, or around the time of, a presentation becomes a subtitle on a recording of that presentation, it strikes me that a similar model is possible.

That is, different individuals could watch a Youtube video at different times, tweeting along as they do so, and then these tweets could be aggregated according to relative timestamps to provide a single, combined set of subtitles.

So how might this work in practice? Here’s a thought experiment run through…

Firstly, it’d probably be convenient to set up a twitter account to send the tweets to (say @example, for example).

Create a tag for the video – this could be something like #yt:tBmFzF8szpo for the video at http://www.youtube.com/watch?v=tBmFzF8szpo.

(Alan Levine reminded me about flickr machine tags earlier today, which are maybe also worth considering in this respect, e.g. as a source of inspiration for a tagging convention?)

Grab a ctrl-C copy of the phrase @example #yt:tBmFzF8szpo for quick pasting into a new tweet, and then start watching the video, tweeting along as you do so…

To generate your subtitle feed, you can then do a search based on Tweets from your username (which would be @psychemedia in my case) to e.g. @example, with hashtag #yt:tBmFzF8szpo, and maybe also using a date range.

(You could augment the Yahoo pipe I used in the twitter subtitle generator proof of concept to remove the hashtag when generating the feed used for subtitling?)

The actual subtitle file generator could then pull in several different subtitle feeds from separate people, relativise their timestamps against the time of each person’s first tweet (which could maybe use a keyword, too – such as “START”: @example START #yt:tBmFzF8szpo;-) and then produce an aggregated subtitle feed.
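(Here’s a minimal sketch of that relativise-and-merge step, assuming each viewer’s feed arrives as a list of absolutely timestamped tweets whose first item is their START tweet:)

```python
def relativise(feed):
    """feed: list of (timestamp_seconds, text); first item is the START tweet."""
    t0 = feed[0][0]
    return [(t - t0, text) for (t, text) in feed[1:]]

def aggregate(feeds):
    # Pool everyone's relative-time tweets into one combined subtitle track
    merged = []
    for feed in feeds:
        merged.extend(relativise(feed))
    return sorted(merged)

viewer_a = [(1000, "START"), (1012, "nice intro"), (1100, "lol")]
viewer_b = [(5000, "START"), (5030, "re: that intro, agreed")]
print(aggregate([viewer_a, viewer_b]))
```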

As more people watched the video (maybe including the subtitles to date), their feeds could be added to the aggregating subtitle file generator, and the subtitle file updated/refreshed.

Individuals could even rewatch the video and create new feeds for themselves to join in the emerging conversation…

(Okay, so it’s maybe slower than just reading through the comments, having to replay the video in real time to read the tweets, but this is a sort of thought experiment, right, albeit one that can be implemented quite easily…;-)

PS In one of the comments to Show and Translate YouTube Captions Matt Cutts gave an example of a URL that “will search for the word “china” in videos with closed captions” [ http://www.youtube.com/results?closed_captions=1&search_query=china ] (although I’m not sure how well it works?).

So I’m thinking – if live tweets from an event can be associated with a video of an event (maybe because the video is posted with a link to a (now out of date!) upcoming record for that event in order to anchor it in time) then being able to search the tweets as captions/subtitles provides a crib for deeplink searching into the video? (But then, I guess the Goog is looking at audio indexing anyway?)

PPS I just came across another tool for adding subtitles to Youtube videos, as well as videos from other online video sites – overstream.net:

It’s worth looking at, maybe?

PPPS see also Omnisio, a recent Google acquisition that offers “select clips from videos you find on YouTube and other video sites, and easily post them on your profile page or blog. Even better, you and your friends can add comments directly in the video!”.

And there’s more: “With Omnisio you make and share your own shows by assembling clips from different videos.” Roll on the remixes :-)

PPPPS Martin implemented anytime commenting