Archive for the ‘OU2.0’ Category
So it seems like the revamped OU presence on Youtube has gone live, with three channels (at the mo) and maybe another one to come, if I read the greyed out Research icon right?
(Laura has the low down on the launch here: OU launch YouTube site.)
The main channel – which I guess establishes the ou view brand – appears to be home to ads and TV trails and, based on the old content that’s there, is currently just a reskinning of the original OU presence on Youtube.
More exciting are the two new channels…. Firstly, OU Life, a video box for students and staff to talk about their relationship with the OU:
As well as student voices, there are some staff voices in there too…
What’s interesting about these movies is the way that secondary videos are linked to from the talking head videos – so you can easily view the videos that are being talked about as examples of ‘learning content’ on Youtube. Click through on the above movie and you’ll see what I mean…
It’s interesting to note how the branding carries through to the about box on each video page… and the fluid ident at the end of each clip is really quite beautiful:-)
The second new area (again with its own colour theme) is OU Learn, a collection of movies from OU course materials.
Some of the content has been organised in playlists, which could be handy…
Again the branding carries through to the video splash page, and it’s good to see the use of course code tags that could well support some automated mashup magic somewhere down the line…;-)
One thing I can’t see offhand are license terms and conditions – is the material up for remix under a CC license or not, for example? Whatever the case, the material all seems to be embeddable:-)
For any OU staff readers interested in getting their view, or other video content, onto the OU view pages, there’s some handy advice on the intranet: Online Services Intranet > Web2.0: Youtube. (When I get back from holiday, I’ll go through the material that’s there and post what I can…)
I’m not sure about the extent to which the OU Youtube and iTunes content is either duplicated or exclusive to each site? (Or maybe content will be pushed to each site in parallel?) But there’s an info page about the iTunesU strategy on the intranet site too, so I’ll try to work out to what extent the two initiatives complement each other…
For almost as long as I can remember (?! e.g. Search Powered Predictions), I’ve had the gut feeling that one of the most useful indicators about the courses our students want to study is their search behaviour, both in terms of searches that drive (potential) students to the OU courses and qualifications website from organic search listings, as well as their search behaviour whilst on the OU site, and whilst floundering around within the courses and quals minisite.
A quick skim through our current strategic priorities doc (OU Futures 2008 (internal only), though you can get a flavour from the public site: Open University Strategic Priorities 2007) suggests that there is increased interest in making use of data, for example as demonstrated by the intention to develop a more systematic approach for new curriculum developments, such that the student market, demography and employment sectors are the primary considerations.
So, to give myself something to think about over the next few days/weeks, here’s a marker post about what a “course search insights” tool might offer, inspired in part by the Google Youtube Insights interface.
So, using Youtube Insight as a starting point, let’s see how far we can get…
First off, the atom is not a Youtube video, it’s a course, or to be more exact, a course page on the courses and quals website… Like this page for T320 Ebusiness technologies: foundations and practice for example. The ideas are these: what might an “Insight” report look like for a course page such as this, how might it be used to improve the discoverability of the page (and improve appropriate registration conversion rates), and how might search behaviour inform curriculum development?
Firstly, it might be handy to segment the audience reports into six:
- people hitting the page from an organic search listing;
- people hitting the page from an internal (OU search engine) search listing;
- people hitting the page from an ‘organic’ link on a third party site (e.g. a link to the course page from someone’s blog);
- people hitting the page from an external campaign/adword etc on a search engine;
- people hitting the page from any other campaign (banner ads etc);
- the rest…
For the purposes of this post, I’ll just focus on the first two, search related, referrers… (and maybe the third – ‘organic’ external links). What would be good to know, and how might it be useful?
First off, a summary report of the most popular search terms would be handy:
- The terms used in referrers coming from external organic search results give us some insight into the way that the search engines see the page – and may provide clues relating to how to optimise the page so as to ensure we’re getting the traffic we expect from the search engines.
- The terms used within the open.ac.uk search domain presumably come from (potential) students who have gone through at least one micro-conversion, in that they have reached, and stayed in, the OU domain. Given that we can (sometimes) identify whether users are current students (e.g. they may be logged in to the OU domain as a student) or new to the OU, there’s a possibility of segmenting here between the search terms used to find a page by current students, and new prospects.
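To give a feel for how those referrer search terms might be pulled out of the web server logs, here’s a rough sketch in Python (the parameter names checked for are assumptions – Google uses `q`, other engines and site search tools vary):

```python
from urllib.parse import urlparse, parse_qs

def search_terms(referrer):
    """Extract the search query from a search-engine referrer URL, if any."""
    params = parse_qs(urlparse(referrer).query)
    # Google uses 'q'; 'query' and 'p' are guesses at other engines' params
    for key in ('q', 'query', 'p'):
        if key in params:
            return params[key][0]
    return None

print(search_terms('http://www.google.co.uk/search?q=ebusiness+course&hl=en'))
# → ebusiness course
```

Run over a day’s worth of referrers, and the counts of the extracted terms give you the summary report directly.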
(Just by the by, I emailed a load of OU course team chairs a month or two ago about what search terms they would expect potential students to use on Google (or on the OU search engine) to find their course page on the courses and quals site. I received exactly zero responses…)
The organic/third party incoming link traffic can also provide useful insight as to how courses are regarded from the outside – an analysis of link text, and maybe keyword analysis of the page containing the link, can provide us with clues about how other people are describing our courses (something which also feeds into the way that the search engines will rank our course pages; inlink/backlink analysis can further extend this approach). I’m guessing there’s not a lot of backlinking out there yet (except maybe from professional societies?), but if and when we get an affiliate scheme going, this may be one to watch…?
So that’s one batch of stuff we can look at – search terms. What else?
As a distance learning organisation, the OU has a national reach (and strategically, international aspirations), so a course insight tool might also provide useful intelligence about the geographical location of users looking at a particular course. Above average numbers of people reading about a course from a particular geo-locale might provide evidence about the effectiveness of a local campaign, or even identify a local need for a particular course (such as the opening or closure of a large employer).
The Youtube Insight reports show how, as the Google monster gets bigger, it knows more and more about us (I’m thinking of the Youtube Insight age demographic/gender report here). So providing insight about the gender split and age range of people viewing a course may be useful (we can find this information out for registered users – incoming users are rather harder to pin down…), and may provide further insight when these figures are compared to the demographics of people actually taking the course, particularly if the demographic of people who view a course on the course catalogue page differs markedly from the demographics of people who take the course…
(Notwithstanding the desire to be an “open” institution, I do sometimes wonder whether we should actually try to pitch different courses at particular demographics, but I’m probably not allowed to say things like that…;-)
As well as looking at search results that appear to provide satisfactory hits, it’s also worth looking at the internal searches that don’t get highly relevant results. These searches might indicate weak optimisation of pages – appropriate search terms don’t find appropriate course pages – or they might identify topics or courses that users are looking for that don’t exist in the current OU offerings. Once again, it’s probably worth segmenting these unfulfilled/unsatisfactory searches according to new prospects and current students (and maybe even going further, e.g. by trying to identify the intentions of current students by correlating their course history with their search behaviour, we may gain insight into emerging preferences relating to free choice courses within particular degree programmes).
To sum up… Search data is free, and may provide a degree of ‘at arms length’ insight about potential students before we know anything about them ‘officially’ by virtue of them registering with us, as well as insight relating to emerging interests that might help drive curriculum innovation. By looking at data analysis and insight tools that are already out there, we can start to dream about what course insight tools might look like, that can be used to mine the wealth of free search data that we can collect on a daily basis, and turn it into useful information that can help improve course discovery and conversion, and feed into curriculum development.
Chatting with Stuart over a pint last week, he mentioned that the Open2 folks had started publishing a programme announcement feed on Twitter that lets you know when a TV programme the OU’s been involved with is about to be shown on one of the BBC channels: open2 programme announcements on Twitter.
By subscribing to the RSS feed from the Open2 twitter account, it’s easy enough to get yourself an alert for upcoming BBC/OU programmes.
The link goes through to the programme page on the open2 website, which is probably a Good Thing, but it strikes me that there’s no obvious way to watch the programme from the Open2 page?
That is, there’s no link to an iplayer or BBC programmes view, such as BBC Programmes > Coast:
If I’m reading the BBC Programmes Developers’ Guide correctly, not all the URL goodness has been switched on for these URLs yet? For example, here’s the guidance:
To access these add .xml, .json or .yaml to the end of the url.
Whilst http://www.bbc.co.uk/programmes/b006mvlc works as I expect, http://www.bbc.co.uk/programmes/b006mvlc/episodes requires a branch into a year – http://www.bbc.co.uk/programmes/b006mvlc/episodes/2008, and I can’t get the upcoming or format extensions to work at all?
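If I’ve read the developers’ guide right, the data URLs are just the programme page URL plus a format suffix, so building them is trivial; here’s a little helper to save typo-hunting (the `branch` segments like `episodes/2008` are the ones noted above):

```python
def programme_data_url(pid, fmt='xml', branch=None):
    """Build a BBC /programmes data URL for a given programme pid.

    fmt is one of 'xml', 'json' or 'yaml'; branch optionally adds a path
    segment (e.g. 'episodes/2008') before the format suffix.
    """
    url = 'http://www.bbc.co.uk/programmes/' + pid
    if branch:
        url += '/' + branch
    return url + '.' + fmt

print(programme_data_url('b006mvlc', 'json'))
# → http://www.bbc.co.uk/programmes/b006mvlc.json
```

Whether any given URL actually resolves is another matter, of course, as the episodes example shows…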
As well as the BBC Programmes page, we can also find iPlayer links from a search on the iPlayer site: Search for “coast” on iPlayer:
Going back to the twitter feed, I wonder whether there’s any point in having a second twitter account that alerts people as to when a programme is available on iplayer? A second alert could give you a day’s notice that a programme is about to disappear from iPlayer?
Now if the “popular science magazine show” referred to is the one that was mentioned at the BBC/OU science programming brainstorming session I posted about a couple of weeks ago, I’m pretty sure the producer said it wasn’t going to be like Tomorrow’s World… Which I guess means it is – in that it is going to be like Tomorrow’s World in terms of positioning and format, but it isn’t going to be exactly like it in terms of content and delivery… (I have to admit that I got the impression it was going to be more like *** **** for Science… ;-)
Ad Manager can help you sell, schedule, deliver, and measure both directly-sold and network-based inventory.
- Ad network management: Easily manage your third-party ad networks in Ad Manager to automatically maximize your network driven revenue.
- Day and Time Targeting: Don’t want your orders to run on weekends? No problem. With day and time targeting, you can set any new line items you create to run only during specific hours or days, or as little as 15 minutes per week. Use day and time targeting in addition to geography, bandwidth, browser, user language, operating system, domain and custom targeting.
In part, the Ad Manager allows you to use your own ads with Google’s ad serving technology, which can deliver ads according to:
* Browser version
* Browser language
* Day and time
* Geography (Country, region or state, metro, and city)
* Operating system
* User domain
If you can provide custom tagging information (e.g. by adding information from a personal profile into the ad code on the page displayed to the user) then the Ad Manager can also be used to provide custom targeting according to the tags you have available.
So here’s what I’m thinking – can we use the Google Ad Manager service to deliver contextualised content to users? That is, create “ad” areas on a page, and deliver our own “content ads” to it through the Google Ad Manager.
So for example, we could have a contentAd sidebar widget on a Moodle VLE page; we could add a custom tag into the widget relating to a particular course; and we could serve course related “ad” content through the Ad Manager.
By running the content of a page through a content analyser (such as Open Calais, which now offers RESTful calls via HTTP POST), or looking on a site such as delicious to see what the page has been tagged with, we can generate ‘contextual tags’ to further customise the content delivery.
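Absent a Calais API key, a crude word-frequency tagger gives a flavour of the sort of ‘contextual tags’ I mean – this is very much a naive stand-in for a proper content analyser, not what Calais actually does (stopword list and example text are made up):

```python
import re
from collections import Counter

# Minimal stopword list for illustration only
STOPWORDS = {'the', 'and', 'a', 'of', 'to', 'in', 'is', 'for', 'on',
             'that', 'with', 'from'}

def contextual_tags(text, n=3):
    """Return the n most frequent non-stopword terms as candidate tags."""
    words = re.findall(r'[a-z]+', text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]

page = ("Mobile telephony courses from the Open University cover mobile "
        "networks, mobile devices and network protocols.")
print(contextual_tags(page))
# → ['mobile', 'telephony', 'courses']
```

The tags so generated (or, better, the ones from Calais or delicious) would then be passed as custom targeting keys to the ad server.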
So what? So think of small chunks of content as “contentAds”, and use the Google Ad Manager to serve that content in a segmented, context specific way to your users… ;-)
A couple of days ago, Stuart pointed me to Quarkbase, a one stop shop for looking at various web stats, counts and rankings for a particular domain (here’s the open.ac.uk domain on quarkbase, for example; see also: the Silobreaker view of the OU), which reminded me that I hadn’t created a version of the media release related news stories tracker that won me a gift voucher at IWMW2008 ;-)
So here it is: OU Media release effectiveness tracker pipe.
And to make it a little more palatable, here’s a view of the same in a Dipity timeline (which will also have the benefit of aggregating these items over time): OU media release effectiveness tracker timeline.
I also had a mess around trying to see how I could improve the implementation (one way was to add the “sort by date” flag to the Google news AJAX call (News Search Specific Arguments)), but then, of course, I got sidetracked… because it seemed that the Google News source I was using to search for news stories didn’t cover the THES (Times Higher Education Supplement).
But that was a bit hit and miss, and didn’t necessarily return the most recent results… so instead I created a pipe to search over the last month of the THES for stories that mention “open university” and then scrape the THES search results page: OU THES Scraper.
If you want to see how it works, clone the pipe and edit it…
One reusable component of the pipe is this fragment that will make sure the date is in the correct format for an RSS feed (if it isn’t in the right format, Dipity may well ignore it…):
Here’s the full expression (actually, a PHP strftime expression) for outputting the date in the required RFC 822 date-time format: %a, %d %b %Y %H:%M:%S %z
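The same RFC 822 formatting is easy enough outside the pipe too; in Python, for example, the equivalent strftime call looks like this (note the datetime needs to be timezone-aware so that %z has something to print):

```python
from datetime import datetime, timezone

def rfc822_date(dt):
    """Format a timezone-aware datetime as an RFC 822 date-time string."""
    return dt.strftime('%a, %d %b %Y %H:%M:%S %z')

d = datetime(2008, 9, 1, 12, 30, 0, tzinfo=timezone.utc)
print(rfc822_date(d))
# → Mon, 01 Sep 2008 12:30:00 +0000
```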
To view the OU in the THES tracker over time, I’ve fed it into another Dipity timeline: OU in the THES.
(I’ve also added the THES stories to the OUseful “OU in the news” tab at http://ouseful.open.ac.uk/.)
Going back to the media release effectiveness tracker, even if I was to add the THES as another news source, the coverage of that service would still be rather sparse. For a more comprehensive version, it would be better to plug in to something like the LexisNexis API and search their full range of indexed news from newspapers, trade magazines and so on… That said, I’m not sure if we have a license to use that API, and/or a key for it? But then again, that’s not really my job… ;-)
Having been tipped off about a Netvibes page that the Library folks are pulling together about how to discover video resources (Finding and reusing video – 21st century librarianship in action, methinks? ;-) I thought I’d have a look at pulling together an OU iTunes OPML bundle that could be used to provide access to OU iTunes content in a Grazr widget (or my old RadiOBU OpenU ‘broadcast’ widget ;-) and maybe also act as a nice little container for viewing/listening to iTunes content on an iPhone/iPod Touch.
To find the RSS feed for a particular content area in iTunesU, navigate to the appropriate page (one with lists of actual downloadable content showing in the bottom panel), make sure you have the right tab selected, then right click on the “Subscribe” button and copy the feed/subscription URL (or is there an easier way? I’m not much of an iTunes user?):
You’ll notice in the above case that as well as the iPod video (mp4v format?), there is a straight video option (.mov???) and a transcript. I haven’t started to think about how to make hackable use of the transcripts yet, but in my dreams I’d imagine something like these Visual Interfaces for Audio/Visual Transcripts! ;-) In addition, some of the OU iTunesU content areas offer straight audio content.
Because finding the feeds is quite a chore (at least in the way I’ve described it above), I’ve put together an OU on iTunesU OPML file, that bundles together all the separate RSS from the OU on iTunesU area (to view this file in an OPML widget, try here: OU iTunesU content in a Grazr widget).
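For anyone wanting to roll their own bundle, the OPML itself is simple enough to generate once you’ve collected the feed URLs; a minimal sketch (the feed title and URL here are made-up placeholders):

```python
import xml.etree.ElementTree as ET

def make_opml(title, feeds):
    """Build an OPML document from a list of (title, feed_url) pairs."""
    opml = ET.Element('opml', version='2.0')
    head = ET.SubElement(opml, 'head')
    ET.SubElement(head, 'title').text = title
    body = ET.SubElement(opml, 'body')
    for feed_title, url in feeds:
        # type='rss' is what Grazr-style OPML readers expect for feed nodes
        ET.SubElement(body, 'outline', type='rss', text=feed_title, xmlUrl=url)
    return ET.tostring(opml, encoding='unicode')

print(make_opml('OU on iTunesU', [
    ('Example audio feed', 'http://example.com/feed1.rss'),
]))
```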
The Grazr widget lets you browse through all the feeds, and if you click on an actual content item link, it should launch a player (most likely Quicktime). Although the Grazr widget has a nice embedded player for MP3 files, it doesn’t seem to offer an embedded player for iTunes content (or maybe I’m missing something?)
You can listen to the audio tracks well enough in an iPod Touch (so the same is presumably true for an iPhone?) using the Grazr iphone widget – but for some reason I can’t get the iPod videos to play? I’m wondering if this might be a mime-type issue? or maybe there’s some other reason?
(By the by, it looks like the content is being served from an Amazon S3 server… so has the OU bought into using S3 I wonder? :-)
For completeness, I also started to produce a handcrafted OPML bundle of OU Learn Youtube playlists, but then discovered I’d put together a little script ages ago that will create one of these automatically, and route each playlist feed through a feed augmentation pipe that adds a link to each video as a video enclosure:
Why would you want to do this? Because if there’s a video payload as an enclosure, Grazr will provide an embedded player for you… as you can see in this screenshot of Portable OUlearn Youtube playlists widget (click through the image to play with the actual widget):
These videos will play in an iPod Touch, although the interaction is a bit clunky; it’s actually slightly cleaner using the handcrafted OPML: OUlearn youtube widget for iphone.
PS it’s also worth remembering that Grazr can embed Slideshare presentations, though I’m pretty sure these won’t work on the iPhone…
Sitting in a course team meeting of 6 for over 3 hours today (err, yesterday…), discussing second drafts of print material for a course unit that will be delivered for the first time in March 2010 (third drafts are due mid-December this year), it struck me that we were so missing the point as the discussion turned to how best to accommodate a reference from print material to a possible short video asset, in such a way that a student reading the written print material might actually refer to the video in a timely way…
Maybe it’s because the topic was mobile telephony, but it struck me that the obvious way to get students reading print material to watch a video at the appropriate point in the text would be to use something like this:
By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…
I have to admit the phrase “blended learning” has to date been largely meaningless to me… But this feels like the sort of thing I’d expect it to be… For example:
Jane is sitting at the table, reading a study block on whatever, her mobile phone on the table at her side. As she works through the material, she annotates the text, underlining key words and phrases, making additional notes in the margin. At a certain point in the text, she comes across a prompt to watch a short video to illustrate a point made in the previous paragraph. She had hoped not to have to use her PC in this study session – it’s such a hassle going upstairs to the study to turn it on… Maybe she’ll watch the video next time she logs in to the VLE (if she remembers…). Of course, life’s not like that now. She picks up her phone, takes a picture of the QR code in the margin, and places her phone back on the table, next to the study guide. The video starts, and she takes more notes as it plays…
Thinking about it, here’s another possibility:
Jim is in lean back mode, laying on the sofa, feet up, skimming through this week’s study guide. The course DVD is in the player. As he reads through the first section, there’s a prompt to watch an explanatory video clip. He could snap the QR code in the margin and watch the video on his phone, but as the course DVD is all cued up, it’s easy enough to select the block menu, and click on the appropriate clip’s menu item. Of course, it’d be just as easy to use the Wii connected to the TV to browse to the course’s Youtube page and watch the clips that way, but hey, the DVD video quality is much better…
This is quite an old OU delivery model – for years we expected students to record TV programmes broadcast in the early hours of the morning, or we’d send them video cassettes. But as video delivery has got easier, and the short form (2-3 minute video clip) has gained more currency, I get the feeling we’ve been moving away from the use of video media because it’s so expensive to produce and so inconvenient to watch…
“Thoughts”, because I don’t have time to do this right now, (although it shouldn’t take that long to pull together? Maybe half a day, at most?) and also to give a glimpse into the sort of thinking I’d do walking the dog, in between having an initial idea about something to hack together, and actually doing it…
So here’s the premise: what sort of network exists within the OU on Twitter?
Stuff I’d need – a list of all the usernames of people active in the OU on Twitter; Liam is aggregating some on PlanetOU, I think, and I seem to remember I’ve linked to an IET aggregation before.
Stuff to do (“drafting the algorithm”):
- for each username, pull down the list of the people they follow (and the people who follow them?);
- clean each list so it only contains the names of OU folks (we’re gonna start with a first order knowledge flow network, only looking at links within the OU).
- for each person, p_i, with followers F_ij, create pairs username(p_i)->username(F_ij); or maybe build a matrix: M(i,j)=1 if p_j follows p_i??
- imagine two sorts of visualisation: one, an undirected network graph (using Graphviz) that only shows links where following is reciprocated (A follows B AND B follows A); secondly, a directed graph visualisation, where the link simply represents “follows”.
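In (pseudo-)Python, the whole thing might look something like this – with the Twitter API calls stubbed out as a hardwired `follows` dict of made-up usernames, and the Graphviz step reduced to emitting DOT source:

```python
# Toy data standing in for the cleaned Twitter API results:
# follows[A] is the set of OU people that A follows.
follows = {
    'alice': {'bob', 'carol'},
    'bob': {'alice'},
    'carol': {'alice', 'bob'},
}

def directed_edges(follows):
    """Edges (A, B) meaning 'A is followed by B', i.e. B follows A."""
    return {(a, b) for b, followed in follows.items() for a in followed}

def reciprocal_edges(follows):
    """Undirected edges where following is mutual (A follows B AND B follows A)."""
    return {tuple(sorted((a, b)))
            for a in follows for b in follows.get(a, set())
            if a in follows.get(b, set())}

def to_dot(edges, directed=True):
    """Emit Graphviz DOT source for the network."""
    kind, arrow = ('digraph', '->') if directed else ('graph', '--')
    lines = ['%s ou_twitter {' % kind]
    lines += ['  "%s" %s "%s";' % (a, arrow, b) for a, b in sorted(edges)]
    lines.append('}')
    return '\n'.join(lines)

print(to_dot(directed_edges(follows)))
```

Piping the DOT output through `dot` or `neato` then gives the two visualisations described above.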
Why bother? Because we want to look at how people are connected, and see if there are any natural clusters (this might be most evident in the reciprocal link case?) cf. the author clusters evident in looking at ORO co-authorship stuff. Does the network diagram give an inkling as to how knowledge might flow round the OU? Are there distinct clusters/small worlds connected to other distinct clusters by one or two individuals (I’m guessing people like Martin who follows everyone who follows him?). Are there “supernodes” in the network that can be used to get a message out to different groups?
Re: the matrix view: I need to read up on matrices… maybe there’s something we can do to identify clusters in there?
Now if only I had a few hours spare…
Readers of any prominent OU bloggers will probably have noticed that we appear to have something of a Twitter culture developing within the organisation (e.g. “Twitter, microblogging and living in the stream“). After posting a few Thoughts on Visualising the OU Twitter Network…, I couldn’t resist the urge to have a go at drawing the OpenU twittergraph at the end of last week (although I had hoped someone else on the lazyweb might take up the challenge…) and posted a few teaser images (using broken code – oops) via twitter.
Anyway, I tidied up the code a little, and managed to produce the following images, which I have to say are spectacularly uninteresting. The membership of the ‘OU twitter network’ was identified using a combination of searches on Twitter for “open.ac.uk” and “Open University”, coupled with personal knowledge. Which is to say, the membership list may well be incomplete.
The images are based on a graph that plots who follows whom. If B follows A, then B is a follower and A is followed. In the network graphs, an arrow goes from A to B if A is followed by B (so in the network graph, the arrows point to people who follow you). The graph was constructed by making calls to the Twitter API for the names of people an individual followed, for each member of the OU Twitter network. An edge appears in the graph if a person in the OU twitter network follows another person in the OU Twitter network. (One thing I haven’t looked at is to see whether there are individuals followed by a large number of OpenU twitterers who aren’t in the OpenU twitter network… which might be interesting…)
Wordle view showing who in the network has the most followers (the word size is proportional to the number of followers, so the bigger your name, the more people there are in the OU network that follow you). As Stuart predicted, this largely looks like a function of active time spent on Twitter.
We can compare this with a Many Eyes tag cloud showing how widely people follow other members of the OU network (the word size is proportional to the number of people in the OU network that the named individual follows – so the bigger your name, the more people in the OU network you follow).
Note that it may be interesting to scale this result according to the total number of people a user is following:
@A’s OU network following density= (number of people @A follows in OU Twitter network)/(total number of people @A follows)
Similarly, maybe we could also look at:
@A’s OU network follower density= (number of people in OU Twitter network following @A)/(total number of people following @A)
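Both densities are the same trivial ratio once the counts are in hand; as a sketch (the example counts are made up):

```python
def ou_density(in_network_count, total_count):
    """Fraction of a user's follows (or followers) inside the OU network."""
    return in_network_count / total_count if total_count else 0.0

# e.g. @A follows 200 people in all, 50 of them in the OU network:
print(ou_density(50, 200))
# → 0.25
```

A high density suggests someone whose Twitter use is largely work-centric; a low one, someone for whom the OU is just a corner of a wider network.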
(In the tag clouds, the number of people following is less than the number of people followed; I think this is in part because I couldn’t pull down the names of who a person was following for people who have protected their tweets?)
Here’s another view of people who actively follow other members of the OU twitter network:
And who’s being followed?
These treemaps uncover another layer of information if we add a search…
So for example, who is Niall following/not following?
And who’s following Niall?
I’m not sure how useful a view of the OU Twittergraph is itself, though?
Maybe more interesting is to look at the connectivity between people who have sent each other an @message. So for example, here’s how Niall has been chatting to people in the OU twitter network (a link goes from A to B if @A sends a tweet to @B):
We can also compare the ‘active connectivity’ of several people in the OU Twitter network. For example, who is Martin talking to, (and who’s talking to Martin) compared with Niall’s conversations?
As to why I’m picking on Niall…? Well, apart from making the point that by engaging in ‘public’ social networks, other people can look at what you’re doing, it’s partly because thinking about this post on ‘Twitter impact factors’ kept me up all night: Twitter – how interconnected are you?.
The above is all “very interesting”, of course, but I’m not sure how valuable it is, e.g. in helping us understand how knowledge might flow around the OU Twitter network? Maybe I need to go away and start looking at some of the social network analysis literature, as well as some of the other Twitter network analysis tools, such as Twinfluence (Thanks, @Eingang:-)
PS Non S. – Many Eyes may give you a way of embedding a Wordle tagcloud…?
As it happens, I have been known to look at my blog stats from time to time (!), and today I noticed something odd in the referrer stats:
A referral from Amazon. WTF?
The link goes to a book detail page for a book about Wikipedia:
Scrolling down a bit, I found this:
A blog post, syndicated into the product page from one of the book’s authors, that linked to my post on Data Scraping Wikipedia with Google Spreadsheets.
My immediate thought – is there any way we can blog info about courses that use set textbooks back into the related Amazon product page?