Visualising the OU Twitter Network

Readers of any prominent OU bloggers will probably have noticed that we appear to have something of a Twitter culture developing within the organisation (e.g. “Twitter, microblogging and living in the stream“). After posting a few Thoughts on Visualising the OU Twitter Network…, I couldn’t resist the urge to have a go at drawing the OpenU twittergraph at the end of last week (although I had hoped someone else on the lazyweb might take up the challenge…) and posted a few teaser images (using broken code – oops) via twitter.

Anyway, I tidied up the code a little, and managed to produce the following images, which I have to say are spectacularly uninteresting. The membership of the ‘OU twitter network’ was identified using a combination of searches on Twitter for “” and “Open University”, coupled with personal knowledge. Which is to say, the membership list may well be incomplete.

The images are based on a graph that plots who follows whom. If B follows A, then B is a follower and A is followed. In the network graphs, an arrow goes from A to B if A is followed by B (so in the network graph, the arrows point to the people who follow you). The graph was constructed by making calls to the Twitter API for the names of people an individual followed, for each member of the OU Twitter network. An edge appears in the graph if a person in the OU twitter network follows another person in the OU Twitter network. (One thing I haven’t looked at is to see whether there are individuals followed by a large number of OpenU twitterers who aren’t in the OpenU twitter network… which might be interesting…)
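The edge-building step can be sketched in a few lines of Python. This is a minimal reconstruction, not the code actually used for the images: the usernames are made up, and a plain dict stands in for the Twitter API calls that would fetch each member’s “following” list.

```python
def build_follow_graph(following, members):
    """Return directed edges (A, B) where B follows A.

    `following` maps each username to the accounts that user follows;
    the arrow points from the followed person to their follower, as in
    the network graphs described above.
    """
    members = set(members)
    edges = set()
    for follower, followed_accounts in following.items():
        if follower not in members:
            continue
        for followed in followed_accounts:
            if followed in members:  # keep edges inside the OU network only
                edges.add((followed, follower))
    return edges

# Hypothetical example: @b and @c follow @a, @c follows @b;
# @x is outside the OU network, so that edge is dropped.
following = {"a": ["x"], "b": ["a"], "c": ["a", "b"]}
edges = build_follow_graph(following, ["a", "b", "c"])
# edges == {("a", "b"), ("a", "c"), ("b", "c")}
```

The resulting edge list can be written straight out as Graphviz DOT (`a -> b;` per edge) to draw the directed graph.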

Wordle view showing who in the network has the most followers (the word size is proportional to the number of followers, so the bigger your name, the more people there are in the OU network that follow you). As Stuart predicted, this largely looks like a function of active time spent on Twitter.

We can compare this with a Many Eyes tag cloud showing how widely people follow other members of the OU network (the word size is proportional to the number of people in the OU network that the named individual follows – so the bigger your name, the more people in the OU network you follow).

Note that it may be interesting to scale this result according to the total number of people a user is following:

@A’s OU network following density= (number of people @A follows in OU Twitter network)/(total number of people @A follows)

Similarly, maybe we could also look at:

@A’s OU network follower density= (number of people in OU Twitter network following @A)/(total number of people following @A)
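The two density measures are simple ratios; a quick sketch (with made-up numbers for the example, and guarding against the divide-by-zero case of an account that follows no one or has no followers):

```python
def ou_following_density(follows_in_ou, follows_total):
    """@A's OU network following density: fraction of @A's follows
    that are inside the OU Twitter network."""
    return follows_in_ou / follows_total if follows_total else 0.0

def ou_follower_density(ou_followers, followers_total):
    """@A's OU network follower density: fraction of @A's followers
    that are inside the OU Twitter network."""
    return ou_followers / followers_total if followers_total else 0.0

# e.g. a hypothetical @A following 250 accounts, 50 of them in the OU network:
d = ou_following_density(50, 250)  # 0.2
```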

(In the tag clouds, the number of people following is less than the number of people followed; I think this is in part because I couldn’t pull down the names of who a person was following for people who have protected their tweets?)

Here’s another view of people who actively follow other members of the OU twitter network:

And who’s being followed?

These treemaps uncover another layer of information if we add a search…

So for example, who is Niall following/not following?

And who’s following Niall?

I’m not sure how useful a view of the OU Twittergraph is in itself, though?

Maybe more interesting is to look at the connectivity between people who have sent each other an @message. So for example, here’s how Niall has been chatting to people in the OU twitter network (a link goes from A to B if @A sends a tweet to @B):

ou personal active twittermap

We can also compare the ‘active connectivity’ of several people in the OU Twitter network. For example, who is Martin talking to, (and who’s talking to Martin) compared with Niall’s conversations?


As to why I’m picking on Niall…? Well, apart from making the point that by engaging in ‘public’ social networks, other people can look at what you’re doing, it’s partly because thinking about this post on ‘Twitter impact factors’ kept me up all night: Twitter – how interconnected are you?.

The above is all “very interesting”, of course, but I’m not sure how valuable it is, e.g. in helping us understand how knowledge might flow around the OU Twitter network? Maybe I need to go away and start looking at some of the social network analysis literature, as well as some of the other Twitter network analysis tools, such as Twinfluence (Thanks, @Eingang:-)

PS Non S. – Many Eyes may give you a way of embedding a Wordle tagcloud…?

Google Personal Custom Search Engines?

A couple of days ago, I gave a talk about possible future library services, and in passing mentioned the way that my Google search results are increasingly personalised. Martin picked up on this in a conversation over coffee, and then in a blog post (“Your search is valuable to us“):

This made me think that your search history is actually valuable, because the results you get back are a product of the hours you have invested in previous searches and the subject expertise in utilising search terms. So, if you are an expert in let’s say, Alaskan oil fields, and have been researching this area for years, then the Google results you get back for a search on possible new oil fields will be much more valuable than the results anyone else would get.

[I]f you can assemble and utilise the expert search of a network of people, then you can create a socially powered search which is very relevant for learners. Want to know about really niche debates in evolution? We’ve utilised Richard Dawkins, Steve Jones and Matt Ridley’s search history to give you the best results. Or if you prefer, the search is performed as the aggregation of a specialist community.

There are more than a few patents in this area of course (you can get a feel for what the search engines are (thinking about) doing in this area by having a read through these SEO by the SEA posts on “search+history+personal”), but I was wondering:

how easy would it be to expose my personal search history reranking filter (or whatever it is that Google uses) as a custom search engine (under my privacy controls, of course)?

As Martin says (and as we discussed over coffee), you’d want to disable further personalisation of your CSE by users who aren’t you (to get round the Amazon equivalent of Barbie doll and My Little Pony “items for you” recommendations I seem to get after every Christmas!), but exposing the personal search engine would potentially be a way of exposing a valuable commodity.

In the context of the Library, rather than going to the Library website and looking up the books by a particular author, or going to ORO and looking up a particular author’s papers, you might pull their personalised search engine off the shelf and use it for a bit of topic-related Googling…

In a comment to Martin’s post, Laura suggests “Aren’t the search results that the expert actually follows up and bookmarks more powerful? Every academic should be publishing the RSS feeds for their social bookmarks, classified by key terms. The user can choose to filter these according to the social rating of the URL and aggregate results from a group of experts according to their reputation in their field and their online expertise in finding valuable sources.”

I guess this amounts to a social variant of the various “deliSearch” search engines out there, that let you run a search over a set of bookmarked pages or domains (see Search Hubs and Custom Search at ILI2007, for example, or these random OUseful posts on delicious powered search etc)?

At times like these, I sometimes wish I’d put a little more effort into searchfeedr (example: searching some of my delicious bookmarks tagged “search” for items on “personal search”). I stopped working on searchfeedr before the Google CSE really got going, so it’d be possible to build a far more powerful version of it now…

Anyway, that’s maybe something else to put on the “proof-of-concept to do” list…

PS Note to self – also look at “How Do I?” instructional video search engine to see how easy it would be to embed videos in the results…

OpenLearn ebooks, for free, courtesy of OpenLearn RSS and Feedbooks…

A couple of weeks ago, I popped the Stanza ebook reader application on my iPod Touch (it’s been getting some good reviews, too: iPhone Steals Lead Over Kindle). I didn’t add any ebooks to it, but it did come with a free sample book, so when I was waiting for a boat on my way home last week, I had a little play and came away convinced that I would actually be able to read a long text from it.

So of course, the next step was to have a go at converting OpenLearn courses to an ebook format and see how well they turned out…

There are a few ebook converters out there, such as the Bookglutton API that will “accept HTML documents and generates EPUB files. Post a valid HTML file or zipped HTML archive to this url to get an EPUB version as the response” for example, so it is possible to upload a download(!) of an OpenLearn unit ‘print version’ (a single HTML page version of an OpenLearn unit) or upload the zipped HTML version of a unit (although in that case you have to meddle with the page names so they are used in the correct order when generating the ebook).

The Stanza desktop app, free as a beta download at the moment but set to be (affordable) payware later this year, can also handle epub generation (in fact, it will output an ebook in all manner of formats).

The easiest way I’ve found to generate ebooks though is, of course, feed powered:-) Sign up for an account with Feedbooks, click on the news icon (err…?!) and then add a feed (err…?!)

(Okay, so the interface is a little hard to navigate at times… No big obvious way to “Add feed here”, for example, that uses a version of the feed icon as huge visual cue, but maybe that’ll come…)

Once the feed is added, it synchs and you have your ebook. So for example, here are a couple of Feedbooks powered by OpenLearn unit RSS feeds:

RSS Feedbook ebook for the OpenLearn unit “Parliament and the law”
RSS Feedbook ebook for the OpenLearn unit “Introducing consciousness”

Getting the ebook in Stanza on the iPod Touch/iPhone is also a little clunky at the moment, although once it’s there it works really well. Whilst there is a route directly to Feedbooks from the app (as well as feed powered ebooks, Feedbooks also acts as a repository for a reasonable selection of free ebooks that can be pulled into the iPhone Stanza app quite easily), the only way I could find to view my RSS powered feedbooks was to enter the URL; and on the iPod, the feedbook URLs were hard to find: logging in to my account on the Feedbooks site and clicking the ebook link just gave an error as the iPod tried to open a document format it couldn’t handle – and Safari wouldn’t show me the URL in the address bar (it redirected somewhere).

Anyway, user interface issues aside, the route to ebookdom for the OpenLearn materials is essentially a straightforward one – grab a unit content RSS feed, paste it into Feedbooks to generate an ePub book, and then view it in Stanza. The Feedbooks folks are working on extending their API too, so hopefully better integration within Stanza should be coming along shortly.

Once the feedbook has been synched to the Stanza iPhone app, it stays there – no further internet connection required. One neat feature of the app is that each book in your collection is bookmarked at the place you left off reading it, so you could have several OpenLearn units on the go at the same time, accessing them all offline, and being able to click back to exactly the point where you left it.

At the moment the ebooks that Feedbooks generates don’t contain images, so it might not be appropriate to try to read every OpenLearn unit as a Feedbooks ebook. There are also issues where units refer out to additional resources – external readings in the form of linked PDFs, or audio and video assets – but for simple text-dominated units, the process works really well.

(I did wonder if Feedbooks replaced images from the OpenLearn units with their alt text, or transclusion of linked-to longdesc descriptions, but apparently not. No matter though, as it seems that many OpenLearn images aren’t annotated with description text…)

If you have an iPhone or iPod Touch, and do nothing else this week, get Stanza installed and have a play with Feedbooks…

Continuous Group Exercise Feedback via Twitter?

Yesterday I took part in a session with Martin Weller and Grainne Conole pitching SocialLearn to the Library (Martin), exploring notions of a pedagogy fit for online social learning (Grainne) and idly wondering about how the Library might fit in all this, especially if it became ‘invisible’ (my bit: The Invisible Library):

As ever, the slides are pretty meaningless without me rambling over them… but to give a flavour, I first tried to set up three ideas of ‘invisibleness’:

– invisibility in everyday life (random coffee, compared to Starbucks: if the Library services were coffee, what coffee would they be, and what relationship would, err, drinkers have with them?);

– positive action, done invisibly (the elves and the shoemaker);

– and invisible theatre (actors ‘creating a scene’ as if it were real (i.e. the audience isn’t aware it’s a performance), engaging the audience, and leaving the audience to carry on participating (for real) in the scenario that was set up).

And then I rambled a bit a some webby ways that ‘library services’, or ‘information services’ might be delivered invisibly now and in the future…

After the presentations, the Library folks went into groups for an hour or so, then reported back to the whole group in a final plenary session. This sort of exercise is pretty common, I think, but it suddenly struck me that it could be far more interesting if the ‘reporter’ on each table was actually twittering during the course of the group discussion? This would serve to act as a record for each group, might allow ‘semi-permeable’ edges to group discussions (although maybe you don’t want groups to be ‘sharing’ ideas), and would let the facilitator (my experience is that there’s usually a facilitator responsible whenever there’s a small group exercise happening!) eavesdrop on every table at the same time, and maybe use that as a prompt for wandering over to any particular group to get them back on track, or encourage them to pursue a particular issue in a little more detail?

Thoughts on Visualising the OU Twitter Network…

“Thoughts”, because I don’t have time to do this right now, (although it shouldn’t take that long to pull together? Maybe half a day, at most?) and also to give a glimpse into the sort of thinking I’d do walking the dog, in between having an initial idea about something to hack together, and actually doing it…

So here’s the premise: what sort of network exists within the OU on Twitter?

Stuff I’d need – a list of all the usernames of people active in the OU on Twitter; Liam is aggregating some on PlanetOU, I think? And I seem to remember I’ve linked to an IET aggregation before.

Stuff to do (“drafting the algorithm”):

– for each username, pull down the list of the people they follow (and the people who follow them?);
– clean each list so it only contains the names of OU folks (we’re gonna start with a first order knowledge flow network, only looking at links within the OU).
– for each person, p_i, with followers F_ij, create pairs username(p_i)->username(F_ij); or maybe build a matrix: M(i,j)=1 if p_j follows p_i??
– imagine two sorts of visualisation: one, an undirected network graph (using Graphviz) that only shows links where following is reciprocated (A follows B AND B follows A); secondly, a directed graph visualisation, where the link simply represents “follows”.
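The matrix step and the reciprocal-link case can be sketched as follows – a toy Python version with made-up usernames, just to pin down the M(i,j) bookkeeping (the real thing would populate `follows` from the Twitter API):

```python
def follow_matrix(names, follows):
    """M[i][j] = 1 if p_j follows p_i, per the matrix sketch above."""
    idx = {name: k for k, name in enumerate(names)}
    M = [[0] * len(names) for _ in names]
    for j, follower in enumerate(names):
        for followed in follows.get(follower, ()):
            if followed in idx:  # first order: OU-internal links only
                M[idx[followed]][j] = 1
    return M

def reciprocal_pairs(names, M):
    """Undirected edges for the first visualisation: A follows B AND
    B follows A, i.e. M[i][j] == M[j][i] == 1."""
    n = len(names)
    return [(names[i], names[j])
            for i in range(n) for j in range(i + 1, n)
            if M[i][j] and M[j][i]]

names = ["a", "b", "c"]
follows = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
M = follow_matrix(names, follows)
# reciprocal_pairs(names, M) == [("a", "b"), ("b", "c")]
```

The directed “follows” graph for the second visualisation falls straight out of the same matrix: emit a Graphviz edge for every nonzero M(i,j).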

Why bother? Because we want to look at how people are connected, and see if there are any natural clusters (this might be most evident in the reciprocal link case?) cf. the author clusters evident in looking at ORO co-authorship stuff. Does the network diagram give an inkling as to how knowledge might flow round the OU? Are there distinct clusters/small worlds connected to other distinct clusters by one or two individuals (I’m guessing people like Martin who follows everyone who follows him?). Are there “supernodes” in the network that can be used to get a message out to different groups?

Re: the matrix view: I need to read up on matrices… maybe there’s something we can do to identify clusters in there?

Now if only I had a few hours spare…

Video Print

Sitting in a course team meeting of 6 for over 3 hours today (err, yesterday…) discussing second drafts of print material for a course unit that will be delivered for the first time in March 2010 (third drafts are due mid-December this year), it struck me that we were so missing the point as the discussion turned to how best to accommodate a reference from print material to a possible short video asset in such a way that a student reading the written print material might actually refer to the video in a timely way…

Maybe it’s because the topic was mobile telephony, but it struck me that the obvious way to get students reading print material to watch a video at the appropriate point in the text would be to use something like this:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…
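Generating such a margin QR code is a one-liner if you lean on a chart service – for instance the Google Charts QR endpoint (current at the time of writing); the video URL below is purely illustrative:

```python
from urllib.parse import urlencode

def qr_image_url(target_url, size=150):
    """Return the URL of a QR code image encoding `target_url`,
    via the Google Charts QR endpoint - drop the resulting image
    into the print margin at the relevant point in the text."""
    query = urlencode({"cht": "qr",
                       "chs": f"{size}x{size}",
                       "chl": target_url})
    return "https://chart.googleapis.com/chart?" + query

# Hypothetical video asset URL for a margin prompt:
margin_qr = qr_image_url("http://example.com/unit3/video1.mov")
```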

I have to admit the phrase “blended learning” has to date been largely meaningless to me… But this feels like the sort of thing I’d expect it to be… For example:

Jane is sitting at the table, reading a study block on whatever, her mobile phone on the table at her side. As she works through the material, she annotates the text, underlining key words and phrases, making additional notes in the margin. At a certain point in the text, she comes across a prompt to watch a short video to illustrate a point made in the previous paragraph. She had hoped not to have to use her PC in this study session – it’s such a hassle going upstairs to the study to turn it on… Maybe she’ll watch the video next time she logs in to the VLE (if she remembers…). Of course, life’s not like that now. She picks up her phone, takes a picture of the QR code in the margin, and places her phone back on the table, next to the study guide. The video starts, and she takes more notes as it plays…

Thinking about it, here’s another possibility:

Jim is in lean back mode, laying on the sofa, feet up, skimming through this week’s study guide. The course DVD is in the player. As he reads through the first section, there’s a prompt to watch an explanatory video clip. He could snap the QR code in the margin and watch the video on his phone, but as the course DVD is all cued up, it’s easy enough to select the block menu, and click on the appropriate clip’s menu item. Of course, it’d be just as easy to use the Wii connected to the TV to browse to the course’s Youtube page and watch the clips that way, but hey, the DVD video quality is much better…

This is quite an old OU delivery model – for years we expected students to record TV programmes broadcast in the early hours of the morning, or we’d send them video cassettes. But as video delivery has got easier, and the short form (2-3 minute video clip) has gained more currency, I get the feeling we’ve been moving away from the use of video media because it’s so expensive to produce and so inconvenient to watch…

iTunes in Your Pocket… Almost…

Having been tipped off about about a Netvibes page that the Library folks are pulling together about how to discover video resources (Finding and reusing video – 21st century librarianship in action, methinks? ;-) I thought I’d have a look at pulling together an OU iTunes OPML bundle that could be used to provide access to OU iTunes content in a Grazr widget (or my old RadiOBU OpenU ‘broadcast’ widget ;-) and maybe also act as a nice little container for viewing/listening to iTunes content on an iPhone/iPod Touch.

To find the RSS feed for a particular content area in iTunesU, navigate to the appropriate page (one with lists of actual downloadable content showing in the bottom panel), make sure you have the right tab selected, then right click on the “Subscribe” button and copy the feed/subscription URL (or is there an easier way? I’m not much of an iTunes user?):

You’ll notice in the above case that as well as the iPod video (mp4v format?), there is a straight video option (.mov???) and a transcript. I haven’t started to think about how to make hackable use of the transcripts yet, but in my dreams I’d imagine something like these Visual Interfaces for Audio/Visual Transcripts! ;-) In addition, some of the OU iTunesU content areas offer straight audio content.

Because finding the feeds is quite a chore (at least in the way I’ve described it above), I’ve put together an OU on iTunesU OPML file, that bundles together all the separate RSS feeds from the OU on iTunesU area (to view this file in an OPML widget, try here: OU iTunesU content in a Grazr widget).
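Bundling a set of feeds into an OPML file is mechanical enough to script. A minimal sketch (not the script actually used – the feed labels and URLs here are invented):

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(title, feeds):
    """Bundle (label, rss_url) pairs into a single OPML document
    suitable for loading into an OPML widget such as Grazr."""
    opml = ET.Element("opml", version="1.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for label, url in feeds:
        ET.SubElement(body, "outline", text=label, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

# Hypothetical feed URLs standing in for the real iTunesU subscription URLs:
opml = feeds_to_opml("OU on iTunesU", [
    ("Some course audio", "http://example.com/feed1.xml"),
    ("Some course video", "http://example.com/feed2.xml"),
])
```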

The Grazr widget lets you browse through all the feeds, and if you click on an actual content item link, it should launch a player (most likely Quicktime). Although the Grazr widget has a nice embedded player for MP3 files, it doesn’t seem to offer an embedded player for iTunes content (or maybe I’m missing something?)

You can listen to the audio tracks well enough in an iPod Touch (so the same is presumably true for an iPhone?) using the Grazr iphone widget – but for some reason I can’t get the iPod videos to play? I’m wondering if this might be a mime-type issue? or maybe there’s some other reason?

(By the by, it looks like the content is being served from an Amazon S3 server… so has the OU bought into using S3 I wonder? :-)

For completeness, I also started to produce a handcrafted OPML bundle of OU Learn Youtube playlists, but then discovered I’d put together a little script ages ago that will create one of these automatically, and route each playlist feed through a feed augmentation pipe that adds a link to each video as a video enclosure:
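The augmentation step amounts to stamping an enclosure element onto each feed item. Here’s a Python stand-in for that pipe (the original was a hosted feed-processing pipe, and the item and video URLs below are made up):

```python
import xml.etree.ElementTree as ET

def add_video_enclosure(item_xml, video_url, mime="video/mp4"):
    """Add an <enclosure> to an RSS <item> so that widgets like Grazr
    will offer an embedded player for the video payload."""
    item = ET.fromstring(item_xml)
    ET.SubElement(item, "enclosure", url=video_url, type=mime, length="0")
    return ET.tostring(item, encoding="unicode")

item = "<item><title>Clip 1</title><link>http://example.com/v1</link></item>"
augmented = add_video_enclosure(item, "http://example.com/v1.mp4")
```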

Why would you want to do this? Because if there’s a video payload as an enclosure, Grazr will provide an embedded player for you… as you can see in this screenshot of Portable OUlearn Youtube playlists widget (click through the image to play with the actual widget):

These videos will play in an iPod Touch, although the interaction is a bit clunky; it’s actually slightly cleaner using the handcrafted OPML: OUlearn youtube widget for iphone.

PS it’s also worth remembering that Grazr can embed Slideshare presentations, though I’m pretty sure these won’t work on the iPhone…