Thoughts on Visualising the OU Twitter Network…

“Thoughts”, because I don’t have time to do this right now, (although it shouldn’t take that long to pull together? Maybe half a day, at most?) and also to give a glimpse into the sort of thinking I’d do walking the dog, in between having an initial idea about something to hack together, and actually doing it…

So here’s the premise: what sort of network exists within the OU on Twitter?

Stuff I’d need – a list of all the usernames of people active in the OU on Twitter; Liam is aggregating some on PlanetOU, I think?, and I seem to remember I’ve linked to an IET aggregation before.

Stuff to do (“drafting the algorithm”):

– for each username, pull down the list of the people they follow (and the people who follow them?);
– clean each list so it only contains the names of OU folks (we’re gonna start with a first order knowledge flow network, only looking at links within the OU).
– for each person, p_i, with followers F_ij, create pairs username(p_i)->username(F_ij); or maybe build a matrix: M(i,j)=1 if p_j follows p_i??
– imagine two sorts of visualisation: one, an undirected network graph (using Graphviz) that only shows links where following is reciprocated (A follows B AND B follows A); secondly, a directed graph visualisation, where the link simply represents “follows”.
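The steps above could be sketched in a few lines of Python – the usernames and the `follows` data here are entirely made up for illustration, and a real version would pull the lists from the Twitter API:

```python
# Sketch of the "drafting the algorithm" steps above, assuming we already
# have a dict mapping each OU username to the set of people they follow.
# All usernames and follow relationships here are invented.

follows = {
    "ajh":  {"mw", "tony"},
    "mw":   {"ajh", "tony", "liam"},
    "tony": {"mw"},
    "liam": {"mw"},
}

ou_members = set(follows)

# Keep only links within the OU network (first order knowledge flow network)
edges = [(a, b) for a, f in follows.items() for b in f if b in ou_members]

# Reciprocated links only: A follows B AND B follows A
reciprocal = {tuple(sorted((a, b))) for a, b in edges if a in follows.get(b, set())}

# Emit Graphviz DOT for the undirected, reciprocated-links graph
dot = "graph ou_reciprocal {\n"
for a, b in sorted(reciprocal):
    dot += f'  "{a}" -- "{b}";\n'
dot += "}"
print(dot)
```

The directed “follows” graph is the same idea with `digraph`, `->` edges, and no reciprocation filter.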

Why bother? Because we want to look at how people are connected, and see if there are any natural clusters (this might be most evident in the reciprocal link case?) cf. the author clusters evident in looking at ORO co-authorship stuff. Does the network diagram give an inkling as to how knowledge might flow round the OU? Are there distinct clusters/small worlds connected to other distinct clusters by one or two individuals (I’m guessing people like Martin who follows everyone who follows him?). Are there “supernodes” in the network that can be used to get a message out to different groups?

Re: the matrix view: I need to read up on matrices… maybe there’s something we can do to identify clusters in there?
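For what it’s worth, the reciprocal-link case drops out of the matrix view quite easily – the elementwise AND of M with its transpose. A toy sketch (the matrix values are invented):

```python
# Toy version of the matrix view mentioned above: M[i][j] = 1 if p_j
# follows p_i. The reciprocal-link matrix is the elementwise AND of M
# and its transpose; clusters might then show up as dense blocks after
# reordering rows/columns. All data here is illustrative.

people = ["p0", "p1", "p2"]

M = [
    [0, 1, 1],   # followers of p0: p1, p2
    [1, 0, 0],   # followers of p1: p0
    [0, 0, 0],   # followers of p2: none
]

n = len(people)
reciprocal = [[M[i][j] & M[j][i] for j in range(n)] for i in range(n)]
print(reciprocal)
```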

Now if only I had a few hours spare…

Visualising the OU Twitter Network

Readers of any prominent OU bloggers will probably have noticed that we appear to have something of a Twitter culture developing within the organisation (e.g. “Twitter, microblogging and living in the stream”). After posting a few Thoughts on Visualising the OU Twitter Network…, I couldn’t resist the urge to have a go at drawing the OpenU twittergraph at the end of last week (although I had hoped someone else on the lazyweb might take up the challenge…) and posted a few teaser images (using broken code – oops) via Twitter.

Anyway, I tidied up the code a little, and managed to produce the following images, which I have to say are spectacularly uninteresting. The membership of the ‘OU twitter network’ was identified using a combination of searches on Twitter for “open.ac.uk” and “Open University”, coupled with personal knowledge. Which is to say, the membership list may well be incomplete.

The images are based on a graph that plots who follows whom. If B follows A, then B is a follower and A is followed. In the network graphs, an arrow goes from A to B if A is followed by B (so in the network graph, the arrows point to people who follow you). The graph was constructed by making calls to the Twitter API for the names of people an individual followed, for each member of the OU Twitter network. An edge appears in the graph if a person in the OU Twitter network follows another person in the OU Twitter network. (One thing I haven’t looked at is whether there are individuals followed by a large number of OpenU twitterers who aren’t in the OpenU twitter network… which might be interesting…)

Wordle view showing who in the network has the most followers (the word size is proportional to the number of followers, so the bigger your name, the more people there are in the OU network that follow you). As Stuart predicted, this largely looks like a function of active time spent on Twitter.

We can compare this with a Many Eyes tag cloud showing how widely people follow other members of the OU network (the word size is proportional to the number of people in the OU network that the named individual follows – so the bigger your name, the more people in the OU network you follow).

Note that it may be interesting to scale this result according to the total number of people a user is following:

@A’s OU network following density = (number of people @A follows in OU Twitter network) / (total number of people @A follows)

Similarly, maybe we could also look at:

@A’s OU network follower density = (number of people in OU Twitter network following @A) / (total number of people following @A)

(In the tag clouds, the number of people following is less than the number of people followed; I think this is in part because I couldn’t pull down the names of who a person was following for people who have protected their tweets?)
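Those two measures are just ratios, of course – a quick sketch with invented numbers:

```python
# The two density measures suggested above, with illustrative figures:
# @A follows 200 people in total, 25 of them in the OU network, and has
# 400 followers of whom 30 are in the OU network.

ou_following = 25          # people @A follows in the OU Twitter network
total_following = 200      # total people @A follows
ou_followers = 30          # people in the OU network following @A
total_followers = 400      # total people following @A

following_density = ou_following / total_following
follower_density = ou_followers / total_followers

print(f"OU network following density: {following_density:.3f}")  # 0.125
print(f"OU network follower density:  {follower_density:.3f}")   # 0.075
```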

Here’s another view of people who actively follow other members of the OU twitter network:

And who’s being followed?

These treemaps uncover another layer of information if we add a search…

So for example, who is Niall following/not following?

And who’s following Niall?

I’m not sure how useful a view of the OU Twittergraph is in itself, though?

Maybe more interesting is to look at the connectivity between people who have sent each other an @message. So for example, here’s how Niall has been chatting to people in the OU twitter network (a link goes from A to B if @A sends a tweet to @B):

[Image: OU personal active twittermap]

We can also compare the ‘active connectivity’ of several people in the OU Twitter network. For example, who is Martin talking to, (and who’s talking to Martin) compared with Niall’s conversations?

[Image: 2008-10-13_0157]

As to why am I picking on Niall…? Well, apart from making the point that by engaging in ‘public’ social networks, other people can look at what you’re doing, it’s partly because thinking about this post on ‘Twitter impact factors’ kept me up all night: Twitter – how interconnected are you?.

The above is all “very interesting”, of course, but I’m not sure how valuable it is, e.g. in helping us understand how knowledge might flow around the OU Twitter network? Maybe I need to go away and start looking at some of the social network analysis literature, as well as some of the other Twitter network analysis tools, such as Twinfluence (Thanks, @Eingang:-)

PS Non S. – Many Eyes may give you a way of embedding a Wordle tagcloud…?

Referrer Traffic from Amazon – WTF?!

As it happens, I have been known to look at my blog stats from time to time (!), and today I noticed something odd in the referrer stats:

A referral from Amazon. WTF?

The link goes to a book detail page for a book about Wikipedia:

Scrolling down a bit, I found this:

A blog post syndicated in the product page from one of the book’s authors that linked to my post on Data Scraping Wikipedia with Google Spreadsheets.

My immediate thought – is there any way we can blog info about courses that use set textbooks back into the related Amazon product page?

Printing Out Online Course Materials With Embedded Movie Links

Although an increasing number of OU courses include the delivery of online course materials – written for online delivery as linked HTML pages, rather than just as print documents viewable online – we know (anecdotally at least, from requests that printing options be made available to print off whole sections of a course with a single click) that many students want to be able to print off the materials… (I’m not sure we know why they want to print off the materials, though?)

Reading through a couple of posts that linked to my post on Video Print (Finding problems for QR tags to solve and Quite Resourceful?) I started to ponder a little bit more about a demonstrable use case that we could try out in a real OU course context over a short period of time, prompted by the following couple of comments. Firstly:

So, QR codes – what are they good for? There’s clearly some interest – I mentioned what I was doing on Twitter and got quite a bit of interest. But it’s still rare to come across QR codes in the wild. I see them occasionally on blogs/web-pages but I just don’t much see the point of that (except to allow people like me to experiment). I see QR codes as an interim technology, but a potentially useful one, which bridges the gap between paper-based and digital information. So long as paper documents are an important aspect of our lives (no sign of that paper-less office yet) then this would seem to be potentially useful.
[Paul Walk: Quite Resourceful?]

And secondly:

There’s a great idea in this blog post, Video Print:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…

I would use this a lot myself – my laptop usually lives on my desk, but that’s not where I tend to read print media, so in the past I’ve ripped URLs out of articles or taken a photo on my phone to remind myself to look at them later, but I never get around to it. But since I always have my phone with me I’d happily snap a QR code (the Nokia barcode software is usually hidden a few menus down, but it’s worth digging out because it works incredibly well and makes a cool noise when it snaps onto a tag) and use the home wifi connection to view a video or an extended text online.

As a ‘call to action’ a QR tag may work better than a printed URL because it saves typing in a URL on a mobile keyboard.
[Mia Ridge: Finding problems for QR tags to solve]

And the hopefully practical idea I came up with was this: in the print option of our online courses that embed audio and/or video, design a stylesheet for the print version of the page that will add a QR code that encodes a link to the audio or video asset in the margin of the print out or alongside a holding image for each media asset. In the resources area of the course, provide an explanation of QR codes, maybe with a short video showing how they are used, and links (where possible) to QR reader tools for the most popular mobile devices.

So for example, here is a partial screenshot of material taken from T184 Robotics and the Meaning of Life (the printout looks similar):

And here’s what a trivial change to the stylesheet might produce:

The QR code was generated using the Kaywa QR-code generator – just add a URL as a variable to the generator service URL, and a QR code image appears :-)

Here’s what the image embed code looks like (the link is to the T184 page on the courses and qualifications website – in practice, it would be to the video itself):

<img src="http://qrcode.kaywa.com/img.php?s=6&d=http%3A%2F%2Fwww3.open.ac.uk%2Fcourses%2Fbin%2Fp12.dll%3FC01t184" alt="qrcode" />
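The generator URL above can be built mechanically – a sketch in Python (the helper function name is mine, not part of any Kaywa API):

```python
from urllib.parse import quote

# Build a Kaywa QR code image URL as described above: the target URL is
# passed, URL-encoded, in the d= parameter, and s= sets the symbol size.
def kaywa_qr_img_url(target_url, size=6):
    return f"http://qrcode.kaywa.com/img.php?s={size}&d={quote(target_url, safe='')}"

url = kaywa_qr_img_url("http://www3.open.ac.uk/courses/bin/p12.dll?C01t184")
print(url)
```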

Now anyone familiar with OU production processes will know that many of our courses still take years – that’s right, years – to put together, which makes ‘rapid testing’ rather difficult at times ;-)

But just making a tiny tweak to the stylesheet of the print option in an online course is low risk, and not going to jeopardise quality of course (or a student’s experience of it). But it might add utility to the print out for some students, and it’s a trivial way of starting to explore how we might “mobilise” our materials for mixed online and offline use. And any feedback we get is surely useful for going forwards?

Bung the Common Craft folk a few hundred quid for a “QR codes in Plain English” video and we’re done?

Just to pre-empt the most obvious OU internal “can’t do that because” comment – I know that not everyone prints out the course materials, and I know that not everyone has a mobile phone, and I know that of those that do, not everyone will have a phone that can cope with reading QR codes or playing back movies – and that’s exactly the point…

I’m not trying to be equitable in the sense of giving everyone exactly the same experience of exactly the same stuff. Because I’m trying to find ways of providing access to the course materials in a way that’s appropriate to the different ways that students might want to consume them.

As to how we’d know whether anyone was actually using the QR codes – one way might be to add a campaign tracking code onto each QR coded URL, so that at least we’d be able to tell which of the assets we were hosting were being hit from the QR code.
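One hedged sketch of what that tagging might look like – the `campaign` parameter name, the helper function, and the example URL are all just assumptions for illustration, not an OU or BBC convention:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

# Tag a URL with a campaign code before it is encoded into the QR image,
# so server logs can separate QR hits from ordinary web hits.
def add_campaign_code(url, code):
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["campaign"] = code
    return urlunparse(parts._replace(query=urlencode(query)))

tagged = add_campaign_code("http://example.org/video.mp4", "qr-t184-print")
print(tagged)  # http://example.org/video.mp4?campaign=qr-t184-print
```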

So now here’s a question for OU internal readers. Which “innovation pipeline” should I use to turn the QR code for video assets idea from just an idea into an OU innovation? The CETLs? KMi? IET (maybe their CALRG?) The new Innovation office? LTS Strategic? The Mobile Learning interest group thingy? The Moodle/VLE team? Or shall I just take the normal route of an individual course team member persuading a developer to do it as a favour on a course I’m currently involved with (a non-scalable result in terms of taking the innovation OU-wide, because the unofficial route is an NIH route…!)

And as a supplementary question, how much time should I spend writing up the formal project proposal (CETLs) or business case (LTS Strategic, Innovation Office(?)) etc, and then arguing it through various committees, bearing in mind I’ve spent maybe an hour writing this blog post and the previous one (and also that there’s no more to write – the proof now is in the testing ;-), and it’d take a developer maybe 2 hours to make the stylesheet change and test it?

I just wonder what would happen if any likely candidates for the currently advertised post of e-Learning Developer, in LTS (Learning and Teaching Solutions) were to mention QR codes and how they might be used in answer to a question about how they might “demonstrate a creative but pragmatic approach to delivering the ‘right’ solution within defined project parameters”?! Crash and burn, I suspect!;-)

(NB on the jobs front, the Centre for Professional Learning and Development is also advertising at the moment, in particular for an Interactive Media Developer and a Senior Learning Developer.)

Okay, ranty ramble over, back to the weekend…

PS to link to a sequence that starts so many minutes and seconds in, use the form: http://uk.youtube.com/watch?v=mfv_hOFT1S4#t=9m49s.

PPS for a good overview of QR codes and mobile phones, see Mobile codes – an easy way to get to the web on your mobile phone.

PPPS [5/2010] Still absolutely no interest in the OU for this sort of thing, but this approach does now appear to be in the wild… Books Come Alive with QR Codes & Data in the Cloud

2.0 1.0, and a Huge Difference in Style

A couple of weeks ago I received an internal email announcing a “book project”, described on the project blog as follows:

During the summer I [Darrell Ince, an OU academic in the Computing department] read a lot about Web 2.0 and became convinced that there might be some mileage in asking our students to help develop materials for teaching. I set up two projects: the first is the mass book writing project that this blog covers …

The book writing project involves OU students, and anyone else who wants to volunteer, writing a book about the Java-based computer-art system known as Processing.

A student who wants to contribute 2500 words to the project will carry out the following tasks:

* Email an offer to write to the OU.
* We will send them a voucher that will buy them a copy of a recently published book by Greenberg.
* They will then read the first 3 chapters of the book.
* We will give them access to a blog which contains a specification of 85 chunks of text about 2500 words in length.
* The student will then write it and also develop two sample computer programs
* The student will then send the final text and the two programs to the OU.

We will edit the text and produce a sample book from a self-publisher and then attempt to interest a mainstream publisher to take the book.

[Darrel Ince Mass Writing Blog: Introduction]

A second project blog – Book Fragments – contains a list of links to blogs of people who are participating in the project, and other project related information, such as a “sample chapter”, and a breakdown of the work assigned to each “chunk” of the book (see all but the first post in the September archive; some education required there in the use of blog post tags, I think?! ;-)

This is quite an ambitious – and exciting – project, but it feels to me far too much like the “trad OU” authoring model, not least in that the focus is on producing a print item (a book) about an exciting interactive medium (Processing). It also seems to be using the tools from a position of inexperience about what the tools can do, or what other tools are on offer. For example, I wonder what sorts of decisions were made regarding the recommended authoring environment (Blogspot blogs).

Now just jumping in and doing it with these tools is a Good Thing, but a little bit of knowledge could maybe help extract more value from the tools? And a couple of days with a developer and a designer could probably pull quite a powerful authoring and publishing environment together that would work really well for developing in-browser, no plugin or download required, visually appealing interactive Processing related materials.

So for what it’s worth, here are some of the things I’d have pondered at some length if I was allowed to run this sort of project (which I’m not…;-)

Authoring Environment:

  • as the target output is a book, I’d have considered authoring in Google docs. (Did I mention I got a hack in Google Apps Hacks, which was authored in Google docs? ;-) Google docs supports single or multiple author access, public, shared or private documents (with a variety of RW access privileges) and the ability to look back over historical changes. Even if authors were encouraged to write separate drafts of their chapters, this could have been done in separate Google docs documents, linked to from a blog post.
  • would authoring chunks in a wiki have been appropriate? We can get a Mediawiki set up on the open.ac.uk domain on request, either in public or behind the firewall. Come to that, we can also get WordPress blogs set up too, either individual ones or a team blog – would a team blog have been a better approach, with sensible use of categories to partition the content? Niall would probably say the project should have used a Moodle VLE blog or wiki, but I’d respond that probably wouldn’t be a very good idea ;-)

Given that authors have been encouraged to use blogs, I’d have straight away pulled a blog roll together, maybe created a Planet aggregator site like Planet OU (here’s a temporary pipe aggregation solution), and probably indexed all the blogs in a custom search engine? And I’d have tried to interest the authors in using tags and categories.

Looking over the project blog to date, it seems there has been an issue with how to lay out code fragments in HTML (given that in vanilla HTML, white space is reduced to a single space when the HTML is rendered).

Now for anyone who lives in the web, the answer is to use a progressive enhancement library that will mark up the code in a language sensitive way. I quite like the Syntax Highlighter library, although on a quick trial with a default Blogspot template, it didn’t work:-) (That said, a couple of hours work from a competent web designer should result in a satisfactory, if simple, stylesheet that could use this library, and could then be made available to project participants).

A clunkier approach is to use something like Quick Highlighter, one of several such services that lets you paste in a block of programme code and get marked up HTML out. (The trick here is to paste the CSS for e.g. the Java mark-up into the blog stylesheet template, and then all you have to do is paste marked up programme code into a blog post.)
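A minimal sketch of the kind of output such services produce – HTML-escape the code so whitespace and angle brackets survive, and wrap it in a block that a stylesheet (or Syntax Highlighter’s “brush” classes) can then style. The function name and class value here are illustrative, not any particular service’s API:

```python
import html

# HTML-escape programme code and wrap it in a <pre> block, preserving
# whitespace and protecting < > & " characters from the HTML renderer.
def code_to_html(code, language="java"):
    return f'<pre class="brush: {language}">{html.escape(code)}</pre>'

snippet = 'if (x < 10) {\n    println("hello");\n}'
print(code_to_html(snippet))
```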

A second issue I have is that I imagine that writing – and testing – the Processing code requires a download, and Java, and probably a Java authoring environment; and that’s getting too far away from the point, which is learning how to do stuff in Processing (or maybe that isn’t the point? Maybe the point is to teach trad programming and programming tools using Processing to provide some sort of context?)

So my solution? Use John Resig’s processing.js library – a port of Processing to Javascript – and then use something like the browser based Obsessing editor – write Processing code in the browser, then run it using processing.js:

A tweak to the Blogspot template should mean that Processing code can be included in a post and executed using processing.js? Or if not, we could probably get it to work in an OU hosted WordPress environment?

Finally, the “worthy academic” pre-planned structure of the intended book just doesn’t work for me. I’d phrase the project in a far more playful way, and try to accrete comments and questions around mini-projects working out how to get various things working in Processing, probably in a blogged uncourse like way.

Sort of related to this, I’ve been thinking of writing something not too dissimilar from I’m Leaving, along the lines of “I’m not into dumbing down, but I’m quitting the ivory tower”, because the arrogance of academia is increasingly doing my head in. (If you’re a serious academic, you’re not allowed to use “slang” like that. You have to say you are “seriously concerned by the blah blah blah blah blah blah blah”… It does my head in ;-)

PS not liking the proposed book structure is not say I’m not into teaching proper computer science – I’d love to see us teaching compiler theory, or web services using real webservices, like some of the telecoms companies’ APIs;-) But there’s horses for courses, and this Processing stuff should be fun and accessible, right? (And that doesn’t mean it has to be substanceless…)

PPS how interesting would it have been to collaboratively write an interactive book, along the lines of this interactive presentation: Learning Advanced Javascript – double click in any of the code displaying slides, and you can edit – and run – the Javascript code in the browser/within the presentation (described here: Adv. JavaScript and Processing.js, which includes a link to a downloadable version of the interactive presentation).

The Convenience of Embedded, Flash Played, PDFs

Yesterday, my broadband connection went down as BT replaced the telegraph pole that hangs the phone wire to our house, which meant I managed to get a fair bit of reading done, both offline and via a tab sweep.

One of my open tabs contained a ReadWriteWeb post, Study: Influencers are Alive and Well on Social Media Sites, which reviewed a study from Rubicon Consulting that provides some sort of evidence for the majority of “user generated content” on the web being produced by a small percentage of the users. The post linked to a PDF of the white paper, which I assumed (no web connection) I’d have to remember to look up later.

And then – salvation:

The PDF had been embedded in a PDFMENOT Flash player (cf. Scribd etc.), which itself was embedded in the post. So I could read the paper at my leisure without having to connect back to the network.

Recent OU Programmes on the BBC, via iPlayer

As @liamgh will tell you, Coast is getting quite a few airings at the moment on various BBC channels. And how does @liamgh know this? Because he’s following the open2 Twitter feed, which sends out alerts when an OU programme is about to be aired on a broadcast BBC channel.

(As well as the feed from the open2 twitter account, you can also find out what’s on from the OU/BBC schedule feed (http://open2.net/feeds/rss_schedule.xml), via the Open2.net schedule page; iCal feeds appear not to be available…)

So to make it easier for him to catch up on any episodes he missed, here’s a quick hack that mines the open2 twitter feed to create a “7 day catch up” site for broadcast OU TV programmes (the page also links through to several video playlists from the OU’s Youtube site).

The page actually displays links to programmes that are currently viewable on BBC iPlayer (either via a desktop web browser, or via a mobile browser – which means you can view this stuff on your iPhone ;-), and a short description of the programme, as pulled from the programme episode’s web page on the BBC website. You’ll note that the original twitter feed just mentions the programme title; the TinyURL’d link goes back to the series web page on the Open2 website.

Thinking about it, I could probably have done the hackery required to get iPlayer URLs from within the page; but I didn’t… Given the clue that the page is put together using a jQuery script I stole from this post on Parsing Yahoo Pipes JSON Feeds with jQuery, you can maybe guess where the glue logic for this site lives?;-)

There are three pipes involved in the hackery – the JSON that is pulled into the page comes from this OU Recent programmes (via BBC iPlayer) pipe.

The first part grabs the feed, identifies the programme title, and then searches for that programme on the BBC iPlayer site.

The nested BBC Search Results scrape pipe searches the BBC programmes site and filters results that point to an actual iPlayer page (so we can watch the result on iPlayer).

Back in the main pipe, we take the list of recently tweeted OU programmes that are available on iPlayer, grab the programme ID (which is used as a key in all manner of BBC URLs :-), and then call another nested pipe that gets the programme description from the actual programme web page.

This second nested pipe just gets the programme description, creates a title and builds the iPlayer URL:
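As a sketch of that URL building step: BBC programme pages and iPlayer pages are both keyed on the programme ID, so the iPlayer link can be constructed directly. The exact URL patterns and the PID below are illustrative assumptions, not pulled from the pipe itself:

```python
# Build iPlayer URLs from a BBC programme ID (PID), as the second nested
# pipe does. URL patterns here are assumptions for illustration.

def iplayer_url(pid):
    return f"http://www.bbc.co.uk/iplayer/episode/{pid}"

def mobile_iplayer_url(pid):
    # mobile iPlayer variant of the same page
    return f"http://www.bbc.co.uk/mobile/iplayer/episode/{pid}"

print(iplayer_url("b00abcde"))
```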

(The logic is all a bit hacked – and could be tidied up – but I was playing through my fingertips and didn’t feel like ‘rearchitecting’ the system once I knew what I wanted it to do… which is what it does do…;-)

As an afterthought, the items in the main pipe are annotated with a link to the mobile iPlayer version of each programme:

So there you have it: a “7 day catch up” site for broadcast OU TV programmes, with replay via iPlayer or mobile iPlayer.

[18/11/08 – the site that the app runs on is down at the moment, as network security update is carried out; sorry about that – maybe I should use a cloud server?]