2.0 1.0, and a Huge Difference in Style

A couple of weeks ago I received an internal email announcing a “book project”, described on the project blog as follows:

During the summer I [Darrell Ince, an OU academic in the Computing department] read a lot about Web 2.0 and became convinced that there might be some mileage in asking our students to help develop materials for teaching. I set up two projects: the first is the mass book writing project that this blog covers …

The book writing project involves OU students, and anyone else who wants to volunteer, writing a book about the Java-based computer-art system known as Processing.

A student who wants to contribute 2500 words to the project will carry out the following tasks:

* Email an offer to write to the OU.
* We will send them a voucher that will buy them a copy of a recently published book by Greenberg.
* They will then read the first 3 chapters of the book.
* We will give them access to a blog which contains a specification of 85 chunks of text, each about 2500 words in length.
* The student will then write their chunk and also develop two sample computer programs.
* The student will then send the final text and the two programs to the OU.

We will edit the text and produce a sample book from a self-publisher and then attempt to interest a mainstream publisher to take the book.

[Darrell Ince Mass Writing Blog: Introduction]

A second project blog – Book Fragments – contains a list of links to the blogs of people who are participating in the project, and other project-related information, such as a “sample chapter”, and a breakdown of the work assigned to each “chunk” of the book (see all but the first post in the September archive; some education required there in the use of blog post tags, I think?! ;-)

This is quite an ambitious – and exciting – project, but it feels to me far too much like the “trad OU” authoring model, not least in that the focus is on producing a print item (a book) about an exciting interactive medium (Processing). It also seems to be using the tools from a position of inexperience about what the tools can do, or what other tools are on offer. For example, I wonder what sorts of decisions were made regarding the recommended authoring environment (Blogspot blogs).

Now just jumping in and doing it with these tools is a Good Thing, but a little bit of knowledge could maybe help extract more value from the tools? And a couple of days with a developer and a designer could probably pull quite a powerful authoring and publishing environment together that would work really well for developing in-browser, no plugin or download required, visually appealing, interactive, Processing-related materials.

So for what it’s worth, here are some of the things I’d have pondered at some length if I was allowed to run this sort of project (which I’m not…;-)

Authoring Environment:

  • as the target output is a book, I’d have considered authoring in Google docs. (Did I mention I got a hack in Google Apps Hacks, which was authored in Google docs? ;-) Google docs supports single or multiple author access, public, shared or private documents (with a variety of RW access privileges) and the ability to look back over historical changes. Even if authors were encouraged to write separate drafts of their chapters, this could have been done in separate Google docs documents, linked to from a blog post.
  • would authoring chunks in a wiki have been appropriate? We can get a Mediawiki set up on the open.ac.uk domain on request, either in public or behind the firewall. Come to that, we can also get WordPress blogs set up too, either individual ones or a team blog – would a team blog have been a better approach, with sensible use of categories to partition the content? Niall would probably say the project should have used a Moodle VLE blog or wiki, but I’d respond that probably wouldn’t be a very good idea ;-)

Given that authors have been encouraged to use blogs, I’d have straight away pulled a blog roll together, maybe created a Planet aggregator site like Planet OU (here’s a temporary pipe aggregation solution), and probably indexed all the blogs in a custom search engine? And I’d have tried to interest the authors in using tags and categories.

Looking over the project blog to date, it seems there has been an issue with how to lay out code fragments in HTML (given that in vanilla HTML, runs of white space are collapsed to a single space when the HTML is rendered).

Now for anyone who lives in the web, the answer is to use a progressive enhancement library that will mark up the code in a language-sensitive way. I quite like the Syntax Highlighter library, although on a quick trial with a default Blogspot template, it didn’t work :-) (That said, a couple of hours’ work from a competent web designer should result in a satisfactory, if simple, stylesheet that could use this library, and could then be made available to project participants).
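For what it’s worth, here’s a minimal sketch of the sort of thing I mean, based on the 1.5-era SyntaxHighlighter API (the file names and the dp.SyntaxHighlighter hook are version-specific, so treat the details as indicative rather than definitive):

<link rel="stylesheet" href="SyntaxHighlighter.css" />
<script src="shCore.js"></script>
<script src="shBrushJava.js"></script>

<!-- code goes in a pre block, tagged with the language brush to apply -->
<pre name="code" class="java">
void setup() {
  size(200, 200);
}
</pre>

<script>
  // progressively enhance every tagged pre block on the page
  dp.SyntaxHighlighter.HighlightAll('code');
</script>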

A clunkier approach is to use something like Quick Highlighter, one of several such services that lets you paste in a block of program code and get marked-up HTML out. (The trick here is to paste the CSS for e.g. the Java mark-up into the blog stylesheet template, and then all you have to do is paste marked-up program code into a blog post.)

A second issue I have is that I imagine that writing – and testing – the Processing code requires a download, and Java, and probably a Java authoring environment; and that’s getting too far away from the point, which is learning how to do stuff in Processing (or maybe that isn’t the point? Maybe the point is to teach trad programming and programming tools using Processing to provide some sort of context?)

So my solution? Use John Resig’s processing.js library – a port of Processing to Javascript – and then use something like the browser based Obsessing editor – write Processing code in the browser, then run it using processing.js:

A tweak to the Blogspot template should mean that Processing code can be included in a post and executed using processing.js? Or if not, we could probably get it to work in an OU-hosted WordPress environment?
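To give a feel for what’s involved, here’s a minimal sketch (hedged: the Processing() call is from the early processing.js releases, and the script path is a placeholder):

<canvas id="sketch" width="200" height="200"></canvas>
<script src="processing.js"></script>
<script>
  // Processing source held as a string; draws a circle that tracks the mouse
  var code =
    "void setup() { size(200, 200); } " +
    "void draw() { background(255); ellipse(mouseX, mouseY, 20, 20); }";
  // processing.js compiles the Processing code and binds it to the canvas
  Processing(document.getElementById("sketch"), code);
</script>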

Finally, the “worthy academic” pre-planned structure of the intended book just doesn’t work for me. I’d phrase the project in a far more playful way, and try to accrete comments and questions around mini-projects working out how to get various things working in Processing, probably in a blogged, uncourse-like way.

Sort of related to this, I’ve been thinking of writing something not too dissimilar from I’m Leaving, along the lines of “I’m not into dumbing down, but I’m quitting the ivory tower”, because the arrogance of academia is increasingly doing my head in. (If you’re a serious academic, you’re not allowed to use “slang” like that. You have to say you are “seriously concerned by the blah blah blah blah blah blah blah”… It does my head in ;-)

PS not liking the proposed book structure is not to say I’m not into teaching proper computer science – I’d love to see us teaching compiler theory, or web services using real web services, like some of the telecoms companies’ APIs ;-) But there’s horses for courses, and this Processing stuff should be fun and accessible, right? (And that doesn’t mean it has to be substanceless…)

PPS how interesting would it have been to collaboratively write an interactive book, along the lines of this interactive presentation: Learning Advanced Javascript – double click in any of the code-displaying slides, and you can edit – and run – the Javascript code in the browser/within the presentation (described here: Adv. JavaScript and Processing.js, which includes a link to a downloadable version of the interactive presentation).

Printing Out Online Course Materials With Embedded Movie Links

Although an increasing number of OU courses include the delivery of online course materials – written for online delivery as linked HTML pages, rather than just as print documents viewable online – we know (anecdotally at least, from requests that printing options be made available to print off whole sections of a course with a single click) that many students want to be able to print off the materials… (I’m not sure we know why they want to print off the materials, though?)

Reading through a couple of posts that linked to my post on Video Print (Finding problems for QR tags to solve and Quite Resourceful?) I started to ponder a little bit more about a demonstrable use case that we could try out in a real OU course context over a short period of time, prompted by the following couple of comments. Firstly:

So, QR codes – what are they good for? There’s clearly some interest – I mentioned what I was doing on Twitter and got quite a bit of interest. But it’s still rare to come across QR codes in the wild. I see them occasionally on blogs/web-pages but I just don’t much see the point of that (except to allow people like me to experiment). I see QR codes as an interim technology, but a potentially useful one, which bridges the gap between paper-based and digital information. So long as paper documents are an important aspect of our lives (no sign of that paper-less office yet) then this would seem to be potentially useful.
[Paul Walk: Quite Resourceful?]

And secondly:

There’s a great idea in this blog post, Video Print:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…

I would use this a lot myself – my laptop usually lives on my desk, but that’s not where I tend to read print media, so in the past I’ve ripped URLs out of articles or taken a photo on my phone to remind myself to look at them later, but I never get around to it. But since I always have my phone with me I’d happily snap a QR code (the Nokia barcode software is usually hidden a few menus down, but it’s worth digging out because it works incredibly well and makes a cool noise when it snaps onto a tag) and use the home wifi connection to view a video or an extended text online.

As a ‘call to action’ a QR tag may work better than a printed URL because it saves typing in a URL on a mobile keyboard.
[Mia Ridge: Finding problems for QR tags to solve]

And the hopefully practical idea I came up with was this: in the print option of our online courses that embed audio and/or video, design a stylesheet for the print version of the page that will add a QR code that encodes a link to the audio or video asset in the margin of the print out or alongside a holding image for each media asset. In the resources area of the course, provide an explanation of QR codes, maybe with a short video showing how they are used, and links (where possible) to QR reader tools for the most popular mobile devices.
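By way of illustration, here’s a minimal sketch of the sort of stylesheet rule I mean (the qr-code class name is made up for the example): the QR code image sits in the page markup all along, hidden on screen and revealed only when the page is printed:

/* hide the QR code when the page is viewed on screen... */
@media screen {
  img.qr-code { display: none; }
}

/* ...but show it, floated out towards the margin, on the printed page */
@media print {
  img.qr-code { display: block; float: right; margin: 0 0 1em 1em; }
}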

So for example, here is a partial screenshot of material taken from T184 Robotics and the Meaning of Life (the printout looks similar):

And here’s what a trivial change to the stylesheet might produce:

The QR code was generated using the Kaywa QR-code generator – just add a URL as a variable to the generator service URL, and a QR code image appears :-)

Here’s what the image embed code looks like (the link is to the T184 page on the courses and qualifications website – in practice, it would be to the video itself):

<img src="http://qrcode.kaywa.com/img.php?s=6&d=http%3A%2F%2Fwww3.open.ac.uk%2Fcourses%2Fbin%2Fp12.dll%3FC01t184" alt="qrcode" />

Now anyone familiar with OU production processes will know that many of our courses still take years – that’s right, years – to put together, which makes ‘rapid testing’ rather difficult at times ;-)

But just making a tiny tweak to the stylesheet of the print option in an online course is low risk, and not going to jeopardise the quality of the course (or a student’s experience of it). But it might add utility to the print out for some students, and it’s a trivial way of starting to explore how we might “mobilise” our materials for mixed online and offline use. And any feedback we get is surely useful for going forwards?

Bung the Common Craft folk a few hundred quid for a “QR codes in Plain English” video and we’re done?

Just to pre-empt the most obvious OU internal “can’t do that because” comment – I know that not everyone prints out the course materials, and I know that not everyone has a mobile phone, and I know that of those that do, not everyone will have a phone that can cope with reading QR codes or playing back movies. And that’s exactly the point…

I’m not trying to be equitable in the sense of giving everyone exactly the same experience of exactly the same stuff; rather, I’m trying to find ways of providing access to the course materials that are appropriate to the different ways that students might want to consume them.

As to how we’d know whether anyone was actually using the QR codes – one way might be to add a campaign tracking code onto each QR coded URL, so that at least we’d be able to tell which of the assets we were hosting were being hit from the QR code.
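For example, here’s the sort of thing I mean – a hypothetical asset URL with Google Analytics style campaign parameters riding along inside the URL that gets encoded into the QR code (the domain and parameter values are made up):

http://qrcode.kaywa.com/img.php?s=6&d=http%3A%2F%2Fexample.open.ac.uk%2Ft184%2Fclip1.mp4%3Futm_source%3Dprint%26utm_medium%3Dqrcode%26utm_campaign%3Dt184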

So now here’s a question for OU internal readers. Which “innovation pipeline” should I use to turn the QR code for video assets idea from just an idea into an OU innovation? The CETLs? KMi? IET (maybe their CALRG?) The new Innovation office? LTS Strategic? The Mobile Learning interest group thingy? The Moodle/VLE team? Or shall I just take the normal route of an individual course team member persuading a developer to do it as a favour on a course I’m currently involved with (a non-scalable result in terms of taking the innovation OU-wide, because the unofficial route is an NIH route…!)

And as a supplementary question, how much time should I spend writing up the formal project proposal (CETLs) or business case (LTS Strategic, Innovation Office(?)) etc, and then arguing it through various committees, bearing in mind I’ve spent maybe an hour writing this blog post and the previous one (and also that there’s no more to write – the proof now is in the testing ;-), and it’d take a developer maybe 2 hours to make the stylesheet change and test it?

I just wonder what would happen if any likely candidates for the currently advertised post of e-Learning Developer in LTS (Learning and Teaching Solutions) were to mention QR codes and how they might be used, in answer to a question about how they might “demonstrate a creative but pragmatic approach to delivering the ‘right’ solution within defined project parameters”?! Crash and burn, I suspect! ;-)

(NB on the jobs front, the Centre for Professional Learning and Development is also advertising at the moment, in particular for an Interactive Media Developer and a Senior Learning Developer.)

Okay, ranty ramble over, back to the weekend…

PS to link to a sequence that starts so many minutes and seconds in, use the form: http://uk.youtube.com/watch?v=mfv_hOFT1S4#t=9m49s.

PPS for a good overview of QR codes and mobile phones, see Mobile codes – an easy way to get to the web on your mobile phone.

PPPS [5/2010] Still absolutely no interest in the OU for this sort of thing, but this approach does now appear to be in the wild… Books Come Alive with QR Codes & Data in the Cloud

Referrer Traffic from Amazon – WTF?!

As it happens, I have been known to look at my blog stats from time to time (!), and today I noticed something odd in the referrer stats:

A referral from Amazon. WTF?

The link goes to a book detail page for a book about Wikipedia:

Scrolling down a bit, I found this:

A blog post from one of the book’s authors, syndicated into the product page, that linked to my post on Data Scraping Wikipedia with Google Spreadsheets.

My immediate thought – is there any way we can blog info about courses that use set textbooks back into the related Amazon product page?

Visualising the OU Twitter Network

Readers of any prominent OU bloggers will probably have noticed that we appear to have something of a Twitter culture developing within the organisation (e.g. “Twitter, microblogging and living in the stream“). After posting a few Thoughts on Visualising the OU Twitter Network…, I couldn’t resist the urge to have a go at drawing the OpenU twittergraph at the end of last week (although I had hoped someone else on the lazyweb might take up the challenge…) and posted a few teaser images (using broken code – oops) via twitter.

Anyway, I tidied up the code a little, and managed to produce the following images, which I have to say are spectacularly uninteresting. The membership of the ‘OU twitter network’ was identified using a combination of searches on Twitter for “open.ac.uk” and “Open University”, coupled with personal knowledge. Which is to say, the membership list may well be incomplete.

The images are based on a graph that plots who follows whom. If B follows A, then B is a follower and A is followed. In the network graphs, an arrow goes from A to B if A is followed by B (so in the network graph, the arrows point to the people who follow you). The graph was constructed by making calls to the Twitter API for the names of the people an individual followed, for each member of the OU Twitter network. An edge appears in the graph if a person in the OU Twitter network follows another person in the OU Twitter network. (One thing I haven’t looked at is whether there are individuals followed by a large number of OpenU twitterers who aren’t in the OpenU twitter network… which might be interesting…)
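To make the edge convention concrete, here’s a hand-written fragment of the sort of Graphviz file the script builds (the names are made up):

digraph outwittergraph {
  /* an arrow from A to B means "B follows A" */
  alice -> bob;    /* bob follows alice */
  bob -> alice;    /* alice follows bob, so the link is reciprocated */
  alice -> carol;  /* carol follows alice, but alice doesn't follow back */
}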

Wordle view showing who in the network has the most followers (the word size is proportional to the number of followers, so the bigger your name, the more people there are in the OU network that follow you). As Stuart predicted, this largely looks like a function of active time spent on Twitter.

We can compare this with a Many Eyes tag cloud showing how widely people follow other members of the OU network (the word size is proportional to the number of people in the OU network that the named individual follows – so the bigger your name, the more people in the OU network you follow).

Note that it may be interesting to scale this result according to the total number of people a user is following:

@A’s OU network following density = (number of people @A follows in OU Twitter network) / (total number of people @A follows)

Similarly, maybe we could also look at:

@A’s OU network follower density = (number of people in OU Twitter network following @A) / (total number of people following @A)

(In the tag clouds, the number of people following is less than the number of people followed; I think this is in part because I couldn’t pull down the names of who a person was following for people who have protected their tweets?)

Here’s another view of people who actively follow other members of the OU twitter network:

And who’s being followed?

These treemaps uncover another layer of information if we add a search…

So for example, who is Niall following/not following?

And who’s following Niall?

I’m not sure how useful a view of the OU Twittergraph is in itself, though?

Maybe more interesting is to look at the connectivity between people who have sent each other an @message. So for example, here’s how Niall has been chatting to people in the OU twitter network (a link goes from A to B if @A sends a tweet to @B):


We can also compare the ‘active connectivity’ of several people in the OU Twitter network. For example, who is Martin talking to (and who’s talking to Martin), compared with Niall’s conversations?


As to why I’m picking on Niall…? Well, apart from making the point that by engaging in ‘public’ social networks, other people can look at what you’re doing, it’s partly because thinking about this post on ‘Twitter impact factors’ kept me up all night: Twitter – how interconnected are you?

The above is all “very interesting”, of course, but I’m not sure how valuable it is, e.g. in helping us understand how knowledge might flow around the OU Twitter network? Maybe I need to go away and start looking at some of the social network analysis literature, as well as some of the other Twitter network analysis tools, such as Twinfluence (Thanks, @Eingang :-)

PS Non S. – Many Eyes may give you a way of embedding a Wordle tagcloud…?

Thoughts on Visualising the OU Twitter Network…

“Thoughts”, because I don’t have time to do this right now (although it shouldn’t take that long to pull together? Maybe half a day, at most?), and also to give a glimpse into the sort of thinking I’d do walking the dog, in between having an initial idea about something to hack together, and actually doing it…

So here’s the premise: what sort of network exists within the OU on Twitter?

Stuff I’d need – a list of all the usernames of people active in the OU on Twitter; Liam is aggregating some on PlanetOU, I think, and I seem to remember I’ve linked to an IET aggregation before.

Stuff to do (“drafting the algorithm”):

– for each username, pull down the list of the people they follow (and the people who follow them?);
– clean each list so it only contains the names of OU folks (we’re gonna start with a first order knowledge flow network, only looking at links within the OU).
– for each person, p_i, with followers F_ij, create pairs username(p_i)->username(F_ij); or maybe build a matrix: M(i,j)=1 if p_j follows p_i? (See the sketch after this list.)
– imagine two sorts of visualisation: first, an undirected network graph (using Graphviz) that only shows links where following is reciprocated (A follows B AND B follows A); second, a directed graph visualisation, where the link simply represents “follows”.
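Here’s a rough sketch of the matrix-building step in Javascript, assuming the follow lists have already been pulled down from the Twitter API and cleaned (the function and variable names are made up, and the API calls themselves aren’t shown):

// users: array of OU screen names
// following[name]: array of the OU folks that `name` follows
function buildFollowMatrix(users, following) {
  var index = {};
  users.forEach(function (u, i) { index[u] = i; });

  // start from an all-zeroes matrix
  var M = users.map(function () {
    return users.map(function () { return 0; });
  });

  // M(i,j) = 1 if p_j follows p_i
  users.forEach(function (follower) {
    (following[follower] || []).forEach(function (followed) {
      if (followed in index) {
        M[index[followed]][index[follower]] = 1;
      }
    });
  });

  return M;
}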

Why bother? Because we want to look at how people are connected, and see if there are any natural clusters (this might be most evident in the reciprocal link case?) cf. the author clusters evident in looking at ORO co-authorship stuff. Does the network diagram give an inkling as to how knowledge might flow round the OU? Are there distinct clusters/small worlds connected to other distinct clusters by one or two individuals (I’m guessing people like Martin who follows everyone who follows him?). Are there “supernodes” in the network that can be used to get a message out to different groups?

Re: the matrix view: I need to read up on matrices… maybe there’s something we can do to identify clusters in there?

Now if only I had a few hours spare…

Video Print

Sitting in a course team meeting of 6 for over 3 hours today (err, yesterday…) discussing second drafts of print material for a course unit that will be delivered for the first time in March 2010 (third drafts are due mid-December this year), it struck me that we were so missing the point as the discussion turned to how best to accommodate a reference from print material to a possible short video asset in such a way that a student reading the written print material might actually refer to the video in a timely way…

Maybe it’s because the topic was mobile telephony, but it struck me that the obvious way to get students reading print material to watch a video at the appropriate point in the text would be to use something like this:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…

I have to admit the phrase “blended learning” has to date been largely meaningless to me… But this feels like the sort of thing I’d expect it to be… For example:

Jane is sitting at the table, reading a study block on whatever, her mobile phone on the table at her side. As she works through the material, she annotates the text, underlining key words and phrases, making additional notes in the margin. At a certain point in the text, she comes across a prompt to watch a short video to illustrate a point made in the previous paragraph. She had hoped not to have to use her PC in this study session – it’s such a hassle going upstairs to the study to turn it on… Maybe she’ll watch the video next time she logs in to the VLE (if she remembers…). Of course, life’s not like that now. She picks up her phone, takes a picture of the QR code in the margin, and places her phone back on the table, next to the study guide. The video starts, and she takes more notes as it plays…

Thinking about it, here’s another possibility:

Jim is in lean-back mode, lying on the sofa, feet up, skimming through this week’s study guide. The course DVD is in the player. As he reads through the first section, there’s a prompt to watch an explanatory video clip. He could snap the QR code in the margin and watch the video on his phone, but as the course DVD is all cued up, it’s easy enough to select the block menu, and click on the appropriate clip’s menu item. Of course, it’d be just as easy to use the Wii connected to the TV to browse to the course’s Youtube page and watch the clips that way, but hey, the DVD video quality is much better…

This is quite an old OU delivery model – for years we expected students to record TV programmes broadcast in the early hours of the morning, or we’d send them video cassettes. But as video delivery has got easier, and the short form (2-3 minute video clip) has gained more currency, I get the feeling we’ve been moving away from the use of video media because it’s so expensive to produce and so inconvenient to watch…

iTunes in Your Pocket… Almost…

Having been tipped off about a Netvibes page that the Library folks are pulling together about how to discover video resources (Finding and reusing video – 21st century librarianship in action, methinks? ;-) I thought I’d have a look at pulling together an OU iTunes OPML bundle that could be used to provide access to OU iTunes content in a Grazr widget (or my old RadiOBU OpenU ‘broadcast’ widget ;-) and maybe also act as a nice little container for viewing/listening to iTunes content on an iPhone/iPod Touch.

To find the RSS feed for a particular content area in iTunesU, navigate to the appropriate page (one with lists of actual downloadable content showing in the bottom panel), make sure you have the right tab selected, then right click on the “Subscribe” button and copy the feed/subscription URL (or is there an easier way? I’m not much of an iTunes user?):

You’ll notice in the above case that as well as the iPod video (mp4v format?), there is a straight video option (.mov???) and a transcript. I haven’t started to think about how to make hackable use of the transcripts yet, but in my dreams I’d imagine something like these Visual Interfaces for Audio/Visual Transcripts! ;-) In addition, some of the OU iTunesU content areas offer straight audio content.

Because finding the feeds is quite a chore (at least in the way I’ve described it above), I’ve put together an OU on iTunesU OPML file that bundles together all the separate RSS feeds from the OU on iTunesU area (to view this file in an OPML widget, try here: OU iTunesU content in a Grazr widget).
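If you’ve not come across OPML before, it’s just a simple XML outline format for bundling feeds together. A minimal bundle looks something like this (placeholder feed URLs, not the real iTunesU ones):

<opml version="1.1">
  <head>
    <title>OU on iTunesU</title>
  </head>
  <body>
    <outline text="OU on iTunesU">
      <!-- one outline element per feed; type and xmlUrl are what readers look for -->
      <outline text="Course audio" type="rss" xmlUrl="http://example.com/feed1.rss" />
      <outline text="Course video" type="rss" xmlUrl="http://example.com/feed2.rss" />
    </outline>
  </body>
</opml>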

The Grazr widget lets you browse through all the feeds, and if you click on an actual content item link, it should launch a player (most likely Quicktime). Although the Grazr widget has a nice embedded player for MP3 files, it doesn’t seem to offer an embedded player for iTunes content (or maybe I’m missing something?)

You can listen to the audio tracks well enough on an iPod Touch (so the same is presumably true for an iPhone?) using the Grazr iPhone widget – but for some reason I can’t get the iPod videos to play? I’m wondering if this might be a mime-type issue? Or maybe there’s some other reason?

(By the by, it looks like the content is being served from an Amazon S3 server… so has the OU bought into using S3 I wonder? :-)

For completeness, I also started to produce a handcrafted OPML bundle of OU Learn Youtube playlists, but then discovered I’d put together a little script ages ago that will create one of these automatically, and route each playlist feed through a feed augmentation pipe that adds a link to each video as a video enclosure:

http://ouseful.open.ac.uk/xmltools/youtubeUserPlaylistsOPML.php?user=oulearn
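The augmentation step just adds a standard RSS 2.0 enclosure element to each item, along these lines (placeholder URLs; the length attribute is required by the RSS spec, so a dummy value gets used if the file size isn't known):

<item>
  <title>Example clip</title>
  <link>http://uk.youtube.com/watch?v=XXXX</link>
  <!-- the enclosure is what tells Grazr there's a video payload to play -->
  <enclosure url="http://example.com/clip.mp4" type="video/mp4" length="0" />
</item>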

Why would you want to do this? Because if there’s a video payload as an enclosure, Grazr will provide an embedded player for you… as you can see in this screenshot of Portable OUlearn Youtube playlists widget (click through the image to play with the actual widget):

These videos will play on an iPod Touch, although the interaction is a bit clunky, and it’s actually slightly cleaner using the handcrafted OPML: OUlearn youtube widget for iphone.

PS it’s also worth remembering that Grazr can embed Slideshare presentations, though I’m pretty sure these won’t work on the iPhone…