An Essential Part of My Workflow

A couple of days ago, one of those reminders about how reliant we are on various pieces of technology was forced upon me: Jing died on me….

For those of you who don’t know it, Jing is a screencapture/screencasting tool that is integrated with flickr (free version, for screenshots) and Youtube (pro version, for screencasts). It’s produced by Techsmith, who also publish the more comprehensive SnagIt and Camtasia tools, so the technical underpinnings of the app are excellent.

Anyway, I’ve been using the free version of Jing for what seems like forever, using it to grab screenshots at will and send them direct to flickr, then typically pasting the embed code that is magically popped into my clipboard directly into my WordPress editor. But I’ve decided that I really need to do more screencasts, and whilst Jing automates video uploads to screencast.com, I really wanted the ability to post screencasts direct to Youtube. So on Sunday I upgraded, and after a couple of battles getting the upgrade to take, uploaded a couple of test screencasts to Youtube, easy as anything.

And then, on Tuesday – late on Tuesday, at a time when Tuesday had become Wednesday and I really wanted to call an end to the day, save for finishing off a post with a couple of screenshots – Jing died. Every time I restarted it, it claimed I was no longer a Pro user, and died.

So I reinstalled, and tried again. Same thing. Rebooted my Mac, and tried again. Still no joy. Created a new, free account, and whenever I started Jing, it crashed.

Superstition kicked in and I blamed the upgrade, trying (maybe successfully, maybe not) to send a help request to Techsmith. (Finding the help was a nightmare: I think I had to create a new account on a help system somewhere along the way, and on posting a help email, I couldn’t tell whether it had been submitted or not.) The typical online help rigmarole, essentially. Even if you don’t start off angry, you’re likely to end up furious. (Plus I was really flagging by now and maybe not thinking as clearly as I might!)

A search on Twitter turned up a @techsmith account, and the contact details of someone at Techsmith, who I emailed. But it was past day’s end, even in the US, so I went looking for an alternative. (I could of course have just used the Mac screengrab tools to do what I needed, and then uploaded the images to flickr using Flock, but I was looking to punish Techsmith by finding an alternative to Jing that worked just as well!)

In the end, I settled on Skitch, and it sort of worked okay, but it was nothing like as painless as Jing. For every screenshot I took, I just wanted Jing back…

…anyway, I picked up a friendly email from Techsmith yesterday saying there had been problems, and a tweeted prompt from Techsmith last night asking if Jing was now working for me again (it was/is). The problem, it seems, was at the Techsmith end, an issue that caused Jing on Mac Tiger to crash. (I’m intrigued as to how a problem at the webservice end can kill an app running on the desktop? A harbinger of things to come more generally with web apps, maybe?)

So what do I take from this experience? Firstly, Jing is part of what I do, and it does just what it needs to for me. Secondly, without twitter I’d have had a really crap customer experience trying to understand what was going on (had something gone bad with my Pro upgrade? Was it a Jing problem or my problem? and so on..).

As it’s turned out, rather than writing a ranty post saying I’ve now changed my screencapture tool because of blah, blah, blah, if anyone asks what tool I use for screencaptures, I’d still say Jing. And from the ease of use in uploading screencaptured videos to Youtube, I’d also recommend the upgrade to Jing Pro if quick’n’easy raw screencasts are your thing.

A Glimpse of Work In Progress

Prompted by a couple of comments from @Josswinn to be more transparent (?!;-), here’s a glimpse into how I set about learning how to do something.

It potentially won’t make a lot of sense, but it’s typical of the sort of process I go through when I hack something together…

So, there’s usually an initiator:

Then there’s a look to see if there’s anything to play with that is on my current list of things I think I’d like to (or need to;-) know more about:

Hmm, ok… Google spreadsheets. I’ve just learned about how to write queries against Google spreadsheets using the visualisation API query language, so can I push that another step forward…? What don’t I know how to do that could be useful, and that I could try to demo in an app using this resource?

How about this: a web page that lets me pull out the results for a university searched for by name.

Hmm, what would that mean? A demo around a single subject, maybe, or should the user be able to specify the subject? The subject areas are listed, but how do I get those into a form? Copy the list of tab names from the spreadsheet? Hmm… Or how about entering the name of a single university and displaying the results for that HEI in each of the categories. That would also require me to find out how many sheets there were in the spreadsheet, and then interrogate each one…

Okay, so it’d be nice to be able to search for the results of a given university in a given subject area, or maybe even compare the results of two universities in a given subject area?

So to do that, do I need to learn how to do something new? If not, there’s no point, it’s just makework.

Well, I don’t know how to grab a list of worksheet names from a Google spreadsheet, so that’d be something I could learn… So how to do that?

Well, the query language only seems to work within a sheet, but there is a Google spreadsheets API I think? Let’s have a look: Google Spreadsheets APIs and Tools: Reference Guide. F**k it, why haven’t I looked at this before…?!

[Go there now – go on… have a look…]

Blah, blah, blah – ah: Spreadsheets query parameters reference. Quick scan… hmm, nothing obvious there about getting a list of worksheets. How about further up the page…?

Ah: “The worksheets metafeed lists all the worksheets within the spreadsheet identified by the specified key, along with the URIs for list and cells feeds for each worksheet”

I have no idea what the “list and cells feeds” means, but I’m not interested in that; “lists all the worksheets within the spreadsheet identified by the specified key” is what I want. Okay, so where’s a URL pattern I can crib?

http://spreadsheets.google.com/feeds/worksheets/key/visibility/projection

Scan down a bit looking for keywords “visibility” and “projection” (I’m guessing key is the spreadsheet key…). Okay, visibility public and projection basic just to check it works…

http://spreadsheets.google.com/feeds/worksheets/reBYenfrJHIRd4voZfiSmuw/public/basic

Okay, that works… No obvious way of getting the gid of the worksheet number though, unless maybe I count the items and number each one…? The order of worksheets in the feed looks to be the sheet order, so I just need to count them 0,1,2 etc from the top of the list to get the worksheet gid. Ah, there could be an opportunity here to try out the YQL Execute in a pipe trick? After all, the demo for that was an indexer for feed items, and because the API is chucking out RSS I need to use something like a pipe anyway to get a JSON version I can pull into my web page.

Hmmm, what else is there on the docs page? “alt, start-index, max-results Supported in all feed types. ” I wonder? Does alt stand for alternative formats maybe? Let’s try adding ?&alt=json to the URL – it may work, or it may relate to something completely other…. [Success] heh heh :-) Okay, so that means I don’t need the pipe?
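For the record, here’s a minimal sketch of that JSON route, written as it might be used in a web page. The field names follow the GData alt=json convention (feed.entry[n].title.$t) – worth checking against a live response – and the sample feed below is fabricated purely for illustration:

```javascript
// Sketch: pull the list of worksheets for a public spreadsheet via the
// worksheets feed, using alt=json so no pipe is needed to convert it.
function worksheetsFeedUrl(key) {
  return "http://spreadsheets.google.com/feeds/worksheets/" +
         key + "/public/basic?alt=json";
}

// Given the parsed JSON feed, return [{title, gid}] – the gid is just the
// position of the sheet in the feed, counting from 0 (the possibly flakey
// step mentioned above: it relies on feed order matching sheet order).
function extractWorksheets(feedJson) {
  return feedJson.feed.entry.map(function (entry, i) {
    return { title: entry.title.$t, gid: i };
  });
}

// Illustrative (fabricated) feed fragment, not a real API response:
var sample = { feed: { entry: [
  { title: { $t: "geo" } },
  { title: { $t: "engineering" } }
] } };
```

The array of {title, gid} pairs is then exactly what you’d need to populate a subject drop down box.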

What else – anything that could be useful in the future? Hmm, seems like the Spreadsheets API actually supports queries too? So e.g. I can run a query to see if there is a sheet that contains “geo” maybe?

http://spreadsheets.google.com/feeds/worksheets/reBYenfrJHIRd4voZfiSmuw/public/basic?title=geo

Okay – lots of other query stuff there; remember that for another day…

So: to recap, the above process took maybe 10-15 mins and went from:

– initiator: see a tweet;
– follow-up: look at the resource;
– question: is there something I could do with that data that I don’t know how to do?
– question refined: how about I pull out a list of the worksheets from the spreadsheet, and use that in e.g. a drop down box so students can choose a subject area from a list, then search for one, or compare two, HEIs in that subject area. I don’t know how to get the list, and I’m not sure about the best way of comparing two items, so I’ll probably learn something useful.
– solution finding: check out the Google spreadsheets API documentation; (if that had failed, I’d have done a blogsearch along the lines of ‘feed list worksheets google spreadsheet’)
– plan: err, okay, the plan is a form that pulls a list of worksheets from the HEI spreadsheet via JSON, indexes each one to give me the worksheet gid number (this is a possibly flakey step? Could I index the spreadsheet by name?) then builds a query on that worksheet using an input from one or more text boxes containing the name of HEIs (or maybe a single text box with comma separated HEI names?)

Normally I’d have then spent up to an hour messing around with this (it is (working) lunchtime i.e. playtime after all), but instead I spent forty five mins writing this blog post… which means there is no demo…:-(

Scripting Charts With GraphViz – Hierarchies; and a Question of Attitude

A couple of weeks ago, my other half was finishing off corrections to her PhD thesis. The layout of one of the diagrams – a simple hierarchy drawn originally using the Draw tools in an old version of MS-Word – had gone wrong, so in the final hours before the final printing session, I offered to recreate it.

Not being a draughtsman, of course I decided to script the diagram, using GraphViz:

The labels are added to the nodes using the GraphViz label attribute, such as:

n7[label="Trait SE"];

The edges are defined in the normal way:

n4->n8;
n4->n9;

But there was a problem – in the above figure, two nodes were placed by the GraphViz layout in the wrong place – the requirement was that the high and low nodes were ordered according to their parents, as indeed they had been ordered in the GraphViz dot file.

A bit of digging turned up a fix, though:

graph [ ordering="out" ];

is a switch that forces GraphViz to place the nodes in a left-to-right fashion in the order in which they are declared.

During the digging, I also found the following type of construct:

{rank=same;ordering=out;n8;n9;n10;n11;n12;n13;n14;n15}

which will force a set of nodes to be positioned along the same horizontal row. Whilst I didn’t need it for the simple graph I was plotting, I can see this being a useful thing to know.

There are a few more things, though, that I want to point out about this whole exercise.

Firstly, I now tend to assume that I probably should be able to script a diagram, rather than have to draw it. (See also, for example, Writing Diagrams, RESTful Image Generation – When Text Just Won’t Do and Progressive Enhancement – Some Examples.)

Secondly, when the layout “went wrong”, I assumed there’d be a fix – and set about searching for it – and indeed found it, (along with another possibly useful trick along the way).

This second point is an attitudinal thing; knowing an amount of programming, I know that most of the things I want to do most of the time are probably possible, because they are exactly the sorts of problems that are likely to crop up again and again, and as such solutions are likely to have been coded in, or workarounds found. I assume my problem is nothing special, and I look for the answer; and often find it.

This whole attitude thing is getting to be a big bugbear of mine. Take a lot of the mashups that I post here on OUseful.info. They are generally intended not to be one off solutions. This blog is my notebook, so I use it to record “how to” stuff. And a lot of the posts are contrived to demonstrate minimally worked examples of how to do various things.

So for example, in a recent workshop I demonstrated the Last Week’s Football Reports from the Guardian Content Store API (with a little dash of SPARQL).

Now to me, this is a mashup that shows how to:

– construct a relative date limited query on the Guardian content API;
– create a media RSS feed from the result;
– identify a convention in the Guardian copy that essentially let me finesse metadata from a free text field;
– create a SPARQL query over dbpedia and use the result to annotate each result from the Guardian content API;
– create a geoRSS feed from the result that could be plotted directly on a map.

Now I appreciate that no-one in the (techie) workshop had brought a laptop, and so couldn’t really see inside the pipe (the room layout was poor, the projection screen small, my presentation completely unprepared etc etc), but even so, the discounting of the mashup as “but no-one would want to do anything with football match reviews” was…. typical.

So here’s an issue I’ve come to notice more and more. A lot of people see things literally. I look at the football match review pipe and I see it as giving me a worked example of how to create a SPARQL query in a Yahoo pipe, for example (as well as a whole load of other things, even down to how to construct a complex string, and a host of other tiny little building blocks, as well as how to string them together).

Take GraphViz as another example. I see a GraphViz file as a way of rapidly scripting and laying out diagrams using a representation that can accommodate change. It is possible to view source and correct a typo in a node label, whereas it might not be so easy to see how to do that in a jpg or gif.

“Yes but”, now comes the response, “yes, but: an average person won’t be able to use GraphViz to draw a [complicated] diagram”. Which is where my attitude problem comes in again:

1) most people don’t draw complicated diagrams anyway, ever. A hierarchical diagram with maybe 3 layers and 7 or 8 nodes would be as much as they’d ever draw; and if it was more complicated, most people wouldn’t be able to do it in Microsoft Word anyway… I.e. they wouldn’t be able to draw a presentable diagram anyway…

2) even if writing a simple script is too hard, there are already drag and drop interfaces that allow the construction of GraphViz drawings that can then be tidied up by the layout engine.

So where am I at? I’m going to have a big rethink about presenting workshops (good job I got rejected from presenting at the OU’s internal conference, then…) to try to help people see past the literal to the deeper truth of mashup recipes, and try to find ways of helping others shift their attitude to see technology as an enabler.

And I also need a response to the retort that “it won’t work for complicated examples” along the lines of: a) you may be right; but b) most people don’t want to do the complicated things anyway…

The Invisible Library, For Real…

Somewhen last year I started thinking about what the consequences of an “invisible library” might actually be (Joining the Flow – Invisible Library Tech Support and The Invisible Library (Presentation)) and it seems like one consequence might be – no books!

Following a series of workshops on Library futures, it seems as if the OU Library is going to get rid of its book stock… Now this isn’t actually as daft as it first might sound: the OU Library doesn’t loan out physical books to students as a rule (except maybe to local students) and the book stock is maintained for scholarly (course writing) purposes, as well as to support research.

It also turns out that maintaining the book stock is expensive: the cost of shelf space and overheads, on top of the costs associated with issuing loans and returns, restacking books, binding and cataloguing (i.e. the total cost of ownership of the book), means that the annual cost per book loan exceeds the cost of users just buying the equivalent books for themselves and reclaiming the costs.

So it seems that the Library will be ramping up its disposal policy and getting rid of its book stock over the next year, apart from a small collection of books donated to the University by the books’ authors (the “vanity collection”, apparently?!) and books authored by members of the university (the “repository collection”).

In place of the book stock, university members will be encouraged to purchase books themselves, and reclaim the costs via a faculty managed fund. Once a book has been finished with, it will be placed on the ‘virtual bookshelf’ (i.e. an ‘invisible’ bookshelf ;-), using one of the first devlab_alpha applications, a revamping of the old KMI bookshelf application. (This application allowed individuals to maintain a list of ISBNs of books they had in their office on a personal profile page, so that other people could see what books were available ‘down the corridor’ and then borrow them at a local/personal level.)

I’m hoping that a variant of my Library Traveller script will become part of this invisible library play, though rather than looking up books on the soon to be redundant OPAC, it’ll look books up on the Virtual Library shelves, as well as integrating with the expenses claims system (so when you buy a book on Amazon, for example, you can automatically file a claim at the same time).

I’m also hoping that the incredible Fran Thom, who’s managed to argue this initiative through, will be able to come up to the second Mashed Libraries event in July – Mash Oop North – and motivate some of the other HE libraries that will be gathered there to drop some of their collections too…

PS it seems that user surveys ranked the smell of books in the library higher than the actual book stock in terms of what people expected from the new library building when it was being designed, which maybe explains why we have the scented air in the library. At the moment, they pipe in an aroma somewhere between pine forests and olive groves, on top of the natural smell of the building, but whether this is to mask the disappearing smell of the book collection when it does go, or to allow the Library staff to pipe in a replacement “essence of books, number 23” scent when the book collection disappears, I don’t really know…

PS Always check the date stamp of a post..;-) But it makes you think, doesn’t it…?!

OU DevLabsAlpha

Oh, great day! It seems that, keen to jump on the bandwagon, the OU will soon be opening up a “Uni-API”, in part inspired by the opening of the Grauniad and New York Times APIs. And taking a lead from Google in more ways than one, the new OU site will pilot services that are not yet ready for mainstream use (in much the same way that Google Labs does), on the “devlabs_alpha” site (http://dvla.open.ac.uk I think, but I need to check that…)

Hopefully a couple of applications I’ve been involved with will make it on to devlabs_alpha, such as the Course Profiles Facebook app (not sure how many users it has now? I’d hope upwards of 6,000?) and the OU/iPlayer 7 day Catch Up iPhone webpage.

One of the features of the site will be a voting mechanism for people to vote up the applications they like, and feed into the more traditional process of allocating formal resource to a project and developing it as a fully blown production system.

The Google influence goes further with the adoption in LT/AAC-S of 10:10 time, based on the famous (apocryphal?) 10% time that allows Google employees to work on development projects of their own devising. In order to regain some semblance of control, 10:10 requires two developers to each dedicate their personal 10% time to the same project, and work on it using a pair programming approach. It is hoped this will guarantee that useful rather than frivolous projects will result (in part because anyone with an idea has to persuade someone else to work on it too…). (A cynic like me would see this as introducing friction to the system in the hope that a Prisoner’s Dilemma situation occurs, no-one pairs up and no 10% is used up; but more fool me, maybe… ;-)

The pair programming feature is there to get around the lack of a formal development cycle, in the hope that pairwise testing will result in pretty robust code (good enough to be rapidly upgraded to production code if the service is adopted as a mainstream service).

Anyway, I think this beats the likes of MIT to this sort of initiative (I don’t think we’ll ever forgive them for letting them get to be the first institution to take their wares open!) and hopefully we’ll see this as just the first of many such similar offerings….

PS Always check the date stamp of a post..;-) But it makes you think, doesn’t it…?!

404 “Page Not Found” Error pages and Autodiscoverable Feeds for UK Government Departments

Around the time of the IWMW event last year, I put together a couple of quick pages that published the 404/page not found error pages for all the UK HEI homepages I could find (UK HEI “Page Not Found” Error Pages) and all the autodiscoverable RSS feeds that could be found on the HEI web homepages (Back from Behind Enemy Lines, Without Being Autodiscovered(?!)).

(Rather tellingly, some of the 404 pages are still, err, rather basic, and many of the sites still haven’t quite got the idea of the utility of this RSS malarkey yet…)

So given that I’ve started poking around various government department websites, here’s a page that pulls back images of their 404/page not found pages, as well as links to any RSS feeds that are autodiscoverable from the department’s home web page: UK Government Department webpage auditor.

The list of department homepage URLs is scraped from the central government department sites page on the Number10 website via this Yahoo pipe – UK Gov Dept Website Audit pipe – which scrapes a list of links from the central government department sites page HTML. (If there’s a more authoritative list somewhere, feel free to post a link in the comments to this post.)

The pipe then annotates each department item with a non-existent page link and tries to autodiscover any RSS feeds that are linked to from the department homepage.

The pipe output feed is then loaded into the auditor webpage, and pulls in a thumbnail for each 404 page from the Thummer service. (I actually hit this quite hard over the weekend… Sorry, Matt… However, the thumbnail generating code is available from the site, so if anyone fancies hosting a copy and maybe setting up a tracking service so we can see how government department website 404 pages change over the coming weeks, that’d be a neat thing to do..;-)

So what sorts of feed might be good to find on a Government department website? (It’s worth remembering you can link to several.) Typical offerings include news feeds and job ads. As of a week or two ago, a quick win has become available for grabbing the job ads from the Civil Service Job Service API on the Civil Service (beta) Developers page. And if that’s too hard, Steph Gray’s Civil Service jobs, your way describes a service he knocked together in no time that will “[g]enerate an RSS feed of jobs from any specific department”: Government Jobs Direct. So for example, here’s a Jobs feed from DIUS that could be made autodiscoverable from the DIUS homepage? ;-)

I’d quite like to see a feed of current consultations (and maybe one with a full list of recent consultations, both open and closed). As a quick win, the maintainers of the department websites could even just link to a feed of consultations being held by their department from Tell Them What You Think. For example, here’s where you can grab a feed of recent consultations from the Home Office:

(Harry, have you thought of making the feeds autodiscoverable from those pages too?;-)

Okay, that’s more than enough for now – I’ve probably already done more than enough to cause a few people grief this morning ;-) Just to recap: here’s a link to the UK Government Department 404 and feed autodiscovery page.

PS digging through my Pipes collection, I found another one to do with feed autodiscovery from government websites: Autodiscover Government Consultation feeds. This uses a pipe that grabs a list of Government Department consultation websites (via TellThemWhatYouThink) and then runs those pages through a feed autodiscovery routine.

When I get a chance, I’ll add this info to the auditor web page…

Anti-tags and Quick and Easy Block (Un)commenting

Looking back over the comments to @benosteen‘s post on Tracking conferences (at Dev8D) with python, twitter and tags just now, I noticed this comment from him replying to a comment from @miaridge about “app noise” appearing in the hashtag feed:

@mia one thing I was considering was an anti-tag – e.g. #!dev8d – so that searches for ‘dev8d’ would hit it, but ‘#dev8d’ shouldn’t.

The other tweak to mention is that booleans work on the twitter search:

‘#dev8d -from:randomdev8d’ would get all the #dev8d posts, but exclude those from randomdev8d.

Likewise, to get all the replies to a person, you can search for ‘to:username’, handy to track people responding to a person.

Brilliant:-)

Note also that one thing worth bearing in mind when searching on Twitter is that a search for @psychemedia is NOT the same as a search for to:psychemedia. That is, those two searches may well turn up different results.

The “to:” only searches for tweets that START with “@psychemedia”; so if @psychemedia appears elsewhere in the tweet (e.g. “what the ??? is @psychemedia talking about?”), the “to:” search will not find it, whereas the “@” search will.

Why’s this important? Well, a lot of people new to using Twitter use the Twitter website interface to manage their interactions, and the “Replies” screen is powered like the “to:” search. Which means if someone “replies” to you in a “multiple addressee” tweet – e.g. “@mweller @psychemedia are you gonna make some more edupunk movies?” – then if you’re not the first named person, the @Replies listing won’t show the tweet… the only way you can discover them is to search twitter for “@psychemedia”, for example.

The Twitter advanced search option to search for posts “Referencing a person” is simply a search of the @person form.

(Note that Twitter search lets you subscribe to search results – so you can always subscribe to an ego search feed and receive updates that way; or you can use a client such as Tweetdeck which runs the search automatically.)

(I’m not sure what happens if someone actually replies to one of your tweets and then prepends some text before your name? Will twitter still spot this as a reply? If anyone knows, can you please comment back?)

Just by the by, the “anti-tag” trick reminds me of this code commenting pattern/trick (I don’t remember where I came across it offhand?) that makes it easy to comment and uncomment blocks of code (e.g. in PHP or Javascript):

Before the comment…
/*
This will be commented out…
//*/
After the comment…

To uncomment out the block of code, just add a leading “/” to the first comment marker to turn it into a single line comment:

Before the comment…
//*
This will NO LONGER be commented out…
//*/
After the comment…

The block comment closing tag is now itself single line commented out.
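To see the trick in action in Javascript (the same pattern works in PHP, or any language with C-style comments):

```javascript
// Block OFF: "/*" opens a block comment, and the "//*/" line closes it
// (the "*/" inside it is read as the block-comment terminator).
var log = [];
log.push("before");
/*
log.push("inside");   // commented out in this version
//*/
log.push("after");

// Block ON: add one "/" to the opening marker so it becomes a single
// line comment; "//*/" is now just a single line comment too.
var log2 = [];
log2.push("before");
//*
log2.push("inside");  // now live
//*/
log2.push("after");
```

Running this, log ends up as ["before","after"] and log2 as ["before","inside","after"] – a one character toggle.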

(I seem to remember it was described far more eloquently than that when I came across it!;-)

PS Ah ha, here we are – where I first saw the commenting trick: Every keystroke is a prisoner – a neat commenting trick.

QR Payments

Over dinner one evening at Dev8D, we fell to chatting about payment mechanisms in restaurants, and how the credit card payment model requires you to hand over your card so that it can be read in a third party card reader – that is, a device that is not under your control.

How much easier it would be if you could be handed your bill with a QR-code attached, which, when scanned, created a Paypal style payment that you could pay via a client on your phone. That is, your phone could become the payment appliance; the transaction is executed on your mobile phone, using your PayPal account. A web-enabled till could then be used to confirm that the payment had been made.

Easy – and probably hack togetherable via the PayPal or Amazon Flexible Payments API?

For example, you could on the fly create a short-lived web page detailing the bill with a PayPal or Amazon “Pay now” button on it (or ideally, a mobile payments site, such as PayPal’s Mobile Checkout); generate a URL to the page in the form of a QR code; let the user grab the URL with their phone and go to the appropriate payment page on Paypal or Amazon Payments. Job done?
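The till-side step of that flow is trivial to sketch. Something like the following – where the host and parameter names are entirely made up for illustration – would generate the URL you’d then render as a QR code (in a real version it would point at a PayPal or Amazon checkout page):

```javascript
// Build a short-lived bill URL carrying an order reference and the
// amount; this URL is what gets encoded into the QR code on the bill.
function billUrl(orderId, amountPence) {
  var amount = (amountPence / 100).toFixed(2);  // pence -> pounds string
  return "http://till.example.com/bill/" + encodeURIComponent(orderId) +
         "?amount=" + amount + "&currency=GBP";
}
```

Working in pence (integers) and only formatting to pounds at the last moment avoids floating point oddities creeping into the amount.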

PS it seems there’s probably a patent or two out there already trying to lay claim to this sort of idea, such as this one for a Distributed Payment System and Method.

Which raises a question for me. Patents allow inventors a period of grace to recoup expenses incurred during a process of invention. So if you can easily hack a solution together using bits of string and RESTful APIs you can find scattered around the web, what is it that actually merits the right to protection?

PS and lo, it came to pass… Now There’s Even an App That Lets You Pay for Coffee at Starbucks. See also Starbucks Launches First Dedicated iPhone App for Stored-Value Cards for screenshots.

Global Sunrise

This post is as much a thought out loud as much as anything, but who knows – maybe it’ll go somewhere…;-)

Last week, we did our first “special” with the BBC World Service Digital Planet programme (Exploring the GeoWeb with Digital Planet). Over the next week or two, we’ll be chatting over how it went and identifying – now we know a little more clearly how we can support the programme on open2.net – what sorts of support we might be able to offer to wrap around future programmes.

So I started riffing around the idea of travel bugs, geo-coded photos, the interactive photo exhibits that grew up around Obama’s Presidential inauguration and such like, and wondered about a global participatory event… a global distributed photo shoot…

So here’s what I was wondering – at the next equinox (‘cos we know when that is) or the summer solstice (cos we all know when that is, too) we try to get people from all over the world to photograph the moment of sunrise (or sunset) and upload their geocoded, time stamped photos, taken on that day, just that day, and that day only, to flickr (or wherever). And then we make a movie of it: “Global Sunrise”.

So whaddya think?:-)

(Or has it been done already…?)

Social Telly? The Near Future Evolution of TV User Interfaces

In When One Screen Controls Another I pulled together a few links that showed how devices like the iPhone/iPodTouch might be used to provide rich UI, touchscreen interfaces to media centres, removing the need for on-screen control panels such as electronic programming guides and recorder programming menus by moving those controls to a remote handset. But there’s another direction in which things may evolve, and that’s towards ever more “screen furniture”.

For example, a prototype demoed last year by the BBC and Microsoft shows how it might be possible to “share” content you are viewing with someone in your contact list, identify news stories according to location (as identified on a regional or world map), or compile your own custom way through a news story by selecting from a set of recommended packages related to a particular news piece. (The latter demo puts me in mind of a course topic that is constructed by a student out of pre-prepared “learning objects’).

You can read more about the demo here – Will viewers choose their own running order? – (which I recommend you do…) but if that’s too much like hard work, at least make time to watch the promo video:

For another take on the Microsoft Media Room software that underpins the BBC demo, check out this MediaRoom promo video:

For alternative media centre interfaces, it’s worth checking out things like Boxee (reviewed here: Boxee makes your TV social), XBMC and MythTV.

It’s also worth bearing in mind what current, widely deployed set-top box interfaces look like, such as the Sky Plus interface:

In contrast to the media centre approach, Yahoo is making a pitch for Connected TV: Widget Channel (e.g. as described here: Samsung, Yahoo, Intel Put TV Widget Pieces in Place, showing how the widget channel can be built directly into digital TVs, as well as set-top boxes).

(Remember Konfabulator, anyone? It later became Yahoo widgets which have now morphed, in turn, into content for the widget channel. In contrast, Yahoo’s media centre/PVR download – Yahoo! Go™ for TV – appears to have stalled, big time…)

The widget channel has emerged from a collaboration between Yahoo and Intel and takes the idea of desktop widgets (like Konfabulator/Yahoo widgets, Microsoft Vista Sidebar gadgets, Google Desktop gadgets , or Mac Dashboard) on to the TV screen, as an optional overlay that pops up on top of your normal TV content.

Here’s a demo video:

So – which approach will play out and hit the living room first? Who knows, and maybe even “who cares…?!”

PS maybe, maybe, should the OU care? As an institution, our reputation and brand recognition were arguably forged by our TV broadcasts, back in a time when telly didn’t start till lunchtime, and even when it did start, you were likely to find OU “lecture-like” programmes dominating the early afternoon schedule:

Where’s the brand recognition going to come from now? 1970s OU programming on the BBC showed how the OU could play a role as a public service broadcast educator, but I’m not sure we fulfill that mission any more, even via our new web vehicles (Youtube, iTunesU, OU podcasts etc.)? I’d quite like to see an OU iPlayer, partly because it allows us to go where iPlayer goes, but I also wonder: do we need to keep an eye on the interfaces that might come to dominate the living room, and maybe get an early presence in there?

For example, if the BBC get into the living room with the Canvas set-top box, would we want a stake somewhere in the interface?

PS just so you know, this post was written days ago, (and scheduled for delivery), way before the flurry of other posts out there on this topic that came out this week… ;-)