Applying SEO to the Course Catalogue

Just before Christmas I gave a talk at the department awayday that I’d intended to do in the style of a participatory lecture but, as is the way of these things, it turned into a total palaver and lost most of the lunch-addled audience within the first 20 seconds ;-)

Anyway, anyway, one of the parts of the talk was to get everyone to guess what course was being described based on a tag cloud analysis of the course description on the corresponding page of the course catalogue (got that?)

Here’s the relevant part of the presentation:

(The course codes were actually click-revealed during the presentation.)

Note that the last slide actually shows a tag cloud of the search terms that brought visitors into the OU website and delivered visitors to the specified course page, rather than a tag cloud of the actual course description.

See if you can spot which is which – remember, one of the following is generated from the actual course description, the other from incoming search terms to that page:

[Tag cloud image: 2009-01-12_2319]

[Tag cloud image: T209 description tag cloud]

I’m not going to explore what any of this “means” in this post (my blogging time is being increasingly sidelined, unfortunately :-( ); suffice to say that whilst I was giving the original presentation I heard myself strongly arguing something along the lines of the following:

It’s pointless writing the course description on the course catalogue web pages using the terminology you want students to come out of the course with (that is, using the language you expect the course to teach them). What the course description has to do is attract people who want to learn those terms; so YOU have to use the words that they are likely to be using on Google to find the course in the first place.

It strikes me that a similar sense of before/after language might also apply to the way we phrase learning objectives at the start of a learning activity in everyday, “why are we bothering learning this at all?” type language, and then clarify the learning outcomes in jargon-heavy, terminology-laden, worthy-sounding terms at the end of the activity? ;-)
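By way of illustration, here’s a minimal sketch of the sort of comparison that sits behind those tag clouds: count the terms used in a course description, count the terms in the search queries that delivered visitors to the page, and see which search vocabulary the description never uses. The filenames and the analytics export are purely hypothetical.

```python
# A rough sketch, not a polished tool: compare the vocabulary of a course
# description with the vocabulary of the search terms that brought visitors
# to its page. The input files are placeholders for whatever text and
# analytics export you actually have.
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "to", "a", "in", "you", "will", "this", "for", "on"}

def term_counts(text):
    """Lowercase, tokenise and count words, dropping common stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

description_terms = term_counts(open("t209_description.txt").read())   # hypothetical file
search_terms = term_counts(open("t209_search_terms.txt").read())       # hypothetical analytics export

# Search vocabulary that never appears in the catalogue copy - candidate
# words to work into the course description.
missing = {t: n for t, n in search_terms.items() if t not in description_terms}
for term, count in sorted(missing.items(), key=lambda kv: -kv[1])[:20]:
    print(term, count)
```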

See also: Measuring the Success of the Online Course Catalog, which looks at the design of a course catalogue from an SEO/actionable analytics point of view.

When One Screen Controls Another

In earlier posts, I’ve pondered on the rise of “dual screen” activity (e.g. Dual View Media Channels), but what about when one screen provides the control surface for another?

Earlier this week, the new release of the Apple iWork office productivity suite was accompanied by the announcement of an iPhone remote control app for Keynote (Keynote being the Mac equivalent of Microsoft’s PowerPoint presentation software). Here’s a demo video showing how it works:

I’d been looking for something like this for some time (and have been tempted to try out the free Telekinesis universal remote, though I’ve not had a chance to get round to installing it yet), so it was good to see what Apple’s “official solution” looks like.

Whilst taking the dog out for a walk, it occurred to me that an iPhone/iPod touch style remote could be really handy for many other home entertainment appliances – like the telly, for example. Want to know what’s on the other side without changing channel (or using a picture-in-picture pop-up)? Why not preview it on your remote? Or how about checking out the programme guide? It’s a real pain having to steal the screen to view the guide, so why not check it out on your remote instead? Programming the DVD/HDD recorder is another activity that prompts the “can I just set the video” routine, as you change over to the ever popular “schedule recording” channel. Duh – why not just do it on the remote…? And so on…

Of course, it seems that several “screen remote” clients are already out there… like the Apple official “Remote” app for iTunes/Apple TV (review here), or the rather more elaborate Remote Buddy, as shown in this video:

And if “Remote Buddy” isn’t to your taste, how about iSofa (video):

(For an up-to-the-minute review of iSofa, check out Get Yer Feet Off iSofa; as well as “remote-ing” your Mac, iSofa lets you open “not a web browser, but a file one. It allows you to navigate your user directory on the computer, and open files that can be opened in Safari on the iThing – images, Word files, PDFs, etc” – thanks for that, Alan :-) )

As ever, it seems as if the future really is out there… So, for example, take that “EPG remote on an iPhone” idea: MythTV viewers can already try it out with MyMote:

Do you ever get the feeling you’re living in a William Gibson novel?

PS thanks to Owen for the pointer to Air Mouse (use your iPodTouch as a mouse’n’keyboard combo for your Mac), and @oxfordben for a “fwiw” pointer to the MythWeb web interface to MythTV.

PPS See also Steps Towards Making Augmented Reality A Reality, which shows how to use an iPhone/iPodTouch as part of an augmented reality setup:-)

From Sketch-Up to Mock Up…

I know Christmas is over for another year, but how much would you like a tool that lets you create 3D drawings with ease, import those drawings as models into an interactive 3D world, and then maybe print out your designs as physical 3D models using a 3D printer?

For those of you who haven’t come across SketchUp, it’s a 3D-design/drawing package published by Google that allows you to construct simple three dimensional models with very little training, and far more involved models with a bit of practice (for some example tutorial videos showing just how easy it is to use, see 3D Modeling with SketchUp).

Part of the attraction of SketchUp is the ready integration of SketchUp models into Google Earth – so anything you design in SketchUp can be viewed within that environment. This feature provided part of the rationale for my pitch to the “Show Us a Better Way” call last year on 3D Planning applications. The idea there was that planning applications to local authorities might come with 3D plans that could be viewed using a geo-interface – so at a glance I’d be able to see markers for planning applications on the Isle of Wight, for example, and then zoom in to see them in more detail (specifically, 3D detail – because not everybody knows how to “read” a 2D plan, right?! ;-) ). An educational extension to this idea imagined school pupils in a DT lesson creating 3D models based on current planning applications in their locale, and then having these marked according to how well they corresponded to the original 2D planning application drawings. High-scoring models (produced in a timely fashion) could then be made available “for real” within the planning consultation exercise.

(SketchUp is already being used in the real-world by companies like Simplified Building Concepts, who solicit user-designs of constructions assembled from a particular range of tubular building components.)

Another attractive feature of SketchUp is the ability to import models into the 3D Cobalt virtual world, as shown in this tutorial video – Using Google 3D Warehouse to Build Cobalt & Edusim Virtual Worlds (obtained via Cobalt – Edusim Quick Start Tutorials):

(Cobalt/Open Croquet rely on the user running an instance of a world in a client on their own computer, and then optionally connecting to other people who are also running Open Croquet on their own computers.)

For more on the educational use of Cobalt, and other 3D worlds, visit the Cobalt/Edusim Group.

As well as viewing models in virtual worlds, it’s also possible to “print out” scale models using 3D-printing technology, as Sweet Onion Creations describe. For example, the following video shows how to print out a 3D model from the Google 3D Warehouse.

Oh yes, and did I mention that SketchUp is also scriptable, so you can write code to create your models? See the Google SketchUp Ruby API (e.g. architecture-related Ruby plugins).

Trackbacks, Tweetbacks and the Conversation Graph, Part I

Whenever you write a blog post that contains links to other posts, and is maybe in turn linked to from other blog posts, how can you keep track of where your blog post sits “in the wider scheme of things”?

In Trackforward – Following the Consequences with N’th Order Trackbacks, I showed a technique for tracking the posts that link to a particular URI, the posts that link to those posts and so on, suggesting a way of keeping track of any conversational threads that are started by a particular post. (This is also related to OUseful Info: Trackback Graphs and Blog Categories.)

In this post, I’ll try to generalise that thinking a little more to see if there’s anything we might learn by exploring that part of the “linkgraph” in the immediate vicinity of a particular URI. I’m not sure where this will go, so I’ve built in the possibility of spreading this thought over several posts.

So to begin with, imagine I write a post (POST) that contains links to three other posts (POST1, POST2, POST3). (Graphs are plotted using Ajax/Graphviz.)

In turn, two posts (POSTA, POSTB) might link back to my post:

So by looking at the links from my post to other posts, and looking at trackbacks to my post (or using the link: search limit applied to the URI of my post on a search engine) I can locate my post in its immediate “link neighbourhood”:
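To make that a little more concrete, here’s a rough sketch of how the immediate link neighbourhood might be assembled and handed to Graphviz – the URLs are made up, and the inbound links are supplied by hand in place of whatever trackback or backlink source you actually have:

```python
# Sketch only: scrape the outbound links from a post, combine them with a
# list of inbound links (trackbacks, or results of a link: style search),
# and emit Graphviz DOT describing the immediate link neighbourhood.
import re
import urllib.request

def outlinks(url):
    """Fetch a page and pull out the href targets of its anchors."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    return sorted(set(re.findall(r'href="(https?://[^"]+)"', html)))

post = "http://example.com/my-post"           # placeholder URL for "my post"
inlinks = ["http://example.com/postA",        # placeholder inbound links, e.g.
           "http://example.com/postB"]        # from trackbacks or a link: search

dot = ["digraph conversation {"]
for target in outlinks(post):
    dot.append('  "%s" -> "%s";' % (post, target))
for source in inlinks:
    dot.append('  "%s" -> "%s";' % (source, post))
dot.append("}")

print("\n".join(dot))   # pipe the output through dot -Tpng to draw the graph
```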

Now it might be that I want to track the posts that refer to posts that referred to my post (which is what the trackforward demo explored).
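And here’s the same idea pushed out to n’th order, along the lines of the trackforward demo – again just a sketch, with the backlink lookup stubbed out by a toy dictionary where a real version would query trackbacks or a backlink/search service:

```python
# "Trackforward" sketch: starting from one post, repeatedly look up who links
# to the posts found so far. inlinks() is a stand-in for a real backlink
# source (trackbacks, a link: search, a service like BackType).
TOY_BACKLINKS = {
    "POST":  ["POSTA", "POSTB"],
    "POSTA": ["POSTC"],
}

def inlinks(url):
    """Placeholder backlink lookup - swap in a real trackback/search query."""
    return TOY_BACKLINKS.get(url, [])

def trackforward(url, depth=2, seen=None):
    """Return (source -> target) edges for posts linking to url, to n levels."""
    seen = set() if seen is None else seen
    if depth == 0 or url in seen:
        return []
    seen.add(url)
    edges = []
    for source in inlinks(url):
        edges.append((source, url))
        edges.extend(trackforward(source, depth - 1, seen))
    return edges

for source, target in trackforward("POST"):
    print(source, "->", target)
```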

You might also be interested in seeing what else the posts that have referred to my original post have linked to:

Another possibility is tracking posts that refer to posts that I referred to:

It might be that one of those posts also refers to my post:

So what…? Well, I need to take a break now – more in a later post…

See also: Tweetbacks, a beta service that provides trackback-like functionality for tweets that reference a particular URL.

PS and also BackType and BackTweets

Social Telly? The Near Future Evolution of TV User Interfaces

In When One Screen Controls Another I pulled together a few links that showed how devices like the iPhone/iPod touch might be used to provide rich touchscreen interfaces to media centres, removing the need for on-screen control panels such as electronic programme guides and recorder-programming menus by moving those controls to a remote handset. But there’s another direction in which things may evolve, and that’s towards ever more “screen furniture”.

For example, a prototype demoed last year by the BBC and Microsoft shows how it might be possible to “share” content you are viewing with someone in your contact list, identify news stories according to location (as identified on a regional or world map), or compile your own custom way through a news story by selecting from a set of recommended packages related to a particular news piece. (The latter demo puts me in mind of a course topic that is constructed by a student out of pre-prepared “learning objects”.)

You can read more about the demo here – Will viewers choose their own running order? – (which I recommend you do…) but if that’s too much like hard work, at least make time to watch the promo video:

For another take on the Microsoft Mediaroom software that underpins the BBC demo, check out this Mediaroom promo video:

For alternative media centre interfaces, it’s worth checking out things like Boxee (reviewed here: Boxee makes your TV social), XBMC and MythTV.

It’s also worth bearing in mind what current, widely deployed set-top box interfaces look like, such as the Sky Plus interface:

In contrast to the media centre approach, Yahoo is making a pitch for Connected TV: Widget Channel (e.g. as described here: Samsung, Yahoo, Intel Put TV Widget Pieces in Place, showing how the widget channel can be built directly into digital TVs, as well as set-top boxes).

(Remember Konfabulator, anyone? It later became Yahoo widgets which have now morphed, in turn, into content for the widget channel. In contrast, Yahoo’s media centre/PVR download – Yahoo! Go™ for TV – appears to have stalled, big time…)

The widget channel has emerged from a collaboration between Yahoo and Intel and takes the idea of desktop widgets (like Konfabulator/Yahoo widgets, Microsoft Vista Sidebar gadgets, Google Desktop gadgets, or Mac Dashboard widgets) on to the TV screen, as an optional overlay that pops up on top of your normal TV content.

Here’s a demo video:

So – which approach will play out and hit the living room first? Who knows, and maybe even “who cares…?!”

PS maybe, maybe, should the OU care? As an institution, our reputation and brand recognition were arguably forged by our TV broadcasts, back in a time when telly didn’t start till lunchtime and, even when it did start, you were likely to find OU “lecture-like” programmes dominating the early afternoon schedule.

Where’s the brand recognition going to come from now? 1970s OU programming on the BBC showed how the OU could play a role as a public service broadcast educator, but I’m not sure we fulfil that mission any more, even via our new web vehicles (YouTube, iTunes U, OU podcasts, etc.). I’d quite like to see an OU iPlayer, partly because it would allow us to go where iPlayer goes, but I also wonder: do we need to keep an eye on the interfaces that might come to dominate the living room, and maybe get an early presence in there?

For example, if the BBC get into the living room with the Canvas set-top box, would we want a stake somewhere in the interface?

PS just so you know, this post was written days ago (and scheduled for delivery), way before the flurry of other posts out there on this topic that came out this week… ;-)

What Makes a Good API? A Call to Arms…

One of the sessions I attended at last year’s CETIS get-together was the UKOLN-organised Technological Innovation in a World of Web APIs session (see also My CETIS 2008 Presentations and What Makes A Good API? Doing The Research Using Twitter).

This session formed part of a project being co-ordinated by UKOLN’s homeworking Marieke Guy – the JISC “Good APIs” project (project blog) – which is well worth getting involved with, because it might just help shape the future of JISC’s requirements when they go about funding projects…

(So if you like SOAP and think REST is for wimps, keep quiet and let the projects that do go for APIs continue to get away with proposing overblown, unfriendly, overengineered ones… ;-)

So how can you get involved? By taking this survey, for one thing:

The ‘Good APIs’ project aims to provide JISC and the sector with information and advice on best practice which should be adopted when developing and consuming APIs.

In order to collate information the project team have written a very brief research survey asking you about your use of APIs (both providing and consuming).

TAKE THE “What makes a good API?” SURVEY.

I don’t know if the project will have a presence at the JISC “Developer Happiness” Days (the schedule is still being put together) but it’d be good if Marieke or Brian were there on one of the days (at least) to pitch in some of the requirements of a good API that they’ve identified to date;-)

PS here’s another fun looking event – Newcastle Maker Faire.

Getting Bits to Boxes

Okay – here’s a throwaway post for the weekend – a quick sketch of a thought experiment that I’m not going to follow through in this post, though I may do in a later one…

  • The setting: “the box” that sits under the TV.
  • The context: the box stores bits that encode video images that get played on the TV.
  • The thought experiment: what’s the best way of getting the bits you want to watch into the box?

That is, if we were starting now, how would we architect a bit delivery network using any or all of the following:

1) “traditional” domestic copper last-mile phone lines (e.g. ADSL/broadband);
2) fibre to the home;
3) digital terrestrial broadcast;
4) 3G mobile broadband;
4.5) femtocells – hyperlocal, domestic mobile phone base stations that provide mobile coverage within the home or office environment, and use the local broadband connection to actually get the bits into the network (femtocells might be thought of as the bastard lovechild of mobile and fixed-line telephony!);
5) digital satellite broadcasts (sort of related: Please Wait… – why a “please wait” screen sometimes appears for BBC red button services on the Sky box…).

Bear in mind that “the box” is likely to have a reasonably sized hard drive that can be used to cache, say, 100 hrs of content alongside user-defined recordings.
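To put some (entirely illustrative) numbers on that, here’s a quick back-of-envelope sketch – the bitrates below are guesses, not measurements, but they give a feel for what caching 100 hrs of content and trickling it down various pipes would involve:

```python
# Back-of-envelope arithmetic for the thought experiment. All figures are
# illustrative assumptions, not measurements.
HOURS_CACHED = 100
VIDEO_BITRATE_MBPS = 4   # assume a ~4 Mbps standard-definition stream

storage_gb = HOURS_CACHED * 3600 * VIDEO_BITRATE_MBPS / 8 / 1000
print("Cache needed for %d hrs of content: ~%.0f GB" % (HOURS_CACHED, storage_gb))

# How long does one hour of content take to arrive over each kind of pipe?
links_mbps = {"ADSL (2 Mbps)": 2, "ADSL2+ (8 Mbps)": 8,
              "Fibre (50 Mbps)": 50, "3G (1 Mbps)": 1}
one_hour_megabits = 3600 * VIDEO_BITRATE_MBPS
for name, rate in links_mbps.items():
    minutes = one_hour_megabits / rate / 60
    print("%s: ~%.0f minutes per hour of content" % (name, minutes))
```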

All sorts of scenarios are allowed – operators like BT or Sky “owning” a digital terrestrial channel; the BBC acting as a “public service ISP”, with a premium-rate BBC licence covering the cost of a broadband landline or 3G connection; Amazon having access to satellite bursts for a couple of hours a day; and so on…

Hybrid return paths are possible too – the broadband network, SMS text messages, a laptop on your knee or – more likely – an iPhone or web-capable smartphone in your hand, and so on. Bear in mind that the box is likely to be registered with an online/web-based profile, so you can change settings on the web that will be respected by the box.
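For what it’s worth, here’s a minimal sketch of what that “web profile as return path” idea might look like from the box’s side – the profile URL and the JSON shape are entirely made up for illustration:

```python
# Sketch: the box polls a (hypothetical) web profile for recordings scheduled
# via the web, and picks up anything it hasn't seen before.
import json
import time
import urllib.request

PROFILE_URL = "https://example.com/mybox/profile.json"   # hypothetical endpoint

def fetch_schedule():
    """Pull the latest recording schedule from the box's web profile."""
    with urllib.request.urlopen(PROFILE_URL) as resp:
        return json.load(resp).get("recordings", [])

scheduled = set()
while True:
    for recording in fetch_schedule():
        key = (recording["channel"], recording["start"])
        if key not in scheduled:
            scheduled.add(key)
            print("New recording from web profile:", recording["title"])
    time.sleep(300)   # poll every five minutes
```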

If you want to play the game properly, you might want to read the Caio Review of Barriers to investment in Next Generation Broadband first.

PS If this thought experiment provokes any thoughts in you, please share them as a comment to this post:-)