Revisiting the Library Flip – Why Librarians Need to Know About SEO

What does information literacy mean in the age of web search engines? I’ve been arguing for some time (e.g. in The Library Flip) that one of the core skills going forward for those information professionals who “help people find stuff” is going to be SEO – search engine optimisation. Why? Because increasingly people are attuned to searching for “stuff” using a web search engine (you know who I’m talking about…;-); and if your “stuff” doesn’t appear near the top of the organic results listing (or in the paid for links) for a particular query, it might as well not exist…

Whereas once academics and students would have traipsed into the library to ask one of the High Priestesses to perform some magical incantation on a Dialog database through a privileged access terminal, for many people research now starts with a G. Which means that if you want your academics and students to find the content that you’d recommend, then you have to help get that content to the top of the search engine listings.

With the rate of content production growing to seventy three tera-peta-megabits a second, or whatever it is, does it make sense to expect library staffers to know what the good content is any more (in the sense of “here, read this – it’s just what you need”)? Does it even make sense to expect people to know where to find it (in the sense of “try this database, it should contain what you need”)? Or is the business now more one of showing people how to go about finding good stuff, wherever it is (in the sense of “here’s a search strategy for finding what you need”), and helping the search engines see that stuff as good stuff?

Just think about this for a moment. If your service is only usable by members of your institution and only usable within the locked down confines of your local intranet, how useful is it?

When your students leave your institution, how many reusable skills are they taking away? How many people doing informal learning or working within SMEs have access to highly priced, subscription content? How useful is the content in those archives anyway? How useful are “academic information skills” to non-academics and non-students? (I’m just asking the question…;-)

And some more: do academic courses set people up for life outside? Irrespective of whether they do or not, does the library serve students on those courses well within the context of their course? Does the library provide students with skills they will be able to use when they leave the campus and go back to the real world and live with Google? (“Back to”? Hah – I wonder how much traffic on HEI networks is launched by people clicking on links from pages that sit on the google.com domain?) Should libraries help students pass their courses, or give them skills that are useful after graduation? Are those skills the same skills? Or are they different skills (and if so, are they compatible with the course-related skills)?

Here’s where SEO comes in – help people find the good stuff by improving the likelihood that it will be surfaced on the front page of a relevant web search query. For example, “how to cite an article”. (If you click through, it will take you to a Google results page for that query.) Are you happy with the results? If not, you need to do one of two things: either start to promote third party resources you do like from your website (essentially, this means you’re doing off-site SEO for those resources), OR start to do onsite and offsite SEO on resources you want people to find on your own site.

(If you don’t know what I’m talking about, you’re well on the way to admitting that you don’t understand how web search engines work. Which is a good first step… because it means you’ve realised you need to learn about it…)

As to how to go about it, I’d suggest one way is to get a better understanding of how people actually use library or course websites. (Another is Realising the Value of Library Data and finding ways of mining behavioural data to build recommendation engines that people might find useful.)

So to start off – find out what search terms are the most popular in terms of driving traffic to your Library website (ideally relating to some sort of resource on your site, such as a citation guide, or a tutorial on information skills); run that query on Google and see where your page comes in the results listing. If it’s not at the top, try to improve its ranking. That’s all…
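If you wanted to keep an eye on that ranking over time, you could even script the check. Here’s a minimal sketch in Python, assuming the `requests` and `beautifulsoup4` libraries; bear in mind that automated querying is against Google’s terms of service and the results markup changes often, so the link-parsing heuristic below is an assumption, for illustration only.

```python
# A sketch only: Google's HTML changes often and automated querying is
# against their terms of service, so treat the parsing below as illustrative.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs

def serp_position(query, domain, num=20):
    """Return the 1-based rank of the first result on `domain`, or None."""
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": query, "num": num},
        headers={"User-Agent": "Mozilla/5.0"},  # bare requests tend to get blocked
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    rank = 0
    for a in soup.select("a"):
        href = a.get("href", "")
        # in the basic HTML view, organic hits are wrapped as /url?q=<target>
        if href.startswith("/url?q="):
            rank += 1
            target = parse_qs(urlparse(href).query).get("q", [""])[0]
            if domain in urlparse(target).netloc:
                return rank
    return None

print(serp_position("how to cite an article", "open.ac.uk"))
```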

For example, take a look at the following traffic (as collected by Google Analytics) coming in to the OU Library site over a short period some time ago.

A quick scan suggests that we maybe have some interesting content on “law cases” and “references”. For the “references” link, there’s a good proportion of new visitors to the OU site, and it looks from the bounce rate that half of those visited more than one page on the OU site. (We really should do a little more digging at this point to see what those people actually did on site, but this is just for argument’s sake, okay?!;-)

Now do a quick Google on “references” and what do we see?

On the first page, most of the links are relating to job references, although there is one citation reference near the bottom:

Leeds University library makes it in at 11 (at the time of searching, on google.co.uk):

So here would be a challenge – try to improve the ranking of an OU page on this results listing (or try to boost the Leeds University ranking). As to which OU page we could improve, first look at what Google thinks the OU library knows about references:

Now check that Google favours the page we favour for a search on “references” and if it does, try to boost its ranking on the organic SERP. If Google isn’t favouring the page we want as its top hit on the OU site for a search on “references”, do some SEO to correct that (maybe we want “Manage Your References” to come out as the top hit?).

Okay, enough for now – in the next post on this topic I’ll look at the related issue of Search Engine Consequences, which is something that we’re all going to have to become increasingly aware of…

PS Ah, what the heck – here’s how to find out what the people who arrived at the Library website from a Google search on “references” were doing onsite. Create an advanced segment:

Google analytics advanced segment

(PS I first saw these and learned how to use them at a trivial level maybe 5 minutes ago;-)

Now look to see where the traffic came in (i.e. the landing pages for that segment):

Okay? The power of segmentation – isn’t it lovely:-)

We can also go back to the “All Visitors” segment, and see what other keywords people were using who ended up on the “How to cite a reference” page, because we’d possibly want to optimise for those, too.

Enough – time for the weekend to start :-)

PS if you’re not sure what techniques to use to actually “do SEO”, check on Academic Search Premier (or whatever it’s called), because Google and Google Blogsearch won’t return the right sort of information, will they?;-)

Realising the Value of Library Data

For anyone listening out there in library land who hasn’t picked up on Dave Pattern’s blog post from earlier today – WHY NOT? Go and read it, NOW: Free book usage data from the University of Huddersfield:

I’m very proud to announce that Library Services at the University of Huddersfield has just done something that would have perhaps been unthinkable a few years ago: we’ve just released a major portion of our book circulation and recommendation data under an Open Data Commons/CC0 licence. In total, there’s data for over 80,000 titles derived from a pool of just under 3 million circulation transactions spanning a 13 year period.

http://library.hud.ac.uk/usagedata/

I would like to lay down a challenge to every other library in the world to consider doing the same.

So are you going to pick up the challenge…?

And if not, WHY NOT? (Dave posts some answers to the first two or three objections you’ll try to raise, such as the privacy question and the licensing question.)

He also sketches out some elements of a possible future:

I want you to imagine a world where a first year undergraduate psychology student can run a search on your OPAC and have the results ranked by the most popular titles as borrowed by their peers on similar courses around the globe.

I want you to imagine a book recommendation service that makes Amazon’s look amateurish.

I want you to imagine a collection development tool that can tap into the latest borrowing trends at a regional, national and international level.

DON’T YOU DARE NOT DO THIS…

See also a presentation Dave gave to announce this release – Can You Dig It? A Systems Perspective:

What else… Library website analytics – are you making use of them yet? I know the OU Library is collecting analytics on the OU Library website, although I don’t think they’re using them. (Knowing that you had x thousand page views last week is NOT INTERESTING. Most of them were probably people flailing round the site, failing to find what they wanted. And before anyone from the Library says that’s not true, PROVE IT TO ME – or at least to yourself – with some appropriate analytics reports.) For example, I haven’t noticed any evidence of changes to the website or A/B testing going on as a result of using Googalytics on the site… (Hmm – that’s probably me in trouble again…!;-)

PS I’ve just realised I didn’t post a link to the Course Analytics presentation from Online Info last week, so here it is:

Nor did I mention the follow up podcast chat I had about the topic with Richard Wallis from Talis: Google Analytics to analyse student course activity – Tony Hirst Talks with Talis.

Or the “commendation” I got at the IWR Information Professional Award ceremony. I like to think this was for being the “unprofessional” of the year (in the sense of “unconference”, of course…;-). It was much appreciated, anyway :-)

Arise Ye Databases of Intention

In what counts as one of my favourite business books (“The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture”), John Battelle sets the scene with a chapter entitled “The Database of Intentions”.

The Database of Intentions is simply this: the aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result.

(Also described in this blog post: Database of Intentions).

The phrase “the database of intentions” is a powerful one; but whilst I don’t necessarily agree with the above definition any more (and I suspect Battelle’s thinking about his own definition of this term may also have moved on since then) I do think that the web’s ability to capture intentional data is being operationalised in a far more literal form than even search histories reveal.

Here’s something I remarked to myself on this topic a couple of days ago, following a particular Google announcement:

The announcement was this one: New in Labs: Tasks and it describes the release of a Task list (i.e. a to do list) into the Google Mail environment.

As Simon Perry notes in his post on the topic (“Google Tasks: Gold Dust Info For Advertisers”):

It’s highly arguable that no piece of information could be more valuable to Google than what your plans / tasks / desires are.

In the world of services driven by advertising in exchange for online services, this stuff is gold dust.

We’d imagine that Google will be smart enough not to place ads related to your todo list directly next to the list, as that could well freak people out. Don’t forget that as soon as Google know this info about you, they can place the adverts where ever and when ever they feel like.

“Don’t forget that as soon as Google know this info about you, they can place the adverts where ever and when ever they feel like.”… Visit 10 web pages that run advertising, and I would bet that close to a majority of them are running Google AdSense.

Now I know everybody knows this, but I suspect most people don’t…

How does Google use cookies to serve ads?
A cookie is a snippet of text that is sent from a website’s servers and stored on a web browser. Like most websites and search engines, Google uses cookies in order to provide a better user experience and to serve relevant ads. Cookies are set based on your viewing of web pages in Google’s content network and do not contain any information that can identify you personally. This information helps Google deliver ads that are relevant to your interests, control the number of times you see a given ad, and measure the effectiveness of ad campaigns. Anyone who prefers not to see ads with this level of customization can opt out of advertising cookies. This opt-out will be specific only to the browser that you are using when you click the “opt out” button. [Advertising Privacy FAQ]

Now I’m guessing that “your viewing of web pages in Google’s content network” includes viewing pages in GMail, pages which might include your “to do” list… So a consequence of adding an item to your Task list might be an advert that Google serves to you next time you do a search on Google or visit a site running Google Ads.

Here’s another way of thinking about those ads: as predictive searches. Google knows what you intend to do from your “to do” list, so in principle it can look at what people with similar “to do” items search for, and serve the most common of these up next time you go to http://getitdone.google.com. Just imagine it – whereas at google.com you see an empty search box, at getitdone.google.com you load a page that has already guessed what query you were going to make, and runs it for you. So if the item on your task list for Thursday afternoon was “buy car insurance”, you’ll see something like this:

Heh, heh ;-) Good old Google – just being helpful ;-)
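Just to make the “predictive search” idea concrete, here’s a toy sketch in Python – the data structure and the whole approach are made up purely for illustration: given what other people who had the same to-do item went on to search for, pre-fill the most common query.

```python
from collections import Counter

# Hypothetical observed data: to-do item -> searches made by people who had it.
searches_by_task = {
    "buy car insurance": [
        "car insurance quotes", "cheap car insurance",
        "car insurance quotes", "compare car insurance",
    ],
}

def predicted_query(task):
    """Return the most common follow-on search for a given to-do item."""
    history = searches_by_task.get(task, [])
    return Counter(history).most_common(1)[0][0] if history else None

print(predicted_query("buy car insurance"))  # -> 'car insurance quotes'
```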

Explicit intention information is not just being handed over to Google in increasingly literal and public ways, of course. A week or so ago, John Battelle posted the following (Shifting Search from Static to Real-time):

I’ve been mulling something that keeps tugging at my mind as a Big Idea for some time now, and I may as well Think Out Loud about it and see what comes up.

To summarize, I think Search is about to undergo an important evolution. It remains to be seen if this is punctuated equilibrium or a slow, constant process (it sort of feels like both), but the end result strikes me as extremely important: Very soon, we will be able to ask Search a very basic and extraordinarily important question that I can best summarize as this: What are people saying about (my query) right now?
Imagine AdSense, Live. …

[I]magine a service that feels just like Google, but instead of gathering static web results, it gathers liveweb results – what people are saying, right now (or some approximation of now – say the past few hours or so) … ? And/or, you could post your query to that engine, and you could get realtime results that were created – by other humans – directly in response to you? Well, you can get a taste of what such an engine might look like on search.twitter.com, but that’s just a taste.

A few days later, Nick Bilton developed a similar theme (The Twitter Gold Mine & Beating Google to the Semantic Web):

Twitter, potentially, has the ability to deliver unbelievably smart advertising; advertising that I actually want to see, and they have the ability to deliver search results far superior and more accurate to Google, putting Twitter in the running to beat Google in the latent quest to the semantic web. With some really intelligent data mining and cross pollination, they could give me ads that makes sense not for something I looked at 3 weeks ago, or a link my wife clicked on when she borrowed my laptop, but ads that are extremely relevant to ‘what I’m doing right now’.

If I send a tweet saying “I’m looking for a new car does anyone have any recommendations”, I would be more than happy to see ‘smart’ user generated advertising recommendations based on my past tweets, mine the data of other people living Brooklyn who have tweeted about their car and deliver a tweet/ad based on those result leaving spammers lost in the noise. I’d also expect when I send a tweet saying ‘I got a new car and love it!’ that those car ads stop appearing and something else, relevant to only me, takes its place.

(See also Will Lack of Relevancy be the Downfall of Google?, where I fumble around a thought or two about whether Google will lose out on identifying well liked content because links are increasingly being shared in real time in places that Google doesn’t index.)

It seems to me that To Do/Task lists, Calendars, search queries and tweets all lie somewhere different along a time vs. commitment graph of our intentions. The to do list is something you plan to do; searching on Google or Amazon is an action executed in pursuit of that goal. Actually buying your car insurance is completing the action.

Things like wishlists also blend in our desires. Calendars, click-thrus and actual purchases all record what might be referred to as commitments (a click thru on an ad could be seen as a very weak commitment to buy, for example; an event booked into your calendar is a much stronger commitment; handing over your credit card and hitting the “complete transaction” button is the strongest commitment you can make regarding a purchase decision).

Way back when, I used to play with software agents, constructed according to a BDI (“beady eye”) model – Beliefs, Desires and Intentions. I also looked at agent teams, and the notion of “joint persistent goals” (I even came up with a game for them to play – “DIFFOBJ – A Game for Exercising Teams of Agents” – although I’m not sure the logic was sound!). Somewhere in there is the basis of a logic for describing the Database of Intentions, and the relationship between an individual with a goal and an engine that is trying to help the searcher achieve that goal, whether by serving them with “organic” content or paid for content.

PS I don’t think I’ve linked to this yet? All about Google

It’s worth skimming through…

PPS See also Status and Intent: Twoogle, in which I idly wonder whether a status update from Google just before I start searching there could provide explicit intent information to Google about the sort of thing I want to achieve from a particular search.

More Remarks on the Tesco Data Play

A little while ago, I posted some notes I’d made whilst reading “Scoring Points”, which looked at the way Tesco developed its ClubCard business and started using consumer data to improve a whole range of operational and marketing functions within the Tesco operation (The Tesco Data Business (Notes on “Scoring Points”)). For anyone who’s interested, here are a few more things I managed to dig up on Tesco’s data play, and their relationship with Dunnhumby, who operate the service.

[UPDATE – most of the images were removed from this post because I got a take down notice from Dunnhumby’s lawyers in the US…]

Firstly, here’s a couple of snippets from a presentation by Giles Pavey, Head of Analysis at dunnhumby, presented earlier this year. The first thing to grab me was this slide summarising how to turn data into insight, and then $$$s (the desired result of changing customer behaviour from less, to more profitable!):

In the previous post, I mentioned how Tesco segment shoppers according to their “lifestyle profile”. This profile is generated from the data a shopper produces – what they buy, when they buy it, and what stories you can tell about them as a result.

So how well does Tesco know you, for example?

(I assume Tesco knows Miss Jones drives to Tesco on a Saturday because she uses her Clubcard when topping up on fuel at the Tesco petrol station…).

Clustering shopped for items in an appropriate way lets Tesco identify the “Lifestyle DNA” of each shopper:

(If you self-categorise according to those meaningful sounding lifestyle categories, I wonder how well it would match the profile Tesco has allocated to you?!)

It’s quite interesting to see what other players in the area think is important, too. One way of doing this is to have a look around at who else is speaking at the trade events Giles Pavey turns up at. For example, earlier this year was a day of impressive looking talks at The Business Applications of Marketing Analytics.

Not sure what “Marketing Analytics” are? Maybe you need to become a Master of Marketing Analysis to find out?! Here’s what appears to be involved:

The course website also features an interview with three members of dunnhumby: Orlando Machado (Head of Insight Analysis), Martin Hayward (Director of Strategy) and Giles Pavey (head of Customer Insight) [view it here].

You can see/hear a couple more takes on dunnhumby here:
Martin Hayward, Director of Consumer Strategy and Futures at dunnhumby on the growth of dunnhumby;
Life as an “intern” at dunnhumby.

And here’s another event that dunnhumby presented at: The Future of Geodemographics – 21st Century datasets and dynamic segmentation: New methods of classifying areas and individuals. Although the dunnhumby presentation isn’t available for download, several others are. I may try to pull out some gems from them in a later post, but in the meantime, here are some titles to try to tease you into clicking through and maybe pulling out the nuggets, and adding them as comments to this post, yourself:
Understanding People on the Move in London (I’m guessing this means “Oyster card tracking”?!);
Geodemographics and Privacy (something we should all be taking an interest in?);
Real Time Geodemographics – New Services and Business Opportunities from Analysing People in Time and Space: real-time? Maybe this ties in with things like behavioural analytics and localised mobile phone tracking in shopping centres?

So what are “geodemographics” (or “geodems”, as they’re known in the trade;-)? No idea – but I’m guessing it’s the demographics of particular locales?

Here’s one of the reasons why Tesco are interested, anyway:

And finally (for now at least…) it seems that Tesco and dunnhumby may be looking for additional ways of using Clubcard data, in particular for targeted advertising:

Tesco is working with Dunnhumby, the marketing group behind Tesco Clubcard, to integrate highly targeted third-party advertising across Tesco.com when the company’s new-look site launches next year.
Jean-Pierre Van Lin, head of markets at Dunnhumby, explained to NMA that, once a Clubcard holder had logged in to the website, data from their previous spending could be used to select advertising of specific relevance to that user.
[Ref: Tesco.com to use Clubcard data to target third-party advertising (thanks, Ben:-)]

Now I’m guessing that this will represent a change in the way the data has been used to date – so I wonder, have Tesco ClubCard Terms and Conditions changed recently?

Looking at the global reach of dunnhumby, I wonder whether they’re building capacity for a global targeted ad service, via the back door?

Does it matter, anyway, if profiling data from our offline shopping habits are reconciled with our online presence?

In “Diving for Data” (Supermarket News, 00395803, 9/26/2005, Vol. 53, Issue 39), Lucia Moses reports that the Tesco Clubcard in the UK “boasts 10 million households and captures 85% of weekly store sales”, along with 30% of UK food sales. The story in the US could soon be similar, where dunnhumby works with Kroger to analyse “6.5 million top shopper households” (identified as the “slice of the total 42 million households that visit Kroger stores that drive more than 50% of sales”). With “Kroger claim[ing] that 40% of U.S. households hold one of its cards”, does dunnhumby’s “goal … to understand the customer better than anyone” rival Google in its potential for evil?!

OpenLearn WordPress Plugins

Just before the summer break, I managed to persuade Patrick McAndrew to use some of his OpenLearn cash to get a WordPress-MU plugin built that would allow anyone to republish OpenLearn materials across a set of WordPress Multi-User blogs. A second WordPress plugin was commissioned that would allow any learners happening by the blogs to subscribe to those courses using “daily feeds” that would deliver course material to them on a daily basis.

The plugins were coded by Greg Gaughan at Isotoma, and tested by Jim and D’Arcy, among others… (I haven’t acted on your feedback yet – sorry, guys…:-() For all manner of reasons, I didn’t post the plugins (partly because I wanted to do another pass on usability/pick up on feedback, but mainly because I wanted to set up a demo site first… but I still haven’t done that…) – so here’s a link to the plugins anyway in case anyone fancies having a play over the next few weeks: OpenLearn WordPress Plugins.

I’ll keep coming back to this post – and the download page – to add in documentation and some of the thoughts and discussions we had about how to evolve the WPMU plugin workflow/UI etc, as well as the daily feeds widget functionality.

In the meantime, here’s the minimal info I gave the original testers:

The story is:
– one ‘openlearn republisher’ plugin, that will take the URL of an RSS feed describing OpenLearn courses (e.g. on the Modern Languages page, the RSS: Modern Languages feed), and suck those courses into WPMU, one WPMU blog per course, via the full content RSS feed for each course.

– one “daily feeds” widget; this can be added to any WP blog and should provide a ‘daily’ feed of the content from that blog, that sends e.g. one item per day to the subscriber from the day they subscribe. The idea here is if a WP blog is used as a content publishing sys for ‘static’, unchanging content (e.g. a course, or a book, where each chapter is a post, or a fixed length podcast series), users can still get it delivered in a paced/one-per-day fashion. This widget should work okay…
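To make the pacing idea concrete, here’s a toy sketch of the widget’s logic – in Python, purely for illustration (the real widget is a WordPress plugin): each subscriber sees one more archive item per day, counted from the day they subscribed.

```python
from datetime import date

def items_visible(posts, subscribed_on, today=None):
    """posts: archive items, oldest first; one more is 'released' each day,
    counted from the subscriber's own start date."""
    today = today or date.today()
    days_elapsed = (today - subscribed_on).days
    return posts[: days_elapsed + 1]  # day 0 gets the first item

course = ["Unit 1", "Unit 2", "Unit 3", "Unit 4"]
print(items_visible(course, date(2008, 12, 1), today=date(2008, 12, 3)))
# -> ['Unit 1', 'Unit 2', 'Unit 3']
```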

Here’s another link to the page where you can find the downloads: OpenLearn WordPress Plugins. Enjoy – all comments welcome. Please post a link back here if you set up any blogs using either of these two plugins.

Decoding Patents – An Appropriate Context for Teaching About Technology?

A couple of nights ago, as I was having a rummage around the European patent office website, looking up patents by company to see what the likes of Amazon, Google, Yahoo and, err, Technorati have been posting recently, it struck me that IT and engineering courses might be able to use patents in much the same way that Business Schools use Case Studies as a teaching tool (e.g. Harvard Business Online: Undergraduate Course Materials)?

This approach would seem to offer several interesting benefits:

  • the language used in patents is opaque – so patents can be used to develop reading skills;
  • the ideas expressed are likely to come from a commercial research context; with universities increasingly tasked with taking technology transfer more seriously, looking at patents situates theoretical understanding in an application area, as well as providing the added advantage of transferring knowledge in to the ivory tower, too, and maybe influencing curriculum development as educators try to keep up with industrial inventions;-)
  • many patents locate an invention within both a historical context and a systemic context;
  • scientific and mathematical principles can be used to model or explore ideas expressed in a patent in more detail, and in the “situated” context of an expression or implementation of the ideas described within the patent.

As an example of how patents might be reviewed in an uncourse blog context, see one of my favourite blogs, SEO by the SEA, in which Bill Slawski regularly decodes patents in the web search area.

To see whether there may be any mileage in it, I’m going to keep an occasional eye on patents in the web area over the next month or two, and see what sort of response they provoke from me. To make life easier, I’ve set up a pipe to scrape the search results for patents issued by company, so I can now easily subscribe to a feed of new patents issued by Amazon, or Yahoo, for example.

You can find the pipe here: European Patent Office Search by company pipe.
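For anyone who’d rather script it than use Pipes, the same idea can be sketched in Python – with the loud caveat that the search endpoint and the markup selector below are pure placeholders (the real espacenet pages will differ), so this is a shape-of-the-thing sketch rather than working code for the EPO site.

```python
import requests
from bs4 import BeautifulSoup

def patents_for(applicant):
    """Scrape a results page for a given applicant and return feed-style items.
    The URL and the CSS selector are hypothetical placeholders, not the
    real EPO/espacenet markup."""
    resp = requests.get(
        "https://worldwide.espacenet.com/searchResults",  # hypothetical endpoint
        params={"query": f'pa="{applicant}"'},
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    return [
        {"title": a.get_text(strip=True), "link": a.get("href")}
        for a in soup.select("a.publication")  # hypothetical selector
    ]

for item in patents_for("Google"):
    print(item["title"], "-", item["link"])
```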

I’ve also put several feeds into an OPML file on Grazr (Web2.0 new patents), and will maybe look again at the styling of my OPML dashboard so I can use that as a display surface (e.g. Web 2.0 patents dashboard).

Immortalising Indirection

So it seems that Downes is (rightly) griping again;-) this time against “the whims of corporate software producers (that’s … why I use real links in th[e OLDaily] newsletter, and not proxy links such as Feedburner – people using Feedburner may want to reflect on what happens to their web footprint should the service disappear or start charging)”.

I’ve been thinking about this quite a bit lately, although more in the context of the way I use TinyURLs, and other URL shortening services, and about what I’d do if they ever went down…

And here’s what I came up with: if anyone hits the OUseful.info blog (for example) via a TinyURL or feedburner redirect, I’m guessing that the server will see something to that effect in the header? If that is the case, then just like WordPress will add trackbacks to my posts when other people link to them, it would be handy if it would also keep a copy of TinyURLs etc that linked there. Then at least I’d be able to do a search on those tinyURLs to look for people linking to my pages that way?
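As a sketch of that idea (generic Python/WSGI rather than actual WordPress code): wrap the app so that any request whose Referer header mentions a known shortener gets logged. Whether a shortener’s redirect actually surfaces its own domain in the Referer depends on the browser and the redirect type, so this is best-effort at most.

```python
# Middleware sketch: record requests that appear to arrive via a URL shortener.
# (Browsers often pass the *original* page as the Referer through a 301
# redirect, so the shortener's domain may never show up -- best-effort only.)
SHORTENERS = ("tinyurl.com", "feeds.feedburner.com", "bit.ly")

def log_short_link_referrers(app, logfile="short-link-referrers.log"):
    """Wrap any WSGI app and note referrals that mention a shortener domain."""
    def middleware(environ, start_response):
        referer = environ.get("HTTP_REFERER", "")
        if any(s in referer for s in SHORTENERS):
            with open(logfile, "a") as log:
                log.write(f"{referer} -> {environ.get('PATH_INFO', '/')}\n")
        return app(environ, start_response)
    return middleware
```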

Just in passing, I note that the Twitter search engine has a facility to preview shortened URLs (at least, URLs shortened with TinyURL):

I wonder whether they are keeping a directory of these, just in case TinyURL were to disappear?

Corporate Foolery and the Abilene Paradox

…or, a little bit about how I see myself…

I can’t remember the context now, but a little while ago I picked up the following tweet from Pete Mitton:

The Abilene Paradox? So what’s that when it’s at home, then?

The Abilene Paradox is a phenomenon in which the limits of a particular situation seem to force a group of people to act in a way that is the opposite of what they actually want. This situation can occur when groups continue with misguided activities which no group member desires because no member is willing to raise objections, or displease the others.
[Ref.]

The paradox was introduced and illustrated by means of the following anecdote, recounted in an article from 1974 – “The Abilene paradox: The management of agreement” by Jerry Harvey [doi:10.1016/0090-2616(74)90005-9]:

On a hot afternoon visiting in Coleman, Texas, the family is comfortably playing dominoes on a porch, until the father-in-law suggests that they take a trip to Abilene [53 miles north] for dinner. The wife says, “Sounds like a great idea.” The husband, despite having reservations because the drive is long and hot, thinks that his preferences must be out-of-step with the group and says, “Sounds good to me. I just hope your mother wants to go.” The mother-in-law then says, “Of course I want to go. I haven’t been to Abilene in a long time.”
The drive is hot, dusty, and long. When they arrive at the cafeteria, the food is as bad as the drive. They arrive back home four hours later, exhausted.
One of them dishonestly says, “It was a great trip, wasn’t it.” The mother-in-law says that, actually, she would rather have stayed home, but went along since the other three were so enthusiastic. The husband says, “I wasn’t delighted to be doing what we were doing. I only went to satisfy the rest of you.” The wife says, “I just went along to keep you happy. I would have had to be crazy to want to go out in the heat like that.” The father-in-law then says that he only suggested it because he thought the others might be bored.
The group sits back, perplexed that they together decided to take a trip which none of them wanted. They each would have preferred to sit comfortably, but did not admit to it when they still had time to enjoy the afternoon.

Hence the need for the “corporate fool”, a role I aspire to…;-)

the curious double-act of king and fool, master and servant, substance and shadow, may thus be seen as a universal, symbolic expression of the antithesis lying at the heart of the autocratic state between the forces of order and disorder, of structured authority and incipient anarchy, in which the conditional nature of the fool’s licence (‘so far but not further’) gives reassurance that ultimately order will prevail. The fool, though constrained, continually threatens to break free in pushing to its limits whatever freedom he is given. He is the trickster of myth in an historical strait-jacket from which he is forever struggling to escape. And if the king, the dominant partner, sets the tone of their exchanges and the fool has everything to gain from a willing acceptance of his subservient role, his participation can never be forced. If, for whatever reason, he should come to feel that his master has reneged on the unwritten contract between them (the rules of the game), it is always open to him to refuse to play, however costly to himself the refusal might prove to be. He thus retains – and needs to retain if he is to achieve the full potential of his role – a degree of independence. Like the actor on stage in a live performance, success is inevitably accompanied by the possibility of failure. …
But there was a danger on both sides of this balancing act. If the fool risked going too far in his banter and tricks, the king was also vulnerable to the fool’s abuse of the licence he was given. [“Fools and Jesters at the English Court“, J Southworth, p3.]

See also: OMG…There are spies everywhere sabotaging our organizations!!, which reveals some tricks about how to destroy your organisation from within (“General Interference with Organizations and Production”), via the uncompromising OSS Simple Sabotage Manual [Declassified] (PDF).

I once started putting together an “anti-training” course based around this sort of thing, called “Thinking Inside the Box”. It’s a shame I never blogged the notes – all that knowledge is lost, now ;-)

Other sources of profound unwisdom: Dilbert, xkcd, Noise to Signal.

PS Here’s an example of a piece of corporate sabotage I started exploring: The Cost of Meetings – How Much Return on Investment Do YOU Get? (Meeting taxi meter).

OU Podcasts Site Goes Live

Another day, another OU web play… Realising that the OU on iTunesU presence has its downside (specifically – having to use iTunes), you can now get hold of OU “enhanced podcasts” from the OU Podcasts (beta) site.

The architecture of the site borrows heavily from the OU YouTube presence, offering Learn, Research and Life options (even if they aren’t populated yet).

I’m not sure whether there is duplication (or even triplication) of content across the Podcast, iTunesU and YouTube sites, but then again – would it matter if there was? And I’m not sure if there is a pipeline that allows content to be “deposited” once behind the firewall, then published on the podcast, YouTube and/or iTunesU sites in one go, as required (can anyone from any of the respective project teams comment on how the publishing process works, and whether there is any particular content strategy in place, or is content being grabbed and posted on the sites howsoever it can?!;-)

The pages for actual “programme” elements contain an embedded (though not shareable or embeddable?) player, along with subscription feeds for the topic area the “programme” is assigned to.

The programme page has a rather redundant “Permalink for this page” (err – it’s in the browser address bar?), and there doesn’t appear to be a link to the actual audio file, which might be useful going forward, but there is a range of topic/channel podcast subscription feeds.

I don’t think the podcast page resyndicates audio content from the open2.net site, podcast feeds from OU/BBC Radio programmes, or archived (RealPlayer, bleurghhh:-() content co-produced by the OU that is still available on the BBC website. (For examples, see the far from complete RadiOBU player.)

Design wise, I wonder how well this sort of page design would cope as a container for OU/BBC TV content? Maybe I should try to steal elements of the CSS stylesheet to tart up the OU/BBC 7 day catch-up service?! (Or maybe one of the podcast team fancy a quick doodle on the side?;-)

The URL design looks neat enough, too, taking the form: http://podcast.open.ac.uk/oulearn/arts-and-humanities/history-of-art/ (that is, http://podcast.open.ac.uk/oulearn/TOPIC/PROGRAMME/).

The eagle-eyed amongst you may notice that there is an option (for OU Staff?) to Login, which leads to the option to “Join [the] hosting service”:

So while it doesn’t look like there is much benefit to logging in at the moment, it seems as though there is a possibility that the site will be offering hosting for individually produced podcasts (using Amazon S3, I believe…) in the near future?

I’m not sure where individually produced podcasts would live on the podcasts site, though? In the appropriate topic area?

Once again, great job folks… :-) [Disclaimer: I have nothing to do with the OU podcasts site.]

PS a couple more minor quibbles, just because…;-) The favicon is a KMI favicon. This doesn’t really fit, IMHO. The release of the Podcasts site has not been reflected (yet) with a mention on the /use site, which looks increasingly “stale” (there’s no mention of Platform there, either…).

Although there doesn’t appear to be an opportunity for Faculties or Departments to have a presence, as such, on the site (unless they provide content for topic areas?), I wonder whether the podcast site back end could actually be used as a content delivery service for Departmental content (e.g. the content on the Department of Communication and Systems website).

PS see also: Getting Open University Podcasts on your TV with MythStream

Merging Several Calendar iCal Feeds With Yahoo Pipes

Following up on Displaying Events from Multiple Google Calendars in a Single Embedded Calendar View, and picking up on a quip Jim Groom made in the post that started this thread (“Patrick suggested Yahoo Pipes! You ever experiment with this?”), I did have a quick play with Pipes, and this is what I found…

The “Fetch Feed” block is happy to accept iCal feeds, as this iCal Merge pipe demonstrates:

(I grabbed the iCal feeds from pages linked to from the Stanford events page. A websearch for “ical lectures events” should pull up other sources;-)
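For anyone who’d rather see the merge spelled out as code than as a Pipes screenshot, here’s a minimal Python sketch, assuming the `icalendar` and `requests` libraries and some placeholder feed URLs: fetch each feed, then copy all the VEVENTs into one combined calendar.

```python
import requests
from icalendar import Calendar

FEEDS = [
    "http://example.edu/events/lectures.ics",  # placeholder feed URLs
    "http://example.edu/events/seminars.ics",
]

merged = Calendar()
merged.add("prodid", "-//merged ical feeds//example//")
merged.add("version", "2.0")

for url in FEEDS:
    cal = Calendar.from_ical(requests.get(url, timeout=10).content)
    for event in cal.walk("VEVENT"):  # copy every event across
        merged.add_component(event)

with open("merged.ics", "wb") as f:
    f.write(merged.to_ical())
```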

If you import an iCal feed into a Yahoo pipe, you get an iCal output format option:

You can then render this feed in an online calendar such as 30 boxes: pipes merged iCal feeds in 30 boxes (here’s the 30 boxes config page for that calendar).

(NB it’s worth noting that 30 boxes will let you generate a calendar view that will merge up to 3 iCal feeds anyway.)

Using the Pipe’s output iCal URL to try to add the merged calendar feed to Google Calendar didn’t seem to work, but when I converted the URL to a TinyURL (http://tinyurl.com/67bg2d) and used that as the import URL, it worked fine.

Do this:

then this:

and get this:

(I couldn’t get the Yahoo pipe iCal feed to work in iCal on my Mac, nor could I resyndicate the feed from the Google Calendar. I think the problem is with the way the Pipes output URL is constructed… which could be worked around by relaying/republishing the Pipe iCal feed through something with a nice URL, maybe?)

That okay for you, Reverend? :-)

PS having to add the feeds by hand to the pipe is a pain. So how about if we list a set of iCal feeds in an RSS feed (which could be a shared bookmark feed, built around a common tag), then pull that bookmark feed (such as the feed from a delicious page (e.g. http://delicious.com/psychemedia/ical+feedtest)) into a pipe and use it to identify what iCal feeds to pull into the pipe?

Got that? The Loop block grabs the URL for each iCal feed listed in the input RSS feed, and pulls in the corresponding iCal events. It seems to work okay, too:-) That is, the feed powered iCal merge pipe will aggregate events from all the iCal feeds listed in the RSS feed that is pulled into the pipe.
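In code terms, the feed-powered variant just adds one step in front of the merge sketched earlier – parse the bookmark feed, pull out the links, then treat those links as the iCal feed list. A sketch, assuming the `feedparser` library (the bookmark feed URL is illustrative):

```python
import feedparser
import requests
from icalendar import Calendar

# An RSS feed of bookmarks, each of which links to an iCal feed
# (illustrative URL -- any bookmark feed built around a shared tag would do).
bookmarks = feedparser.parse("http://delicious.com/rss/psychemedia/ical+feedtest")
ical_urls = [entry.link for entry in bookmarks.entries]

merged = Calendar()
merged.add("prodid", "-//feed powered ical merge//example//")
merged.add("version", "2.0")

for url in ical_urls:
    cal = Calendar.from_ical(requests.get(url, timeout=10).content)
    for event in cal.walk("VEVENT"):
        merged.add_component(event)
```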

So now the workflow, which could possibly be tidied a little, is this:
– bookmark iCal feed URLs to a common somewhere (this can be as weak as shared tags, which are then used as the basis for aggregation of feed URLs);
– take the feed from that common somewhere and pop it into the feed powered iCal merge pipe.
– get the TinyURL of the iCal output from the pipe, and subscribe to it in Google Calendar (for a personal calendar view).

Hmm… we still can’t publish the Google Calendar though, because we don’t “own” the calendar dates (the iCal feed does)? But I guess we can still use 30boxes as the display surface, and provide a button to add the calendar to Google Calendar?

OKAY – it seems that when you import the feed, it makes sense to tick the box that says “allow other people to find this calendar”:

… because then you can generate some embed code for the calendar, provide a link for anyone else to see the calendar (like this one), and use the tidied up iCal feed that Google calendar now provides to view the calendar in something like iCal:

PPS To make things a little easier, I tweaked the feed powered pipe so now you can just provide it with an RSS feed that points to one or more iCal feeds:

I also added a block to sort the dates in ascending date order. It’s simple enough to add the feed to iGoogle etc, or as a badge in your blog, using the Yahoo Pipes display helper tools:

Hmm, it would be nice if Pipes also offered a “calendar” output view when it knew there was iCal data around, just like it generates a map for when it sniffs geo-data, and a slideshow view when it detects appropriately addressed media objects? Any chance of that, I wonder?