OU Library Home Page – Normalised Click Density

The OU Library website has been running Google Analytics for ages, but from what I can tell they haven’t done a huge amount with the results in terms of making the analytics actionable and using them to improve the site design. (I’d love for someone to correct me with a blog post or two about how analytics have been used to improve site performance – if anyone would like to publish such a post, I’ll happily give you a guest slot here on OUseful.info… :-)

(As a bit of background, see Library Analytics, (Part 1), Library Analytics, (Part 2), Library Analytics, (Part 3), Library Analytics, (Part 4), Library Analytics, (Part 5), Library Analytics, (Part 6), Library Analytics, (Part 7) and Library Analytics, (Part 8))

Anyway, here’s the Library homepage (August 2009):

And here are the two real OU Library homepages:

(See also: Where is the Open University Homepage?;-)

And here’s the OU Library homepage as a treemap, where the block size shows where the traffic goes (as recorded over the last month) as a percentage of all traffic to the OU Library homepage.

OU Library homepage - normalised click density

So if each click was equally valuable, and each pixel on the screen was equally valuable, then that’s how the screen area should be allocated… (Hmm – that could be, err, interesting – an adaptive homepage where there’s one block element per link, and a treemap algorithm that allocates the area each block has when the page is rendered? Heh heh :-)
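That adaptive homepage idea can be sketched in a few lines: given each link’s share of recorded clicks, a simple “slice” treemap gives each link a strip of the page whose area is proportional to its click share. The link names and click counts below are invented for illustration – they’re not the Library’s actual figures:

```python
def treemap_slice(items, x, y, w, h):
    """Split the rectangle (x, y, w, h) into vertical strips, one per item,
    with each strip's width proportional to that item's share of clicks.
    items is a list of (label, clicks) pairs."""
    total = sum(clicks for _, clicks in items)
    rects = []
    for label, clicks in items:
        share = clicks / total
        rects.append((label, x, y, w * share, h))  # (label, x, y, width, height)
        x += w * share  # next strip starts where this one ends
    return rects

# Hypothetical click counts for a 1024x768 page area:
clicks = [("Journals", 4200), ("One-Stop Search", 2100), ("Databases", 700)]
layout = treemap_slice(clicks, 0, 0, 1024, 768)
for label, x, y, w, h in layout:
    print(f"{label}: {w:.0f}x{h:.0f} px at ({x:.0f}, {y:.0f})")
```

A real treemap layout (squarified, say) would alternate split directions to keep blocks closer to square, but the proportional-area principle is the same.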

I did think about showing a heatmap of where on the homepage the clicks were made, but I figure I’ve probably already upset the Library folk enough by now. I also considered doing a treemap showing the relative proportions of different keywords on Google that drove traffic to the OU Library homepage, but I figure that may be commercially sensitive in terms of bidding for AdSense keywords…

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...

15 thoughts on “OU Library Home Page – Normalised Click Density”

  1. I have the OU Library’s “Library: Journals” page on speed dial. It’s also used as the start page in “Papers” (http://mekentosj.com/papers/), my PDF searching/storage and library browsing application. I’d be surprised if other people haven’t done something similar.

    Does anyone find the “One-Stop Search” useful, by the way? Somehow I rarely do.

  2. I’ve got the Library’s ‘Basic Search’ bookmarked (is that the one you mean, Eingang?) It’s clunky to use, but it’s the way I access the library. When I have to get to the library any other way I get frustrated because I haven’t got it bookmarked, and I always waste time clicking likely-looking links on the search page before I give up and Google it.

    1. No. I meant the one that shows up as a big block in Tony’s graphic: http://library.open.ac.uk/find/journals/index.cfm

      The reason I use that a lot is I often know the journal where something appears and I’m just looking to download the article. This happens frequently because articles are often mentioned in course materials or in other articles, but an electronic link isn’t already provided via a DOI (digital object identifier) or some other URL.

    2. The “One-Stop Search” I mentioned is accessible from the main library page as the link “https://css2.open.ac.uk/search/signin/check.aspx” which resolves into all manner of different things. It goes to a page that has basic and advanced search. It’s a kind of subject-specific database search. You pick an area and it searches various metadata repositories within that subject area.

  3. “This happens frequently because articles are often mentioned in course materials or in other articles, but an electronic link isn’t already provided via a DOI (digital object identifier) or some other URL.”

    In T151 I pretty much mandated that our links to resources made available through the OU Library *should* be presented as libezproxy/DOI-resolving links keyed by the DOI of the resource we were referencing. [Editor’s UPDATE: That said, I think some of the links are actually of the form whatever.crap.publisher.com.libezproxy.open.ac.uk/doi-etc, which is a little more brittle – in fact, a lot more brittle: if we used the DOI resolver in the link, it wouldn’t matter if the Library’s subscriptions to the provider of the DOI-identified articles changed. UPDATE 2: here’s another pattern that is being used in the course: http://learn.open.ac.uk/local/libezproxylink.php?url=http%3A%2F%2Fdx.doi.org%2F10.1109%2FMS.2004.1259221 (that VLE local libezproxy service path is new to me). I’m not sure I like that. Does the Library still control that service, or has it moved out of their realm?]
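For what it’s worth, the link patterns being compared here can all be generated from a bare DOI. The sketch below uses the DOI from the VLE example above; the hostnames are taken from the patterns quoted in this comment, and shouldn’t be read as a definitive spec of the OU’s services:

```python
from urllib.parse import quote

def doi_links(doi):
    """Build the two DOI-keyed link patterns discussed in the comment.
    Hostnames are as quoted in the post -- illustrative, not authoritative.
    (The third, publisher-specific libezproxy pattern is omitted because
    it depends on each publisher's own URL scheme -- which is exactly
    why it is brittle.)"""
    resolver = "http://dx.doi.org/" + doi
    return {
        # DOI resolver routed through the Library's authentication proxy:
        "proxied_resolver": "http://dx.doi.org.libezproxy.open.ac.uk/" + doi,
        # VLE wrapper that carries the URL-encoded resolver link:
        "vle_wrapper": ("http://learn.open.ac.uk/local/libezproxylink.php?url="
                        + quote(resolver, safe="")),
    }

links = doi_links("10.1109/MS.2004.1259221")
```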

    I also put in a request for a webservice that would generate a formal OU style citation/reference thing (whatever they’re called?! ;-) from the DOI, though you can imagine what kind of response I got back…
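(As an aside: something close to that webservice now effectively exists as DOI “content negotiation” – you request the DOI resolver URL with an Accept header naming a bibliography format and a CSL citation style. A formal OU house style would need its own CSL definition, so treat the style name below as a placeholder. The sketch only builds the request rather than sending it:)

```python
import urllib.request

def citation_request(doi, style="apa"):
    """Build (but do not send) a DOI content-negotiation request that,
    when fetched, returns a formatted citation for the given DOI.
    The style parameter names a CSL style; "apa" is just an example."""
    return urllib.request.Request(
        "https://doi.org/" + doi,
        headers={"Accept": f"text/x-bibliography; style={style}"},
    )

req = citation_request("10.1109/MS.2004.1259221")
```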

    One solution I proposed was a way of marking up a DOI in an OU_XML/structured-authoring tag that could work nicely with that webservice – not that I ever really expected them to accept it; again, you can imagine the response (“You want MORE structure…?”)

    As ever, LTS folk are too busy working on project plans and specifications for heavyweight features and applications to take the lazy option and build the micro-apps that provide an immediate timesaving benefit to someone, and may actually scratch other people’s itches too and result in some sort of useful innovation that makes some things just a little bit better…

    1. I made a suggestion in a TU100 unit that I reviewed that all journal articles included in the references should, if possible, have DOI URLs using the link structure http://dx.doi.org.libezproxy.open.ac.uk/DOI we discussed earlier today on Twitter. I suggested it was appropriate across the entire course as it would enable students to easily access the source documents with all the right permissions to immediately download the articles and produce much shorter, neater URLs. Some of the other ones are pretty hideous and long (bad for usability when read aloud by screenreaders, etc. too). As you mention, using the DOI form also makes it more future-proof to database provider changes as well, which I hadn’t thought of.

      I suspect most students use the same method I normally use: locate the journal in the list of journals then drill down to find the correct issue. Actually, I suspect most students probably don’t bother. If we make it easier for them, perhaps they’ll be more inclined to go to the source.

    2. “One solution I proposed …” How did you propose it? Who did you propose it to?

      I would guess that is a thin wrapper round the library’s URL, probably as some part of a simple system for making DOI-based links. No harm in that surely.

      By the way, I have no idea what a DOI is, although I think I can guess. You are speaking librarian-jargon. That, and slagging off LTS is unlikely to lead to co-operation, I would have thought.

  4. Thanks Tony, another thought provoking post.

    We are, as you say, using GA on the library website (and are also looking at how we can build on our current use), and in fact it has been a very useful tool for highlighting problems in some areas. We are in the process of redesigning the website home page now, and GA has indeed helped us shape the draft concept. We also have to consider data beyond GA – information about the needs of students, courses and staff. These different sources combined are helping us build a better picture of how the website needs to continually develop. Of course, we also welcome all other feedback!

    I’m not personally involved in the TELSTAR project (http://www.open.ac.uk/telstar/), but it seems as though it might address some of the linking issues mentioned in the comments?

  5. @Tim ““One solution I proposed …” How did you propose it? Who did you propose it to?”

    In a CT meeting to an LTS media project manager – the people CT are supposed to go through for all LTS dealings.

    re: the thin wrapper – yes, it appears to be that; but I was just wondering why the need for another layer of indirection/URL rewrite? Why not use the Library’s URL? Just wondering… If you can let me know the reason why it was adopted, I’m happy to be persuaded it’s the best solution. It just seems redundant to me… And no-one in the Library who I thought might know was aware of the VLE-formatted URL, which I’d have expected them to be (e.g. in case they got helpdesk queries about it?)

    re: DOI – it may be jargon, but the post is part of an ongoing series of posts that have mentioned DOIs – digital object identifiers – many times.

    As for LTS, what can I say? From where I’m looking, I think if more agile/less managerial processes were to be supported, life could be a lot easier for everyone… But that’s IMHO, of course, and my personal opinion only…

    1. From bitter experience we know that, no matter how stable an API/URL is alleged to be, there is a small but significant chance that it may change in the future. Therefore, it pays to stick in an indirection layer. It may be belt and braces, but it is still sensible.

      I assume we are not expecting people to construct those URLs by hand. They look automatically generated to me.

      I have been away from the OU and LTS for a year. I officially start back next Tuesday, so I am a bit hesitant to comment because I only know how things worked a year ago. But I can make some obvious points most of which you have probably already thought of:

      1. LTS employs ~200 people. Any organisation that big seems inevitably, if depressingly, to become bureaucratic and managerial.

      2. LTS has internal structure. In particular LTS media deals with producing individual courses, and part of LTS strategic deals with the ongoing development of the VLE platform. Communication within LTS is not perfect. I am not surprised if a technical suggestion mentioned to a Media Project Manager got lost before it reached the VLE team.

      3. The VLE team does operate in a fairly agile way. On the other hand, we have to be fairly cautious and thorough. If we screw up and the VLE goes down following an upgrade, then that is a lot of pissed-off students and tutors and course teams.

      4. The Moodle Open Source community (for the last year I was working at http://moodle.com/hq/) seems to cope with having multiple channels by which people can throw suggestions into the pot. I don’t see why the VLE team cannot work more like that. The last time I suggested it (more than a year ago) the reply was that if we solicited suggestions, someone would have to respond to them, and no one has time to do that.

      5. The bit about not having time is valid. There already is a huge list of things people want in the VLE. If we just wished to keep busy for the next year or so, we do not need to solicit new suggestions.

      6. However, it is inevitable that with a system as big and complex as the VLE that is used a lot, lots of people using it will have suggestions for making it better at a rate that is far higher than they can be implemented. We should be embracing that, and recording all the good ideas as they come in, so at every stage we can pick the best ideas to implement next. (Actually we do the recording and prioritising ideas bit OK, it is just the soliciting ideas bit that we don’t do.)

      7. In the absence of effective official lines of communication, there are always unofficial channels like twitter and blogs. I don’t know how many of the other developers are hooked into those networks, but the number will be going up by one next week.

      In many ways I am looking forward to being back at the OU. However, I am worried about whether I can cope with the grief of working for a large institution again, after my year of ‘freedom’.

  6. @Tim “From bitter experience we know that, no matter how stable an API/URL is alleged to be, there is a small but significant chance that it may change in the future. Therefore, it pays to stick in an indirection layer. It may be belt and braces, but it is still sensible.”

    I thought one of the reasons for using indirection was to provide an abstraction layer? The URI that the VLE link uses references one particular DOI resolver explicitly:

    A cleaner proxy URI would be something like:

    So all the VLE link does is rewrite the OU path and filename? By implication, does this mean that e.g. the dx.doi.org URI is assumed to be stable/persistent, and the OU Library one isn’t? ;-)

    If you want the VLE link to be persistent, then all it needs to encode is the mutable element – the actual DOI. The path to the actual resolver and the Library’s authentication proxy (libezproxy) should be handled as part of the indirection?
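The indirection being argued for here might look like the sketch below: course materials carry only the DOI (the one part that never changes), and a single redirect script picks the resolver and the authentication proxy at click time. The “doilink.php” script name is hypothetical; the login?url= proxy pattern is the one quoted elsewhere in this thread:

```python
# Both of these could be swapped without touching any course link,
# because the links themselves carry only the DOI:
RESOLVER = "http://dx.doi.org/"
PROXY_PREFIX = "http://libezproxy.open.ac.uk/login?url="

def course_link(doi):
    """What gets written into course materials: just the DOI.
    The doilink.php endpoint is a hypothetical redirect script."""
    return "http://learn.open.ac.uk/local/doilink.php?doi=" + doi

def redirect_target(doi):
    """What the redirect script would compute at click time:
    proxy first, then resolver, then the immutable DOI."""
    return PROXY_PREFIX + RESOLVER + doi
```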

  7. What you say is very sensible, but we are just guessing what the purpose of the code is from the outside. I would rather wait until I can see the code and the CVS history before I comment further.

    1. Just remembered to look into this.

      It turns out that the reason we added our own redirect script was that, at least in 2006 when this was implemented, URLs like http://libezproxy.open.ac.uk/login?url=… did a redirect in a silly way that broke the browser’s back button.

      This code has not been touched since December 2006. It’s bug 1337 in our bug database, if you know how to get at that.

