Having had a wonderful time at ILI2007 last year (summary of my talk, according to Brian Kelly – “For most of the people, most of the time, Google’s good enough – get over it…”, though I like to think I was actually talking about the idea of search hubs), I’ve joined forces with Hassan Sheikh from the OU Library on a paper at this year’s ILI2008 on the topic of using Google Analytics to track user behaviour on the Library website…
First up, it’s probably worth pointing out the unique organisation of the OU, because this impacts on the way the Library website is used.
The OU is a distance learning organisation with tens of thousands of active, offsite students; a campus, which is home to teaching academics (course writers), researchers, “academic related” services (software developers, etc.), and administrators; several regional offices; and part-time Associate Lecturers (group tutors), who typically work from home, although they may also work full- or part-time for other educational institutions.
The Library is a “trad” Library, in that it is home to books and a physical journal collection (as well as an OU course materials archive and several other collections) that are typically used by on-campus academics and researchers. The Library has also been quite go-ahead in obtaining online access to journal, ebook, image and reference collections – online access means that these services can be delivered to our student body (whereas the physical collections are used in the main by OU academic and research staff…. I assume…!;-)).
Anyway, to ease myself back into thinking about “Library Analytics” (I haven’t looked at the Library stats for several months now), here are some warm-up exercises/starting point observations I made, for whatever they’re worth… (i.e. statements of the bleedin’ obvious;-)
Firstly, can we segment users into onsite and offsite users? (I’m pretty sure Hassan was running separate reports for these different groups but, if so, I don’t have access to them…)
Even from just the headline report, it appears that a ‘just about significant’ amount of traffic is coming from the intranet.
Just to get my eye in, is this traffic coming from the OU campus at Walton Hall? If we look at the intranet as the traffic source, and segment according to the Network Location of the user (that is, the IP network they’re on), we can see the traffic is predominantly local:
By the by, if I’m reading the following report correctly, we can also see that most of the intranet traffic is coming in via the intranet homepage…
And as you might expect, this traffic comes on weekdays…
So here’s a working assumption then (and one that we could probe later for real insight in any principled cases where it doesn’t hold true!): most referrals from the OU intranet occur Monday to Friday, from onsite users, via the intranet homepage.
Secondly, how well is the Library front page working? Whilst not as quick to read as a heat map, the Google Analytics site overlay can provide a quick way of summarising the most popular links on a page (notwithstanding its faults, such as appearing not to disambiguate certain links…)
A quick glimpse suggests the search links need dumping, and more real estate should be given over to the “Journals” and “Databases” links that are currently in the left hand sidebar, and which get 20% and 19% of the click-thrus respectively. Despite the large areas of the screen given over to the image-based navigation, those images aren’t pulling much traffic. (That said, if we segment the users it might well be the case that the images in the middle of the page disproportionately attract clicks from certain sorts of user? I don’t think it’s possible to segment this out in the general report, however? For that, I guess we need to define some separate reports that are pre-segmented according to referrer?)
Just chasing the traffic a little more, I wonder if there are a few, popular databases or whether traffic is distributed over all of them equally? The Library databases page is pretty horrible – a long alphabetical list of databases – so can the analytics suggest ways of helping people find the pages they want?
So how are things distributed?
Well – it seems like some databases are more popular than others… but just how true is that observation…?
Let’s do a bit more drilling to see what people are clicking through to from the databases pages… I have to admit that here I start to get a bit confused, because the analytics are giving me two places where databases are being reached from, whereas I can only find one of the paths on the website…
Here’s the one I can find – traffic from:
And here’s what I can’t find on the website – traffic from:
They both identify the same databases as most popular, though which databases those are I’ll leave for another day… because as you’ll see in a minute, this might be false popularity…
Why? Well let’s just see where the traffic for one of the most popular databases is coming from over the sample period I’ve been playing with:
Any idea why the traffic isn’t coming from the OU, but is coming from other HEIs???
Well, I happen to know that Bath, Brighton and Durham are used for OU residential schools, so I suspect that residential school students, after a reminder about the OU online Library services, are having a play, and maybe even participating in some information literacy activities that the OU Library trainers (as well as some of the courses) run at residential school…
Data – don’t ya just love it…? ;-) It sets so many traps for you to fall into!
In Library Analytics (Part 1), I did a few “warm-up” exercises looking at the OU Library website from a Google Analytics perspective.
In this post, I’m going to do a little more scene setting, looking at how some of the Google Analytics visual reports can provide a quick’n’dirty, glanceable display of the most heavily trafficked areas of the Library website, along with the most significant sources of traffic.
It may seem like the observations are coming from all over the place, but there is a method to the madness as I’ll hopefully get round to describing in a later post!
Whilst these reports are pretty crude, they do provide a good starting point for taking the first steps in a series of more refined questions which, in turn, will hopefully start to lead us towards some sort of insight about which areas of the website are serving which users and maybe even for what purpose… And as that rather clunky sentence suggests, this is likely to be quite a long journey, with the likelihood of more than a few wrong turns!
Most Popular Pages
Here’s a glimpse of the most heavily trafficked pages for the Library website – just to check there are no ‘artefacts’ arising from things like residential schools, I’ve compared the data for two consecutive two month periods (the idea being that if the bimonthly averages are similar, we can hope that this is a reasonably fair ‘steady state’ report of the state of the site).
Most significant pages (“by eye” – that is, using the pie chart display):
To view the proportions excluding the homepage, we can filter the report using a regular expression:
What this does is exclude the “/” page – that is, the library homepage. (IMHO, some understanding of regular expressions is a core information skill ;-)
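Just to spell the filter logic out, here’s a minimal sketch of the same idea outside Google Analytics (the sample paths are invented, and the exact expression typed into the report’s filter box may differ):

```javascript
// Keep every page path except the bare homepage ("/") -- the same
// logic as the report filter, applied to some made-up sample paths.
var pagePaths = ["/", "/find/databases", "/find/journals", "/find/eresources/"];

var filtered = pagePaths.filter(function (path) {
  return !/^\/$/.test(path); // exclude the path that is exactly "/"
});

console.log(filtered); // ["/find/databases", "/find/journals", "/find/eresources/"]
```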
A bar graph allows us to compare the bimonthly figures – they seem to be reasonably correlated (we could of course do a rank correlation, or similar, to see if the top pages ordering is really the same…):
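(If we did want to go beyond eyeballing, here’s a sketch of that rank correlation check – Spearman’s rho, ties ignored, with invented pageview counts standing in for the real report data:)

```javascript
// Spearman's rank correlation between two bimonthly "top pages" reports;
// a value near 1 means the page ordering is essentially unchanged.
function ranks(values) {
  var sorted = values.slice().sort(function (a, b) { return b - a; });
  return values.map(function (v) { return sorted.indexOf(v) + 1; });
}

function spearman(xs, ys) {
  var rx = ranks(xs), ry = ranks(ys), n = xs.length, sumD2 = 0;
  for (var i = 0; i < n; i++) {
    var d = rx[i] - ry[i];
    sumD2 += d * d;
  }
  return 1 - (6 * sumD2) / (n * (n * n - 1));
}

// Hypothetical counts for the same five pages in two consecutive periods.
var periodA = [12000, 8500, 7200, 3100, 1500];
var periodB = [11400, 9100, 6800, 3500, 1200];
console.log(spearman(periodA, periodB)); // 1 -- identical page ordering
```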
So to summarise – the top pages (homepage aside) are (from the URLs):
The eResources URL actually refers to the subject collection (“Online collections by subject”) page.
The top three pages are all linked to from the same navigation area on the OU Library website homepage – the left-hand navigation sidebar:
The eResources link (that is, the subject collections/online collections by subject page) is actually the Your subject link.
Going forward, a good thing to find out at the next level down would be to see which are the most popular databases, journals and resource collections and maybe check that these are in line with Library expectations.
We might also want to explore the extent to which different user segments (students, researchers etc.) use the different areas of the site in similar or different ways. (Going deeper into the analysis (i.e. to a deeper level of user segmentation), we might even want to track the behaviour of students on different courses (or residential school maybe?) and report these findings back to the appropriate course team.)
Top Content areas
The previous report gave the top page views on the site – but what are the most heavily used “content areas”? The Library site is, in places, reasonably disciplined in its use of a hierarchical URL structure, so by using the content drilldown tool, we should be able to see which are the most heavily used areas of the website:
The “/find” page/path element is a bit of a kludge, really (note to self: explore the use of this page in some detail…)
If we drill down into the content being hit below http://www.open.ac.uk/find/*, we find that the eresources area (i.e. subject collections/Your subject) is actually a hotbed of activity:
So what can we say? The front page is driving lots of traffic to database, journal and subject collection/”Your subject” areas, and lots of activity is going on in the subject area in particular.
Questions we might want to bear in mind going forward – how well does activity in different subject areas compare?
Top traffic sources
Again using the pie chart display, we can look at the top traffic sources by eye:
Again, let’s just check (by eye) that the bi-monthly reports are “typical”:
(It’s interesting to see the College of Law cropping up in there… Do we run a course from a learning environment on that domain, I wonder?)
learn.open.ac.uk is the Moodle/VLE domain, so it certainly seems like traffic is coming in from there, which is a Good Thing:-). From the previous post, we can guess that most of the intranet traffic is coming from people onsite at the OU – i.e. they’re staff or researchers.
Just to check it’s the students that are coming in from the VLE, rather than OU staff, we can use the technique from the previous post in this series (where we found that most intranet sourced traffic is coming from the OU campus) to check the Network Location view of users referred from learn.open.ac.uk:
So, we can see that the learn.open.ac.uk traffic is in the main not coming from the OU campus (network location: open university), which is as we’d expect, because we have no significant numbers of onsite undergraduate students.
In a traditional university library, you’d maybe expect way more traffic to be coming from onsite computer facilities, and in that case you may be able to find a way of segmenting users according to how they are accessing the network – via personal wifi connected laptops, for example, or public access machines in the library itself.
(Just by the by, I don’t know whether the ISP data is valuable (particularly if you look at analytics from the http://www.open.ac.uk domain, which gets way more traffic than the library) in terms of being information we can sell to ISPs or use as the basis for exploring a partnership with a particular ISP?)
Okay, that’s enough for today, a bit of a ramble again, but we’re trying to get our eye in, right, and see what sorts of questions we might be able to ask, whilst checking along the way that the bleedin’ obvious actually is…;-)
And today’s insight? The inconsistency in naming around “Your Subject”, “Online Collections by Subject”, http://library.open.ac.uk/find/eresources etc makes reading the report tricky. This could be addressed by using a filter to rewrite the URLs etc as displayed in the report, but it also indicates possible confusion for users in the site design itself? There’s also a recurrence of the potential confusion around http://library.open.ac.uk/databases and http://library.open.ac.uk/find/databases, that I picked up on in the previous post?
A second insight? The content drilling view helps show where most of the onsite activity is taking place – in the collections by subject area.
In this third post of an open-ended series looking at the OU Library website under Google analytics, I’ll pick out some ‘headline’ reports that describe the most popular items in one of the most popular content areas identified in Library Analytics (Part 2): databases.
Most Popular Databases
I can imagine that a headline report that everyone will go “ooh” about (notwithstanding the fact that the report is more likely to be properly interesting when you start to segment out the possibly different databases being looked at by different user segments;-) is the list of “top databases” (produced by filtering the top content report page on URLs that contain the term “database”).
So how do we work out what those database URLs actually point to? Looking at the HTML of the http://library.open.ac.uk/find/databases page, here’s where the reference to the most popular database crops up:
The implied URL http://ouseful.open.ac.uk/databases/database/337296 doesn’t actually go anywhere real… it’s an artefact created for the analytics tracking (though it does contain the all important internal OU Library database ID (337296 in this case)).
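My guess at the mechanics (a sketch, not the actual page source): the link’s onclick handler logs a virtual pageview via the standard ga.js call, and it’s that virtual URL the report picks up:

```html
<!-- Sketch only: the href is a placeholder, not the real database link.
     The onclick records a virtual pageview carrying the internal Library
     database ID (337296), which is what shows up in the analytics. -->
<a href="http://example.com/real-database-link"
   onclick="pageTracker._trackPageview('/databases/database/337296');">
  LexisNexis Butterworths
</a>
```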
For now though, here’s a quick summary of the top 5 databases, worked out by code inspection!
- LexisNexis Butterworths (337296);
- JSTOR (271892);
- Westlaw (338947);
- PsycINFO (208607);
- Academic Search Complete (403673).
Just to show you what I mean about things being more interesting when you start to segment the most popular databases by identifiable referrer, here’s a comparison of the referral source for users looking at Academic Search Complete (403673), PsycINFO (208607) and Westlaw (338947).
Firstly, Academic Search Complete:
In this case, there is a large amount of traffic coming from the intranet. Bearing in mind the following comment on the first post, this traffic may be coming from personal bookmarks:
“I may be in the error bar (i.e. an outlier), but I do almost all my research/library work at home – but I log into the OU and go onto the library via the ‘my links’ bit set to the OU journals and OU databases www page. So that would show as an intranet user? but I work remotely.”
I could be wrong of course – so that’s one question to file away for a later day…
Secondly, PsycINFO (208607), the Content Detail report for which is easily enough found by searching on the Content Detail report page:
Here’s the source of traffic that spends some time looking at PsycINFO:
Here, we find a different effect. Most of the identifiable traffic is coming from direct links or the VLE, and the intranet is nowhere to be seen.
Note however the large amount of direct/unidentifiable traffic – this could hide a multitude of sins (and mask a multitude of user origins), so we should just remain wary and open to the idea we may have been misled!
So how can we try to gain an insight into that direct referral traffic (the traffic that arises from people typing the URL directly into their browser, or clicking on a browser bookmark)?
Well, to check that the traffic isn’t coming from direct traffic/bookmarks from users on the OU network other than via the intranet, we can look at the Network Location segment:
No sign of open university there in any significant numbers – so it seems that PsycINFO is more of a student resource than an onsite researcher resource.
Thirdly, Westlaw (338947). Who’s using this database?
It seems here that the single largest referrer is actually the College of Law.
We can segment against network location just to check the direct traffic isn’t coming from users on campus via browser bookmarks:
But some of it is coming from the College of Law? Hmmm… Could that be a VPN thing, I wonder, or do they have an actual physical location?
So what insight(s) have we picked up in this post? Firstly, a dodgy ranking of most popular databases (dodgy in that the databases appear to be used by different constituencies of user). Secondly, a crude technique for getting a feel for who the users of a particular database are, based on original source/referrer and network location segmentations.
I guess there’s also a recommendation – that the buyer or owner of each database checks out the analytics to see if the users appear to be who they expect…!
And finally, to wrap this part up, it’s worth remaining sceptical when trying to interpret the results, no matter what precautions you put in place; for example: How Does Google Analytics Track Conversion Referrals?
One of the things I fondly remember about doing physics at school was being told, at the start of each school year, how what we had been told the previous year was not quite exactly true, and that this year we would actually learn how the world worked properly…
And so as this series of posts about “Library Analytics” continues (that is, this series about the use of web analytics as applied to public-facing Library websites), I will continue to show you examples of headline reports I have found initially compelling (or not), and then show why they are not quite right, and actually confusing at best, or misleading at worst…
Most Popular Journals
In the previous post in this series, we saw the most popular databases that were being viewed from the databases page. Is the same possible from the journals area? A quick look at the report for the find/journals/journals page suggests that such a report should be possible, but something is broken:
From the small amount of data there, the most popular journals/journal collections were as follows:
- JSTOR (271892);
- Academic Search Complete (403673);
- Blackwell Synergy (252307);
- ACM Digital Library (208448);
- IEEE Xplore (208545).
As with the databases, segmenting the traffic visiting these collections may provide insight as to which category of user (researcher, student) and maybe even which course is most active for a particular collection.
But what happened to the reporting anyway? Where has all the missing data gone?
I just had a quick look – the reporting from within the Journals area doesn’t currently appear to be showing anything…. err, oops?
The tracking code on the journals page doesn’t have the same structure as the working code on the databases pages (which, you may recall from the previous post in this series, uses the tracking function pageTracker._trackPageview).
Looking at which tracking script is being used on the journals page (google-analytics.com/ga.js), I think the pageTracker._trackPageview function should be being used: urchinTracker belongs to the older urchin.js tracking script. Oops… I wonder whether anyone has been (not) looking at Journal use indicators lately (or indeed, ever…?!)
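To spell that out (the account ID is a placeholder, and the virtual path is just an example):

```javascript
// What the journals page is effectively doing -- calling the old
// urchin.js function, which isn't defined when ga.js is loaded, so no
// pageview gets recorded:
//   urchinTracker("/find/journals/journal/271892");

// The ga.js equivalent, matching the working databases pages:
var pageTracker = _gat._getTracker("UA-XXXXXXX-X"); // placeholder account ID
pageTracker._trackPageview("/find/journals/journal/271892"); // example virtual path
```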
Where is Journal Traffic Coming From (source location)?
So what sites are referring traffic to the journals area?
Well it looks as if there’s a lot of direct traffic coming in (so it may be worth looking at the network location report to see if we can tunnel into that), but there’s also a good chunk of traffic coming from the VLE (learn.open.ac.uk). It’d be handy to know which courses were sending that traffic, so we’ll just bear that in mind as a question for a later post.
Where is Journal Traffic Coming From (network locations)?
To get a feel for how much of the traffic to the journals “homepage” is coming from on campus (presumably OU researchers?) we can segment the report for the journals homepage according to network location.
The open university network location corresponds to traffic coming in from an OU domain. This report potentially gives us the basis for an “actionable” report, and maybe even a target… That is, to increase the number of page views (if not the actual proportion of traffic from on campus – we may be wanting to grow absolute traffic numbers from the VLE too) from the OU domain, as a result of increasing the number of researchers looking up journals from the Library journals homepage whilst at work on campus.
At this point, it’s probably as good a time as any to start to think about how we might interpret data such as ‘number of pages per visit’, ‘average time on site’ and bounce rate (see here for some definitions).
Just looking at the numbers going across the columns, we can see that there are different sorts of groupings of the numbers.
ip pools and open university have pages/visit around 12, an average time on site tending towards 4 minutes, about 16% new visits in the current period (down from 36% in the first period, so people keep coming back to the site, which is good, though it maybe means we’re not attracting so many new visitors), and a bounce rate a bit less than 60%, down from around 70% in the earlier period (so fewer people are entering at the journals page and then leaving the site immediately).
Compare this to the addresses ip for home clients and greenwich university reports, where there is just over one page per visit, only a few seconds on site, hardly any new visits (which I don’t really understand?) and a very high bounce rate. These visitors are not getting any value from the site at all, and are maybe being misdirected to it? Whatever the case, their behaviour is very different to that of the open university visitors.
Now if I was minded to, I’d run this data through a multidimensional clustering algorithm to see if there were some well defined categories of user, but I’m not in a coding mood, so maybe we’ll just have a look to see what patterns are visually identifiable in the data.
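(For anyone who is in a coding mood, here’s roughly the sort of thing I mean – a naive k-means over three of the report columns, with invented numbers standing in for the real report rows; a proper attempt would normalise the columns first, since time on site dominates the distances here:)

```javascript
// Cluster network locations by [pages/visit, avg time on site (s), bounce rate].
var locations = [
  { name: "open university",               features: [12.0, 230, 0.58] },
  { name: "ip pools",                      features: [11.0, 225, 0.60] },
  { name: "addresses ip for home clients", features: [1.2,  10,  0.92] },
  { name: "greenwich university",          features: [1.1,  8,   0.95] }
];

function distance(a, b) {
  var sum = 0;
  for (var i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
  return Math.sqrt(sum);
}

function kmeans(points, k, iterations) {
  // Seed centroids from the first k points (crude, but fine for a sketch).
  var centroids = points.slice(0, k).map(function (p) { return p.slice(); });
  var assignment = [];
  for (var iter = 0; iter < iterations; iter++) {
    // Assign each point to its nearest centroid...
    assignment = points.map(function (p) {
      var best = 0;
      for (var c = 1; c < k; c++) {
        if (distance(p, centroids[c]) < distance(p, centroids[best])) best = c;
      }
      return best;
    });
    // ...then move each centroid to the mean of its members.
    for (var c = 0; c < k; c++) {
      var members = points.filter(function (p, i) { return assignment[i] === c; });
      if (members.length === 0) continue;
      centroids[c] = centroids[c].map(function (_, dim) {
        var total = 0;
        for (var m = 0; m < members.length; m++) total += members[m][dim];
        return total / members.length;
      });
    }
  }
  return assignment;
}

var clusters = kmeans(locations.map(function (l) { return l.features; }), 2, 10);
locations.forEach(function (l, i) {
  console.log(l.name + " -> cluster " + clusters[i]);
});
// The two "engaged" locations end up in one cluster, the two
// high-bounce locations in the other.
```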
So, taking the top 20 results from the most recent reporting period shown above, let’s upload it to Many Eyes and have a play (you can find the data here).
First up, let’s see if we can spot a relationship between time on site and pages/visit (which is exactly what we’d expect, of course) (click through the image to see the interactive visualisation on Many Eyes):
Okay – so that looks about right; and the higher bounce rates seem to correspond to low average time on site/low pages per visit, which is what we’d expect too. (Note that by hovering over a data point, we can see which network location the data corresponds to.)
We can also see how the scatterplot gives us a way of visualising 3 dimensions at the same time.
If we abuse the histogram visualisation, we have an easy way of looking at which network locations have a high bounce rate, or time on site (a visual equivalent of ‘sort on column’, I guess? ;-)
Finally, a treemap. Abusing this visualisation gives us a way of comparing two numerical dimensions at the same time.
Note that using network location here is not necessarily that interesting as a base category… I’m just getting my eye in as to which Many Eyes visualisations might be useful! For the really interesting insights, I reckon a grand or two per day, plus taxes and expenses, should cover it ;-) But to give a tease, here’s the raw data relating to the Source URLs for traffic that made it to the Journals area:
Can you see any clusters in there?! ;-)
Okay – enough for now. Take homes are: a) the wrong tracking function is being used on the journals page; b) the VLE is providing a reasonable amount of traffic to the journals area of the Library website, though I haven’t identified (yet!) exactly which courses are sourcing that traffic; c) Many Eyes style visualisations may provide a glanceable, visual view over some of the Google Analytics data.
Another day, another Library Analytics post… Today, a quick glimpse at another popular content area on the OU Library website, the “Subject Resource Collections” that dangle off http://library.open.ac.uk/find/eresources/.
Most Popular Subject Resource Collections
The distribution of visits to subject resource collections is pretty flat, as the following report shows:
That said, the most popular categories are:
- the law/law collection;
- the Law_legislation page;
- the Psychology collection;
- the Education collection;
- the Science – General collection.
Thinking back to the previous post in this series, and the example of using Many Eyes to visualise multiple data dimensions at the same time, a similar technique might be useful here just to check that each resource is attracting similar usage stats in respect of time on site, average pages per visit, bounce rates, etc.?
Just by the by, if we look at the Entrance Source for traffic that ends up on the selector page for Psychology eresources, we can see that most of the traffic is coming in from the VLE.
The College of Law appears to be providing most of the Law/Law traffic though…
Going forwards, it would probably be useful for the collections whose traffic sourced from the VLE to try to identify which courses were providing that traffic. This information might then provide the basis for “KPIs” relating to the performance of particular Library resources on a particular course.
Onsite Search Behaviour
One of the optional reports on Google Analytics (that is, one that needs to be enabled) is tracking of onsite search behaviour using the website’s own search tool. Popular search terms identified by this report may well indicate failures in support for navigation-through-browsing – in the case of the OU Library site, it seems that information about “Athens” isn’t the easiest thing to find just by clicking…
The following report is particularly interesting from a trends point of view:
The step change at the end of March, with the higher incidence of internal search terms prior to then, suggests a change in user behaviour (given that all the other reports have been showing pretty steady traffic numbers over the whole period). I’m guessing – and this is checkable – that there was a Library website redesign at the end of March, although step changes (particularly in the case of users segmented by course, if such a thing were possible) might also be indicative of participation in scheduled Library related activities within a course in presentation. I’ll try to post a bit more about that at a later date…
Another informative report describes the proportion of visits in which the user engages in onsite search. Users tend to navigate websites either by browsing (clicking on links) or by search. A high incidence of search may indicate weaknesses in navigation design via clickable links. So how does the Library website appear to do?
Well – it seems that users are clicking their way to pages rather than searching for them… (though this may in turn reflect issues with discovery and design of the search page…!)
The Help Page
Another source of information about how well the site is working for visitors is to look at usage around the Help page. I’m not going to go into this page in any depth, but here’s an inkling of what sorts of information we might be able to extract from it…
Who’s looking at how to cite a reference?
Seems like Google traffic is high up here? So maybe another role for the Library website is outreach, in the sense of informal education? And maybe the “How to cite a reference” page would be a good place to put a link to the free Safari info skills minicourse, and an ad for TU120 Beyond Google? ;-)
In this post, I’m going to have a quick look at some filtered reports I set up a few days ago to see if they are working as I expected.
What do I mean by this? Well, Google Analytics lets you create filters that can be used to create reports for a site that focus on a particular area of the website or user segment.
At their simplest, filters work in one of two main ways (or a combination of both). Firstly, you can filter the report so that it only covers activity on a subset of the website as a whole (such as all pages along the path http://library.open.ac.uk/find/databases). Secondly, you can filter the report so that it only covers traffic that is segmented according to user characteristics (such as users arriving from a particular referral source).
Here are a couple of examples: firstly, a filter that will just report on traffic that has been referred from the VLE:
Using this filter will allow us to create a report that tracks how the Library website is being used by OU students.
Another filter in a similar vein lets us track just the traffic arriving from the College of Law:
A second type of filter allows us to just provide a report on activity within the eresources area of the Library website:
Note that multiple filters can be applied to a single report profile, so I could for example create a report profile that just looked at activity in the Journals area of the website (by applying a subdirectory filter) that came from users on the OU campus (by also applying a user segment filter).
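To make that combination concrete, here’s the shape of the two patterns involved, sketched as regular expressions (my guesses – the real filters are configured through the Google Analytics admin screens rather than in code):

```javascript
// Guessed patterns for the two filter types described above:
var referrerFilter = /^https?:\/\/learn\.open\.ac\.uk/; // user segment: VLE referrals
var subdirFilter   = /^\/find\/eresources\//;           // subdirectory: eresources area

// A hypothetical hit that would survive both filters applied together:
var hit = {
  referrer: "http://learn.open.ac.uk/mod/resourcepage/view.php?id=119070",
  path: "/find/eresources/index.cfm"
};
console.log(referrerFilter.test(hit.referrer) && subdirFilter.test(hit.path)); // true
```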
So how does this help?
If we assume there are several different user types on the Library website (students, onsite researchers, students on partner courses (such as with the College of Law), users arriving from blind Google searches, and so on), then we can use filters to create a set of reports, each one covering a different user segment. Adding all the separate reports together would give us the “total” website report that I was using in the first five posts in this series. Looking at each report separately allows us to understand the different needs and behaviours of the different user types.
Although it is possible to segment reports from the whole site report, as I have shown previously, segmenting the report ‘on the way in’ through the application of one or more filters allows you to use the whole raft of Google Analytics reports to look at a particular segment of the data as a whole.
So for example, here’s a view of the report filtered by referrer (college of law):
Where is the traffic from the College of Law landing?
Okay – it seems like all the traffic is coming in to one page on the Library website from the College of Law?! Now this may or may not be true (there may be a single link on the College of Law website to the OU Library), or it may reflect an error in the way I have crafted the rule. One to watch…
How about the report filtered by users referred from the VLE?
This report looks far more natural – users are entering the site at a variety of locations, presumably from different links in the VLE.
Which is all well and good – but it would be really handy if we knew which courses the students were coming from, and/or which VLE pages were actually sending the traffic.
The way to do this is to capture the whole referrer URL (not just the “http://learn.open.ac.uk” part) and report this as a user defined value, something we can do with another filter:
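Roughly speaking (and working from memory of the admin screens, so treat this as a sketch), the filter settings look something like this:

```
Filter Type: Custom filter -> Advanced
  Field A -> Extract A :  Referral       (.*)
  Output To -> Constructor :  User Defined    $A1
```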
Segmenting the majority landing page data (the Library homepage) by this user defined value gives the following report:
The full referrer URLs are, in the main, really nasty Moodle URLs that obfuscate the course behind an arbitrary resource ID number.
Having a quick look at the pages, the top five referrers over the short sample period the report has been running (and a Bank Holiday weekend at that!) are:
- EK310-08: Library Resources (53758);
- E891-07J: Library Resources (36196);
- DD308-08: Library Resources (54466);
- DD303-08: Library Resources (49710);
- DXR222-08E: Library Resources (89798).
If we knew all the VLE pages in a particular course that linked to the Library website, we could produce a filtered report that just recorded activity on the Library website that came from that course on the VLE.
In the previous post in this series, I showed how it’s possible to identify traffic referred from particular course pages in the OU VLE, by creating a user defined variable that captured the complete (nasty) VLE referrer URL.
Now, I’m not completely sure about this, but I think that the Library provides URLs to the VLE via an RSS feed. That is, the Library controls the content that appears on the Library Resources page when a course makes such a page available.
In the Google Analytics FAQ answer How do I tag my links?, a method is described for adding additional tags to a referrer URL that Google Analytics can use to segment traffic referred from that URL. Five tags are available (as described in Understanding campaign variables: The five dimensions of campaign tracking):
Source: Every referral to a web site has an origin, or source. Examples of sources are the Google search engine, the AOL search engine, the name of a newsletter, or the name of a referring web site.
Medium: The medium helps to qualify the source; together, the source and medium provide specific information about the origin of a referral. For example, in the case of a Google search engine source, the medium might be “cost-per-click”, indicating a sponsored link for which the advertiser paid, or “organic”, indicating a link in the unpaid search engine results. In the case of a newsletter source, examples of medium include “email” and “print”.
Term: The term or keyword is the word or phrase that a user types into a search engine.
Content: The content dimension describes the version of an advertisement on which a visitor clicked. It is used in content-targeted advertising and Content (A/B) Testing to determine which version of an advertisement is most effective at attracting profitable leads.
Campaign: The campaign dimension differentiates product promotions such as “Spring Ski Sale” or slogan campaigns such as “Get Fit For Summer”.
(For an alternative description, see Google Analytics Campaign Tracking Pt. 1: Link Tagging.)
The recommendation is that campaign source, campaign medium, and campaign name should always be used (I’m not sure if Google Analytics requires this, though?)
So here’s what I’m proposing: how about we treat a “course as campaign”? What are sensible mappings/interpretations for the campaign variables?
- source: the course?
- medium: the sort of link that has generated the traffic, such as a link on the Library resources page?
- campaign: the mechanism by which the link got into the VLE, such as a particular class of Library RSS feed or the addition of the link by a course team member?
By creating URLs that point back to the Library website for the display in the VLE tagged with “course campaign” variables, we can more easily track (i.e. segment) user activity on the Library website that results from students entering the Library site from that link referral.
Where course teams upload Library URLs themselves, we could maybe provide a “URL Generator Tool” (like the “official” Tool: URL Builder) that will accept a library URL and then automatically add the course code (source), a campaign flag saying the link was course team uploaded, a medium flag saying the link is provided as part of assessment, or further information. The “content” variable might capture a section number in the course, or information about what activity in particular the resource related to?
For example, the tool would be able to create something like the following (a sketch only – the parameter values are illustrative rather than output from a real tool):
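```javascript
// A sketch of the "course as campaign" URL builder idea. The utm_*
// names are Google Analytics' standard campaign variables; all the
// parameter values below are illustrative.
function buildCourseLink(libraryUrl, courseCode, medium, campaign, content) {
  var params = [
    "utm_source="   + encodeURIComponent(courseCode),
    "utm_medium="   + encodeURIComponent(medium),
    "utm_campaign=" + encodeURIComponent(campaign)
  ];
  if (content) params.push("utm_content=" + encodeURIComponent(content));
  return libraryUrl + (libraryUrl.indexOf("?") >= 0 ? "&" : "?") + params.join("&");
}

console.log(buildCourseLink(
  "http://library.open.ac.uk/find/eresources/index.cfm",
  "EK310-08",               // source: the course
  "library-resources-page", // medium: the sort of link
  "library-rss"             // campaign: how the link got into the VLE
));
// -> ...index.cfm?utm_source=EK310-08&utm_medium=library-resources-page&utm_campaign=library-rss
```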
Annotating links in this way would allow Library teams to see what sorts of link (in terms of how they get into the VLE) are effective at generating traffic back to the Library, and could also enable the provision of reports back to course teams showing how effectively students on a particular course are engaging with Library resources from links on the VLE course pages.
In Library Analytics (Part 7), I suggested it might be an idea if the Library started crafting URLs for the Library resources pages for individual courses in the Moodle VLE that contained a campaign tracking code, so that we could track the behaviour of students coming into the Library site by course.
From a quick peek at a handful of courses in the VLE, that recommendation either doesn’t appear to have been taken up, or it’s just “too hard” to do, so that’s another couple of months’ data we don’t have easy access to in the Google Analytics environment. (Or maybe the Library have moved over to using the OU’s Site Analytics service for this sort of insight?)
Just to recall, we need to put some sort of additional measures in place because Moodle generates crappy URLs (e.g. URLs of the form http://learn.open.ac.uk/mod/resourcepage/view.php?id=119070) and crafting nice URLs or using mod-rewrite (or similar) definitely is way too hard for the VLE’n’network people to manage;-) The default set up of Google Analytics dumps everything after the “?”, unless they are official campaign tracking arguments or are captured otherwise.
(From a quick scan of Google Analytics Tracking API, I’m guessing that setting pageTracker._setCampSourceKey(“id”); in the tracking code on each Library web page might also capture the id from referrer URLs? Can anyone confirm/deny that?)
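Just to be concrete about where that call would sit, a sketch (again, unverified):

```javascript
// Standard ga.js boilerplate on each Library page, plus the extra call;
// the account ID is a placeholder.
var pageTracker = _gat._getTracker("UA-XXXXXXX-X");
pageTracker._setCampSourceKey("id"); // treat "id" as the campaign source parameter
pageTracker._trackPageview();
```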
Aside: from what I’ve been told, I don’t think we offer server side compression for content served from most http://www.open.ac.uk/* sites, either (though I haven’t checked)? Given that there are still a few students on low bandwidth connections and relatively modern browsers, this is probably an avoidable breach of some sort of accessibility recommendation? For example, over the last 3 weeks or so, here’s the number of dial-up visits to the Library website:
A quick check of the browser stats shows that IE breaks down almost completely as IE6 and above; all of which cope with compressed files, I think?
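If compression really is switched off, enabling it is typically a one-line server change – a sketch assuming Apache 2.x with mod_deflate available (the OU’s actual server setup may well differ):

```
# Compress the text-heavy content types that matter most to dial-up users
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript
```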
[Clarification (?! heh heh) re: dial-in stats - "when you’re looking at the dial-up use of the Library website is that we have a dial-up PC in the Library to replicate off-campus access and to check load times of our resources. So it’s probably worth filtering out that IP address (***.***.***.***) to cut out library staff checking out any problems as this will inflate the perceived use of dial-up by our students. Even if we’ve only used it once a day then that’s a lot of hits on the website that aren’t really students using dial-up" - thanks, Clari :-)]
Anyway – back to the course tracking: as a stop gap, I created a few of my own reports that use a user defined argument corresponding to the full referrer URL:
We can then view reports according to this user defined segment to see which VLE pages are sending traffic to the Library website:
Clicking through on one of these links gives a report for that referrer URL, and then it’s easy to see which landing pages the users are arriving at (and by induction, which links on the VLE page they clicked on):
If we look at the corresponding VLE page:
Then we can say that the analytics suggest that the Open University Library – http://library.open.ac.uk/, the Online collections by subject – http://library.open.ac.uk/find/eresources/index.cfm and the Library Help & Support – http://library.open.ac.uk/about/index.cfm?id=6939 are the only links that have been clicked on.
[Ooops… “Safari & Info Skills for Researchers are our sites, but don’t sit within the library.open.ac.uk domain (www.open.ac.uk/safari and www.open.ac.uk/infoskills-researchers respectively) and the Guide to Online Information Searching in the Social Sciences is another Moodle site.” – thanks Clari:-) So it may well be that people are clicking on the other links… Note to self – if you ever see 0 views for a link, be suspicious and check everything!]
(Note that I have only reported on data from a short period within the lifetime of the course, rather than data taken from over the life of the course. Looking at the incidence of traffic over a whole course presentation would also give an idea of when during the course students are making use of the Library resource page within the course.)
Another way of exploring how VLE referrer traffic is impacting on the Library website is to look at the most popular Landing pages and then see which courses (from the user defined segment) are sourcing that traffic.
So for example, here are the VLE pages that are driving traffic to the elluminate registration page:
One VLE page seems responsible:
How about the VLE pages driving traffic to the ejournals page?
And the top hit is….
… the article for question 3 on TMA01 of the November 2008 presentation of M882.
The second most popular referrer page is interesting because it contains two links to the Library journals page:
Unfortunately, there’s no way of disambiguating which link is driving the tracking – which is one good reason why a separate campaign related tracking code should be associated with each link.
(Do you also see the reference to Google books in there? Heh heh – surely they aren’t suggesting that students try to get what they need from the book via the Google books previewer?!;-)
Okay – enough for now. To sum up, we have the opportunity to provide two sorts of report – one for the Library to look at how VLE sourced traffic as a whole impacts on the Library website; and a different set of reports that can be provided to course teams and course link librarians to show how students on the course are using the VLE to access Library resources.
PS if you haven’t yet watched Dave Pattern’s presentation on mining lending data records, do so NOW: Can You Dig It? A Systems Perspective.
For what it’s worth, slides from my presentation yesterday… As ever, they’re largely pointless without commentary…
… and even with the commentary, it was all a bit more garbled than usual (I forgot to breathe, had no real idea in my own mind what I wanted to say, etc etc…)
On reflection, here’s what I took from thinking back about what I should have tried to say:
- my assumption is that folk who are interested in asking data related questions should feel as if they can actually work with the data itself (direct data manipulation); I appreciate this is already way off the mark for some people who want someone else to work the data and then just read reports about it – but then that means you can’t ask or discover your own questions about the data, just read answers (maybe) to questions that someone else has asked, presented in a way they decided;
- you need to feel confident in working with data files – or at least, you need to be prepared to have a go at working with data files! (Bear in mind that many of the blog posts I write are write ups – of a sort – of how to do something I didn’t know how to do a couple of hours before… The web usually has answers to most of the questions that I come up against – and if I can’t find the answers, I can often request them via things like Twitter or Stack Overflow…) This can range from using command line tools, to using applications that let you take data in using one format and get it out as another;
- different tools do different things; if you can get a dataset into a tool in the right way, it may be able to do magical things very very easily indeed…
- three tools that can do a lot without you having to know a lot (though you may have to follow a tutorial or two to pick up the method/recipe….or at least recognise a picture you like and a dataset whose shape you can replicate using your own data, and then the ability to see which bits you need to cut and paste into the command line…):
-=- Gephi: great for plotting networks and graphs. It can also be appropriated to draw line charts (if you can work out how to ‘join the dots’ in the data file by turning the line into a set of points connected by edges) or scatter plots (just load in nodes – no edges connecting them – and lay it out using Gephi’s geolayout tool which also lets you plot “rectilinear” plots based on x and y axis values; (I haven’t worked out a reliable way of working with CSV in Gephi – yet…); it’s amazing what you can describe as a graph when you put your mind to it…
-=- gnuplot: command line tool for plotting scatter plots and line graphs (eg from time series) using data stored in simple text file (e.g. TSV or CSV)
-=- R (and ggplot if you’re feeling adventurous and want “pretty”, nicely designed graphs out); another command line tool (I find R-Studio helps) that again loads in data from a CSV file; R can generate statistical graphs very easily from the command line (it does the stats calculations for you given the raw data).
- Visual analytics/graphical data analysis is a process – you tease out questions and answers through directly manipulating the data and engaging with it in a visual way;
- when you see a visualisation you like, look at it closely: what do you see? Spending five mins or so looking at a Gestalt psychology/visual perception tutorial will give you all sorts of tricks and tips for how to construct visualisations so that structure your eye can detect will jump out at you;
- I think I may have confused folk talking about “dimensions”: what I meant was, how many columns could you represent in a given visualisation at the same time, if each data point corresponds to a single row in a data set. So for example, if you have an x-y plot (2 dimensions), with different symbols (1 dimension) available for plotting the points, as well as different colours (1 dimension) and different possible sizes (1 dimension) for each symbol, along with a label (1 dimension) for each point, and maybe control over the size (1 dimension), colour (1 dimension) and even font (1 dimension) applied to the label, you might find you can actually plot quite a few columns/dimensions for each data point on your chart… Whether or not you can actually decipher it is another matter, of course! My Gephi charts generally have 2 explicit dimensions (node size and colour), as well as making use of two spatial dimensions (x, y) to lay out points that are in some sense “close” to each other in network space. It’s worth remembering, though, that if you’re using a tool to engage in a conversation with a dataset as you try to get it to tell its story to you, it may not matter that the visualisation looks a mess to anyone else (a bit like an involved conversation may not make sense if someone else suddenly tries to join it). (Presentation graphics, on the other hand, are usually designed to communicate something that the data is trying to say to another person in a very explicit way.)
- working with data is a tactile thing… you have to be prepared to get your hands dirty…
The OU Library website has been running Google Analytics for ages, but from what I can tell they haven’t done a huge amount with the results in terms of making the analytics actionable and using them to improve the site design (I’d love for someone to correct me with a blog post or two about how analytics have been used to improve site performance. If anyone would like to publish such a post, I’ll happily give you a guest slot here on OUseful.info…:-)
(As a bit of background, see Library Analytics, (Part 1), Library Analytics, (Part 2), Library Analytics, (Part 3), Library Analytics, (Part 4), Library Analytics, (Part 5), Library Analytics, (Part 6), Library Analytics, (Part 7) and Library Analytics, (Part 8))
Anyway, here’s the Library homepage (August 2009):
And here are two of the real OU Library homepages:
(See also: Where is the Open University Homepage?;-)
And here’s the OU Library homepage as a treemap, where the block size shows where the traffic goes (as recorded over the last month) as a percentage of all traffic to the OU Library homepage.
So if each click was equally valuable, and each pixel on the screen was equally valuable, then that’s how the screen area should be allocated… (Hmm – that could be, err, interesting – an adaptive homepage where there’s one block element per link, and a treemap algorithm that allocates the area each block has when the page is rendered? Heh heh :-)
I did think about showing a heatmap of where on the homepage the clicks were made, but I figure I’ve probably already upset the Library folk enough by now. I also considered doing a treemap showing the relative proportions of different keywords on Google that drove traffic to the OU Library homepage, but I figure that may be commercially sensitive in terms of bidding for Adsense keywords…