OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Archive for the ‘OU2.0’ Category

Visualising Pandas DataFrames With IPythonBlocks – Proof of Concept

A few weeks ago I came across IPythonBlocks, a Python library developed to support the teaching of Python programming. The library provides an HTML grid that can be manipulated using simple programming constructs, presenting the outcome of the operations in a visually meaningful way.
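By way of a flavour of what the library supports, here's a minimal sketch (the grid size and colours are arbitrary choices of my own):

from ipythonblocks import BlockGrid

grid = BlockGrid(8, 8, fill=(200, 200, 200))  # an 8x8 grid of grey blocks
grid[0, 0] = (255, 0, 0)                      # colour the top-left block red
grid.show()                                   # render the grid as an HTML table in the notebook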

As part of a new third-level OU course we're putting together on databases and data wrangling, I've been getting to grips with the Python pandas library. This library provides a dataframe-based framework for data analysis and data-styled programming that bears a significant resemblance to R's notion of dataframes and vectorised computing. pandas also provides a range of dataframe operations that resemble SQL-style operations – joining tables, for example, and performing grouping-style summary operations.

One of the things we’re quite keen to do as a course team is identify visually appealing ways of illustrating a variety of data manipulating operations; so I wondered whether we might be able to use ipythonblocks as a basis for visualising – and debugging – pandas dataframe operations.

I’ve posted a demo IPython notebook here: ipythonblocks/pandas proof of concept [nbviewer preview]. In it, I’ve started to sketch out some simple functions for visualising pandas dataframes using ipythonblocks blocks.

For example, the following minimal function finds the size and shape of a pandas dataframe and uses it to configure a simple block:

from ipythonblocks import BlockGrid

def pBlockGrid(df):
    # Map the dataframe's (rows, columns) shape onto a grid of the same dimensions
    (y, x) = df.shape
    return BlockGrid(x, y)
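By way of a minimal usage example (the dataframe here is just an illustrative one I've made up):

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
pBlockGrid(df)  # renders a 2-wide, 3-tall grid of blocks in the notebook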

We can also colour individual blocks – the following example uses colour to reveal the different datatypes of columns within a dataframe:

[Image: ipythonblocks grid colouring pandas columns by datatype]
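The notebook has the actual code; purely as a sketch of the general approach, something along these lines, keyed on numpy's dtype.kind codes (the colour choices here are arbitrary ones of my own):

from ipythonblocks import BlockGrid

# Illustrative colour map over numpy dtype.kind codes:
# 'i' integer, 'f' float, 'O' object/string
DTYPE_COLOURS = {'i': (255, 0, 0), 'f': (0, 255, 0), 'O': (0, 0, 255)}

def pTypeBlockGrid(df, default=(200, 200, 200)):
    (y, x) = df.shape
    grid = BlockGrid(x, y)
    for col, dtype in enumerate(df.dtypes):
        colour = DTYPE_COLOURS.get(dtype.kind, default)
        for row in range(y):
            grid[row, col] = colour  # colour the whole column by its type
    return grid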

A more elaborate function attempts to visualise the outcome of merging two data frames:

[Image: ipythonblocks visualisation of a pandas merge]

The green colour identifies key columns; the red and blue cells mark data elements from the left and right joined dataframes respectively; and the black cells mark NA/NaN values.
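The function in the notebook differs in detail, but the sort of logic involved looks something like this sketch:

import pandas as pd
from ipythonblocks import BlockGrid

GREEN, RED, BLUE, BLACK = (0, 255, 0), (255, 0, 0), (0, 0, 255), (0, 0, 0)

def pMergeBlockGrid(left, right, on):
    merged = pd.merge(left, right, on=on, how='outer')
    (y, x) = merged.shape
    grid = BlockGrid(x, y)
    for col, name in enumerate(merged.columns):
        for row in range(y):
            if pd.isnull(merged.iloc[row, col]):
                grid[row, col] = BLACK   # NA/NaN introduced by the join
            elif name == on:
                grid[row, col] = GREEN   # key column
            elif name in left.columns:
                grid[row, col] = RED     # cell originating in the left dataframe
            else:
                grid[row, col] = BLUE    # cell originating in the right dataframe
    return grid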

One thing I started wondering about – and that, I have to admit, quite excited me (?!;-) – was whether it would be possible to extend the pandas dataframe itself with methods for producing ipythonblocks visual representations of the state of a dataframe, or the effect of dataframe based operations such as .concat() and .merge() on source dataframes.
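One (hypothetical) way of wiring that in without touching pandas itself would be to monkey-patch the DataFrame class – purely a sketch, reusing the minimal grid function from above:

import pandas as pd

def _blocks(self, blockProperties=None):
    # Delegate to the grid-building function sketched earlier;
    # blockProperties handling is left unimplemented in this sketch
    return pBlockGrid(self)

pd.DataFrame.blocks = _blocks

Any dataframe could then render itself as a grid via df.blocks().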

If you have any comments on this approach, suggestions for additional or alternative ways of visualising dataframe transformations, or thoughts about how to extend pandas dataframes with ipythonblocks style visualisations of those datastructures and/or the operations that can be applied to them, please let me know via the comments:-)

PS some thoughts on a possible pandas interface:

  • DataFrame().blocks() to show the blocks
  • .cat(blocks=True) and .merge(blocks=True) to return (df, blocks)
  • DataFrame().blocks(blockProperties={}) and eg .merge(blocks=True, blockProperties={})
  • blockProperties options: showNA=True|False, color_base=(), color_NA=(), color_left=(), color_right=(), color_gradient=[] (eg for a .cat() on many dataframes), colorView=structure|datatypes|missing, colorTypes={} (to set the colors for different datatypes). The colorView reveals the datatypes of the columns, the structural origins of cells returned from a .merge() or .cat(), or a view of missing data (NA/NaN etc revealed over a base color).

Written by Tony Hirst

March 26, 2014 at 11:37 pm

So Is This Guerrilla Research?

A couple of days ago I delivered a workshop with Martin Weller on the topic of “Guerrilla Research”.

[Image: Guerrilla Research workshop slides]

The session was run under the #elesig banner, and was the result of an invitation to work through the germ of an idea that was a blog post Martin had published in October 2013, The Art Of Guerrilla Research.

In that post, Martin had posted a short list of what he saw as “guerrilla research” characteristics:

  1. It can be done by one or two researchers and does not require a team
  2. It relies on existing open data, information and tools
  3. It is fairly quick to realise
  4. It is often disseminated via blogs and social media

Looking at these principles now – as in, right now, as I type (I don't know what I'm going to write…) – I don't necessarily see any of these as defining, at least, not without clarification. Let's reflect, and see how my fingers transcribe my inner voice…

In the first case, a crowd or network may play a sourcing role in the activity, so maybe it's the initiation of the activity that only requires one or two people?

Open data, information and tools help, but I'd gear this more towards pre-existing data, information and tools, rather than necessarily open ones: if you work inside an organisation, you may be able to appropriate resources that are not open or available outside the organisation, and that may even have limited access within it; you may even have to "steal" access to them. Open resources do mean that other people can engage in the same activity using the same resources, though, which provides transparency and reproducibility; open resources also make the same activity possible both inside and outside an organisation.

The activity may be quick to realise, sort of: I can quickly set a scraper going to collect data about X, and the analysis of the data may be quick to realise; but I may need the scraper to run for days, or weeks, or months. More qualifying, I think, is that the activity only requires a small number of relatively quick bursts of effort.

Online means of dissemination are natural, because they're "free", immediate, and have potentially wide reach; but I think an email to someone who can act, or a letter to the local press, or an activity that is its own publication – such as a submission to a consultation in which the responses are all published – could count too.

Maybe I should have looked at those principles a little more closely before the workshop…;-) And maybe I should have made reference to them in my presentation. Martin did, in his.

PS WordPress just “related” this back to me, from June, 2009: Guerrilla Education: Teaching and Learning at the Speed of News

Written by Tony Hirst

March 21, 2014 at 8:44 am

Posted in OU2.0, Thinkses


Oppia – A Learning Journey Platform From Google…

I couldn’t get to sleep last night mulling over thoughts that had surfaced after posting Time to Drop Calculators in Favour of Notebook Programming?. This sort of thing: what goes on when you get someone to add three and four?

Part of the problem is associated with converting the written problem into mathematical notation:

3 + 4

More complex problems may require invoking some equations, or mathematical tricks or operations (chain rule, dot product, and so on).

3 + 4

Cast the problem into numbers then try to solve it:

3 + 4 =

That equals gets me doing some mental arithmetic. In a calculator, there’s a sequence of button presses, then the equals gives the answer.

In a notebook, I type:

3 + 4

that is, I write the program in mathematicalese, hit the right sort of return, and get the answer:

7

The mechanics of finding the right hand side by executing the operations on the left hand side are handled for me.

Try this on WolframAlpha: integral of x squared times x minus three from -3 to 4

Or don’t.. do it by hand if you prefer.

I may be able to figure out the maths bit – figure out how to cast my problem into a mathematical statement – but not necessarily have the skill to solve the problem. I can get the method marks but not do the calculation and get the result. I can write the program. But running the program – dividing 3847835 by 343, calculating the square root of 26,863 using log tables or whatever means – that's the blocker. That could put me off trying to make use of maths, could put me off learning how to cast a problem into a mathematical form, if all that means is that I can do no more than look at the form as if it were a picture, poem, or any other piece of useless abstraction.

So why don’t we help people see that casting the problem into the mathematical form is the creative bit, the bit the machines can’t do. Because the machines can do the mechanical bit:

[Image: Wolfram Alpha working through the integral]

Maybe this is the approach that the folk over at Computer Based Math are thinking about (h/t Simon Knight/@sjgknight for the link), or maybe it isn't… But I get the feeling I need to look over what they're up to… I also note Conrad Wolfram is behind it; we kept crossing paths a few years ago… I was taken by his passion, and ideas, about how we should be helping folk see that maths can be useful, and how you can use it, but there was always the commercial blocker – the need for Mathematica licenses, the TM; as in Computer-Based Math™.

Then tonight, another example of interactivity, wired into a new "learning journey" platform that again @sjgknight informs me is released out of Google 20% time…: Oppia (Oppia: a tool for interactive learning).

Here’s an example….

[Screenshot: Oppia Project Euler exploration, opening step]

The radio button choice determines where we go next on the learning journey:

[Screenshot: Oppia Project Euler exploration, next step]

Nice – interactive coding environment… 3 + 4 …

What happens if I make a mistake?

[Screenshot: Oppia exploration responding to a mistake]

History of what I did wrong, inline, which is richer than a normal notebook style, where my repeated attempts would overwrite previous ones…

Depending how many common incorrect or systematic errors we can identify, we may be able to add richer diagnostic next step pathways…

..but then, eventually, success:

[Screenshot: Oppia exploration, successful completion]

The platform is designed as a social one where users can create their own learning journeys and collaborate on their development with others. Licensing is mandated as “CC-BY-SA 4.0 with a waiver of the attribution requirement”. The code for the platform is also open.

The learning journey model is richer and potentially far more complex in graph structure terms than I remember the attempts developed for the [redacted] SocialLearn platform, but the vision appears similar. SocialLearn was also more heavily geared to narrative textual elements in the exposition; by contrast, the current editing tools in Oppia make you feel as if using too much text is not a Good Thing.

So – how are these put together… The blurb suggests it should be easy, but Google folk are clever folk (and I’m not sure how successful they’ve been getting their previous geek style learning platform attempts into education)… here’s an example learning journey – it’s a state machine:

[Screenshot: example learning design in Oppia, shown as a state graph]

Each block can be edited:

[Screenshot: Oppia state editor]

When creating new blocks, the first thing you need is some content:

[Screenshot: Oppia content and interaction editor]

Then some interaction. For the interactions, a range of input types you might expect:

[Screenshot: Oppia interaction input types]

and some you might not. For example, these are the interactive/executable coding style blocks you can use:

[Screenshot: Oppia programming language options for code interactions]

There’s also a map input, though I’m not sure what dialogic information you can get from it when you use it?

After the interaction definition, you can define a set of rules that determine where the next step takes you, depending on the input received.

[Screenshot: Oppia state rules]

The rule definitions allow you to trap on the answer provided by the interaction dialogue, optionally provide some feedback, and then identify the next step.

[Screenshot: Oppia rule editor]

The rule branches are determined by the interaction type. For radio buttons, rules are triggered on the selected answer. For text inputs, simple string distance measures:

[Screenshot: Oppia text input rules]

For numeric inputs, various bounds:

[Screenshot: Oppia numeric input rules]

For the map, what looks like a point within a particular distance of a target point?

[Screenshot: Oppia map rule]
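To make the state machine idea concrete, here's a toy model of how such rules might behave – the data structures and predicates here are entirely my own invention, not Oppia's actual format:

# Each state has some content and an ordered list of rules;
# the first rule whose predicate matches the answer fires
states = {
    'add': {
        'content': 'What is 3 + 4?',
        'rules': [
            # (predicate over the answer, feedback, next state)
            (lambda a: a == 7, 'Well done!', 'done'),
            (lambda a: abs(a - 7) <= 2, 'Close - check your arithmetic.', 'add'),
            (lambda a: True, 'Have another go.', 'add'),
        ],
    },
    'done': {'content': 'Journey complete.', 'rules': []},
}

def step(state, answer):
    for predicate, feedback, next_state in states[state]['rules']:
        if predicate(answer):
            return feedback, next_state

step('add', 8)   # -> ('Close - check your arithmetic.', 'add')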

The choice of programming languages currently available in the code interaction type kinda sets the tone about who might play with this…but then, maybe as I suggested to colleague Ray Corrigan yesterday, “I don’t think we’ve started to consider consequences of 2nd half of the chessboard programming languages in/for edu yet?”

All in all – I don’t know… getting the design right is what’ll make for a successful learning journey, and that’s not something you can necessarily do quickly. The interface is, as with many Google interfaces, not the friendliest I’ve seen (function over form, but bad form can sometimes get in the way of the function…).

I was interested to see they’re calling the learning journeys explorations. The Digital Worlds course I ran some time ago used that word too, in the guise of Topic Explorations, though they were a little more open ended, using a question mechanic to guide reading within a set of suggested resources.

Anyway, one to watch, perhaps, erm, maybe… No badges as yet, but that would be candy to offer at the end of a course, as well as a way of splashing your own brand via the badges. But before that, a state machine design mountain to climb…

Written by Tony Hirst

February 27, 2014 at 9:14 pm

Posted in OU2.0

Cursory Thoughts on Virtual Machines in Distance Education Courses

One of the advantages of having a relatively long lived blog is that it gives me the ability to look back at the things that were exciting to me several years ago. For example, it was five years ago, more or less to the day, when I first saw video ads on the underground; and seven and a half years ago since I remarked on the possible relevance of virtual machines (VMs) to OU teaching: Personal Computing Guidance for Distance Education Students. (At the time, I was more excited by portable applications that could be run from USB sticks, the motivating idea being that OU students might want to access course software or applications from arbitrary machines that they didn’t necessarily have enough permissions on to be able to download and install required applications.)

Since then, a couple of OU courses have dabbled with virtual machines – the Linux course that’s now part of TM129 – Technologies in Practice makes use of a Linux virtual machine running in VirtualBox, and the digital forensics postgrad course (M812) makes use of a couple of VMs – a Windows box that needs analysing, and a Linux VM that contains the analysis tools.

We’re also looking at using a virtual machine for a new level three/third year equivalent course due out in October 2015 (sic…) on data stuff (short title!;-). I haven’t really been paying as much attention as I probably should have to VMs, but a little bit of playing at the end of last week and over the weekend made me realise the error of my ways…

So what are virtual machines (VMs)? You’re possibly familiar with the phrase “(computer) operating system”, and almost definitely will have heard of Windows and iOS. These are the bits of computer software that provide a desktop on top of your computer hardware, and run the services that allow your applications to talk to the hardware and out into the wider world. Virtual machines are boxes that allow you to run another operating system, as well as applications on top of it, on your own desktop. So a Windows machine can run a box that contains a fully working Linux computer; or if you’re like me and use a Mac, you’ll have a virtual machine that runs a copy of Windows so you can run Internet Explorer on it in order to access the OU’s expense claims system!

Now when it comes to shipping course software, we’re often faced with the problem of getting software to work on whatever operating system our students are using. In a traditional university, with computer labs, the computers in the public areas will all contain the same software, installed from a common source. (OU IT are trying to enforce a similar policy on staff machines at the moment. Referred to in reverential terms as “desktop optimisation”, the idea is that machines will only run the software that IT says can run on it. Which would rule out the possibility of me running pretty much any of the applications I use on a day to day basis. Although I think Macs are outside the optimisation fold for the moment…?)

Ideally, then, we might want students to all run the same operating system, so that we can test software on that system and write one set of instructions for how to use it. But students bring their own devices. And when it comes to installing the software tools we’d like computing students, for example, to install, there can be all sorts of problems getting the software to install properly.

So another option is to provide students with a machine that we control, that doesn’t upset their own settings, and that won’t kill their computer if something goes horribly wrong. (We can’t, for example, require students to run the OU’s optimised desktop, not least because we’d have to pay license fees for the use of Windows, but also because students would rightly get upset if we prevented them from downloading and installing Angry Birds on their own computer!) Virtual machines provide a way of doing this.

As a case in point, the new data course will probably make use of iPython Notebook, among other things. iPython Notebook is a browser-accessed application that allows you to develop and execute Python program code via an interactive, browser-based user interface, which I find quite attractive from a pedagogical point of view. (This post may get read by OU folk in an OU teaching context, so I am obliged to use the p-word.)

Installing the Python libraries the course will draw on, as well as a variety of databases (PostgreSQL and MongoDB are the ones we’re thinking of using…), could be a major headache for our students, particularly if they aren’t well versed in sysadmin and library installation. But if we define a virtual machine that has the required libraries and applications preinstalled and preconfigured, we can literally contain the grief – if students run an application such as VirtualBox (which they would have to install themselves), we can provide a preconfigured machine (known as a guest) that they can run within their own desktop (part of the host machine), and that will make available services they can access via their normal desktop browser.

So for example, we can build a virtual machine that contains iPython and all the required libraries, that can be defined to automatically run iPython Notebook when it boots, and that can make that notebook available via the host browser. And more than that, we can also configure the Notebook server running on the local guest VM so that it saves notebook files to a directory that is shared between the guest and the host. If a student then switches off, or even deletes, the guest machine, they don’t lose their work…

VMs have been used elsewhere for course delivery too, so we may also be able to learn more about the practicalities of VMs in a course context from those cases. For example, Running a next-gen sequence analysis course using Amazon Web Services describes how virtual machines running on Amazon cloud services (rather than in boxes running within a VirtualBox container on the user’s desktop) were used for a data analysis course that made use of very large datasets. (This demonstrates another benefit of virtualisation: we can configure a VM so that it can be run in containerised form on a student’s own computer, or run on a machine hosted on the net somewhere, and then accessed from the student’s own machine.)

Something I found really exciting were the VMs defined by @DataMinerUk and @twtrdaithi for use in data journalism applications – Infinite Interns, a range of virtual machines defined using Vagrant (which is super fun to play with!:-) that contain a range of tools useful for data projects.
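For a flavour of what a Vagrant definition for the sort of course VM described above might look like – the box name, port and package list here are illustrative guesses on my part, not an actual course configuration:

# Vagrantfile (Ruby DSL)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # illustrative base guest image
  # Expose the guest's notebook server to the host browser
  config.vm.network "forwarded_port", guest: 8888, host: 8888
  # Keep notebook files in a host directory shared with the guest,
  # so deleting the VM doesn't lose a student's work
  config.vm.synced_folder "./notebooks", "/home/vagrant/notebooks"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update && apt-get install -y python-pip
    pip install "ipython[notebook]" pandas
  SHELL
end

On boot, the guest would then start the notebook server against the shared directory (something like ipython notebook --ip=0.0.0.0 --notebook-dir=/home/vagrant/notebooks), and a student pointing their host browser at localhost:8888 would see their notebooks.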

I also wonder about the extent to which the various MOOCs have made use of VMs… And whether there is an argument to be had in favour of “course boxes” in general…?

PS for a hint at something of what’s possible in using a VM to support a course, imagine Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More as course notes, The official online compendium for Mining the Social Web, 2nd Edition (O’Reilly, 2013) as the way in to your run-at-home computer lab, and Mining-the-Social-Web-2nd-Edition – issues on github as instructor/lab technician support. ’nuff said. The things we’re gonna be prepared to pay for have the potential to change…

Written by Tony Hirst

December 2, 2013 at 3:18 pm

Posted in OU2.0


Online Courses or Long Form Journalism? Communicating How the World Works…

I just spotted this:

(In case that livelink dies, it’s a tweet from @sclopit: “wouldn’t it be wonderful if somebody put together a Coursera course on Bitcoin, covering whole range: crypto, ops, economics, politics?”)

Here’s a crappy graph I’ve used before:

[Image: rough sketch graph relating academia, press and policy]

It hints at how I see different sensemakers working together to help inform folk about how the world works… That was maybe how things once were – maybe the hard edges and labels need changing in a reinvention of how we make sense of the world and communicate it to others?

This was wrong – Publisher led mini-courses – but it still feels like a piece in a possibly new-cut jigsaw.

By chance, I also spotted this for the first time yesterday, even though it’s been around for some time: O’Reilly School of Technology. Self-paced, online courses with an emailable tutor. (Similar context – The Business of HE Moves On….)

And this today: Facts Are Sacred – “A new book published by the team behind the Datablog explains how we do data journalism at the Guardian.” Books are often handy things to pin courses round, of course… (Which is to say – is there a MOOC in that?)

FutureLearn has been signing up ‘non-academic’ partners – the British Library, and the British Council, for example. I wonder if the BBC are going to join the party too? If so, then would there be a place for other publishers…?

…or does that feel wrong? Maybe the press doesn’t have the right sort of “independent voice” to deliver “academic” courses? Which is why we maybe need to rethink the cutting of the jigsaw, or at least, a new view over it.

Who knows how the MOOC thing will play out – it reminds me in part of the educational packs companies hand out… I’m sure you know the sort of thing: Southern Water’s Waterwise packs, or ScottishPower Renewables’ education pack, Herefordshire Council’s schools waste education pack, Friends of the Earth information booklets, etc etc. Propaganda? Biased to the point of distorting a “true” academic educational line? Or “legitimate” educational resources? Whatever that means? Maybe it’s more appropriate to ask if they are useful resources in the support of learning?

So are MOOCs just educational resource packs, promoting universities rather than companies or charities? But rather than catering to schools, do they maybe cater to well segmented “media consumers” looking for a new style of publication (the partwork “course”)?

And are there opportunities for media and academe to join forces producing – in quick time – long form structured pieces on the likes of, I dunno, Bitcoin, maybe, that could cover a whole range of related topics, such as in the Bitcoin case: crypto, ops, economics, politics?

Hmmm…

PS apparently FutureLearn are hiring Ruby on Rails developers (Simon Pearson/@minor9th: “On the look out for lovely Ruby on Rails devs who like working on Good Projects. FutureLearn needs you! http://www.futurelearn.com – DM me”)

Written by Tony Hirst

April 4, 2013 at 9:28 am

Posted in OU2.0

Twitter Audience Profiling – OU/BBC Feynman Challenger Co-Pro

Another strong piece of TV commissioning via the Open University Open Media Unit (OMU) aired this week in the guise of The Challenger, a drama documentary telling the tale of Richard Feynman’s role in the accident enquiry around the space shuttle Challenger disaster. (OMU also produced an ethical game if you want to try your own hand at leading an ethics investigation.)

Running a quick search for tweets containing the terms feynman challenger to generate a list of names of Twitter users commenting around the programme, I grabbed a sample of their friends (max 197 per person) and then plotted the commonly followed accounts within that sample.
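The collection step is procedurally straightforward; as a sketch using the tweepy library (the credential names are placeholders, rate limiting is ignored for clarity, and the original run used my own scripts against Twitter’s then-current API):

import tweepy
from collections import Counter

# Placeholder credentials - substitute your own Twitter API keys
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

# Twitter users commenting around the programme
commenters = {t.user.screen_name for t in api.search(q='feynman challenger')}

# Sample each commenter's friends, capped at 197 per person
friends = Counter()
for name in commenters:
    friends.update(api.friends_ids(screen_name=name)[:197])

# The commonly followed accounts across the sample form the basis of the map
commonly_followed = friends.most_common(100)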

[Image: map of accounts commonly followed by ‘feynman challenger’ tweeters]

If you treat this image as a map, you can see regions where the accounts are (broadly) related by topic or interest category. What regions can you see?! (For more on this technique, see Communities and Connections: Social Interest Mapping.)

I also ran a search for tweets containing bbc2 challenger:

[Image: map of accounts commonly followed by ‘bbc2 challenger’ tweeters]

Let’s peek into some of the regions… “Space”-related Twitter accounts, for example:

[Image: detail of the ‘bbc2 challenger’ map – space-related accounts]

Or news media:

[Image: detail of the ‘bbc2 challenger’ map – news media accounts]

(from which we might conclude that the audience was also a Radio 4 audience?!;-)

How about a search on bbc2 feynman?

[Image: map of accounts commonly followed by ‘bbc2 feynman’ tweeters]

Again, we see distinct regions. As with the other maps, the programme audience also seems to have an interest in following popular science writers:

[Image: detail of the ‘bbc2 feynman’ map – popular science writers]

Interesting? Possibly – the maps provide a quick profile of the audience, and maybe confirm it’s the sort of audience we might have expected. Notable perhaps are the prominence of Brian Cox and Dara O’Briain, who’ve also featured heavily in BBC science programming. Around the edges, we also see what sorts of comedy or entertainment talent appeal to the audience – no surprises to see David Mitchell, Charlton Brooker and Iannucci in there, though I wouldn’t necessarily have factored in Eddie Izzard (though we’d need to look at “proper” baseline interest levels of general audiences to see whether any of these comedians are over-represented in these samples compared to commonly followed folk in a “random” sample of UK TV watchers on Twitter. The patterns of following may be “generally true” rather than highlighting folk atypically followed by this audience.)

Useful? Who knows…?!

(I have PDF versions of the full plots if anyone wants copies…)

Written by Tony Hirst

March 20, 2013 at 9:18 am

Posted in BBC, OBU, OU2.0

MOOC Platforms and the A/B Testing of Course Materials

[The following is my *personal* opinion only. I know as much about FutureLearn as Google does. Much of the substance of this post was circulated internally within the OU prior to posting here.]

In common with other MOOC platforms, one of the possible ways of positioning FutureLearn is as a marketing platform for universities. Another might see it as a tool for delivering informal versions of courses to learners who are not currently registered with a particular institution. [A third might position it in some way around the notion of “learning analytics”, eg as described in a post today by Simon Buckingham Shum: The emerging MOOC data/analytics ecosystem.] If I understand it correctly, “quality of the learning experience” will be at the heart of the FutureLearn offering. But what of innovation? In the same way that there is often a “public benefit feelgood” effect for participants in medical trials, could FutureLearn provide a way of engaging, at least to a limited extent, in “learning trials”?

This need not be onerous, but could simply relate to trialling different exercises or wording or media use (video vs image vs interactive) in particular parts of a course. In the same way that Google may be running dozens of different experiments on its homepage in different combinations at any one time, could FutureLearn provide universities with a platform for trying out differing learning experiments whilst running their MOOCs?

The platform need not be too complex – at first. Google Analytics provides a mechanism for running A/B tests and “experiments” across users who have not disabled Google Analytics cookies, and as such may be appropriate for initial trialling of learning content A/B tests. The aim? Deciding on metrics is likely to prove a challenge, but we could start with simple things to try out – does the ordering or wording of resource lists affect click-through or download rates for linked resources, for example? (And what should we do about those links that never get clicked and those resources that are never downloaded?) Does offering a worked-through exercise before an interactive quiz improve success rates on the quiz? And so on.
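Mechanically, none of this is hard. Here’s a sketch of the sort of deterministic bucketing such a trial needs (the variant names and hashing scheme are invented for illustration):

import hashlib

VARIANTS = ['worked-example-first', 'quiz-first']

def variant_for(user_id, experiment='exercise-ordering'):
    # Deterministic assignment: the same user always sees the same variant,
    # and users split roughly evenly across variants
    h = int(hashlib.md5((experiment + user_id).encode()).hexdigest(), 16)
    return VARIANTS[h % len(VARIANTS)]

variant_for('student-123')   # eg 'quiz-first'

Success metrics (click-throughs, quiz scores and the like) would then be logged against the variant each learner saw.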

The OU has traditionally been cautious when running learning experiments, delivering fee-waived pilots rather than testing innovations as part of A/B testing on live courses with large populations. In part this may be through a desire to be ‘equitable’ and not jeopardise the learning experience for any particular student by providing them with a lesser quality offering than we could*. (At the same time, the OU celebrates the diversity and range of skills and abilities of OU students, which makes treating them all in exactly the same way seem rather incongruous?)

* Medical trials face similar challenges. But it must be remembered that we wouldn’t trial a resource we thought stood a good chance of being /less/ effective than one we were already running… For a brief overview of the broken worlds of medical trials and medical academic publishing, as well as how they could operate, see Ben Goldacre’s Bad Pharma for an intro.

FutureLearn could start to change that, and open up a pathway for experimentally testing innovations in online learning as well as at a more micro-level, tuning images and text in order to optimise content for its anticipated use. By providing course publishers with a means of trialling slightly different versions of their course materials, FutureLearn could provide an effective environment for trialling e-learning innovations. Branding FutureLearn not only as a platform for quality learning, but also as a platform for “doing” innovation in learning, gives it a unique point of difference. Organisations trialling on the platform do not face the threat of challenges made about them delivering different learning experiences to students on formally offered courses, but participants in courses are made aware that they may be presented with slightly different variants of the course materials to each other. (Or they aren’t told… if an experiment is based on success in reading a diagram where the labels are presented in different fonts or slightly different positions, or with or without arrows, and so on, does that really matter if the students aren’t told?)

Consultancy opportunities are also likely to arise in the design and analysis of trials and new interventions. The OU is also provided with both an opportunity to act according to its beacon status as far as communicating innovative adult online learning/pedagogy goes, as well as gaining access to large trial populations.

Note that what I’m proposing is not some sort of magical, shiny learning analytics dashboard; it’d be a procedural, could-have-been-doing-it-for-years application of web analytics that makes use of online learning cohorts at least an order of magnitude or two larger than is typical in a traditional university course setting. Numbers that are maybe big enough to spot patterns of behaviour in (either positive, or avoidant).

There are ethical challenges and educational challenges in following such a course of action, of course. But in the same way that doctors might randomly prescribe between two equally good (as far as they know) treatments, or who systematically use one particular treatment over another that is equally good, I know that folk who create learning materials also pick particular pedagogical treatments “just because”. So why shouldn’t we start trialling on a platform that is branded as such?

Once again, note that I am not part of the FutureLearn project team and my knowledge of it is largely limited to what I have found on Google.

See also: Treating MOOC Platforms as Websites to be Optimised, Pure and Simple…. For some very old “course analytics” ideas about using Google Analytics, see Online Course Analytics, which resulted in OUseful blogarchive: “course analytics”. Note that these experiments never got as far as content optimisation, A/B testing, search log analysis etc. The approach I started to follow with the Library Analytics series had a little more success, but still never really got past the starting post and into a useful analyse/adapt cycle. Google Analytics has moved on since then, of course… If I were to start over, I’d probably focus on creating custom dashboards to illustrate very particular use cases, as well as REDACTED.

Written by Tony Hirst

January 31, 2013 at 4:53 pm

Posted in Analytics, Infoskills, OU2.0

