Archive for the ‘OU2.0’ Category
A couple of days ago I delivered a workshop with Martin Weller on the topic of “Guerrilla Research”.
The session was run under the #elesig banner, and was the result of an invitation to work through the germ of an idea that was a blog post Martin had published in October 2013, The Art Of Guerrilla Research.
In that post, Martin had posted a short list of what he saw as “guerrilla research” characteristics:
- It can be done by one or two researchers and does not require a team
- It relies on existing open data, information and tools
- It is fairly quick to realise
- It is often disseminated via blogs and social media
Looking at these principles now, as in, right now, as I type (I don’t know what I’m going to write…), I don’t necessarily see any of these as defining, at least, not without clarification. Let’s reflect, and see how my fingers transcribe my inner voice…
In the first case, a source crowd or network may play a role in the activity, so maybe it’s the initiation of the activity that only requires one or two people?
Open data, information and tools help, but I’d gear this more towards pre-existing data, information and tools, rather than necessarily open: if you work inside an organisation, you may be able to appropriate resources that are not open or available outside the organisation, and may even have limited access within the organisation; you may have to “steal” access to them, even; open resources do mean that other people can engage in the same activity using the same resources, though, which provides transparency and reproducibility; open resources also make inside/outside activities possible.
The activity may be quick to realise, sort of: I can quickly set a scraper going to collect data about X, and the analysis of the data may be quick to realise; but I may need the scraper to run for days, or weeks, or months; a better qualifier, I think, is that the activity only requires a relatively small number of relatively quick bursts of activity.
Online means of dissemination are natural, because they’re “free”, immediate, and have potentially wide reach; but I think an email to someone who can, or a letter to the local press, or an activity that is its own publication, such as a submission to a consultation in which the responses are all published, could count too.
PS WordPress just “related” this back to me, from June, 2009: Guerrilla Education: Teaching and Learning at the Speed of News
I couldn’t get to sleep last night mulling over thoughts that had surfaced after posting Time to Drop Calculators in Favour of Notebook Programming?. This sort of thing: what goes on when you get someone to add three and four?
Part of the problem is associated with converting the written problem into mathematical notation:
3 + 4
For more complex problems it may require invoking some equations, or mathematical tricks or operations (chain rule, dot product, and so on).
3 + 4
Cast the problem into numbers then try to solve it:
3 + 4 =
That equals gets me doing some mental arithmetic. In a calculator, there’s a sequence of button presses, then the equals gives the answer.
In a notebook, I type:
3 + 4
that is, I write the program in mathematicalese, hit the right sort of return, and get the answer:
The mechanics of finding the right hand side by executing the operations on the left hand side are handled for me.
Try this on WolframAlpha: integral of x squared times x minus three from -3 to 4
Or don’t… do it by hand if you prefer.
I may be able to figure out the maths bit – figure out how to cast my problem into a mathematical statement – but not necessarily have the skill to solve the problem. I can get the method marks but not do the calculation and get the result. I can write the program. But running the program, dividing 3847835 by 343, calculating the square root of 26,863 using log tables or whatever means, that’s the blocker – that could put me off trying to make use of maths, could put me off learning how to cast a problem into a mathematical form, if all that means is that I can do no more than look at the form as if it were a picture, poem, or any other piece of useless abstraction.
So why don’t we help people see that casting the problem into the mathematical form is the creative bit, the bit the machines can’t do? Because the machines can do the mechanical bit:
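To make that concrete, here’s a minimal sketch in Python – a hypothetical stand-in for the WolframAlpha query above, using only the standard library – showing the machine handling the mechanical bit once the problem has been cast into notation:

```python
from fractions import Fraction
import math

# The "mechanical bit": once a problem is cast into notation,
# the machine executes it for you.
print(3 + 4)             # 7
print(3847835 / 343)     # long division, done instantly
print(math.sqrt(26863))  # no log tables required

# The WolframAlpha query above - "integral of x squared times x minus
# three from -3 to 4" - via the fundamental theorem of calculus: an
# antiderivative of x**2 * (x - 3) is x**4/4 - x**3.
def F(t):
    t = Fraction(t)          # exact rational arithmetic
    return t**4 / 4 - t**3

print(F(4) - F(-3))      # -189/4
```

The creative bit was writing down `x**2 * (x - 3)` and spotting the antiderivative; everything after that is mechanical.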
Maybe this is the approach that the folk over at Computer Based Math are thinking about (h/t Simon Knight/@sjgknight for the link), or maybe it isn’t… But I get the feeling I need to look over what they’re up to… I also note Conrad Wolfram is behind it; we kept crossing paths, a few years ago…I was taken by his passion, and ideas, about how we should be helping folk see that maths can be useful, and how you can use it, but there was always the commercial blocker, the need for Mathematica licenses, the TM; as in Computer-Based Math™.
Then tonight, another example of interactivity, wired in to a new “learning journey” platform that again @sjgknight informs me is released out of Google 20% time…: Oppia (Oppia: a tool for interactive learning).
Here’s an example….
The radio button choice determines where we go next on the learning journey:
Nice – interactive coding environment… 3 + 4 …
What happens if I make a mistake?
History of what I did wrong, inline, which is richer than a normal notebook style, where my repeated attempts would overwrite previous ones…
Depending how many common incorrect or systematic errors we can identify, we may be able to add richer diagnostic next step pathways…
..but then, eventually, success:
The platform is designed as a social one where users can create their own learning journeys and collaborate on their development with others. Licensing is mandated as “CC-BY-SA 4.0 with a waiver of the attribution requirement”. The code for the platform is also open.
The learning journey model is richer and potentially far more complex in graph structure terms than I remember the attempts developed for the [redacted] SocialLearn platform, but the vision appears similar. SocialLearn was also more heavily geared to narrative textual elements in the exposition; by contrast, the current editing tools in Oppia make you feel as if using too much text is not a Good Thing.
So – how are these put together… The blurb suggests it should be easy, but Google folk are clever folk (and I’m not sure how successful they’ve been getting their previous geek style learning platform attempts into education)… here’s an example learning journey – it’s a state machine:
Each block can be edited:
When creating new blocks, the first thing you need is some content:
Then some interaction. For the interactions, a range of input types you might expect:
and some you might not. For example, these are the interactive/executable coding style blocks you can use:
There’s also a map input, though I’m not sure what dialogic information you can get from it when you use it?
After the interaction definition, you can define a set of rules that determine where the next step takes you, depending on the input received.
The rule definitions allow you to trap on the answer provided by the interaction dialogue, optionally provide some feedback, and then identify the next step.
The rule branches are determined by the interaction type. For radio buttons, rules are triggered on the selected answer. For text inputs, simple string distance measures:
For numeric inputs, various bounds:
For the map, what looks like a point within a particular distance of a target point?
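As a thought experiment, the state machine idea can be sketched in a few lines of Python. To be clear, this is not Oppia’s actual data model or API – just a hypothetical illustration of states, answer-matching rules, feedback, and next-step transitions:

```python
# A "learning journey" as a state machine - NOT Oppia's actual data
# model, just a sketch of the idea: each state has some content, plus
# rules mapping a learner's answer to feedback and a next state.
journey = {
    "start": {
        "content": "What is 3 + 4?",
        "rules": [
            # (predicate, feedback, next_state)
            (lambda a: a == "7", "Correct!", "done"),
            (lambda a: a == "12", "You multiplied - try adding.", "start"),
            (lambda a: True, "Not quite - try again.", "start"),  # default
        ],
    },
    "done": {"content": "Well done!", "rules": []},
}

def step(state, answer):
    """Apply the first matching rule; return (feedback, next_state)."""
    for predicate, feedback, next_state in journey[state]["rules"]:
        if predicate(answer):
            return feedback, next_state
    return None, state  # no rules matched: stay put

print(step("start", "12"))  # ('You multiplied - try adding.', 'start')
print(step("start", "7"))   # ('Correct!', 'done')
```

The “richer diagnostic pathways” idea above then amounts to adding more predicates for known systematic errors, each routing to its own remedial state.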
The choice of programming languages currently available in the code interaction type kinda sets the tone about who might play with this…but then, maybe as I suggested to colleague Ray Corrigan yesterday, “I don’t think we’ve started to consider consequences of 2nd half of the chessboard programming languages in/for edu yet?”
All in all – I don’t know… getting the design right is what’ll make for a successful learning journey, and that’s not something you can necessarily do quickly. The interface is, as with many Google interfaces, not the friendliest I’ve seen (function over form, but bad form can sometimes get in the way of the function…).
I was interested to see they’re calling the learning journeys explorations. The Digital Worlds course I ran some time ago used that word too, in the guise of Topic Explorations, but they were a little more open ended, and used a question mechanic to guide reading within a set of suggested resources.
Anyway, one to watch, perhaps, erm, maybe… No badges as yet, but that would be candy to offer at the end of a course, as well as a way of splashing your own brand via the badges. But before that, a state machine design mountain to climb…
One of the advantages of having a relatively long lived blog is that it gives me the ability to look back at the things that were exciting to me several years ago. For example, it was five years ago more or less to the day when I first saw video ads on the underground; and seven and a half years ago since I remarked on the possible relevance of virtual machines (VMs) to OU teaching: Personal Computing Guidance for Distance Education Students. (At the time, I was more excited by portable applications that could be run from USB sticks, the motivating idea being that OU students might want to access course software or applications from arbitrary machines that they didn’t necessarily have enough permissions on to be able to download and install required applications.)
Since then, a couple of OU courses have dabbled with virtual machines – the Linux course that’s now part of the course TM129 – Technologies in Practice makes use of a Linux virtual machine running in VirtualBox, and the digital forensics postgrad course (M812) makes use of a couple of VMs – a Windows box that needs analysing, and a Linux VM that contains the analysis tools.
We’re also looking at using a virtual machine for a new level three/third year equivalent course due out in October 2015 (sic…) on data stuff (short title!;-). I haven’t really been paying as much attention as I probably should have to VMs, but a little bit of playing at the end of last week and over the weekend made me realise the error of my ways…
So what are virtual machines (VMs)? You’re possibly familiar with the phrase “(computer) operating system”, and almost definitely will have heard of Windows or OS X. These are the bits of computer software that provide a desktop on top of your computer hardware, and run the services that allow your applications to talk to the hardware and out into the wider world. Virtual machines are boxes that allow you to run another operating system, as well as applications on top of it, on your own desktop. So a Windows machine can run a box that contains a fully working Linux computer; or if you’re like me and use a Mac, you’ll have a virtual machine that runs a copy of Windows so you can run Internet Explorer on it in order to access the OU’s expense claims system!
Now when it comes to shipping course software, we’re often faced with the problem of getting software to work on whatever operating system our students are using. In a traditional university, with computer labs, the computers in the public areas will all contain the same software, installed from a common source. (OU IT are trying to enforce a similar policy on staff machines at the moment. Referred to in reverential terms as “desktop optimisation”, the idea is that machines will only run the software that IT says can run on it. Which would rule out the possibility of me running pretty much any of the applications I use on a day to day basis. Although I think Macs are outside the optimisation fold for the moment…?)
Ideally, then, we might want students to all run the same operating system, so that we can test software on that system and write one set of instructions for how to use it. But students bring their own devices. And when it comes to installing the software tools we’d like computing students, for example, to install, there can be all sorts of problems getting the software to install properly.
So another option is to provide students with a machine that we control, that doesn’t upset their own settings, and that won’t kill their computer if something goes horribly wrong. (We can’t, for example, require students to run the OU’s optimised desktop, not least because we’d have to pay license fees for the use of Windows, but also because students would rightly get upset if we prevented them from downloading and installing Angry Birds on their own computer!) Virtual machines provide a way of doing this.
As a case in point, the new data course will probably make use of iPython Notebook, among other things. iPython Notebook is a browser accessed application that allows you to develop and execute Python program code via an interactive, browser based user interface, which I find quite attractive from a pedagogical point of view. (This post may get read by OU folk in an OU teaching context, so I am obliged to use the p-word.)
Installing the Python libraries the course will draw on, as well as a variety of databases (PostgreSQL and MongoDB are the ones we’re thinking of using…) could be a major headache for our students, particularly if they aren’t well versed in sysadmin and library installation. But if we define a virtual machine that has the required libraries and applications preinstalled and preconfigured, we can literally contain the grief – if students run an application such as VirtualBox (which they would have to install themselves), we can provide a preconfigured machine (known as a guest) that they can run within their own desktop (part of the host machine), that will make available services that they can access via their normal desktop browser.
So for example, we can build a virtual machine that contains iPython and all the required libraries, that can be defined to automatically run iPython Notebook when it boots, and that can make that notebook available via the host browser. And more than that, we can also configure the Notebook server running on the local guest VM so that it saves notebook files to a directory that is shared between the guest and the host. If a student then switches off, or even deletes, the guest machine, they don’t lose their work…
VMs have been used elsewhere for course delivery too, so we may also be able to learn more about the practicalities of VMs in a course context from those cases. For example, Running a next-gen sequence analysis course using Amazon Web Services describes how virtual machines running on Amazon Cloud services (rather than in boxes running within a VirtualBox container on the user’s desktop) were used for a data analysis course that made use of very large datasets. (This demonstrates another benefit of virtualisation: we can configure a VM so that it can be run in containerised form on a student’s own computer, or run on a machine hosted on the net somewhere, and then accessed from the student’s own machine.)
Something I found really exciting were the VMs defined by @DataMinerUk and @twtrdaithi for use in data journalism applications – Infinite Interns, a range of virtual machines defined using Vagrant (which is super fun to play with!:-) that contain a range of tools useful for data projects.
I also wonder about the extent to which the various MOOCs have made use of VMs… And whether there is an argument to be had in favour of “course boxes” in general…?
PS for a hint at something of what’s possible in using a VM to support a course, imagine Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More as course notes, The official online compendium for Mining the Social Web, 2nd Edition (O’Reilly, 2013) as the way in to your run-at-home computer lab, and Mining-the-Social-Web-2nd-Edition – issues on github as instructor/lab technician support. ’nuff said. The things we’re gonna be prepared to pay for have the potential to change…
I just spotted this:
wouldn't it be wonderful if somebody put together a Coursera course on Bitcoin, covering whole range: crypto, ops, economics, politics?—
stefano bertolo (@sclopit) April 04, 2013
(In case that livelink dies, it’s a tweet from @sclopit: “wouldn’t it be wonderful if somebody put together a Coursera course on Bitcoin, covering whole range: crypto, ops, economics, politics?”)
Here’s a crappy graph I’ve used before…
It hints at how I see different sensemakers working together to help inform folk about how the world works… This was maybe how things were – perhaps the hard edges and labels need changing in a reinvention of how we make sense of the world and communicate it to others?
This was wrong – Publisher led mini-courses – but it still feels like a piece in a possibly new-cut jigsaw.
By chance, I also spotted this for the first time yesterday, even though it’s been around for some time: O’Reilly School of Technology. Self-paced, online courses with an emailable tutor. (Similar context – The Business of HE Moves On….)
And this today: Facts Are Sacred – “A new book published by the team behind the Datablog explains how we do data journalism at the Guardian.” Books are often handy things to pin courses round, of course… (Which is to say – is there a MOOC in that?)
FutureLearn has been signing up ‘non-academic’ partners – the British Library, and the British Council, for example. I wonder if the BBC are going to join the party too? If so, then would there be a place for other publishers…?
…or does that feel wrong? Maybe the press doesn’t have the right sort of “independent voice” to deliver “academic” courses? Which is why we maybe need to rethink the cutting of the jigsaw, or at least, a new view over it.
Who knows how the MOOC thing will play out – it reminds me in part of the educational packs companies hand out… I’m sure you know the sort of thing: Southern Water’s Waterwise packs, or ScottishPower Renewables Education Pack, Herefordshire Council schools’ waste education pack, Friends of the Earth information booklets etc etc. Propaganda? Biased to the point of distorting a “true” academic educational line? Or “legitimate” educational resources? Whatever that means? Maybe it’s more appropriate to ask if they are useful resources in the support of learning?
So are MOOCs just educational resource packs, promoting universities rather than companies or charities? But rather than catering to schools, do they maybe cater to well segmented “media consumers” looking for a new style of publication (the partwork “course”)?
And are there opportunities for media and academe to join forces producing – in quick time – long form structured pieces on the likes of, I dunno, Bitcoin, maybe, that could cover a whole range of related topics, such as in the Bitcoin case: crypto, ops, economics, politics?
PS apparently FutureLearn are hiring Ruby on Rails developers (Simon Pearson/@minor9th: “On the look out for lovely Ruby on Rails devs who like working on Good Projects. FutureLearn needs you! http://www.futurelearn.com – DM me”)
Another strong piece of TV commissioning via the Open University Open Media Unit (OMU) aired this week in the guise of The Challenger, a drama documentary telling the tale of Richard Feynman’s role in the accident enquiry around the space shuttle Challenger disaster. (OMU also produced an ethical game if you want to try your own hand at leading an ethics investigation.)
Running a quick search for tweets containing the terms feynman challenger to generate a list of names of Twitter users commenting around the programme, I grabbed a sample of their friends (max 197 per person) and then plotted the commonly followed accounts within that sample.
If you treat this image as a map, you can see regions where the accounts are (broadly) related by topic or interest category. What regions can you see?! (For more on this technique, see Communities and Connections: Social Interest Mapping.)
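For anyone wondering what the “commonly followed” step amounts to computationally, here’s a rough Python sketch. The account names and counts below are invented for illustration; the real maps were built from harvested Twitter friends lists:

```python
from collections import Counter

# Sketch of the "commonly followed accounts" step: given a sample of
# commenters and each one's friends list, count how many of the sampled
# users follow each account. Heavily co-followed accounts become the
# nodes of the interest map. All names and numbers here are invented.
friends = {
    "alice": {"ProfBrianCox", "NASA", "bbcr4"},
    "bob":   {"ProfBrianCox", "NASA", "daraobriain"},
    "carol": {"NASA", "bbcr4", "daraobriain"},
}

commonly_followed = Counter()
for sampled_user, followed in friends.items():
    commonly_followed.update(followed)

# Keep only accounts followed by at least two of the sampled users
print([acct for acct, n in commonly_followed.most_common() if n >= 2])
```

The co-following counts can then be turned into an edge-weighted graph and laid out with a network tool such as Gephi, which is where the “regions” in the map come from.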
I also ran a search for tweets containing bbc2 challenger:
Let’s peek into some of the regions…”Space” related twitter accounts for example:
Or news media:
(from which we might conclude that the audience was also a Radio 4 audience?!;-)
How about a search on bbc2 feynman?
Again, we see distinct regions. As with the other maps, the programme audience also seems to have an interest in following popular science writers:
Interesting? Possibly – the maps provide a quick profile of the audience, and maybe confirm it’s the sort of audience we might have expected. Notable perhaps are the prominence of Brian Cox and Dara O’Briain, who’ve also featured heavily in BBC science programming. Around the edges, we also see what sorts of comedy or entertainment talent appeal to the audience – no surprises to see David Mitchell, Charlie Brooker and Armando Iannucci in there, though I wouldn’t necessarily have factored in Eddie Izzard (though we’d need to look at “proper” baseline interest levels of general audiences to see whether any of these comedians are over-represented in these samples compared to commonly followed folk in a “random” sample of UK TV watchers on Twitter. The patterns of following may be “generally true” rather than highlighting folk atypically followed by this audience.)
Useful? Who knows…?!
(I have PDF versions of the full plots if anyone wants copies…)
[The following is my *personal* opinion only. I know as much about FutureLearn as Google does. Much of the substance of this post was circulated internally within the OU prior to posting here.]
In common with other MOOC platforms, one of the possible ways of positioning FutureLearn is as a marketing platform for universities. Another might see it as a tool for delivering informal versions of courses to learners who are not currently registered with a particular institution. [A third might position it in some way around the notion of “learning analytics”, eg as described in a post today by Simon Buckingham Shum: The emerging MOOC data/analytics ecosystem] If I understand it correctly, “quality of the learning experience” will be at the heart of the FutureLearn offering. But what of innovation? In the same way that there is often a “public benefit feelgood” effect for participants in medical trials, could FutureLearn provide a way of engaging, at least to a limited extent, in “learning trials”?
This need not be onerous, but could simply relate to trialling different exercises or wording or media use (video vs image vs interactive) in particular parts of a course. In the same way that Google may be running dozens of different experiments on its homepage in different combinations at any one time, could FutureLearn provide universities with a platform for trying out differing learning experiments whilst running their MOOCs?
The platform need not be too complex – at first. Google Analytics provides a mechanism for running A/B tests and “experiments” across users who have not disabled Google Analytics cookies, and as such may be appropriate for initial trialling of learning content A/B tests. The aim? Deciding on metrics is likely to prove a challenge, but we could start with simple things to try out – does the ordering or wording of resource lists affect click-through or download rates for linked resources, for example? (And what should we do about those links that never get clicked and those resources that are never downloaded?) Does offering a worked-through exercise before an interactive quiz improve success rates on the quiz? And so on.
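To give a flavour of the analysis such an experiment might need, here’s a standard two-proportion z-test sketched in stdlib Python – did variant B’s click-through rate differ from variant A’s? The click counts are invented for illustration, not from any real trial:

```python
import math

# Sketch of the analysis behind a content A/B test: did variant B's
# click-through rate differ from variant A's? Two-proportion z-test,
# standard library only; the click counts below are invented.
def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # two-sided p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(clicks_a=120, n_a=1000, clicks_b=150, n_b=1000)
print(round(z, 2), round(p, 3))
```

With MOOC-scale cohorts in the thousands, even small wording effects start to clear the conventional p < 0.05 bar – which is exactly why the scale argument matters here.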
The OU has traditionally been cautious when running learning experiments, delivering fee-waived pilots rather than testing innovations as part of A/B testing on live courses with large populations. In part this may be through a desire to be ‘equitable’ and not jeopardise the learning experience for any particular student by providing them with a lesser quality offering than we could*. (At the same time, the OU celebrates the diversity and range of skills and abilities of OU students, which makes treating them all in exactly the same way seem rather incongruous?)
* Medical trials face similar challenges. But it must be remembered that we wouldn’t trial a resource we thought stood a good chance of being /less/ effective than one we were already running… For a brief overview of the broken worlds of medical trials and medical academic publishing, as well as how they could operate, see Ben Goldacre’s Bad Pharma.
FutureLearn could start to change that, and open up a pathway for experimentally testing innovations in online learning as well as at a more micro-level, tuning images and text in order to optimise content for its anticipated use. By providing course publishers with a means of trialling slightly different versions of their course materials, FutureLearn could provide an effective environment for trialling e-learning innovations. Branding FutureLearn not only as a platform for quality learning, but also as a platform for “doing” innovation in learning, gives it a unique point of difference. Organisations trialling on the platform do not face the threat of challenges made about them delivering different learning experiences to students on formally offered courses, but participants in courses are made aware that they may be presented with slightly different variants of the course materials to each other. (Or they aren’t told… if an experiment is based on success in reading a diagram where the labels are presented in different fonts or slightly different positions, or with or without arrows, and so on, does that really matter if the students aren’t told?)
Consultancy opportunities are also likely to arise in the design and analysis of trials and new interventions. The OU is also provided with both an opportunity to act according to its beacon status as far as communicating innovative adult online learning/pedagogy goes, as well as gaining access to large trial populations.
Note that what I’m proposing is not some sort of magical, shiny learning analytics dashboard; it’d be a procedural, could-have-been-doing-it-for-years application of web analytics that makes use of online learning cohorts that are at least a magnitude or two larger than is typical in a traditional university course setting. Numbers that are maybe big enough to spot patterns of behaviour in (either positive, or avoidant).
There are ethical challenges and educational challenges in following such a course of action, of course. But in the same way that doctors might randomly prescribe between two equally good (as far as they know) treatments, or systematically use one particular treatment over another that is equally good, I know that folk who create learning materials also pick particular pedagogical treatments “just because”. So why shouldn’t we start trialling on a platform that is branded as such?
Once again, note that I am not part of the FutureLearn project team and my knowledge of it is largely limited to what I have found on Google.
See also: Treating MOOC Platforms as Websites to be Optimised, Pure and Simple…. For some very old “course analytics” ideas about using Google Analytics, see Online Course Analytics, which resulted in OUseful blogarchive: “course analytics”. Note that these experiments never got as far as content optimisation, A/B testing, search log analysis etc. The approach I started to follow with the Library Analytics series had a little more success, but still never really got past the starting post and into a useful analyse/adapt cycle. Google Analytics has moved on since then of course… If I were to start over, I’d probably focus on creating custom dashboards to illustrate very particular use cases, as well as…
A month or so on from its PR launch, and with a steady trickle of press mentions since then (though no new updates on the website?), I’m guessing that the folk over at FutureLearn must be putting the hours in trying to work out what the platform offering will actually consist of, or what the sustainability, er, business model will actually be. (I have no inside information on the FutureLearn project…)
One of the things I have sort of picked up from online glimpses of things said and commented upon is that the USP is going to relate to the quality of teaching/pedagogy (erm, I think?!). I’m not sure if “proven” learning designs will be baked into the platform, constraining the way courses are delivered (in which case, there’s likely to be something of a bootstrap problem in getting the first courses out if they have to wait for the platform?) or whether the quality will flow “naturally” from the fact that the courses will be provided by British universities (?!), but if innovation is also to flow, it’ll be interesting to see how it’ll be supported…?
— Fred Garnett (@fredgarnett) January 22, 2013
…and whether it will be done through “open” means? (I can haz API? But what would it do?!?) If it is built up from open code, I wonder to what extent it might draw on code and ideas used in other learning platforms (for example, Moodle, to which the OU is already a core contributor, I think, or Class2Go) as well as drawing on learning from whatsoever folk managed to learn from the OU’s other open learning builds – OpenLearn/Labspace (content and community), iSpot (community and reputation), Cloudworks (community and resource sharing) or the very many expensive attempts at SocialLearn (wtf?!) that never saw the light of day? I can’t imagine a FutureLearn offering being based on the Google Coursebuilder, but it wouldn’t surprise me if it ended up with something being bought in… Time to start watching the tender site, maybe, though surely that would knock any start date back too far?
One thing that would be nice to see would be a project using something akin to the open, agile development process used by the @GDSteam, which is opening up the backend to View Source as well as the front-end…
I also wonder about the extent to which it might be possible to reuse ideas from commercial website design and development in the way the site is architected. This will be anathema to many, but I wonder just how far the idea could be pushed? Start with the idea of analytics, and define funnels for how folk might be expected to move through course units. Associate activities with some sort of intentional action, such as popping items into a shopping basket, or maybe the equivalent of 1-click purchases. Making it through to the end of a course can be seen as completing that sort of purchase (chuck in some open badge framework badges as a reward for good measure;-). Ad-delivery mechanisms can be rethought as personalised content delivery (eg contextual content delivery, banner ads as signage or email-pre-emptive ads). Use search data to help refine content pages, and A/B testing to try out multiple variants of course materials and exercises (weak example). (I have never understood why the OU doesn’t engage in A/B tested delivery of course materials as a matter of course? OU courses are delivered at large enough scale, and containing more than enough content, to trial different ways of delivering content and assessment without jeopardising overall outcomes for any individual student.)
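The funnel idea above can be sketched in a few lines of Python – the step names and per-student event logs here are invented purely for illustration, not drawn from any real platform:

```python
# Sketch of a course "funnel": given per-student event logs, count how
# many students reached each successive step, and so where the drop-off
# happens. Steps and logs are invented for illustration.
FUNNEL = ["enrolled", "unit1_done", "unit2_done", "completed"]

logs = {
    "s1": {"enrolled", "unit1_done", "unit2_done", "completed"},
    "s2": {"enrolled", "unit1_done"},
    "s3": {"enrolled"},
    "s4": {"enrolled", "unit1_done", "unit2_done"},
}

def funnel_counts(logs, steps):
    """Count a student at a step only if they also hit every earlier step."""
    counts, remaining = [], set(logs)
    for step in steps:
        remaining = {s for s in remaining if step in logs[s]}
        counts.append((step, len(remaining)))
    return counts

for step, n in funnel_counts(logs, FUNNEL):
    print(step, n)  # enrolled 4, unit1_done 3, unit2_done 2, completed 1
```

In a web analytics tool the same thing is configured as a goal funnel rather than coded by hand, but the logic is no more than this.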
All of the above – search analysis, web analytics, contextual content/ad-serving and A/B testing – can be managed through ad servers and Google Analytics (and to a lesser extent Piwik, though they are open to additional contributions), which could provide a minimum-viable product tooling basis for a testing and analytics framework that’s ready to go now? Such an approach is far too scruffy and ad hoc, of course, for a “proper” platform project…
PS by the by, I notice that JISC Advance’s Generic eMarketplace (or GeM) for Work Based Learning (“gemforwbl”, or looking at the logo, “gee em for weeble” – will it wobble? will it fall down?) is now open and ready for business… and as for the logo, what on earth is it supposed to represent?
Answers in the comments, etc etc, please…
PPS As ever, the opinions expressed herein are not necessarily even reflective of my own, let alone those of my employer…;-)