Archive for the ‘OU2.0’ Category
I couldn’t get to sleep last night mulling over thoughts that had surfaced after posting Time to Drop Calculators in Favour of Notebook Programming?. This sort of thing: what goes on when you get someone to add three and four?
Part of the problem is associated with converting the written problem into mathematical notation:
3 + 4
For more complex problems, this may require invoking some equations, or mathematical tricks or operations (the chain rule, the dot product, and so on).
3 + 4
Cast the problem into numbers then try to solve it:
3 + 4 =
That equals sign gets me doing some mental arithmetic. On a calculator, there’s a sequence of button presses, then pressing equals gives the answer.
In a notebook, I type:
3 + 4
that is, I write the program in mathematicalese, hit the right sort of return, and get the answer:
The mechanics of finding the right hand side by executing the operations on the left hand side are handled for me.
Try this on WolframAlpha: integral of x squared times x minus three from -3 to 4
Or don’t.. do it by hand if you prefer.
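Or, for that matter, try it in a notebook. Here’s a minimal sketch using SymPy, assuming the query means the integral of x squared times (x minus three) from -3 to 4 – the mechanical bit is handled for me:

# A hedged sketch: the same definite integral in a Python notebook, using SymPy.
# (Assumes the WolframAlpha query means the integral of x**2 * (x - 3) from -3 to 4.)
from sympy import symbols, integrate

x = symbols('x')
result = integrate(x**2 * (x - 3), (x, -3, 4))
print(result)   # -189/4, i.e. -47.25 - the "running the program" bit is done for me

Casting the problem into that form is still my job; grinding out the antiderivative isn’t.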
I may be able to figure out the maths bit – figure out how to cast my problem into a mathematical statement – but not necessarily have the skill to solve the problem. I can get the method marks but not do the calculation and get the result. I can write the program. But running the program, dividing 3847835 by 343, calculating the square root of 26,863 using log tables or whatever means, that’s the blocker – that could put me off trying to make use of maths, could put me off learning how to cast a problem into a mathematical form, if all that means is that I can do no more than look at the form as if it were a picture, poem, or any other piece of useless abstraction.
So why don’t we help people see that casting the problem into mathematical form is the creative bit, the bit the machines can’t do? Because the machines can do the mechanical bit:
Maybe this is the approach that the folk over at Computer Based Math are thinking about (h/t Simon Knight/@sjgknight for the link), or maybe it isn’t… But I get the feeling I need to look over what they’re up to… I also note Conrad Wolfram is behind it; we kept crossing paths a few years ago… I was taken by his passion, and ideas, about how we should be helping folk see that maths can be useful, and how you can use it, but there was always the commercial blocker – the need for Mathematica licenses, the TM, as in Computer-Based Math™.
Then tonight, another example of interactivity, wired in to a new “learning journey” platform that again @sjgknight informs me is released out of Google 20% time…: Oppia (Oppia: a tool for interactive learning).
Here’s an example….
The radio button choice determines where we go next on the learning journey:
Nice – interactive coding environment… 3 + 4 …
What happens if I make a mistake?
History of what I did wrong, inline, which is richer than a normal notebook style, where my repeated attempts would overwrite previous ones…
Depending how many common incorrect or systematic errors we can identify, we may be able to add richer diagnostic next step pathways…
..but then, eventually, success:
The platform is designed as a social one where users can create their own learning journeys and collaborate on their development with others. Licensing is mandated as “CC-BY-SA 4.0 with a waiver of the attribution requirement”. The code for the platform is also open.
The learning journey model is richer and potentially far more complex in graph structure terms than I remember the attempts developed for the [redacted] SocialLearn platform, but the vision appears similar. SocialLearn was also more heavily geared to narrative textual elements in the exposition; by contrast, the current editing tools in Oppia make you feel as if using too much text is not a Good Thing.
So – how are these put together… The blurb suggests it should be easy, but Google folk are clever folk (and I’m not sure how successful they’ve been at getting their previous geek-style learning platform attempts into education)… here’s an example learning journey – it’s a state machine:
Each block can be edited:
When creating new blocks, the first thing you need is some content:
Then some interaction. For the interactions, a range of input types you might expect:
and some you might not. For example, these are the interactive/executable coding style blocks you can use:
There’s also a map input, though I’m not sure what dialogic information you can get from it when you use it?
After the interaction definition, you can define a set of rules that determine where the next step takes you, depending on the input received.
The rule definitions allow you to trap on the answer provided by the interaction dialogue, optionally provide some feedback, and then identify the next step.
The rule branches are determined by the interaction type. For radio buttons, rules are triggered on the selected answer. For text inputs, simple string distance measures:
For numeric inputs, various bounds:
For the map, what looks like a point within a particular distance of a target point?
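To make the state machine idea a little more concrete, here’s a purely illustrative Python sketch – emphatically not Oppia’s actual data format – of a single step whose rules trap on the learner’s answer, attach feedback, and name the next state:

# Purely illustrative sketch of a "learning journey" step as a state machine node.
# This is NOT Oppia's internal format - just one way of picturing content + interaction + rules.
step = {
    "name": "add-two-numbers",
    "content": "In the box below, evaluate 3 + 4.",
    "interaction": "numeric_input",
    "rules": [
        # each rule: a condition on the answer, optional feedback, and the next state
        {"if": lambda ans: ans == 7, "feedback": "Nice.", "next": "success"},
        {"if": lambda ans: ans == 12, "feedback": "Looks like you multiplied - try adding.", "next": "add-two-numbers"},
        {"if": lambda ans: True, "feedback": "Not quite - have another go.", "next": "add-two-numbers"},
    ],
}

def next_state(step, answer):
    # walk the rules in order; the first matching rule wins
    for rule in step["rules"]:
        if rule["if"](answer):
            return rule["feedback"], rule["next"]

print(next_state(step, 12))  # ('Looks like you multiplied - try adding.', 'add-two-numbers')

A real exploration is richer than this, of course, but the shape – content, interaction, rules keyed on the answer – is the same state-machine idea.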
The choice of programming languages currently available in the code interaction type kinda sets the tone about who might play with this…but then, maybe as I suggested to colleague Ray Corrigan yesterday, “I don’t think we’ve started to consider consequences of 2nd half of the chessboard programming languages in/for edu yet?”
All in all – I don’t know… getting the design right is what’ll make for a successful learning journey, and that’s not something you can necessarily do quickly. The interface is, as with many Google interfaces, not the friendliest I’ve seen (function over form, but bad form can sometimes get in the way of the function…).
I was interested to see they’re calling the learning journeys explorations. The Digital Worlds course I ran some time ago used that word too, in the guise of Topic Explorations, but they were a little more open ended, and used a question mechanic to guide reading within a set of suggested resources.
Anyway, one to watch, perhaps, erm, maybe… No badges as yet, but that would be candy to offer at the end of a course, as well as a way of splashing your own brand via the badges. But before that, a state machine design mountain to climb…
One of the advantages of having a relatively long lived blog is that it gives me the ability to look back at the things that were exciting to me several years ago. For example, it was five years ago more or less to the day when I first saw video ads on the underground; and seven and a half years since I remarked on the possible relevance of virtual machines (VMs) to OU teaching: Personal Computing Guidance for Distance Education Students. (At the time, I was more excited by portable applications that could be run from USB sticks, the motivating idea being that OU students might want to access course software or applications from arbitrary machines that they didn’t necessarily have enough permissions on to be able to download and install required applications.)
Since then, a couple of OU courses have dabbled with virtual machines – the Linux course that’s now part of the course TM129 – Technologies in Practice makes use of a Linux virtual machine running in VirtualBox, and the digital forensics postgrad course (M812) makes use of a couple of VMs – a Windows box that needs analysing, and a Linux VM that contains the analysis tools.
We’re also looking at using a virtual machine for a new level three/third year equivalent course due out in October 2015 (sic…) on data stuff (short title!;-). I haven’t really been paying as much attention as I probably should have to VMs, but a little bit of playing at the end of last week and over the weekend made me realise the error of my ways…
So what are virtual machines (VMs)? You’re possibly familiar with the phrase “(computer) operating system”, and almost definitely will have heard of Windows and OS X. These are the bits of computer software that provide a desktop on top of your computer hardware, and run the services that allow your applications to talk to the hardware and out into the wider world. Virtual machines are boxes that allow you to run another operating system, as well as applications on top of it, on your own desktop. So a Windows machine can run a box that contains a fully working Linux computer; or if you’re like me and use a Mac, you’ll have a virtual machine that runs a copy of Windows so you can run Internet Explorer on it in order to access the OU’s expense claims system!
Now when it comes to shipping course software, we’re often faced with the problem of getting software to work on whatever operating system our students are using. In a traditional university, with computer labs, the computers in the public areas will all contain the same software, installed from a common source. (OU IT are trying to enforce a similar policy on staff machines at the moment. Referred to in reverential terms as “desktop optimisation”, the idea is that machines will only run the software that IT says can run on it. Which would rule out the possibility of me running pretty much any of the applications I use on a day to day basis. Although I think Macs are outside the optimisation fold for the moment…?)
Ideally, then, we might want students to all run the same operating system, so that we can test software on that system and write one set of instructions for how to use it. But students bring their own devices. And when it comes to installing the software tools we’d like computing students, for example, to install, there can be all sorts of problems getting the software to install properly.
So another option is to provide students with a machine that we control, that doesn’t upset their own settings, and that won’t kill their computer if something goes horribly wrong. (We can’t, for example, require students to run the OU’s optimised desktop, not least because we’d have to pay license fees for the use of Windows, but also because students would rightly get upset if we prevented them from downloading and installing Angry Birds on their own computer!) Virtual machines provide a way of doing this.
As a case in point, the new data course will probably make use of iPython Notebook, among other things. iPython Notebook is a browser-accessed application that allows you to develop and execute Python program code via an interactive, browser-based user interface, which I find quite attractive from a pedagogical point of view. (This post may get read by OU folk in an OU teaching context, so I am obliged to use the p-word.)
Installing the Python libraries the course will draw on, as well as a variety of databases (PostgreSQL and MongoDB are the ones we’re thinking of using…) could be a major headache for our students, particularly if they aren’t well versed in sysadmin and library installation. But if we define a virtual machine that has the required libraries and applications preinstalled and preconfigured, we can literally contain the grief – if students run an application such as VirtualBox (which they would have to install themselves), we can provide a preconfigured machine (known as a guest) that they can run within their own desktop (part of the host machine), that will make available services that they can access via their normal desktop browser.
So for example, we can build a virtual machine that contains iPython and all the required libraries, that can be defined to automatically run iPython Notebook when it boots, and that can make that notebook available via the host browser. And more than that, we can also configure the Notebook server running on the local guest VM so that it saves notebook files to a directory that is shared between the guest and the host. If a student then switches off, or even deletes, the guest machine, they don’t lose their work…
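To make that a little more concrete, here’s a hedged sketch of the sort of notebook server configuration that might live inside such a guest VM – an ipython_notebook_config.py fragment, where the port number and the shared folder path are illustrative assumptions rather than settings from an actual build:

# Hedged sketch of an ipython_notebook_config.py for a course VM (settings are illustrative).
# The guest runs the notebook server on all interfaces so the host browser can reach it
# via a forwarded port, and keeps notebooks in a folder shared with the host machine.
c = get_config()

c.NotebookApp.ip = '0.0.0.0'          # listen on all guest interfaces, not just localhost
c.NotebookApp.port = 8888             # forward this port to the host in the VM definition
c.NotebookApp.open_browser = False    # the guest is headless; the host browser does the viewing
c.NotebookApp.notebook_dir = '/vagrant/notebooks'  # hypothetical shared guest/host directory

With something along those lines in place, the student points their own browser at the forwarded port, and any notebooks they save end up in the shared folder on the host.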
VMs have been used elsewhere for course delivery too, so we may also be able to learn more about the practicalities of VMs in a course context from those cases. For example, Running a next-gen sequence analysis course using Amazon Web Services describes how virtual machines running on Amazon Cloud services (rather than in boxes running within a VirtualBox container on the user’s desktop) were used for a data analysis course that made use of very large datasets. (This demonstrates another benefit of virtualisation: we can configure a VM so that it can be run in containerised form on a student’s own computer, or run on a machine hosted on the net somewhere, and then accessed from the student’s own machine.)
Something I found really exciting were the VMs defined by @DataMinerUk and @twtrdaithi for use in data journalism applications – Infinite Interns, a range of virtual machines defined using Vagrant (which is super fun to play with!:-) that contain a range of tools useful for data projects.
I also wonder about the extent to which the various MOOCs have made use of VMs… And whether there is an argument to be had in favour of “course boxes” in general…?
PS for a hint at something of what’s possible in using a VM to support a course, imagine Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More as course notes, The official online compendium for Mining the Social Web, 2nd Edition (O’Reilly, 2013) as the way in to your run-at-home computer lab, and Mining-the-Social-Web-2nd-Edition – issues on github as instructor/lab technician support. ’nuff said. The things we’re gonna be prepared to pay for have the potential to change…
I just spotted this:
wouldn't it be wonderful if somebody put together a Coursera course on Bitcoin, covering whole range: crypto, ops, economics, politics?—
stefano bertolo (@sclopit) April 04, 2013
(In case that livelink dies, it’s a tweet from @sclopit: “wouldn’t it be wonderful if somebody put together a Coursera course on Bitcoin, covering whole range: crypto, ops, economics, politics?”)
Here’s a crappy graph I’ve used before…
It hints at how I see different sensemakers working together to help inform folk about how the world works… This was maybe how things were – maybe hard edges and labels need changing in a reinvention of how we make sense of the world and communicate it to others?
This was wrong – Publisher led mini-courses – but it still feels like a piece in a possibly new-cut jigsaw.
By chance, I also spotted this for the first time yesterday, even though it’s been around for some time: O’Reilly School of Technology. Self-paced, online courses with emailable tutor. (Similar context – The Business of HE Moves On….)
And this today: Facts Are Sacred – “A new book published by the team behind the Datablog explains how we do data journalism at the Guardian.” Books are often handy things to pin courses round, of course… (Which is to say – is there a MOOC in that?)
FutureLearn has been signing up ‘non-academic’ partners – the British Library, and the British Council, for example. I wonder if the BBC are going to join the party too? If so, then would there be a place for other publishers…?
…or does that feel wrong? Maybe the press doesn’t have the right sort of “independent voice” to deliver “academic” courses? Which is why we maybe need to rethink the cutting of the jigsaw, or at least, take a new view over it.
Who knows how the MOOC thing will play out – it reminds me in part of the educational packs companies hand out… I’m sure you know the sort of thing: Southern Water’s Waterwise packs, or ScottishPower Renewables Education Pack, Herefordshire Council schools’ waste education pack, Friends of the Earth information booklets etc etc. Propaganda? Biased to the point of distorting a “true” academic educational line? Or “legitimate” educational resources? Whatever that means? Maybe it’s more appropriate to ask if they are useful resources in the support of learning?
So are MOOCs just educational resource packs, promoting universities rather than companies or charities? But rather than catering to schools, do they maybe cater to well segmented “media consumers” looking for a new style of publication (the partwork “course”)?
And are there opportunities for media and academe to join forces producing – in quick time – long form structured pieces on the likes of, I dunno, Bitcoin, maybe, that could cover a whole range of related topics, such as in the Bitcoin case: crypto, ops, economics, politics?
PS apparently FutureLearn are hiring Ruby on Rails developers (Simon Pearson/@minor9th: “On the look out for lovely Ruby on Rails devs who like working on Good Projects. FutureLearn needs you! http://www.futurelearn.com – DM me”)
Another strong piece of TV commissioning via the Open University Open Media Unit (OMU) aired this week in the guise of The Challenger, a drama documentary telling the tale of Richard Feynman’s role in the accident enquiry around the space shuttle Challenger disaster. (OMU also produced an ethical game if you want to try your own hand at leading an ethics investigation.)
Running a quick search for tweets containing the terms feynman challenger to generate a list of names of Twitter users commenting around the programme, I grabbed a sample of their friends (max 197 per person) and then plotted the commonly followed accounts within that sample.
If you treat this image as a map, you can see regions where the accounts are (broadly) related by topic or interest category. What regions can you see?! (For more on this technique, see Communities and Connections: Social Interest Mapping.)
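For anyone wondering about the mechanics of the data grab (rather than the plotting), here’s a heavily hedged sketch using tweepy – keys, paging, rate limit handling and the graph layout step are all omitted, and the call names assume the Twitter REST API wrappers of the time:

# Hedged sketch of the audience-mapping data grab (illustrative only; API keys, paging
# and the actual graph plotting are omitted).
import tweepy
from collections import Counter

auth = tweepy.OAuthHandler("YOUR-CONSUMER-KEY", "YOUR-CONSUMER-SECRET")   # placeholder keys
auth.set_access_token("YOUR-ACCESS-TOKEN", "YOUR-ACCESS-TOKEN-SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# 1. find people tweeting about the programme
tweeters = {status.user.screen_name
            for status in api.search(q="feynman challenger", count=100)}

# 2. sample each tweeter's friends (the accounts they follow), capped per person
friend_counts = Counter()
for screen_name in tweeters:
    friend_counts.update(api.friends_ids(screen_name=screen_name)[:197])

# 3. the commonly followed accounts (as user ids - you'd look the screen names up
#    before plotting) are the candidates for the "interest map"
print(friend_counts.most_common(20))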
I also ran a search for tweets containing bbc2 challenger:
Let’s peek into some of the regions… “Space” related twitter accounts, for example:
Or news media:
(from which we might conclude that the audience was also a Radio 4 audience?!;-)
How about a search on bbc2 feynman?
Again, we see distinct regions. As with the other maps, the programme audience also seems to have an interest in following popular science writers:
Interesting? Possibly – the maps provide a quick profile of the audience, and maybe confirm it’s the sort of audience we might have expected. Notable perhaps are the prominence of Brian Cox and Dara O’Briain, who’ve also featured heavily in BBC science programming. Around the edges, we also see what sorts of comedy or entertainment talent appeal to the audience – no surprises to see David Mitchell, Charlie Brooker and Armando Iannucci in there, though I wouldn’t necessarily have factored in Eddie Izzard (though we’d need to look at “proper” baseline interest levels of general audiences to see whether any of these comedians are over-represented in these samples compared to commonly followed folk in a “random” sample of UK TV watchers on Twitter. The patterns of following may be “generally true” rather than highlighting folk atypically followed by this audience.)
Useful? Who knows…?!
(I have PDF versions of the full plots if anyone wants copies…)
[The following is my *personal* opinion only. I know as much about FutureLearn as Google does. Much of the substance of this post was circulated internally within the OU prior to posting here.]
In common with other MOOC platforms, one of the possible ways of positioning FutureLearn is as a marketing platform for universities. Another might see it as a tool for delivering informal versions of courses to learners who are not currently registered with a particular institution. [A third might position it in some way around the notion of "learning analytics", eg as described in a post today by Simon Buckingham Shum: The emerging MOOC data/analytics ecosystem] If I understand it correctly, “quality of the learning experience” will be at the heart of the FutureLearn offering. But what of innovation? In the same way that there is often a “public benefit feelgood” effect for participants in medical trials, could FutureLearn provide a way of engaging, at least to a limited extent, in “learning trials”.
This need not be onerous, but could simply relate to trialling different exercises or wording or media use (video vs image vs interactive) in particular parts of a course. In the same way that Google may be running dozens of different experiments on its homepage in different combinations at any one time, could FutureLearn provide universities with a platform for trying out differing learning experiments whilst running their MOOCs?
The platform need not be too complex – at first. Google Analytics provides a mechanism for running A/B tests and “experiments” across users who have not disabled Google Analytics cookies, and as such may be appropriate for initial trialling of learning content A/B tests. The aim? Deciding on metrics is likely to prove a challenge, but we could start with simple things to try out – does the ordering or wording of resource lists affect click-through or download rates for linked resources, for example? (And what should we do about those links that never get clicked and those resources that are never downloaded?) Does offering a worked through exercise before an interactive quiz improve success rates on the quiz, and so on.
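The analysis side needn’t be exotic either. By way of illustration – the numbers here are invented – a simple test on click-through counts for two variants of a resource list might look like this:

# Illustrative sketch: comparing click-through rates for two variants of a resource list.
# The counts are invented; the test is a standard chi-squared test on a 2x2 table.
from scipy.stats import chi2_contingency

clicked_A, shown_A = 120, 1000     # variant A: 12% click-through
clicked_B, shown_B = 150, 1000     # variant B: 15% click-through

table = [[clicked_A, shown_A - clicked_A],
         [clicked_B, shown_B - clicked_B]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)   # a small p-value suggests the difference isn't just noise

With cohorts of MOOC scale, even modest differences between variants should be detectable this way.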
The OU has traditionally been cautious when running learning experiments, delivering fee-waived pilots rather than testing innovations as part of A/B testing on live courses with large populations. In part this may be through a desire to be ‘equitable’ and not jeopardise the learning experience for any particular student by providing them with a lesser quality offering than we could*. (At the same time, the OU celebrates the diversity and range of skills and abilities of OU students, which makes treating them all in exactly the same way seem rather incongruous?)
* Medical trials face similar challenges. But it must be remembered that we wouldn’t trial a resource we thought stood a good chance of being /less/ effective than one we were already running… For a brief overview of the broken worlds of medical trials and medical academic publishing, as well as how they could operate, see Ben Goldacre’s Bad Pharma for an intro.
FutureLearn could start to change that, and open up a pathway for experimentally testing innovations in online learning as well as at a more micro-level, tuning images and text in order to optimise content for its anticipated use. By providing course publishers with a means of trialling slightly different versions of their course materials, FutureLearn could provide an effective environment for trialling e-learning innovations. Branding FutureLearn not only as a platform for quality learning, but also as a platform for “doing” innovation in learning, gives it a unique point of difference. Organisations trialling on the platform do not face the threat of challenges made about them delivering different learning experiences to students on formally offered courses, but participants in courses are made aware that they may be presented with slightly different variants of the course materials to each other. (Or they aren’t told… if an experiment is based on success in reading a diagram where the labels are presented in different fonts or slightly different positions, or with or without arrows, and so on, does that really matter if the students aren’t told?)
Consultancy opportunities are also likely to arise in the design and analysis of trials and new interventions. The OU is also provided with both an opportunity to act according to its beacon status as far as communicating innovative adult online learning/pedagogy goes, as well as gaining access to large trial populations.
Note that what I’m proposing is not some sort of magical, shiny learning analytics dashboard; it’d be a procedural, could-have-been-doing-it-for-years application of web analytics that makes use of online learning cohorts that are at least an order of magnitude or two larger than is typical in a traditional university course setting. Numbers that are maybe big enough to spot patterns of behaviour in (either positive, or avoidant).
There are ethical challenges and educational challenges in following such a course of action, of course. But in the same way that doctors might randomly prescribe between two equally good (as far as they know) treatments, or who systematically use one particular treatment over another that is equally good, I know that folk who create learning materials also pick particular pedagogical treatments “just because”. So why shouldn’t we start trialling on a platform that is branded as such?
Once again, note that I am not part of the FutureLearn project team and my knowledge of it is largely limited to what I have found on Google.
See also: Treating MOOC Platforms as Websites to be Optimised, Pure and Simple…. For some very old “course analytics” ideas about using Google Analytics, see Online Course Analytics, which resulted in OUseful blogarchive: “course analytics”. Note that these experiments never got as far as content optimisation, A/B testing, search log analysis etc. The approach I started to follow with the Library Analytics series had a little more success, but still never really got past the starting post and into a useful analyse/adapt cycle. Google Analytics has moved on since then of course… If I were to start over, I’d probably focus on creating custom dashboards to illustrate very particular use cases, as well as…
A month or so on from its PR launch, and with a steady trickle of press mentions since then (though no new updates on the website?), I’m guessing that the folk over at FutureLearn must be putting the hours in trying to work out what the platform offering will actually consist of, or what the sustainability/business model will actually be. (I have no inside information on the FutureLearn project…)
One of the things I have sort of picked up from online glimpses of things said and commented upon is that the USP is going to relate to the quality of teaching/pedagogy (erm, I think?!). I’m not sure if “proven” learning designs will be baked into the platform, constraining the way courses are delivered (in which case, there’s likely to be something of a bootstrap problem in getting the first courses out if they have to wait for the platform?) or whether the quality will flow “naturally” from the fact that the courses will be provided by British universities (?!), but if innovation is also to flow, it’ll be interesting to see how it’ll be supported…?
[Embedded tweet from Fred Garnett (@fredgarnett), January 22, 2013]
…and whether it will be done through “open” means? (I can haz API? But what would it do?!?) If it is built up from open code, I wonder to what extent it might draw on code and ideas used in other learning platforms (for example, Moodle, to which the OU is already a core contributor, I think, or Class2Go) as well as drawing on learning from whatever folk managed to learn from the OU’s other open learning builds – OpenLearn/Labspace (content and community), iSpot (community and reputation), Cloudworks (community and resource sharing) or the very many expensive attempts at SocialLearn (wtf?!) that never saw the light of day? I can’t imagine a FutureLearn offering being based on the Google Coursebuilder, but it wouldn’t surprise me if it ended up with something being bought in… Time to start watching the tender site, maybe, though surely that would knock any start date back too far?
One thing that would be nice to see would be a project using something akin to the open, agile development process used by the @GDSteam, which is opening up the backend to View Source as well as the front-end…
I also wonder about the extent to which it might be possible to reuse ideas from commercial website design and development in the way the site is architected. This will be anathema to many, but I wonder just how far the idea could be pushed? Start with the idea of analytics, and define funnels for how folk might be expected to move through course units. Associate activities with some sort of intentional action, such as popping items into a shopping basket, or maybe the equivalent of 1-click purchases. Making it through to the end of a course can be seen as completing some such purchase (chuck in some open badge framework badges as a reward for good measure;-). Ad-delivery mechanisms can be rethought as personalised content delivery (eg contextual content delivery, banner ads as signage or email-pre-emptive ads). Use search data to help refine content pages, and A/B testing to try out multiple variants of course materials and exercises (weak example). (I have never understood why the OU doesn’t engage in A/B tested delivery of course materials as a matter of course? OU courses are delivered at large enough scale, and contain more than enough content, to trial different ways of delivering content and assessment without jeopardising overall outcomes for any individual student.)
All of the above – search analysis, web analytics, contextual content/ad-serving and A/B testing – can be managed through ad servers and Google Analytics (and to a lesser extent Piwik, though they are open to additional contributions), which could provide a minimum-viable product tooling basis for a testing and analytics framework that’s ready to go now? Such an approach is far too scruffy and ad hoc, of course, for a “proper” platform project…
PS by the by, I notice that JISC Advance’s Generic eMarketplace (or GeM) for Work Based Learning (“gemforwbl”, or looking at the logo, “gee em for weeble”? (will it wobble? will it fall down?)) is now open and ready for business… and as for the logo, what on earth is it supposed to represent?
Answers in the comments, etc etc, please…
PPS As ever, the opinions expressed herein are not necessarily even reflective of my own, let alone those of my employer…;-)
Lorcan Dempsey was revisiting an old favourite last week, in a discussion about inside-out and outside-in library activities (Discovery vs discoverability …), where outside-in relates to managing collections of, and access to, external resources, versus the inside-out strategy whereby the library accepts that discovery happens elsewhere, and sees its role as making library mediated resources (and resources published by the host institution) available in the places where the local patrons are likely to be engaging in resource discovery (i.e. on the public web…)
A similar notion can be applied to innovation, as fumblingly described in this old post Innovating from the Inside, Outside. The idea there was that if institutions made their resources and data public and openly licensed, then internal developers would be able to make use of them for unofficial and skunkwork internal projects. (Anyone who works for a large institution will know how painful it can be getting hold of resources that are “owned” by other parts of the institution). A lot of the tinkering I’ve done around OU services has only been possible because I’ve been able to get hold of the necessary resources via public (and unauthenticated) URLs. A great example of this relates to my OpenLearn tinkerings (e.g. as described in both the above linked “Innovation” post and more recently in Derived Products from OpenLearn/OU XML Documents).
But with the recent migration of OpenLearn to the open.edu domain, it seems as if the ability to just add ?content=1 to the end of a unit URL and as a result get access to the “source” XML document (essentially, a partially structured “database” of the course unit) has been disabled:
Of course, this could just be an oversight, a switch that failed to be flicked when the migration happened; although from the unit homepage, there is no obvious invitation to download an XML version of the unit.
[UPDATE: see comments - seems as if this should be currently classed as "broken" rather than "removed".]
In a sense, then, access to a useful format of the course materials for the purpose of deriving secondary products has been removed. (I also note that the original, machine readable ‘single full list’ of available OpenLearn units has disappeared, making the practical act of harvesting harder even if the content is available…) Which means I can no longer easily generate meta-glossaries over all the OpenLearn units, nor image galleries or learning objective directories, all of which are described in the Derived Products from OpenLearn post. (If I started putting scrapes on the OU network, which I’ve considered many times, I suspect the IT police would come calling…) Which is a shame, especially at a time when the potential usefulness of text mining appears to be being recognised (eg BIS press release on ‘Consumers given more copyright freedom’, December 20, 2012: “Data analytics for non-commercial research – to allow non-commercial researchers to use computers to study published research results and other data without copyright law interfering;”, interpreted by Peter Murray Rust as the UK government says it’s legal to mine content for the purposes of non-commercial research. By the by, I also notice that the press release also mentions “Research and private study – to allow sound recordings, films and broadcasts to be copied for non-commercial research and private study purposes without permission from the copyright holder.” Which could be handy…).
This effective closing down of once open services (deliberate or not) is, of course, common to anyone who plays with web APIs, which are often open and free in an early beta development phase, but then get locked down as companies are faced with the need to commercialise them. Faced with the need to commercialise them.
Returning to Lorcan’s post for a moment: he notes a “growing interest in connecting the library’s collections to external discovery environments so that the value of the library investment is actually released for those for whom it was made” on the one hand, and “a parallel interest in making institutional resources (research and learning materials, digitized special materials, faculty expertise, etc) more actively discoverable” on the other. More actively discoverable.
If part of the mission is also to promote reuse of content, as well as affording the possibility of third parties opening up additional discovery channels (for example, through structured indices and recommendation engines), not to mention creating derived and value-add products, then making content available in “source” form, where structural metadata can be mined for added value discovery (for example, faceted search over learning objectives, or images or glossary items, blah, blah, blah..) is good for everyone.
Unless you’re precious about the product of course, and don’t really want it to be open (whatever “open” means…).
As a pragmatist, and a personal learner/researcher, I often tend not to pay too much attention to things like copyright. In effect, I assert the right to read and “reuse” content for my own personal research and learning purposes. So the licensing part of openness doesn’t really bother me in that respect too much anyway. It might become a problem if I built something that I made public that started getting use and started “stealing” from, or misrepresenting, the original publisher, and then I’d have to worry about the legal side of things… But not for personal research.
Note that as I play with things like Scraperwiki more and more, I find myself more and more attracted to the idea of pulling content in to a database so that I can add enhanced discovery services over the content for my own purposes, particularly if I can pull structural elements out of the scraped content to enable more particular search queries. When building scrapers, I tend to limit myself to scraping sites that do not present authentication barriers, and whose content is generally searchable via public web search engines (i.e. it has already been indexed and is publicly discoverable).
Which brings me to consider a possibly disturbing feature of MOOC platforms such as Coursera. The course may be open (if you enrol), but the content of, and access to, the materials isn’t discoverable. That is, it’s not open as to search. It’s not open as to discovery. (Udacity on the other hand does seem to let you search course content; e.g. search with limits site:udacity.com -site:forums.udacity.com)
I’m not sure what the business model behind FutureLearn will be, but when (if?!) the platform actually appears, I wonder whether course content will be searchable/outside-discoverable on it? (I also wonder to what extent the initial offerings will relate to course resources that JISC OER funding helped to get openly licensed? And what sort of license will apply to the content on the site (for folk who do pay heed to the legalistic stuff;-)
So whilst Martin Weller victoriously proclaims Openness has won – now what?, saying “we’ll never go back to closed systems in academia”, I just hope that we don’t start seeing more and more lockdown, that we don’t start seeing less and less discovery of useful content published on ac.uk sites, that competition between increasingly corporatised universities doesn’t mean that all we get access to is HE marketing material in the form of course blurbs, and undiscoverable content that can only be accessed in exchange for credentials and personal tracking data.
In the same way that academics have always worked round the journal subscription racket that the libraries were complicit in developing with academic publishers (if you get a chance, go to UKSG, where publisher reps with hospitality accounts do the schmooze with the academic library folk;-), sharing copies of papers if anyone ever asked, I hope that they do the same with their teaching materials, making them discoverable and sharing the knowledge.
One of the things that has never really been clear to me is what it is that universities think they sell and what students think they are “buying”. (OU modules have always(?) had a price tag associated with them, although large amounts of financial support have also traditionally been available). One partial view might focus on one of the more tangible exchanges that are evident when taking a university degree, specifically the modules taken as part of a qualification programme, and the way they are bundled, organised and presented to students. Curriculum innovation works at both the level of keeping these modules up to date, as well as introducing new modules (and potentially new degree programmes, either as new aggregations of, and pathways through, collections of modules).
If we think of universities as organisations in the business of selling, at least in part, structured collections of course modules*, then we might speculate around the processes that are used to come up with new collections that are desirable to fee-paying students (and consequently, employers).
(* I know, I know – we might also think of the cost centre services that go along with course delivery as part of the package, the assessment, the facilities, the pastoral care, the structured academic content; or the “payoff” in terms of improved employability, or higher lifetime earnings. But when I buy a bar of chocolate, I don’t see it as covering the factory automation, raw ingredients, logistics or supply chain costs, nor am I buying in to delight or gluttony. I’m buying a bar of chocolate. I’m also not saying that the courses are necessarily the thing students are buying, it’s just one particular lens we can use to see whether it makes storytelling sense to view the system in that way…)
In part, programmes of study leading to named qualifications in particular subject or topic areas are influenced by the QAA benchmark statements:
Subject benchmark statements set out expectations about standards of degrees in a range of subject areas.
Subject benchmark statements do not represent a national curriculum in a subject area. Rather, they allow for flexibility and innovation in programme design within an overall conceptual framework established by an academic subject community. They are intended to assist those involved in programme design, delivery and review and may also be of interest to prospective students and employers, seeking information about the nature and standards of awards in a subject area.
In terms of curriculum development, there is a chicken-and-egg element to the role QAA statements can play. As the Recognition scheme for subject benchmark statements suggests in its guidance relating to the creation of new benchmark statements:
The proposal will need to demonstrate that a new or revised statement would provide the benefits of a wider understanding about the scope and nature of the subject and the academic standards underpinning it. This could be desirable for one or more of the following reasons.
• The subject is growing and more degree programmes are being provided in it
• A degree in the subject may be required for entry into a profession, but there are no explicit academic standards associated with the subject for this purpose. There may also be a lack of understanding within the relevant profession of what level of attainment can be expected of a graduate in the subject, or of its appropriateness for entry into the profession
• The prospective benefits of agreed and explicit standards in the relevant subject have been highlighted by, for example, external examiners and validating boards, higher education providers, subject groups, or stakeholder organisations.
(See also Statements in development for examples of statements currently under consideration.)
Unpicking the course module view a little further, modules are typically associated with notional academic credit points, which are awarded “when you have shown, through assessment, that you have successfully completed a module or a programme by meeting the specific set of learning outcomes for that module or programme” (Academic credit in higher education in England – an introduction; see also QAA – Academic Credit). Note that credit points do not reflect how well you passed the assessment, just that you achieved at least the minimum standard required. Credit points themselves relate to two considerations: “[t]he credit value indicates both the amount of learning expected (the number of credits) and its depth, complexity and intellectual demand (the credit level).” The “amount of learning” is captured by the “notional hours of learning” spent on the subject within the module. The level is based on level descriptors that “are used to help work out the level of learning in individual modules.”
Credit level descriptors are guides that help identify the relative demand, complexity and depth of learning, and learner autonomy expected at each level, and also indicate the differences between the levels.
They are general descriptions of the learning involved at a particular level; they are not specific requirements of what must be covered in a particular module, unit or programme.
So to recap – modules are designed in order to deliver a set of learning outcomes (that include subject or topic specific learning outcomes as well as more general skills) that can be acquired in a notional amount of time and that are assessed at a particular academic level in exchange for academic credit.
Qualifications are then awarded based on credit awarded in programmes of study, such as undergraduate or postgraduate degrees. Qualifications typically require the demonstration of some sort of progression through credit levels within a subject area, specify a range of qualification level learning outcomes that need to be delivered within the context of the programme as a whole, and may also require students to demonstrate aptitude across a range of assessment styles (or alternatively, offer a range of assessment styles so as not to disadvantage students who struggle with a particular style of assessment).
Whilst “traditional” universities typically offered named degree programmes in specific areas, the Open University originally offered an Open Degree (which is still available), in which students were free to choose whatever modules (then referred to as OU courses) they wanted, subject to certain requirements on the number of courses taken at each credit level (akin to each year of a traditional university degree; for more on credit points and credit equivalence, see above). Whilst course choice was free, many students followed the same common pathways through courses to come out with degrees that were, essentially, subject degrees. In recent years, the OU has moved increasingly towards the award of named degrees, where students are required to take particular modules. Indeed, it is increasingly difficult to find the individual modules that students originally “bought” on the OU website – the emphasis now is on selling qualification level credit bundles, rather than module level credit points.
But how do universities decide what modules to offer? And how does curriculum innovation work? A bottom-up approach might be to refresh modules within a qualification, and then create new qualifications by rebundling sets of modules that together define some sort of coherent whole (this is how ‘as-if’ subject degrees were self-assembled by OU students in the Open Degree). A top-down approach might be to come up with an idea for a degree programme, and then commission modules to deliver that programme of study. Alternatively, we might look to mass-dynamics in a free choice system, such as an open degree, and come up with a middle-out(?) approach that suggests programmes of study that formalise the module collections freely chosen by students interested in studying a particular set of topics that make sense to them.
(It is interesting to note that, possibly uniquely within UK Higher Education, The Open University had the scale of numbers in undergraduate students to start to say interesting things about the way students selected courses under the Open degree model. Furthermore, as the popularity of “Big Data” solutions and recommendations driven by crowd-behaviour becomes commonplace, so the OU is reducing the amount of personalisation possible by pushing hard-coded, predefined pathways. At the same time, institutions such as Southampton University seem to be looking to open up personalisation pathways (for example, Southampton Curriculum innovation, discussed here: Graduates for the 21st Century – Curriculum innovation [audio]) and standalone HE level courses are increasingly available, sans credit (as marketing warez; but for what exactly?), via the various open online course platforms.)
So now we’re at the point where I actually wanted to start this post… How do we go about the process of curriculum innovation (for example, OECD Education Working Papers No. 82 – Bringing About Curriculum Innovations) given that we already have a load of inventory? If we sell credit points in particular subject areas or topics, how do we decide what topics to cover and how do we bundle those points up into qualifications?
One place to start might be mapping out where we are at the current time, which is where course data comes in. For example, what does the interest map based on learning outcomes delivered by your university actually look like? Or if you work in HE, do you know (or can you readily find out) the answers to questions like these (a rough sketch of how some of them might be framed as queries follows the list):
- which modules are associated with any particular qualification?
- which qualifications are associated with any particular module, either as a required or optional component?
- which modules have path dependencies (eg where one module is the pre-requisite of another, or modules are excluded combinations)?
- which modules are required in which pathways, and which are optional?
- in free choice modules (that maybe span programmes), which modules tend to be taken together?
- which modules deliver which qualification level learning outcomes?
- which modules deliver which sort of assessment types?
- are there any modules that already offer particular learning outcomes at a particular level?
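Here’s that rough sketch – purely illustrative, with hypothetical module and qualification codes and made-up table layouts; the point is just that these become simple joins and filters once the course data exists in machine readable form:

# Toy sketch of a couple of the questions above, assuming course data is available as flat tables.
# Table layouts, column names and module/qualification codes are all hypothetical.
import pandas as pd

module_qual = pd.DataFrame([
    {"module": "MOD101", "qualification": "QUAL-A", "status": "compulsory"},
    {"module": "MOD202", "qualification": "QUAL-A", "status": "optional"},
    {"module": "MOD202", "qualification": "QUAL-B", "status": "compulsory"},
])

module_lo = pd.DataFrame([
    {"module": "MOD202", "learning_outcome": "analyse data at scale", "level": 3},
    {"module": "MOD101", "learning_outcome": "describe basic networking concepts", "level": 1},
])

# which modules are associated with a particular qualification?
print(module_qual[module_qual.qualification == "QUAL-A"].module.tolist())

# which qualifications include a particular module, and as a required or optional component?
print(module_qual[module_qual.module == "MOD202"][["qualification", "status"]])

# are there any modules that already offer a particular learning outcome at a particular level?
hits = module_lo[module_lo.learning_outcome.str.contains("data") & (module_lo.level == 3)]
print(hits.module.tolist())

Pathway dependencies and “taken together” patterns are the same sort of query, just run over pairs of modules rather than single ones.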
To provide a little more context, imagine these scenarios:
- Module X is tired and needs to be replaced – what qualifications or other modules might be affected as a result? For example, does the module uniquely cover a particular qualification level learning outcome, or assessment type?
- A new qualification is proposed with a particular set of learning outcomes – what modules are available that already deliver some (or all) of these learning outcomes?
- The quality folk want to know how your programme demonstrates progression across credit levels with respect to a particular set of subject related learning outcomes. Can you easily map this out?
- The quality folk also want to know whether a particular course is gameable in terms of assessment types covered by the course modules. Could a student select a set of modules that means they never have to do teamwork, project work/report, a presentation, an exam, etc?
- You need to generate a set of course transcripts (sets of learning outcomes, by credit level) for a proposed new assemblage of outstanding modules, some of which are core/compulsory modules, some of which are optional. Can you do it?
- Do you have the scaffolding data available to build course recommenders based on population flows and module selections of previous cohorts of students?
- Which modules deliver the content that potential students think they want to study, eg when searching your online course prospectus (you do use search logs for situational awareness around what potential students are searching for, don’t you?!)
So – how well do you fare?
[Note: this post is inspired by personal reflections around the University of Lincoln ON Course Course data project, on which I have, via the OU, a small consultancy retainer, and of which: more later.]
So it seems the Open University press office must have had an embargoed press release lined up for midnight, with a flurry of stories – and a reveal of the official press release on the OU site, partner quotes and briefing doc – about FutureLearn Ltd (Twitter: @future_learn)
Apparently, Futurelearn (not FutureLearn? The UEA press release uses CamelCase…) “will bring together a range of free, open, online courses from leading UK universities, in the same place and under the same brand.”
A bit like edX, then…?
…only that’s for US unis… Or Coursera, which is open to all-comers, I think? Whereas Futurelearn looks as if it’ll be championing the cause of UK universities – apparently, Birmingham [UK universities embrace the free, open, online future of higher education], Bristol [UK universities embrace the free, open, online future of higher education powered by The Open University], Cardiff [Online future of higher education], East Anglia [UK universities embrace the online future of higher education], Exeter [UK universities embrace the free, open, online future of higher education powered by The Open University], King’s College London [Futurelearn – new online higher education initiative], Lancaster [Lancaster signs up for Futurelearn], Leeds [Leeds joins partners in offering free online access to education], Southampton [University of Southampton embraces the open, online future of higher education], St Andrews [news feed] and Warwick [Warwick joins other leading UK universities to create multiple MOOC giving free access to some of those Universities’ most innovative courses] have all signed up to join Futurelearn… (It’ll be interesting to see if HEIs that are trying out Coursera, such as Edinburgh, will join Futurelearn, or whether exclusive agreements are in place? I also wonder about whether membership of any of the particular university groups will influence which “open” online course marketing outfit particular universities join?) [Other press releases: QAA: Open University launches UK-based Moocs platform]
[For what it's worth, the OU and UEA were the only press offices to break the story just after midnight. St Andrews is the last to release a press release. Birmingham and Kings were also tardy... I wonder whether some of the partners were waiting to see whether anyone picked up on the story before putting out their own press releases?]
Here’s some of the press coverage so far – I guess I should grab these reports and give each a churnalism score…?
- THES: Open University launches British Mooc platform to rival US providers
- FT: OU leads universities into online venture
- The Telegraph: UK universities to launch free degree-style online courses
- The Independent: Students get free university courses online
- WSJ Tech Europe blog: U.K. Universities Embrace Digital Disruption
- The Chronicle of Higher Education/Wired Campus: Leading British Universities Join New MOOC Venture
- TechCrunch: U.K. Universities Forge Open Online Courses Alliance: FutureLearn Consortium Will Offer Uni-Branded MOOCs Starting Next Year
Simon Nelson, who I remember gave a presentation at the OU a few years ago when he was BBC multiplatform commissioner, has been appointed as CEO, so that could prove interesting… (FWIW, Simon Nelson’s LinkedIn page lists directorships: Sineo Ltd, and I think Ludifi Ltd?) What might this mean for the OpenLearn brand, I wonder? Or for the Open University Apps, iBooks and Stores?
Structurally, “Futurelearn will be independent but majority-owned by the OU”, although as far as “partners” announced so far go, this “do[es] not constitute a partnership in the legal sense and the Parties shall not have authority to bind each other in any way. The term is used to indicate their support and intent to work together on this project.”
One possible response is that this is a playing out of an Emperor’s New Clothes marketing battle, but as with the evolution of any novel communication technology (seeing “MOOCs” as such a thing), some of them do manage to lock in… (And as George Siemens comments in Finally, alternatives to prominent MOOCs, “Even if MOOCs disappear from the landscape in the next few years, the change drivers that gave birth to them will continue to exert pressure and render slow plodding systems obsolete (or, perhaps more accurately, less relevant). If MOOCs are eventually revealed to be a fad, the universities that experiment with them today will have acquired experience and insight into the role of technology in teaching and learning that their conservative peers won’t have. It’s not only about being right, it’s about experimenting and playing in the front line of knowledge”.)
Leagas Delaney, it seems, is some sort of brand communications agency. So much style on their website, I couldn’t actually work out the substance of what it is they actually do at this late hour (all I did was check my feeds quickly, just after midnight, as I was on my way to bed, and catch sight of the OU news release…).
PS No-one mention the war UKeU… (via Seb Schmoller (Futurelearn – an OU-led response to Coursera, Udacity, and MITx), I am reminded of Paul Bacsich’s Lessons to be learned from the failure of UKeU.)
PPS Now I’m wondering whether @dkernohan knew something I didn’t when he launched the MOOCAS/”MOOC Advisory Service” search engine a couple of days ago…?!;-)
[UPDATE: this post was an early response that collated press stories released at end of embargo time. For a more considered review, check out Doug Clow's Futurelearn may or may not succeed but is well worth a try. Via @dkernohan, William Hammonds on the Universities UK blog: Are we witnessing higher education’s “digital moment”?]
[The views expressed within this post are barely even my personal ones, let alone anybody else's...]
FWIW, a copy of the slides I used in my ILI2012 presentation earlier this week – Making the most of structured content: data products from OpenLearn XML:
I guess this counts as a dissemination activity for my related eSTEeM project on course related custom search engines, since the work(?!) sort of evolved out of that idea…
The thesis is this:
- Course Units on OpenLearn are available as XML docs – a URL pointing to the XML version of a unit can be derived from the Moodle URL for the HTML version of the course; (the same is true of “closed” OU course materials). The OU machine uses the XML docs as a feedstock for a publication process that generates HTML views, ebook views, etc, etc of a course.
- We can treat XML docs as if they were database records; sets of structured XML elements can be viewed as if they define database tables; the values taken by the structured elements are like database table entries. Which is to say, we can treat each XML doc as a mini-database, or we can trivially extract the data and pop it into a “proper”/”real” database (a minimal sketch of this appears after the list).
- given a list of courses we can grab all the corresponding XML docs and build a big database of their contents; that is, a single database that contains records pulled from course XML docs.
- the sorts of things that we can pull out of a course include: links, images, glossary items, learning objectives, section and subsection headings;
- if we mine the (sub)section structure of a course from the XML, we can easily provide an interactive treemap version of the sections and subsections in a course; generating a Freemind mindmap document type, we can automatically generate course-section mindmap files that students can view – and annotate – in Freemind. We can also generate bespoke mindmaps, for example based on sections across OpenLearn courses that contain a particular search term.
- By disaggregating individual course units into “typed” elements or faceted components, and then reaggregating items of a similar class or type across all course units, we can provide faceted search across, as well as a university wide “meta” view over, different classes of content. For example:
- by aggregating learning objectives from across OpenLearn units, we can trivially create a search tool that provides a faceted search over just the learning objectives associated with each unit; the search returns learning outcomes associated with a search term and links to course units associated with those learning objectives; this might help in identifying reusable course elements based around reuse or extension of learning outcomes;
- by aggregating glossary items from across OpenLearn units, we can trivially create a meta glossary for the whole of OpenLearn (or similarly across all OU courses). That is, we could produce a monolithic OpenLearn, or even OU wide, glossary; or maybe it’s useful to redefine the same glossary terms using different definitions, rather than reuse the same definition(s) consistently across different courses? As with learning objectives, we can also create a search tool that provides a faceted search over just the glossary items associated with each unit; the search returns glossary items associated with a search term and links to course units associated with those glossary items;
- by aggregating images from across OpenLearn units, we can trivially create a search tool that provides a faceted search over just the descriptions/captions of images associated with each unit; the search returns the images whose description/captions are associated with the search term and links to course units associated with those images. This disaggregation provides a direct way of search for images that have been published through OpenLearn. Rights information may also be available, allowing users to search for images that have been rights cleared, as well as openly licensed images.
- the original route in was the extraction of links from course units that could be used to seed custom search engines that search over resources referenced from a course. This could in principle also include books using Google book search.
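Here’s the minimal sketch promised above. The ?content=1 trick and the “pop glossary items into a database” idea look roughly like this in Python – with the caveat that the unit URL and the element names are illustrative guesses rather than checked against the current OU XML schema:

# Minimal sketch of the "XML docs as a database" idea (tag names and URL pattern are
# illustrative assumptions - check them against the XML of an actual unit).
import requests
import sqlite3
from lxml import etree

unit_url = "http://www.open.edu/openlearn/SOME/UNIT/PATH"   # hypothetical unit URL
xml = requests.get(unit_url + "?content=1").content          # the trick described above
root = etree.fromstring(xml)

conn = sqlite3.connect("openlearn.db")
conn.execute("CREATE TABLE IF NOT EXISTS glossary (unit TEXT, term TEXT, definition TEXT)")

# pull out glossary-like items; the element names here are guesses at the OU XML structure
for item in root.findall(".//GlossaryItem"):
    term = item.findtext("Term")
    definition = item.findtext("Definition")
    conn.execute("INSERT INTO glossary VALUES (?, ?, ?)", (unit_url, term, definition))

conn.commit()

Run that over a list of unit URLs rather than a single one and you have the “big database” of disaggregated course elements that the learning objective, glossary and image searches are built over.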
I also briefly described an approach for appropriating Google custom search engine promotions as the basis for a search engine mediated course, something I think could be used in a sMoocH (search mediated MOOC hack). But then MOOCs as popularised have f**k all to do with innovation, don’t they, other than in a marketing sense for people with very little imagination.
During questions, @briankelly asked if any of the reported dabblings/demos (and there are several working demos) were just OUseful experiments or whether they could in principle be adopted within the OU, or even more widely across HE. The answers are ‘yes’ and ‘yes’ but in reality ‘yes’ and ‘no’. I haven’t even been able to get round to writing up (or persuading someone else to write up) any of my dabblings as ‘proper’ research, let alone fight the interminable rounds of lobbying and stakeholder acquisition it takes to get anything adopted and rolled out as innovation. If any of the ideas were/are useful, they’re Googleable and folk are free to run with them… but because they had no big budget holding champion associated with their creation, and hence no stake (even defensively) in seeing some sort of use from them, they’re unlikely to register anywhere.