Personalised Learning Means Big Differences?

Back when the OU used to push all its course materials out to students in print form, I think the first presentation of a course used to have its own print run. Errata lists for mistakes identified during the presentation would be mailed out to students as supplementary print items (with their own publication number) every so often, and changes would be made to master copies of what would become revised versions of the main print items for later presentations. When a student received an errata list, it was up to them to mechanically make the changes to their print items (scribbling out the wrong bits and writing in the corrections, for example), but at least then they’d have a copy of the materials corrected in place.

There generally aren’t that many errata in an OU course, but there always seem to be some that slip through the net, so how do we deal with them now?

With online delivery, I think we’ve got ourselves in a bit of a pickle when it comes to handling errata. (This post/rant is a bit of a mountain/molehill thing, but it’s symptomatic of something-I-don’t-know-what. Fed-up-ness, perhaps.) Changes can’t be made to content that has gone live to students in case some students have already seen it (or something?!), and to ensure that everyone gets to see the “same” version of the course materials irrespective of when they saw it. Which of course they don’t, because some folk go through the materials before an error is spotted, and some of them don’t read or spot the errata list that gets published in an announcements feed in the VLE sidebar. For the students who do read the errata list, it doesn’t really help much, because you can’t update the material unless you print it all out and make changes to the hard copy, or grab your own, annotatable electronic copy and work from that. So the workflow you end up with is that you have to keep an eye on the errata list whenever you read anything. Which sucks.

One thing I did wonder was whether we could add an errata annotation layer on top of the course materials. For several years, the OU Annotate tool has provided a browser bookmarklet that can overlay an annotation tool on top of a well-structured HTML page (which rules out things like annotating PDFs). Highlighting the broken text in a suitably vivid colour, putting the errata note in as a comment, tagging the comment with an errata tag, and making it public seemed to offer a quick solution:


The experience could be improved by adding an errata channel or filter that could be used to highlight just the errata items, rather than all comments/annotations. I even started wondering whether there could be a VLE setting that would pull in errata-tagged items and display them by default, overlaying them onto the course materials without the need for firing up the OU Annotate toolbar. But that would be a bit like publishing the VLE-hosted materials with track changes switched on, which would look a bit rubbish and make it obvious that there were errors we knew about but hadn’t fixed. Which there are; but we can’t fix them, because the materials, once published, have to be left set in stone for that presentation of the course. (Except when they aren’t.)

The presentation could be improved further for the majority, who reach the errata’d item after the mistake has been found (“pathfinder” students who work through the materials quickly often spot errors before the majority even get to them), simply by making the change as soon as the error is spotted, before most students get to see it…

Alternatively, we could make the change but highlight the text in some way to show that it had been changed, perhaps popping up a full errata note – including the original and the change that was made – if a student hovered their mouse cursor over the changed item. An even cleaner view could be provided with a setting that disabled any highlighting of error-correction terms.

One way of doing this would be to go back to the source and annotate that: the original course materials are written as an XML document, which is then rendered down to HTML and various ebook offerings. (For some reason, PDFs aren’t necessarily always produced, perhaps because of accessibility issues. Students who want the PDF but don’t care about the accessibility features are left to create their own workaround for generating it. Perfect. Enemy. Good. Got-to-be-equitable myth, etc.) Tagging the doc with the change as a change, leaving the original as an annotation, and then reflowing the HTML would mean the VLE materials got the update and could also reveal the historical view. Of course, students who downloaded an ebook version or generated a PDF before the update and reflow wouldn’t get the update, and, yada, yada, too difficult to even think about, don’t bother, stick with errata lists, make fixing things the student’s responsibility… (Of course, if you downloaded all the ebooks at the start of the course and don’t go back to the VLE to check the errata list, then, erm… arrgh: remember Rule 1: check the VLE for the errata list before you read anything.)
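For what it’s worth, the tag-the-change-as-a-change idea could be prototyped in a few lines. The `<Correction>` element and its `original` attribute here are invented for illustration (the real OU XML schema would need its own convention for this), but the mechanics of making the fix in source while keeping the old wording around for a hover note are straightforward:

```python
# Sketch: apply an erratum to a (hypothetical) course-XML paragraph, wrapping
# the change in an invented <Correction> element that records the original
# wording, so a downstream renderer could show it as a hover note.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<Section><Paragraph>The speed of light is 3x10^6 m/s.</Paragraph></Section>"
)

def apply_erratum(root, old, new):
    """Replace `old` with `new` inside Paragraph text, keeping `old` as an attribute."""
    for p in root.iter("Paragraph"):
        if p.text and old in p.text:
            before, _, after = p.text.partition(old)
            p.text = before
            corr = ET.Element("Correction", {"original": old})
            corr.text = new
            corr.tail = after
            p.insert(0, corr)

apply_erratum(doc, "3x10^6", "3x10^8")
print(ET.tostring(doc, encoding="unicode"))
```

Reflowing the HTML from that source then gives the corrected text by default, with the historical view recoverable from the attribute.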

Another route might be to base every student’s view of the course on a fork of the original that uses a form of version control that only displays changes to the materials that the student has already encountered. So if I read chapter 1, and an error is found, when I revisit chapter 1 the change is made and highlighted as a change. If I get to chapter 2 after a chapter 2 error has been found, the update is made before I reach it and not flagged to me as a change. This would mean everyone’s copy of the course materials could be different of course – I hesitate to say “personalised”…!;-) – which could be hugely complicated, but might also allow students to make changes directly to their own copy of the course materials. Git-tastic…
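A minimal sketch of that logic, with invented data structures standing in for a real version-control layer:

```python
# Sketch: show a correction as a flagged change only in chapters the student
# has already read; apply it silently everywhere else. All names and data
# here are made up for illustration.
read_log = {"alice": {"ch1"}}              # chapters each student has read
errata = {"ch1": ("10 m/s", "100 m/s"),    # chapter -> (old text, corrected text)
          "ch2": ("1837", "1873")}

def render(student, chapter, text):
    if chapter not in errata:
        return text
    old, new = errata[chapter]
    if chapter in read_log.get(student, set()):
        # Student may have seen the error: make the fix, but flag it as a change.
        return text.replace(old, f"[was: {old}] {new}")
    # Student hasn't been here yet: fix silently.
    return text.replace(old, new)

print(render("alice", "ch1", "Top speed is 10 m/s."))
print(render("alice", "ch2", "Invented in 1837."))
```

The first call shows a flagged change (the student revisits a chapter they had already read); the second applies the fix silently.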

As it stands, it seems to me that we have taken a completely depersonalised route to our materials, one that means we can’t countenance any situation requiring change to a document that everyone is supposed to have in exactly the same form. (One reason for this is to prevent confusion, in the sense of different people talking about possibly different versions of something that is ostensibly the same.)

Anyway – all of this makes me think: is personalised learning about offering students stuff that contains only significant differences, not minor ones? Because minor differences (like corrected typos in my copy but not yours) are just different enough to make you uncomfortable, but major differences (you get a completely different paragraph, or sequence/ordering of content) are “personalised”. Uncanny, that…

My ILI2012 Presentation – Derived Products from OpenLearn/OU XML Documents

FWIW, here’s a copy of the slides I used in my ILI2012 presentation earlier this week – Making the most of structured content: data products from OpenLearn XML:

I guess this counts as a dissemination activity for my related eSTEeM project on course related custom search engines, since the work(?!) sort of evolved out of that idea…

The thesis is this:

  1. Course units on OpenLearn are available as XML docs – a URL pointing to the XML version of a unit can be derived from the Moodle URL for the HTML version of the course (the same is true of “closed” OU course materials). The OU machine uses the XML docs as a feedstock for a publication process that generates HTML views, ebook views, and so on, of a course.
  2. We can treat XML docs as if they were database records: sets of structured XML elements can be viewed as if they define database tables, and the values taken by the structured elements are like database table entries. Which is to say, we can treat each XML doc as a mini-database, or we can trivially extract the data and pop it into a “proper”/”real” database.
  3. Given a list of courses, we can grab all the corresponding XML docs and build a big database of their contents; that is, a single database that contains records pulled from course XML docs.
  4. The sorts of things that we can pull out of a course include: links, images, glossary items, learning objectives, and section and subsection headings.
  5. If we mine the (sub)section structure of a course from the XML, we can easily provide an interactive treemap version of the sections and subsections in a course; by generating a Freemind mindmap document, we can automatically produce course-section mindmap files that students can view – and annotate – in Freemind. We can also generate bespoke mindmaps, for example based on sections across OpenLearn courses that contain a particular search term.
  6. By disaggregating individual course units into “typed” elements or faceted components, and then reaggregating items of a similar class or type across all course units, we can provide faceted search across, as well as a university-wide “meta” view over, different classes of content. For example:
    • by aggregating learning objectives from across OpenLearn units, we can trivially create a search tool that provides a faceted search over just the learning objectives associated with each unit; the search returns learning outcomes associated with a search term and links to the course units associated with those learning objectives; this might help in identifying course elements that could be reused or extended on the basis of their learning outcomes;
    • by aggregating glossary items from across OpenLearn units, we can trivially create a meta-glossary for the whole of OpenLearn (or, similarly, across all OU courses). That is, we could produce a monolithic OpenLearn, or even OU-wide, glossary; or maybe it’s useful to have the same glossary terms redefined using different definitions, rather than reusing the same definition(s) consistently across different courses? As with learning objectives, we can also create a search tool that provides a faceted search over just the glossary items associated with each unit; the search returns glossary items associated with a search term and links to the course units associated with those glossary items;
    • by aggregating images from across OpenLearn units, we can trivially create a search tool that provides a faceted search over just the descriptions/captions of images associated with each unit; the search returns the images whose descriptions/captions match the search term and links to the course units associated with those images. This disaggregation provides a direct way of searching for images that have been published through OpenLearn. Rights information may also be available, allowing users to search for images that have been rights cleared, as well as openly licensed images.
  7. The original route in was the extraction of links from course units, which could be used to seed custom search engines that search over the resources referenced from a course. This could in principle also include books, using Google Book Search.
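The XML-as-database idea (points 2–4 and 7 above) can be sketched in a few lines of Python. The element names here (Unit, Session, GlossaryItem) are stand-ins, not the real OU XML vocabulary, but the pattern of walking the tree and popping typed elements into tables is the same:

```python
# Sketch: pull typed elements (links, glossary items) out of a course-XML doc
# and load them into a small relational database. Element names are invented.
import sqlite3
import xml.etree.ElementTree as ET

SAMPLE = """<Unit title="Intro to Stuff">
  <Session title="Getting started">
    <Paragraph>See <a href="http://example.com/ref">this reference</a>.</Paragraph>
  </Session>
  <GlossaryItem term="widget">A small thing.</GlossaryItem>
</Unit>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (unit TEXT, href TEXT)")
conn.execute("CREATE TABLE glossary (unit TEXT, term TEXT, definition TEXT)")

def load_unit(xml_text):
    """Walk one unit's XML and insert its typed elements into the database."""
    root = ET.fromstring(xml_text)
    unit = root.get("title")
    for a in root.iter("a"):
        conn.execute("INSERT INTO links VALUES (?, ?)", (unit, a.get("href")))
    for g in root.iter("GlossaryItem"):
        conn.execute("INSERT INTO glossary VALUES (?, ?, ?)",
                     (unit, g.get("term"), g.text))

load_unit(SAMPLE)
print(conn.execute("SELECT * FROM glossary").fetchall())
```

Run `load_unit` over every unit in a course list and you have the “big database” of point 3; the links table is exactly the feedstock point 7 describes for seeding a custom search engine.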
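The Freemind route in point 5 is similarly lightweight, because a Freemind `.mm` file is itself just XML: nested `<node TEXT="...">` elements inside a `<map>` root. A minimal sketch, with made-up heading data standing in for headings mined from a course:

```python
# Sketch: generate a Freemind .mm mindmap file from a course's (sub)section
# headings. The headings here are invented; in practice they would be mined
# from the course XML.
import xml.etree.ElementTree as ET

sections = {"1 Introduction": ["1.1 Aims", "1.2 Overview"],
            "2 Core ideas": ["2.1 First idea"]}

mindmap = ET.Element("map", version="1.0.1")
course = ET.SubElement(mindmap, "node", TEXT="My Course")
for section, subsections in sections.items():
    s = ET.SubElement(course, "node", TEXT=section)
    for sub in subsections:
        ET.SubElement(s, "node", TEXT=sub)

ET.ElementTree(mindmap).write("course.mm", encoding="utf-8",
                              xml_declaration=True)
```

Opening `course.mm` in Freemind gives a navigable, annotatable outline of the course structure.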
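And once typed elements have been aggregated across units (point 6), a crude meta-glossary search is little more than a filter over the aggregate. The data structures here are invented for illustration, but they show how the same term can surface with different definitions from different units:

```python
# Sketch: a meta-glossary search across units, once glossary items have been
# pulled out of each unit's XML. All records here are made up.
glossary = [
    {"unit": "Unit A", "term": "entropy", "definition": "A measure of disorder."},
    {"unit": "Unit B", "term": "entropy", "definition": "Average information content."},
    {"unit": "Unit B", "term": "enzyme",  "definition": "A biological catalyst."},
]

def search(query):
    """Return glossary items matching the query, with their source units."""
    q = query.lower()
    return [g for g in glossary
            if q in g["term"].lower() or q in g["definition"].lower()]

for hit in search("entropy"):
    print(f'{hit["term"]} ({hit["unit"]}): {hit["definition"]}')
```

A real version would add facets (by unit, by term) and link each hit back to its course unit, but the disaggregate-then-reaggregate shape is the whole trick.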

I also briefly described an approach for appropriating Google custom search engine promotions as the basis for a search engine mediated course, something I think could be used in a sMoocH (search mediated MOOC hack). But then MOOCs as popularised have f**k all to do with innovation, don’t they, other than in a marketing sense for people with very little imagination.

During questions, @briankelly asked whether any of the reported dabblings/demos (and there are several working demos) were just OUseful experiments, or whether they could in principle be adopted within the OU, or even more widely across HE. The answers are ‘yes’ and ‘yes’, but in reality ‘yes’ and ‘no’. I haven’t even been able to get round to writing up (or persuading someone else to write up) any of my dabblings as ‘proper’ research, let alone fight the interminable rounds of lobbying and stakeholder acquisition it takes to get anything rolled out as an adopted innovation. If any of the ideas were/are useful, they’re Googleable and folk are free to run with them… but because they had no big-budget-holding champion associated with their creation, and hence nobody with a stake (even defensively) in seeing some sort of use made of them, they’re unlikely to register anywhere.