Open Course Production
Following a chat with Mark Surman of the Mozilla Foundation a week or two ago, I’ve been pondering a possible “flip” between:
a) the production of course materials as part of a (closed) internal process, primarily for use within a (closed) course in a particular institution, and then released under an open license (such as a Creative Commons license); and
b) the production of course materials in the open that are then:
i) pulled into the institution for use within a (closed) course; or
ii) used (or not) to support self-directed learning towards an assessment only award.
In the OU, the course production model can take a team of several academics (supported by a course manager, media project manager, editor, picture researcher, rights chasers, developers, artists, et al.) several years to produce a course that will then last for between five and ten years of presentation. In addition, handover of course materials may take place up to a year before the first presentation of the course. Course units are typically drafted by individual authors, and then passed for comment and critical reading to the rest of the course team. Typically, materials will pass through at least two drafts before final handover.
(After a little digging, and the help of @ostephens, I managed to track down some reports on how course production was managed in the early years of the OU: Course Production: Some Basic Problems, Course Production: Activities and Activity Networks, Course Production: Planning and Scheduling, Course Production: The Problem of Assessment, though I haven’t had chance to read them yet…)
For the OU short course T151 Digital Worlds, the majority of the course team authored content was published as it was being written on a public WordPress blog (Digital Worlds Uncourse Blog); in the current version of the course, students are referred to that public content from within the VLE. (Note that the copyright and licensing of content on the public blog is left deliberately vague!)
Although the Digital Worlds content was written by a single author (me;-), the model was intended to support at the very least a team blog approach, or a distributed blog network authoring approach. Rather than authors writing large chunks of text and then passing them for comment to other course team members, the blogged approach encourages authors to: a) read along with what others are producing; b) create short chunks of material (500-800 words, typical blog post length) on a particular topic (probably linked to other posts on the topic) that are convenient to study in a single study session or interstitial learning break (cf. @lorcand on Interstitial reading); c) link out to related resources; d) act as a focus for trackbacks (passive related resource discovery) and comments that might influence the direction taken in future blog posts.
The use of WordPress as the blogging platform was deliberate, in part because of the wide support WordPress offers for RSS/Atom feed generation. By linking between posts, as well as tagging and categorising posts appropriately, a structure emerges that offers many different possible pathways through the content. RSS feeds with everything means that it’s then relatively straightforward to republish different pathways apparently as linear runs of content elsewhere, if required (e.g. as in an edufeedr environment, perhaps?)
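To make the idea of republishing a tagged pathway concrete, here is a minimal sketch in Python. It uses only the standard library, and the feed XML is an invented sample in the shape of a WordPress-style RSS 2.0 feed (a real feed would be fetched from a URL such as `https://example.wordpress.com/category/<tag>/feed/` – the blog address and post titles here are hypothetical):

```python
import xml.etree.ElementTree as ET

# Invented sample of a WordPress-style RSS 2.0 feed; in practice this XML
# would be fetched over HTTP from the blog's category or tag feed URL.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Digital Worlds (sample)</title>
    <item>
      <title>Post 3: Game Engines</title>
      <link>https://example.wordpress.com/post-3</link>
      <category>game-design</category>
    </item>
    <item>
      <title>Post 2: Sprites</title>
      <link>https://example.wordpress.com/post-2</link>
      <category>game-design</category>
    </item>
    <item>
      <title>Post 1: What is a Game?</title>
      <link>https://example.wordpress.com/post-1</link>
      <category>history</category>
    </item>
  </channel>
</rss>"""

def pathway(feed_xml, category):
    """Return (title, link) pairs for posts carrying `category`,
    reversed into oldest-first order to give a linear study pathway."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        categories = {c.text for c in item.findall("category")}
        if category in categories:
            items.append((item.findtext("title"), item.findtext("link")))
    # WordPress feeds list newest posts first; reverse for reading order.
    return list(reversed(items))

for title, link in pathway(SAMPLE_FEED, "game-design"):
    print(title, "->", link)
```

The same filter-and-reorder step is all a downstream environment (an edufeedr-style aggregator, say) would need in order to present one tagged slice of the blog as an apparently linear run of course content.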
Authoring content in a public forum – ideally under an open content license – means that content becomes available for re-use even as it is being drafted. By opening up comments, feedback can be solicited that allows content to be improved by updating blog posts, if necessary, as well as identifying topics or clarifications that can be addressed in separate backlinking blog posts. By opening up the production process, we make it far more likely that others will contribute to that process, helping shape and influence the content, than if we expect others to take openly licensed content as a large chunk and then produce openly licensed derived works from it (i.e. forks?!)
In short: maybe we shouldn’t just be releasing content created in a closed process as Open Educational Resources (OERs); rather, we should be producing them in public using an open source production model?
As Cameron Neylon suggests in a critique of academic research publishing (It’s not information overload, nor is it filter failure: It’s a discovery deficit):
It is very easy to say there is too much academic literature – and I do. But the solution which seems to be becoming popular is to argue for an expansion of the traditional peer review process. To prevent stuff getting onto the web in the first place. This is misguided for two important reasons. Firstly it takes the highly inefficient and expensive process of manual curation and attempts to apply it to every piece of research output created. This doesn’t work today and won’t scale as the diversity and sheer number of research outputs increases tomorrow. Secondly it doesn’t take advantage of the nature of the web. The way to do this efficiently is to publish everything at the lowest cost possible, and then enhance the discoverability of work that you think is important. We don’t need publication filters, we need enhanced discovery engines. Publishing is cheap, curation is expensive whether it is applied to filtering or to markup and search enhancement.
Filtering before publication worked and was probably the most efficient place to apply the curation effort when the major bottleneck was publication. Value was extracted from the curation process of peer review by using it to reduce the costs of layout, editing, and printing through simply printing less. But it created new costs, and invisible opportunity costs where a key piece of information was not made available. Today the major bottleneck is discovery. …
The problem we have in scholarly publishing is an insistence on applying this print paradigm publication filtering to the web alongside an unhealthy obsession with a publication form, the paper, which is almost designed to make discovery difficult. If I want to understand the whole argument of a paper I need to read it. But if I just want one figure, one number, the details of the methodology then I don’t need to read it, but I still need to be able to find it, and to do so efficiently, and at the right time.
Currently scholarly publishers vie for the position of biggest barrier to communication. The stronger the filter the higher the notional quality. But being a pure filter play doesn’t add value because the costs of publication are now low. The value lies in presenting, enhancing, curating the material that is published.
And so on… (read the whole thing).
Maybe we need to think about educational materials in a similar way? By creating the materials in the open, we start to identify what the good stuff is, as well as being able to benefit from direct and relevant feedback from people who are interested in the topic because they discovered it by looking for it, or at least something like it. (For educators, if they think they are helping shape content, for example through commenting on it, they may be more likely to link back to it and direct their students to it, because they have a stake in it, albeit a weak and possibly indirect one.)
In response to a call I put out on Twitter last night for links to work relating to the use of open source production models in course development, @mweller suggested that Andreas Meiszner‘s PhD work may be relevant here? “My PhD research is aimed at investigating the impact of the organizational structure and operational organization on ICT enriched education by conducting a comparative study between FLOSS (Free / Libre Open Source Software) communities and Higher Education Institutions (HEIs). This work will conduct a comparative study between FLOSS communities and HEIs. The primary unit of analysis is (i.) the organizational structure of FLOSS communities and HEIs, (ii.) the operational organization of FLOSS communities and HEIs and (iii.) the learning process, outcome and environment in FLOSS communities and HEIs.”
By placing content out in the open, we also provide a stepping stone towards producing “assessment only” courses. By decoupling the teaching/learning content from the assessment, we can offer assessment only products (such as derivatives of the OU’s APEL containers, maybe?) that assess students based on their informal study of our open materials. (I’m not sure if any courses are yet assessing students who have studied materials placed on OpenLearn?) Once mechanisms are in place for writing robust assessments under the assumption that students will have been drawing at least in part on the study of open OU materials, we can maybe start to be more flexible in assessing students who have made use of other OERs (or indeed, any resources that they have been able to use to further their understanding of a topic).
Just by the by, it’s also worth noting that the decoupling of assessment from teaching at the degree level is in the air at the moment (e.g. New universities could teach but not test for degrees, says Vince Cable) …
Related: an old and confused post about what happens when content on the inside is opened up to the outside so that folk from the inside can work on it on the outside using all their skills from the inside but not having to adhere to any of its constraints… Innovating from the Inside, Outside