Rolling Your Own IT – Automating Multiple File Downloads

Yesterday, I caught up with a video briefing on Transforming IT from the OU’s Director of IT, recorded earlier this year (OU internal link which, being on Sharepoint, needs Microsoft authentication rather than OU single sign-on?).

The video, in part, described the 20 year history of some of the OU’s teaching related software services, which tended to be introduced piecemeal and which aren’t necessarily as integrated as they could be…

In the post Decision Support for Third Marking Significant Difference Double Marked Assessments, I mentioned part of the OU process for managing third marking.

Guidance provided for collecting scripts for third marking is something like this:

The original markers’ scores and feedback will be visible in OSCAR.

Electronically submitted scripts can be accessed in the eTMA system via this link: …

Please note the scripts can only be accessed via the EAB/admin tab in the eTMA system ensuring you add the relevant module code and student PI.

[My emphasis.]

Hmmm… OSCAR is accessed via a browser, and supports “app internal” links that display the overall work allocation, a table listing the students, along with their PIs, and links to various data views including the first and second marks table referred to in the post mentioned above.

The front end to the eTMA system is a web form that requests a course code and student PI. Submitting it launches another web page listing the student’s submitted files, a confirmation code that needs to be entered in OSCAR before you can add third marks, and a web form that requires you to select a file download type from a drop down list with a single option, plus a button to download the zipped student files.

So that’s two web things…

To download multiple student files requires a process something like the one shown in the block diagram (blockdiag source at the end of this post):

So why not just have something on the OSCAR work allocation page that lets you select – or select all – the students, and then download all the files or get all the confirmation codes?

Thinks… I could do that, sort of, over coffee… (I’ve tried to obfuscate details while leaving in place the general bits of code that could be reused elsewhere.)

First up, we need to log in and get authenticated:

#Login
!pip3 install MechanicalSoup

import mechanicalsoup
import pandas as pd

USERNAME=''
PASSWORD=''
LOGIN_URL=''
FORM_ID='#' #CSS selector for the login form, e.g. '#loginForm'
_USERNAME='' #name attribute of the username field in the login form
_PASSWORD='' #name attribute of the password field in the login form

def getSession():
    #Start a stateful session that persists cookies across requests
    browser = mechanicalsoup.StatefulBrowser()
    browser.open(LOGIN_URL)
    browser.select_form(FORM_ID)
    browser[_USERNAME] = USERNAME
    browser[_PASSWORD] = PASSWORD
    browser.submit_selected()
    return browser

s=getSession()

Now we need a list of PIs. We could scrape these from OSCAR, but that takes a couple of extra steps, so it’s easier just to copy and paste the table from the web page for now (there’s a sketch of the scraping route after the next code block):

#Get student PIs - copy and paste table from OSCAR for now

txt='''
CODE\tPI NAME\tMARKING_TYPE\tSTATUS
...
CODE\tPI NAME\tMARKING_TYPE\tSTATUS
'''

#Put that data into a pandas dataframe then pull out the PIs
from io import StringIO

df=pd.read_csv(StringIO(txt),sep='\t',header=None)
#The PI is the first space-separated token in the second column
pids=[i[0] for i in df[1].str.split()]
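
And for completeness, a minimal sketch of the scraping route instead, using the logged-in session from above. The work allocation page URL is obfuscated here, and I’m assuming the student table is the first HTML table on the page – both things that would need checking:

#Alternative: scrape the PI table from OSCAR using the logged-in session
#ASSUMPTIONS: the work allocation page URL (obfuscated), and that the
#student table is the first HTML table on the page
OSCAR_ALLOCATION_URL=''

s.open(OSCAR_ALLOCATION_URL)
#pandas can lift HTML tables straight out of the page source (needs lxml)
tables = pd.read_html(str(s.get_current_page()))
df = tables[0]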

We now have a list of student PIs, which we can iterate through to download the relevant files:

#Download the zip file for each student
import zipfile, io

def downloader(pid, outdir='etmafiles'):
  print('Downloading assessment for {}'.format(pid))
  !mkdir -p {outdir}
  #Obfuscated: the form fields and values the eTMA download form expects
  payload = {FORM_ELEMENT1:FILETYPE, FORM_ELEMENT2: FILE_DETAILS(pid)}
  url=ETMA_DOWNLOAD_URL_PATTERN(pid)
  #Download the file...
  r=s.post(url,data=payload)

  #...and treat it as a zipfile
  z = zipfile.ZipFile(io.BytesIO(r.content))
  #Save a bit more time for the user by unzipping it too...
  z.extractall(outdir)

#Here's the iterator...
for pid in pids:
  try:
    downloader(pid)
  except Exception:
    print('Failed for {}'.format(pid))

We can also grab the “student page” from the eTMA system and scrape it for the confirmation code. (On the to-do list: try to post the confirmation code back to OSCAR to authorise the upload of third marks, as well as auto-posting a list of marks and comments back.)

#Scraper for confirmation codes
def getConfirmationCode(pid):
  print('Getting confirmation code for {}'.format(pid))
  url=ETMA_STUDENT_PAGE(pid, ASSESSMENT_DETAILS)
  r=s.open(url)
  p=s.get_current_page()

  #scrapy bit - obfuscated: find the relevant elements and parse out the code
  elements=p.find(WHATEVER)
  confirmation_code, pid=SCRAPE(elements)
  #Return in [PI, code] order to match the dataframe columns used below
  return [pid, confirmation_code]

codes=pd.DataFrame()

for pid in pids:
  try:
    tmp=getConfirmationCode(pid)
    # Add data to dataframe...
    codes = pd.concat([codes, pd.DataFrame([tmp], columns=['PI','Code'])], ignore_index=True)
  except Exception:
    print('Failed for {}'.format(pid))

codes

So… yes, the systems don’t join up in the usual workflow, but it’s easy enough to hack together some glue as an end-user developed application: given that the systems are based on quite old-style HTML thinking, they are simple enough to scrape and treat as a de facto roll-your-own API.

Checking the time, putting the above code together has taken me pretty much as long as it has taken to write this post and generate the block diagram shown above.

With another hour, I could probably learn enough about the new plotly Dash package (like R/shiny for python?) to create a simple browser-based app UI for it.

Of course, this isn’t enterprise grade for a digital organisation, where everything is form/button/link/click easy, but it’s fine for a scruffy digital org where you appropriate what you need and string’n’glue’n’gaffer tape let you get stuff done (and also prototype, quickly and cheaply, things that may be useful, without spending weeks and months arguing over specs and font styles).

Indeed, it’s the digital equivalent of the workarounds all organisations have, where you know someone or something who can hack a process, or a form, or get you that piece of information you need, using some quirky bit of arcane knowledge, or hidden backchannel, that comes from familiarity with how the system actually works, rather than how people are told it is supposed to work. (I suspect this is not what folk mean when they are talking about a digital first organisation, though?!;-)

And if it’s not useful? Well it didn’t take that much time to try it to see if it would be…

Keep on Tuttling…;-)

PS the block diagram above was generated using an online service, blockdiag. Here’s the code (need to check: could I assign labels to a variable and use those to cut down repetition?):

blockdiag {
  A [label="Work Allocation"];
  B [label="eTMA System"];
  C [label="Student Record"];
  D [label="Download"];
  DD [label="Confirmation Code"];
  E [label="Student Record"];
  F [label="Download"];
  FF [label="Confirmation Code"];
  G [shape="dots"];
  H [label="Student Record"];
  I [label="Download"];
  II [label="Confirmation Code"];

  OSCAR -> A -> B;

  B -> C -> D;
  C -> DD;

  B -> E -> F;
  E -> FF;
  B -> G;

  B -> H -> I;
  H -> II;
}

Is that being digital? Is that being cloud? Is that being agile (e.g. in terms of supporting maintenance of the figure?)?

Innovation Starts At Home…?

Mention was made a couple of times last week, in the VC’s presentation to the OU, of the need to be more responsive in our curriculum design and course production. At the moment it can take a team of up to a dozen academics over two years to put an introductory course together, which is then intended to last, without significant change other than in the preparation of assessment material, for five years or more.

The new “agile” production process is currently being trialled with a new authoring tool, OpenCreate, that is currently available to a few select course teams as a partially complete “beta”. I think it is “cloud” based. And maybe also promoting the new “digital first” strategy. (I wonder how many letters in the KPMG ABC bingo card consulting product the OU paid for, and how much per letter? Note: A may also stand for “analytics”.)

I asked if I could have a play with the OpenCreate tool, such as it is, last week, but was told it was still in early testing (so a good time to be able to comment, then?) and so, “no”. (So instead, I went back to one of the issues I’d raised a few days ago on somebody else’s project on Github to continue helping with the testing of a feature suggestion. The suggestion has already been implemented and the issue is now closed as completed, making my life easier and hopefully improving the package too. Individuals know how to do agile. Organisations don’t. ;-))

So why would I want to play with OpenCreate now, while it’s still flaky? Partly because I suspect the team are working on a UI and have settled elements of the backend. For all the f**kwitted nonsense the consultants may have been spouting about agile, beta, cloud, digital solutions, any improvements are going to come from the way the users use the tools. And maybe workarounds they find. And by looking at how the thing works, I may be able to explore other bits of the UI design space, and maybe even bits of the output space…

Years ago, the OU moved to an XML authoring route, defining an XML schema (OU-XML) that could be used to repurpose content for multiple output formats (HTML, epub, docx). By the by, these are all standardised document formats, which means other people also build tooling around them. The OU-XML document was an internal standard, which meant only the OU developed tools for it. Or people we paid. I’m not sure if, or how much, Microsoft were paid to produce the OU’s custom authoring extensions for Word that would output OU-XML, for example… Another authoring route was an XML editor (currently oXygen, I believe). OU-XML also underpinned OpenLearn content.

That said, OU-XML was a standard, so it was in principle possible for people who had knowledge of it to author tools around it. I played with a few myself, though they never generated much interest internally.

  • generating mind maps from OU/OpenLearn structured authoring XML documents: these provided the overview of a whole course and could also be used as a navigation surface (revisited here and here). I made these sorts of mindmaps available as an additional asset in the T151 short course, but they were never officially recognised;
  • I then started treating a whole set of OU-XML documents *as a database*, which meant we could generate *ad hoc* courses on a particular topic by searching for keywords across OpenLearn courses and then returning a mindmap constructed around components in different courses, again displaying the result as a mindmap (Generating OpenLearn Navigation Mindmaps Automagically). Note this was all very crude and represented playtime. I’d have pushed it further if anyone internally had shown any interest in exploring this more widely;
  • I also started looking at ways of liberating assets and content, which meant we could perform OpenLearn Searches over Learning Outcomes and Glossary Items. That is, take all the learning outcomes from OpenLearn docs and search into them to find units with learning outcomes on a particular topic. Or provide a “metaglossary” generated (for free) from glossary terms introduced in all OpenLearn materials. Note that I *really* wanted to do this as a cross-OU course content demo, but as the OU has become more digital, access to content has become less open. (You used to be able to look at complete OU print course materials in academic libraries. Now you need a password to access the locked-down digital content; I suspect access expires for students after a period of time too; and it also means students can’t sell on their old course materials.)
  • viewing OU-XML documents as a structured database meant we could also asset strip OpenLearn for images, providing a search tool to look up images related to a particular topic. (Internally, we are encouraged to reuse previously created assets, but the discovery problem of helping authors find what previously created assets are available has never really been addressed; I’m not sure the OU Digital Archive is really geared up for this, either?)
  • we could also extract links from courses and use them to power a course-based custom search engine. This wasn’t very successful at the course level (not enough links), but might have been interesting across multiple courses;
  • a first proof-of-concept pass at a tool to export OU-XML documents from Google docs, so you could author documents using Google docs and then upload the result into the OU publishing system.

Something that has also been on my to-do list for a long time is a set of templates to convert Rmd (Rmarkdown) and Jupyter notebook ipynb documents to OU-XML.
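
By way of illustration, here’s a minimal sketch of what the ipynb side of such a template might look like. To be clear, the element names used here (Section, Paragraph, ProgramListing) are placeholders of my own invention, not the actual OU-XML schema, which would need checking:

#Minimal ipynb -> XML sketch; element names are placeholders,
#NOT the real OU-XML schema
import nbformat
from xml.etree.ElementTree import Element, SubElement, tostring

def ipynb_to_ouxml(fn):
    nb = nbformat.read(fn, as_version=4)
    root = Element('Section')
    for cell in nb.cells:
        if cell.cell_type == 'markdown':
            #A real converter would also need to parse the markdown
            #into inline markup rather than dump it as raw text
            SubElement(root, 'Paragraph').text = cell.source
        elif cell.cell_type == 'code':
            SubElement(root, 'ProgramListing').text = cell.source
    return tostring(root)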

So… if I could get to see the current beta OpenCreate tool, I might be able to see what document format authors were being encouraged to author into. I know folk often get the “woahh, too complicated…” feeling when reading OUseful.info blog posts*, but at the end of the day, whatever magic dreams folk have for using tech, it boils down to a few poor sods having to figure out how to do that using three things: code, document formats (which we might also view as data representations more generally) and transport mechanisms (things like http; and maybe we could also class things like database connections here). Transport moves stuff between stuff. Representations represent the stuff you want to move. Code lets you do stuff with the represented stuff, and also move it between other things that do black box transformations to it (for example, transforming it from one representation to another).

That’s it. (My computing colleagues might disagree. But they don’t know how to think about systems properly ;-)

If OpenCreate is a browser based authoring tool, the content stuff created by authors will be structured somehow, and possibly previewed somehow. There’ll also be a mechanism for posting the authored stuff into the OU backend.

If I know what (document) format the content is authored in, I can use that as a standard and develop my own demonstration authoring tools and routes around it on the input side. For example, a converter that converts Jupyter notebook, or Rmd, or Google docs authored content into that format.

If there is structure in the format (as there was in OU-XML), I can use that as a basis for exploring what might be done if we can treat the whole collection of OU authored course materials as a database and exploring what sorts of secondary products, or alternative ways of using that content, might be possible.

If the formats aren’t sorted yet, maybe my play would help identify minor tweaks that could make content more, or less, useful. (Of course, this might be a distraction.)

I might also be able to comment on the UI…

But is this likely to happen? Is it f**k, because the OU is an enterprise that’s sold corporate, enterprise IT thinking from muppets who only know “agile” (or is that “analytics”?), “beta”, “cloud” and “digital” as bingo terms that people pay handsomely for. And we don’t do any of them because nobody knows what they mean…

* So for example, in Pondering What “Digital First” and “University of the Cloud” Mean…, I mention things like “virtual machines” and “Docker” and servers and services. If you think that’s too technical, you know what you can do with your cloud briefings…

The OU was innovative because folk understood technologies of all sorts and made creative use of them. Many of our courses included emerging technologies that were examples of the technologies being taught in the courses. We ate the dogfood we were telling students about. Now we’ve put the dog down and just show students cat pictures given to us by consultants.

Programming, Coding & Digital Skills

I keep hearing myself in meetings talking about the “need” to get people coding, but that’s not really what I mean, and it immediately puts people off because I’m not sure they know what programming/coding is or what it’s useful for.

So here’s an example of the sort of thing I regularly do, pretty much naturally – automating simple tasks, a line or two at a time.

The problem was generating some data files containing weather data for several airports. I’d already got a pattern for the URL for the data file; now I just needed to find some airport codes (for airports in the capital cities of the BRICS countries) and grab the data into a separate file for each code:


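The gist embed has rotted, but the recipe was something like this minimal sketch; the URL pattern and the airport codes here are made-up placeholders, not the actual service I was using:

#Reconstruction sketch - URL pattern and airport codes are placeholders
import requests

#Placeholder airport codes for the BRICS capitals
airports = ['BSB', 'SVO', 'DEL', 'PEK', 'PRY']

#Placeholder pattern for the weather data file URL
url_pattern = 'http://example.com/weatherdata/{}.csv'

for code in airports:
    print('Fetching data for {}'.format(code))
    r = requests.get(url_pattern.format(code))
    #Write each airport's data into its own file
    with open('weather_{}.csv'.format(code), 'w') as f:
        f.write(r.text)
    print('Written weather_{}.csv'.format(code))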

In other words – figuring out what steps I need to do to solve a problem, then writing a line of code to do each step – often separately – looking at the output to check it’s what I expect, then using it as the input to the next step. (As you get more confident, you can start to bundle several lines together.)

The print statements are a bit overkill – I added them as commentary…

On its own, each line of code is quite simple. There are lots of high level packages out there to make powerful things happen with a single command. And there are lots of high level data representations that make it easier to work with particular things. pandas dataframes, for example, allow you to work naturally with the contents of a CSV data file or an Excel spreadsheet. And if you need to work with maps, there are packages to help with those too. So for example, as an afterthought I added a quick example to the notebook showing how to add markers for the airports to a map (I’m not sure if the map will render in the embed or the gist?). That code represents a recipe that can be copied and pasted and used with other datasets more or less directly.
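
For illustration, a minimal sketch of that map recipe using the folium package; the locations here are rough, made-up values:

#Drop markers for the airports onto a map - illustrative values only
import folium

#Approximate lat/lon pairs for a couple of the airports
airport_locations = {'BSB': (-15.87, -47.92), 'DEL': (28.57, 77.10)}

m = folium.Map(location=[10, 30], zoom_start=2)
for code, (lat, lng) in airport_locations.items():
    folium.Marker([lat, lng], popup=code).add_to(m)
m.save('airports.html')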

So when folk talk about programming and coding, I’m not sure what they mean by it. The way we teach it in computing departments sucks, because it doesn’t represent the sort of use case above: using a line of code at a time, each one a possible timesaver, to do something useful. Each line of code is a self-made tool to do a particular task.

Enterprise software development has different constraints to the above, of course, and more formalised methods for developing and deploying code. But the number of people who could make use of code – doing the sorts of things demonstrated in the example above – is far larger than the number of developers we’ll ever need. (If more folk could build their own single line tools, or work through tasks a line of code at a time, we may not need so many developers?)

So when it comes to talk of developing “digital skills” at scale, I think of the above example as being at the level we should be aspiring to. Scripting, rather than developer coding/programming (h/t @RossMackenzie for being the first to comment back with that mention). Because it’s in the reach of many people, and it allows them to start putting together their own single line code apps from the start, as well as developing more complex recipes, a line of code at a time.

And one of the reasons folk can become productive is because there are lots of helpful packages and examples of cribbable code out there. (Often, just one or two lines of code will fix the problem you can’t solve for yourself.)

Real programmers don’t write a million lines of code at a time – they often write a functional block – which may be just a line or a placeholder function – one block at a time. And whilst these single lines of code or simple blocks may combine to create a recipe that requires lots of steps, these are often organised in higher level functional blocks – which are themselves single steps at a higher level of abstraction. (How does the joke go? Recipe for world domination: step 1 – invade Poland etc.)

The problem solving process then becomes one of both top-down and bottom-up: what do I want to do, what are the high-level steps that would help me achieve that, and, within each of those, can I code it as a single line, or do I need to break the problem into smaller steps?

Knowing some of the libraries that exist out there can help in this problem solving / decomposing the problem process. For example, to get Excel data into a data structure, I don’t need to know how to open a file, read in a million lines of XML, parse the XML, figure out how to represent that as a data structure, etc. I use the pandas.read_excel() function and pass it a filename.
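
Which is to say, the whole open/parse/represent job collapses to something like this (the filename and sheet name are made up for illustration):

#One line to get spreadsheet data into a dataframe
#(filename and sheet name made up for illustration)
import pandas as pd

df = pd.read_excel('airport_data.xlsx', sheet_name='weather')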

If we want to start developing digital skills at scale, we need to get the initiatives out of the computing departments and into the technology departments, and science departments, and engineering departments, and humanities departments, and social science departments…

Academic Business Communications?

For several years, I’ve idly wondered whether the job ads of a particular company or institution provide some sort of evidence about the health of the organisation, its current strategy (in terms of long term appointments) and its tactics (short term appointments). Short term contract appointments might also reveal insights about current (or soon to be announced) projects, or even act as indicators that a project is in trouble (and hence requires more bodies throwing at it). Whatever…

Looking at appointments across a sector might also give us some sort of insight about the current concerns of the sector. Identifying bellwether or leader institutions that predict sector-wide concerns by regularly being first to advertise posts or roles that others then start to appoint to may provide some sort of insight as to the direction a sector is heading. Again, whatever.

Whilst I haven’t been tracking HE jobs in general, I do subscribe to the OU jobs feed (for a list of other UK HEIs with jobs related RSS/Atom syndication feeds, see this UK HEI autodiscoverable RSS feed directory).

My gut feeling from skimming this feed is that the OU has been appointing IT related jobs like crazy over the last year or so (read into that what you may; high churn, maybe? Or major IT restructuring?) and relatively few academic positions (from which we might conclude as observers either that the OU has a young/middle-aged academic workforce, or that managing the size of the academic body through natural wastage is the order of the day). I think Google Reader will have been archiving the feed, so I guess I could try to run some sort of analysis over it. But that’s as maybe…
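
(If I did get round to it, a first pass might look something like this sketch, using the feedparser package; the feed URL and the keyword list are placeholders:)

#Sketch: crude keyword counts over job ad titles in an RSS/Atom feed
#The feed URL and keyword list are placeholders
import feedparser
from collections import Counter

FEED_URL = ''

feed = feedparser.parse(FEED_URL)
counts = Counter()
for entry in feed.entries:
    for keyword in ['IT', 'Lecturer', 'Research', 'Manager']:
        if keyword.lower() in entry.title.lower():
            counts[keyword] += 1
print(counts)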

Anyway – today I spotted this ad: Strategic Communications Programme, Academic Reputation Manager, Communications (temporary contract for 24 months, £37,012-£44,116), reporting to the Head of Communications. Here’s the spec:

The post is formally based within The Open University Communications Unit, but the post holder will spend a significant amount of time working with academic staff and associate lecturers across the University’s seven faculties and two institutes, acting as a conduit for publicity, dissemination and impact across in the media and via the Universities’ advocates, students, alumni, staff and influential friends, making use of social media.

The post holder will report to the Head of Communications (Managing Editor) (and through him/her to the Director of Communications) and work closely with the Director of Marketing. There will be close working relationships with Communications colleagues in the Digital Engagement, Government Relations and External Affairs, Stakeholder and Media Relations teams. Specifically you will work closely with the Senior Manage Research Communications, to co-ordinate activity and avoid duplication. There are no line-management responsibilities associated with this post.

MAIN PURPOSE

• To lead and coordinate publicity activities across the University, ensuring an optimal and consistent approach is taken to maximise the dissemination and impact of our academic excellence.
• To raise external awareness of the profile and calibre of our academics and teaching staff with key target audiences.
• To raise internal awareness of the excellence and accomplishments of our academics and teaching staff across the OU’s faculties and institutes.
• To support the Director of Communications as an OU Ambassador in engaging on, and communicating, the OU Story.

MAIN RESPONSIBILITIES

• Develop and implement a new Academic Excellence Communications Strategy for the University based on a focused approach aimed at maximum impact on key opinion formers and decision-makers.
• Develop and maintain knowledge of key areas of OU academic excellence and publicise and disseminate news and information accordingly to target audiences, liaising with the media relations team as appropriate for high impact stories.
• Network across faculties, institutes and relevant service units to maximise news gathering, dissemination and impact.
• Commission and edit news stories for the bi-monthly staff enews and liaise with the PVC (Academic) to ensure that individual achievement is acknowledged with personal thanks and the best examples promoted to the Vice Chancellor for celebrating in his video addresses.
• Working with Digital Engagement, contribute to a pan-university approach to faculty and unit based research websites and social media activity.
• Manage publication of brochures and publicity materials for both web and print.
• Day–to-day quality control of all academic excellence materials, including academic excellence-related media releases and academic excellence elements of external and internal OU publications and websites.
• Contribute to the development of case studies for the OU’s Strategic Communications Programme focussed on acquiring new students and employer sponsors, including enhancing the impact of selected case studies in the run-up to submission.
• Support the development and implementation of stakeholder engagement/communications for specific high impact projects and initiatives.
• Create presentations on academic excellence for the PVC (Academic) and other senior staff and provide briefings and guidance for presentation opportunities.
• Manage high profile events aimed to raise the profile of key academics and the OU’s academic reputation as a whole, where there are significant communication opportunities (national workshops, international conferences, showcase events).
• Review academic staff web profiles and advise on raising the quality of these profiles for impact on external audiences such as potential students and the media.
• Work with Senior Manager Research Communications to develop the OU’s database of expertise as an effective means of maximising OU comment in the media (both proactively and in response to media enquiries).
• Contribute to development of the OU’s iTunes U and YouTube research portfolio.
• Liaise as appropriate with Digital Engagement, Open Broadcasting Unit and Marketing (e.g. for approval of advertisements).
• Coordinate academic excellence competition entries (e.g. for Times Higher awards)

OTHER GENERAL RESPONSIBILITIES:

• Understands and takes account of organisational aims and priorities to plan and set clear goals and deliver immediate and long term goals.

• Takes personal responsibility for effectively managing projects to achieve priorities, ensuring efficient use of resources to meet agreed delivery timescales and quality standards.

• Undertake such other duties as may be required from time-to-time by the Director of Communications, to build the reputation of the University.

ORGANISATIONAL RELATIONSHIPS:

• The post holder will be based in the Communications Unit but will also spend significant time working with colleagues across the OU faculties and institutes.

• The post holder will report to the Head of Communications (Managing Editor) (and through him/her to the Director of Communications) but will liaise closely with the Senior Manager, Research Communications, within the Communications Stakeholder Relations team..

• The post holder will work with other individuals, teams and units across the University where required.

So – profile building and celebration of academic achievements seem to be the order of the day, as well as getting OU comment into mainstream media? Thinking about OU content I share, most of it is generally on the basis of what I think is interesting, novel, “important”, quirky, or possibly of interest to one of the communities I believe I communicate into. But I don’t limit myself to sharing info about just OU activities… (The original naming of OUseful.info was inspired by a desire to share info that might be useful in an OU context, facing both outwards (linking to OU projects that were of interest) and inwards (bringing ideas from the outside that might contribute internally to the development of the OU mission).)

The job description doesn’t mention the REF, but work also appears to be being commissioned to support that bundle of laughs at a data management level – REF Publications Linked Data:

Key tasks will include:
– The review with others of the existing Research Publication Facility (RPF);
– Design and development of agreed enhancements and additions to the existing system;
– Delivery of an agreed programme of enhancement/development;
– Maintenance and user-support of the live RPF system;
– Direct liaison with users during the REF preparation period, to handle and progress queries and issues etc. as they arise;
– The postholder will also be expected to devise and introduce additional features to the RPF should they be identified as REF Preparations proceed – e.g. improved MI reporting for the REF Coordination Team and the Institutional Research Review Team (IRRT);
– Undertake such other duties as may be required from time-to-time by appointed line/project managers in support of REF preparations and related systems.

The use of linked data to support Research and Scholarship is an exciting field of research development in its own right, and part of the postholder’s role will be to work in association, as directed, with select colleagues in KMi, the PVC’s Office and elsewhere to identify other relevant opportunities for using linked data in support of the Research and Scholarship agenda, where this is considered appropriate and workload allows. The postholder’s primary responsibility however, will be direct support of the OU’s REF submission.

The job ad also mentions that the role “will include in particular the modelling and exposure as linked data of newly identified data not already covered by the current datasets, the constant maintenance and update of existing data. The Project Officer will in particular integrate a team working in collaboration with the Digital Engagement and the Open Media Units [the Open Broadcasting Unit, as was…] of the Open University to create linked databased tools and systems related to improving the discoverability of open educational resources”. From which we maybe learn the Digital Engagement Unit and the OMU are sponsoring the OU’s Linked Data effort? As for example further evidenced by this second Linked Data related job ad, Project Officer – Linked Data:

– linking and integrating information regarding the outcomes, impact and media appearance of research projects at the Open University;
– creating and making available new sets of data to support the connection between the Open University and external organisations;
– developing applications and tools to support the navigation in, search and reuse of content available at the Open University;
– improving how OU and external linked data is used by the OpenLearn website (open.edu/openlearn) to group relevant content and make recommendations to users;
– connect educational and research content with online services used by researchers and academics at the Open University;
– supporting the use of linked data in research projects;

A good example of what might be involved in that strand of work is suggested by the DiscOU (Discovering Open University Content from Other Online Resources) project.

Back on the jobs front, the Strategic Communications Programme is also appointing a couple of other positions at the moment – an Employer Engagement & Employability Manager “engag[ing] employers with the benefits of sponsoring staff on OU qualifications, and students with the impact an OU qualification can have on their career” and a Campaigns Manager (Social Media) “comfortable in the online and social media environment [who] will develop our reputation for thought leadership in areas of special interest to the University”. The Further Particulars for the Campaigns Manager go on:

Early priorities for the post will be to develop and implement the existing Social Media Content Strategy to respond to the needs of the Strategic Communications Programme (focussed on attracting more students and employers). In doing so the post holder will begin to develop The Open University’s place in public debate and position the University as a thought leader in areas of special concern. To do this, the post-holder will need to engage key academics and senior staff in the potential of social media as a tool to raise the profile for the University and themselves.

MAIN RESPONSIBILITIES

• Develop and implement a SCP Social Media Content Strategy for the University based on a focused approach aimed at maximum impact on prospective students, employer sponsors, key opinion formers and decision-makers.
• Develop and maintain knowledge of the OU’s areas of special concern, encourage debate, disseminate opinion and information accordingly to target audiences, liaising with the media relations team as appropriate for high-impact stories.
• Network across faculties, institutes and relevant service units to maximise engagement of relevant expertise and opinion gathering, to help you stimulate public debate, dissemination and impact.
• Contribute to the development and maintenance of the OU’s presence in Facebook, Twitter and LinkedIn to attract and inform target audiences.
• Working with Digital Engagement, contribute to a pan-university approach to social media activity.
• Work with the Senior Manager, Stakeholders and Ambassadors, on the development of our thought leadership event programme harnessing social media to increase our impact with this programme.
• Day-to-day quality control of all student and employer facing content in our primary Facebook, Twitter and LinkedIn presences.
• Work closely with the Senior Manager, Research Communications, to expand the reach of our impact case studies for the OU’s Research Excellence Framework submission.
• Support the development and implementation of stakeholder engagement/communications with key influencers in Social Media.
• Create and deliver presentations for staff training on the power of social media to help us strengthen our reputation for excellence and thought leadership, providing briefings and guidance for presentation opportunities.
• Work with academic staff to develop their social media profiles for impact on external audiences such as potential students and the media.
• Work with staff to optimize their text, audio, and video content and social media channels, evaluating existing content. Dependent on their abilities, this may include producing and editing digital content for them.
• Identify and disseminate digital content and social media best practices to the University community.
• Contribute to development of the OU’s iTunes U and YouTube portfolio and amplify the excellent content delivered into these environments.
• Liaise as appropriate with Digital Engagement, Open Media Unit and Marketing.
• Coordinate OU competition and league tables entries and amplify our success across social media and OU owned channels (e.g. for Times Higher awards)

Social media is definitely in-scope as a comms channel, then…?!

PS no time to go into them here, but I also notice ads for a Digital Campaign Manager, a Digital Marketing Director, and a Research and Analysis Manager, all within the Open University Worldwide Ltd Business Development Unit. Apparently, “[t]he Open University has ambitious plans to grow the number of students and associated revenues from overseas. As part of the Open University Worldwide (OUW), the Digital Marketing Director will be accountable for the marketing strategy and delivery of the marketing plan targeted at both new and existing B2C overseas markets, the highly influential Research and Analysis Manager role will be accountable for a range of activities from providing market, competitor and regulatory analysis to shape market strategy, through to producing insight and analysis of day to day performance, the Digital Campaign Manager will be responsible for the delivery of the marketing campaigns targeting B2C overseas markets.”

As to the sorts of skills these roles require:

• Exceptional understanding of all areas of online marketing, including SEO, SEM, social media and eCRM, acquisition, retention, display, affiliates & partnerships.
• Extensive experience of web management and analytics including knowledge of content management systems, content change process, knowledge of establishing web analytics and implementing measurement tools.
• Extensive experience of managing digital agencies.
• Excellent record of success delivering ROI via innovative online marketing campaigns.
• Proven analytical skills and the ability to drive insight from consumer and market data.
• Innovative approach and understanding of how to build a brand and create online communities.

So… that’s the business of the academy then?!

Guardian Telly on Google TV… Is the OU There, Yet?

A handful of posts across several Guardian blogs brought my attention to the Guardian’s new Google TV app (eg Guardian app for Google TV: an introduction (announcement), Developing the Google TV app in Beta (developer notes), The Guardian GoogleTV project, innovation & hacking (developer reflection)). Launched for the US, initially, “[i]t’s a new way to view [the Guardian’s] latest videos, headlines and photo galleries on a TV.”

The OU has had a demo Google TV app for several months now, courtesy of ex-of-the-OU, now of MetaBroadcast, Liam Green-Hughes: An HTML5 Leanback TV webapp that brings SPARQL to your living room:

@liamgh's OU leanback TV app demo

[Try the demo here: OU Google TV App [ demo ]]

Liam’s app is interesting for a couple of reasons: first, it demonstrates how to access data – and then content – from the OU’s open Linked Data store (in a similar way, the Guardian app draws on the Guardian Platform API, I think?); secondly, it demonstrates how to use the Google TV templates to put a TV app together.

(It’s maybe also worth noting that the Google TV wasn’t Liam’s first crack at OU-TV – he also put together a Boxee app way back when: Rising to the Boxee developer challenge with an Open University app.)

As well as video and audio based course materials, seminar/lecture recordings, and video shorts (such as The History of the English Language in Ten Animated Minutes series (I couldn’t quickly find a good OU link?)), the OU also co-produces broadcast video with both the BBC (now under the OU-BBC “sixth agreement”) and Channel 4 (e.g. The Secret Life of Buildings was an OU co-pro).

Many of the OU/BBC co-pro programmes have video clips available on BBC iPlayer via the corresponding BBC programme sites. (I generate a quite possibly incomplete list through this hack – Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer; here’s the current clips feed, and I really should redo this script in something like Scraperwiki… As far as I know, there’s no easy way of getting any sort of list of series codes/programme codes for OU/BBC co-pros, let alone an authoritative and complete one.) The OU also gets access to extra clips, which appear on programme related pages on one of the OpenLearn branded sites (OpenLearn), but again, there’s no easy way of navigating these clips, and, erm, no TV app to showcase them.

Admittedly, Google TV enabled TVs are still in the minority and internet TV is still to prove itself with large audiences. I’m not sure what the KPIs are around OU/BBC co-pros (or how much the OU gives the BBC each year in broadcast related activity?), but I can’t for the life of me understand why we aren’t engaging more actively in beta styled initiatives around second screen in particular, but also things like Google TV. (If you think of apps on internet TV platforms such as Google TV or Boxee as channels that you can programme linearly or as on-demand services, might it change folks’ attitude towards them?)

Note that I’m not thinking of apps for course delivery, necessarily… I’m thinking more of ways of making more of the broadcast spend, increasing its surface area/exposure, and (particularly in the case of second screen) enriching broadcast materials and providing additional academic/learning journey value. Second screen activity might also contribute to community development and brand enhancement through online social media engagement in an OU-owned and branded space parallel to the BBC space. Or it might not, of course…;-)

Of course, you might argue that this is all off-topic for the OU… but it isn’t if your focus is the OU’s broadcast activities, rather than formal education. If a fraction of the SocialLearn spend had gone on thinking about second screen applications, and maybe keeping Boxee/Google TV app development ticking over to see what insights it might bring about increasing engagement with broadcast materials, I also wonder if we might have started to think our way round to how second screen and leanback apps could also be used to support actual course delivery and drive innovation in that area?

PS two more things about the Guardian TV app announcement; firstly, it was brought to my attention through several different vectors (different blog subscriptions, Twitter); secondly, it introduced me to the Guardian beta minisite, which acts as an umbrella over/container for several of the Guardian blogs I follow… Now, where was the OU bloggers aggregated feed again? Planet OU wasn’t it? Another @liamgh initiative, I seem to remember…

PPS via a tweet from @barnstormed, I am reminded of something I keep meaning to blog about – OU Playlists on Youtube. For example, Digital Nepal or 60 Second Adventures in Thought, as well as The History of English in Ten Minutes. Given those playlists, one question might be: how might you build an app round them?!

PPPS via @paulbradshaw, it seems that the Guardian is increasingly into the content business, rather than just the news business: Guardian announces multimedia partnerships with prestigious arts institutions [doh! of course it is….!] In this case, “partnering with Glyndebourne, the Royal Opera House, The Young Vic, Art Angel and the Roundhouse the Guardian [to] offer all more arts multimedia content than ever before”. “Summits” such as the recent Changing Media Summit are also candidate content factory events (e.g. in the same way that TED, O’Reilly conference and music festival events generate content…)

OU on the Telly…

Ever since the Open University was founded, a relationship with the BBC has provided the OU with a route to broadcast through both television and radio. Some time ago, I posted a recipe for generating a page that showed current OU programmes on iPlayer (all rotted now…). Chatting to Liam last night, I started wondering about resurrecting this service, as well as pondering how I could easily begin to build up an archive of programme IDs for OU/BBC co-pros, so that whenever the fancy took me I could go to a current and comprehensive “OU on iPlayer” page and see what OU co-pro’d content was currently available to watch again.

Unfortunately, there doesn’t seem to be an obvious feed anywhere that gives access to this information, nor a simple directory page listing OU co-pros with links even to the parent series page or series identifier on the BBC site. (This would be lovely data to have in the OU’s open linked data store;-)

OU on the telly...

What caught my attention about this feed is that it’s focussed on growing audience around live broadcasts. This is fine if you’re tweeting added value* along with the live transmission and turning the programme into an event, but in general terms? I rarely watch live television any more, but I do watch a lot of iPlayer…

(* the Twitter commentary feed can then also be turned into expert commentary subtitles/captions, of course, using Martin Hawksey’s Twitter powered iPlayer subtitles recipe..)

There is also a “what’s on” feed available from OpenLearn (via a link – autodiscovery doesn’t seem to be enabled?), but it is rather horrible and it doesn’t contain BBC programme/series IDs (and I’m not sure the linked-to pages necessarily do so, either?)

OU openlearn whats on feed (broken)

So – what to do? In the short term, as far as my tinkering goes, nothing (holidays…:-) But I think with a nice feed available, we could make quite a nice little view over OU co-pro’d content currently on iPlayer, and also start to have a think about linking in expert commentary, as well as linking out to additional resources…

See also:
Augmenting OU/BBC Co-Pro Programme Data With Semantic Tags
Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer [this actually provides a crude recipe for getting access to OU/BBC programmes by bookmarking co-pro’d series pages on delicious…]

PS from @liamgh: “Just noticed that Wikipedia lists both BBC & OU as production co e.g. en.wikipedia.org/wiki/The_Virtu… RH Panel readable with dbpedia.” Interesting… so we should be able to pull down some OU/BBC co-pros with a query onto DBpedia…
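
Something like this sketch, maybe, using the SPARQLWrapper package; NB the dbo:company property is a guess at how the infobox “production company” field gets mapped, and would need checking against the actual DBpedia resource pages:

#Sketch: query DBpedia for things listing both the BBC and the OU
#as production company - dbo:company is an assumption to be checked
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper('http://dbpedia.org/sparql')
sparql.setQuery('''
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?prog WHERE {
  ?prog dbo:company dbr:BBC .
  ?prog dbo:company dbr:Open_University .
}
''')
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for result in results['results']['bindings']:
    print(result['prog']['value'])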

PPS also from Liam – a handy recipe for generating an HTML5 leanback UI for video content identified via a SPARQL query: An HTML5 Leanback TV webapp that brings SPARQL to your living room

So What’s Open Government Data Good For? Government and “Independent Advisers”, maybe?

Although I got an invite to today’s “Government Transparency: Opening Up Public Services” briefing, I didn’t manage to attend (though I’m rather wishing I had), but I did manage to keep up with what was happening through the #openuk hashtag commentary.

#openuk tweeps

It all kicked off with the Prime Minister’s Letter to Cabinet Ministers on transparency and open data, which sets out the roadmap for government data releases over the coming months in the areas of health, education, criminal justice, transport and public spending; it also sets the scene for the forthcoming Open Public Services White Paper (see also the public complement to that letter: David Cameron’s article in The Telegraph on transparency).

The Telegraph article suggests there will be a “profound impact” in four areas:

– First, it will enable choice, particularly for patients and parents. …
– Second, it will raise standards. All the evidence shows that when you give professionals information about other people’s performance, there’s a race to the top as they learn from the best. …
– Third, this information is going to help us mend our economy. To begin with, it’s going to save money. Already, the information we have published on public spending has rooted out waste, stopped unnecessary duplication and put the brakes on ever-expanding executive salaries. Combine that with this new information on the performance of our public services, and there will be even more pressure to get real value for taxpayers’ money.
– But transparency can help with the other side of the economic equation too – boosting enterprise. Estimates suggest the economic value of government data could be as much as £6 billion a year. Why? Because the possibilities for new business opportunities are endless. Imagine the innovations that could be created – the apps that provide up-to-date travel information; the websites that compare local school performance. But releasing all this data won’t just support new start-ups – it will benefit established industries too.

David Cameron’s article in The Telegraph on transparency

All good stuff… all good rhetoric. But what does that actually mean? What are people actually going to be able to do differently, Melody?

As far as I can tell, the main business models for making money on the web are:

sell the audience: the most obvious example of this is to sell adverts to the visitors of your site. The rate advertisers pay is dependent on the number of people who see the ads, and their specificity (different media attract different, possibly niche, audiences; if an audience is the one you’re particularly trying to target, you’re likely to pay more than you would for a general audience, in part because it means you don’t have to go out and find that focussed audience yourself). Another example is to sell information about the users of your site (for example, banks selling shopping data).

take a cut: so for example, take an affiliate fee, referral fee or booking fee for each transaction brokered through your site, or levy some other transaction cost.

Where data is involved, there is also the opportunity to analyse other people’s data and then sell analysis of that data back to the publishing organisations as consultancy. Or maybe use that data to commercial advantage in putting together tenders and approaches to public bodies?

When all’s said and done, though, the biggest potential is surely within government itself? By making data from one department or agency available, other departments or agencies will have easier access to it. Within departments and agencies too, open data has the potential to reduce friction and barriers to access, as well as opening up the very existence of data sets that may be being created in duplicate fashion across areas of government.

By consuming their own and each others’ open data, departments will also start to develop processes that improve the cleanliness and quality of data sets (for example, see Putting Public Open Data to Work…? and Open Data Processes – Taps, Query Paths/Audit Trails and Round Tripping; Library Location Data on data.gov.uk gives examples of how the same data can be released in several different (i.e. not immediately consistent) ways).

I’m more than familiar with the saying that “the most useful thing that can be done with your data will probably be done by someone else”, but if an organisation can’t find a way to make use of its own data, why should anyone else even try?! Especially if it means they have to go through the difficulty of cleaning the published data and preparing it for first use. By making use of open data as part of everyday government processes: a) we know the data’s good (hopefully!); b) cleanliness and inconsistency issues will be detected by the immediate publisher/user of the data; c) we know the data will have at least one user.

Finally, one other thing that concerns me is the extent to which “the public” want access to data in order to provide choice. As far as I can tell, choice is often the enemy of contentment; choice can sow the seeds of doubt and inner turmoil when to all intents and purposes there is no choice. I live on an island with a single hospital and not the most effective of rural transport systems. I’d guess the demographics of the island skew old and poor. So being able to “choose” a hospital with performance figures better than the local one for a given procedure is quite possibly no choice at all if I want visitors, or to be able to attend the hospital as an outpatient.

But that’s by the by: because the real issues are that the data that will be made available will in all likelihood be summary statistic data, which actually masks much of the information you’d need to make an informed decision; and if there is any meaningful intelligence in the data, or its summary statistics, you’ll need to know how to interpret the statistics, or even just read the pretty graphs, in order to take anything meaningful from them. And therein lies a public education issue…

Maybe then, there is a route to commercialisation of public facing public data? By telling people the data’s there for you to make the informed choice, the lack of knowledge about how to use that information effectively will open up (?!) a whole new sector of “independent advisers”: want to know how to choose a good school? Ask your local independent education adviser; they can pay for training on how to use the monolithic, more-stats-than-you-can-throw-a-distribution-at one-stop education data portal and charge you to help you decide which school is best for your child. Want comforting when you have to opt for treatment in a hospital that the league tables say are failing? Set up an appointment with your statistical counsellor, who can explain to you that actually things may not be so bad as you fear. And so on…

Graduate With Who (Whom?!;-), Exactly…?

Time was when the banks used to try to grab the dazed and confused, just-left-home school-leaver at Freshers’ Fair, sign ’em up for a bank account, and then be pretty sure that they’d stick with you for the rest of their hopefully profitable (to the bank) life. It’s the cloud companies that are doing the same now, of course:


(It’s a video of two halves…)

As the Official Google Blog puts it, in a post entitled “Graduate with Google Apps“:

[W]e’ve created the Google Guides program to help you take your Google Apps expertise to your future job. When you become a Google Guide, we’ll equip you with resources to introduce and implement Apps in your workplace. You’ll make an immediate impact by saving your company money and facilitating collaboration among coworkers. Once your company is up and running with Google Apps, you’ll get to continue using all the Apps tools you learned and loved in college—not to mention be known as your company’s in-house Google expert.

Compare this with something like:

“…..some commentators believe that the size of the increase of (student fee) contributions may well lead some disadvantaged students to question the benefit of a university education. I would urge you all to do everything within your powers to persuade disadvantaged students that a university education will offer them huge lifelong rewards”.
Sir Martin Harris, Director of Fair Access in “How to produce an Access Agreement for 2012-13”, OFFA, March 2011

[Pinched from an OU internal consultation document, “Widening participation in the future funding environment: An Access and Success Strategy? “, 201146_30614_o1.doc]

Seems to me that’s a case of: “once they graduate, your job is done…”

Now don’t get me wrong, I personally think the role of the educator should be to do themselves out of the structured, formal education job wherever possible, and help engender skills and enthusiasm for self-directed independent learning wherever possible, which is related to the idea of the job being done. But I also believe in network-mediated learning, where the network provides proximity to folk who can help meet your proximal development needs, and the lifelong subscription model can be part of that. (Network mediated proximal development can also play a part in the day-to-day offering within a traditional HE model, of course, as for example in the case of invisible/ubiquitous support and influential friends.)

But is the role of the university really to shut itself off from its graduates and turn them into fondly remembered alumni? How does that sit with the “knowledge transfer” remit, if we year-on-year stop talking to thousands of graduates who are supposedly entering a knowledge economy?

Regular/longstanding readers will know that I’ve previously blogged about the idea of Subscription Models for Lifelong Students in which the three year course proposition is just the start of a university’s relationship with a lifelong learner, rather than the whole extent of it, alumni dinners and requests for donations aside…

…and I’ve also advocated getting students signed up to course related web feeds (RSS/Atom feeds) during their courses, not only for the purpose of feeding course/topic related content to them (such as related news articles, as well as “teaching” materials), but also in the expectation that a fraction of the students will stay subscribed to the feed having finished the course, and will maintain a “learning”, or at least current awareness, relationship with the topic, maybe for years after. (This may or may not be appropriate in all subject areas; for subjects where continual professional updating is useful, such as health and technology, I think it is…)

What you may not know about is a “meme” that’s going around the OU at the moment relating to the idea of “M-World” and “Q-World” futures (would it be churlish to wonder whether we paid an external consulting firm a shedload for that observation?!;-0). The OU traditionally sold what we used to call courses and now call modules, 100-600 hour (10-60 credit point) units of study on a particular area. If you go to the OU course catalogue, that’s predominantly what you see, and what you hand your credit card over for. The OU offers degrees too, originally as an “open” degree but now increasingly as named degrees, which makes the original model of “take what you want when you want it” harder to justify for a programme that requires progression through a specified sequence of courses…

Despite trying – and failing – to find papers relating to the Q- and M-world strategies on the OU intranet (note to self: spider the intranet and write my own search engine), from the chats I’ve had there appears to have been no mention of the S-world, in which we sign up either Q- or M-world students, but see that only as the start of a relationship in which the institution provides micro-content to lifelong students who pay an annual subscription fee, as well as maybe selling the odd 10-30 point course over the lifetime of the relationship. That is, rather than sell “leisure learners” the occasional module (extreme M-world, I guess?) for a 10-30 week relationship, or degree seekers an undergraduate qualification over a 3-9 year period (Q-world), we look for a relationship that might last anywhere from 10 weeks to 50+ years.

Imagine, then, that you take a course, and sign up to the course feed when you do so. The course feed supplements the course material with items of relevance and interest to the course (contemporary/current related wider reading, for example, or something that might be drawn into conversation relating to the topics covered in the course). There is nothing so damning in the course feed that it can’t be made available to people outside the course, such as course alumni (people who never unsubscribed).

You finish the course and keep the feed, for free… (it might be keyed with a presentation code, so filtered content can be sent down it). Occasionally, you might get an advert appearing in the feed, announcing a new podcast on a relevant topic or a news release/research paper report from an OU academic working in the area. A mechanism for informed serendipitous discovery is in place.
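By way of illustration, here’s a minimal sketch of the sort of filtering a presentation-code keyed feed might support, using the feedparser library. The feed URL, the presentation code, and the convention of tagging each feed item with the presentation codes it relates to are all made-up assumptions for the purposes of the sketch, not a description of any actual OU feed:

#Filter a course feed by presentation code
#(feed URL, code value and item tagging convention are hypothetical)
!pip3 install feedparser

import feedparser

FEED_URL='' #hypothetical master course feed
PRESENTATION_CODE='' #hypothetical key, e.g. of the form '2011J'

d = feedparser.parse(FEED_URL)

#Keep items tagged with this presentation code, plus untagged "open" items
for entry in d.entries:
 tags={t.term for t in entry.get('tags',[])}
 if not tags or PRESENTATION_CODE in tags:
  print(entry.title, entry.link)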

For subscribers, the whole of the OU archive is open to search. Subscribers can treat the OU legacy course corpus as a learning and teaching styled encyclopedia. They can also get access to premium content. Another subscription tier may provide access to subscription content via the library (see e.g. Lessig At CERN: Scientific Knowledge Should Not Be Reserved For Academic Elite on why this is important). I don’t know if HE product development/curriculum innovation bods watch the quality dailies, but I keep seeing the Guardian and Observer promoting events that look appealing, for a fee; and they’re also developing various membership models, I think, too? (“An Audience With…”, or “Join the club, and get hammered with some of our hacks…”, or whatever…) But how many folk outside the university (indeed, within it?!) even know of the many free and open seminars and talks that take place on university campuses and in university buildings, let alone attend them?* (Cf. Keeping Up With Events (broken…?!).)

*If the answer is “not many”, that’s maybe in part a failing of the promoter…? For example, the last Cafe Scientifique event I attended (admittedly, nothing to do with a university) was very well attended, in part as a result of effective word-of-mouth promotion.

Now may or may not be a crunchtime for HE (I think the older institutions at least are pretty robust). But if no-one in HE is thinking about how higher level education can become everyday relevant and engage in effective knowledge transfer out of the institution and into the real world (err, Google Guides anyone?!) on a sustainable basis, maybe as part of a lifelong learning culture where the university plays an important role as both authority and hub (to use a bit of social network analysis jargon), then that part of the sector marketing itself as a purveyor of “graduate level skills” to a volume market (as opposed to propagating itself to an academic elite, which was the mainstay of the traditional academic model) deserves to go down. IMVHO, of course…

PS Another thought I’ve blogged/ranted many times before that’s come to mind again: how many universities make use of search data (from internal and incoming searches) as a provider of at least weak signals about curriculum offerings as perceived by potential students? Compare that with how many employ traditional market research companies to sound out the market about potential new courses. (If anyone has any (links to) good insights about how universities (can) do the curriculum innovation thing effectively, please post them in the comments;-)
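As a trivial sketch of the sort of weak-signal mining I have in mind – assuming a hypothetical export of a site search log with one query per row (the file name and column names are invented for illustration) – something like the following would surface both popular queries and queries the site couldn’t answer:

#Tally site search queries as weak signals of curriculum demand
#(file name and column names are hypothetical)
import pandas as pd

searches=pd.read_csv('site_search_log.csv')

#Most frequently searched-for terms overall
print(searches['query'].str.lower().value_counts().head(20))

#Queries returning no results - possible gaps in the course catalogue?
nohits=searches[searches['results_returned']==0]
print(nohits['query'].str.lower().value_counts().head(20))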

PPS Hmmm – seems like the idea of courses for alumni may not be such a good one: The Harvard “MOOCs for Alumni” Thing Parties Like It’s 1999. That said, would it work as part of a credential upgrade package?

Thinkses Around Open Course Accreditation

What do P2PU, the University of Mary Washington (UMW), and a joint venture between the National Research Council of Canada (Institute for Information Technology, Learning and Collaborative Technologies Group, PLE Project), the Technology Enhanced Knowledge Research Institute at Athabasca University, and the University of Prince Edward Island have in common? The answer is that they have either run, or are about to run, open online courses, at undergraduate level, for free, on the web.

In the case of P2PU and the Canadian joint venture, the courses were run without credit. At UMW, the DS106 Digital Storytelling course ran for the first time in 2010 as a for-credit course for registered UMW students, albeit largely in public. In 2011, it has run as a course with loose boundaries, open to all whilst at the same time providing a recognised course offering within UMW itself. In each case, the course duration was of the order of 10 weeks.

With HE in the UK going through a phase of soul-searching around the question of “where’s the money going to come from?”, it could be argued that we need to start doing some work around business model innovation. So here’s one of my starters for ten… (I have floated this internally, and no-one’s picked up on it, so I feel as if I’m not giving anything away by posting it here…)

The idea is simple: a recognised award offering body offers a module or course container that will allow participants in online courses to receive recognised academic credit points, based in part on their participation in an open, online course, and in part on their reflections about what they learned on the course.

What follows are initial (probably naive) thoughts on how it might work…

The module is inspired in part by the International Baccalaureate’s CAS (Creativity, Action, Service) component, as well as by HE level course modules developed to recognise work based or prior experiential learning; it provides a means by which paid-for assessment may be decoupled from course delivery. To try and address concerns, the proposal in the first instance is that the container be used to award credit to students who have freely participated in one of a number of recognised open educational units, for example from the OU’s OpenLearn website, or one or more courses offered by P2PU (subject to agreement).

OpenLearn Courses: participation in these courses is based on individual engagement with the course material, informally supported by one or more forums or social spaces open to all. This model allows us to explore the extent to which purely independent learning within a controlled open courseware context provides an appropriate basis for accredited independent study.

One or more OU Uncourses/Learning Journey Courses (or open, online courses run by academics in other institutions): a significant part of the original course material drafted for the Relevant Knowledge short course T151 Digital Worlds was authored over a 15-20 week period on a public blog hosted on wordpress.com. The materials posted combined elements of a personal learning diary, as the OU author explored the subject area, with learning devices borrowed from the OU’s tutorial-in-print style of writing (in-line exercises, self-reflection questions, and worked through tutorials, for example). By running one or more new “learning journey” courses – such as in areas where material is being drafted for fully fledged future OU courses, where material is timely (for example, in response to a BBC series or a short term skills gap (such as the opening up of data in central and local government)), or where there exists considerable vendor produced third party training material, albeit in a poorly structured form as far as course design goes (for example, Google tutorials around Google Apps, or Google Analytics, or the Yahoo User Interface libraries) – we can: i) pilot the open course container model; ii) create useful open resources “for the common good”; iii) draft course materials for possible formal (paid for) OU course offerings.

P2PU Courses: P2PU runs 10 week courses for small cohorts starting throughout the year. Learners engage with each other as well as the course resources and course instructors. Recognising participation in this sort of course allows us to explore the extent to which an open accreditation module can be used to recognise participation in semi-formal courses. Recognising participation with P2PU courses also provides an opportunity for the OU to develop ties to the Mozilla Foundation, who support P2PU and are keen to see it develop a range of semi-professional courses based around the open web and open software development.

How the Container Works

The container awards credit based on the fulfilment of several criteria:

– demonstration of engagement with, or participation in, a recognised open, online course; this requirement means we know that learners were at least exposed to a certain body of content we recognise;

– a reflective assessment component; this may take the form of a reflective essay, or a piece of project work arising from the course together with a critical review of that work;

– optionally, results from quizzes provided during the course. These not only demonstrate engagement with the course, but also provide some means of evidencing a particular level of attainment in particular topic areas through computer marked assessment.

In the first instance, accreditation is offered for independent study based on participation in one of a limited number of pre-identified open online courses. In this way, we could artificially limit the range of subject areas and course models engaged with by the initial batches of learners to a known set of approved courses. This approach allows us to mitigate the risks involved as we prove the model, and allows the course model to develop in a carefully controlled way.

The OpenLearn Context (2011I-2011L)

To a certain extent, the idea is based on a particular vision of how we might go about assessing participation in open online courses run outside the OU. However, I think it might also be used to provide a way in to formal study for students wishing to take formal OU awards based on prior engagement with OpenLearn materials.

By accrediting engagement with two OpenLearn based units derived from current Technology short course/Relevant Knowledge programme courses, we can compare achievement levels across formal and informal presentations of the material. For example, if material from Relevant Knowledge short courses in their final presentation is released to OpenLearn immediately prior to that final presentation, we can engage learners around course material that is concurrently being offered in a supported fashion as an officially recognised OU course through the VLE, and informally via OpenLearn. As such, we can explore the extent to which an open course container might: i) extend the life of a course; ii) provide alternative pathways to credit and assessment models for students interested in a particular topic area but not necessarily interested in “named credit” for a course.

The Uncourse/Learning Journey Context

As institutions such as the OU continue to innovate in the areas of informal and semi-formal education through OpenLearn and emerging practice in Digital Scholarship, the uncourse/learning journey, originally inspired, in part, by the notion of “misguided tours”, provides a framework for digital scholars to record their learning journey through a new subject area as a learning pathway that others might follow. By employing writing devices that are well proven in the delivery of “tutorial-in-print” style learning materials, the learning diary becomes a piece of instructional material in its own right. Through openly recording the learning journey, and ideally engaging with other learners interested in the topic area, the author should also remain free to negotiate the future direction of the learning journey (hence its declaration as an ‘uncourse’), and so discover a curriculum that fairly reflects the learning needs of its participants.

The P2PU Context

If, as seems likely, ad hoc open online courses continue to emerge as a consequence of: a) the increasing availability of high quality content that can be put to use as a learning resource, even if not originally designed as one; b) the growth in online social networks and an apparent desire and willingness for learners to come together and participate in semi-structured learning directed activity, there will be a growing market for recognising participation in such activities and acknowledging it in some way. Through recognising participation in P2PU courses in certain areas, it may be possible for HEIs to develop closer ties with the Mozilla Foundation and engage with open courses in areas complementary to formal offerings (e.g. in the OU’s case, the Web Certificate, Open Source Tools and Linux courses). Such engagement provides opportunities for using P2PU courses as a marketing channel similar to the way in which OpenLearn units may be used, as well as providing a continuing education context for alumni in areas where an institution may not provide courses. P2PU may also provide a slightly more structured context than is offered by the uncourse/learning journey model for the developmental testing of formal course materials as they are being developed for fully fledged distance online courses.

What’s in it for folk offering online courses?
An obvious argument against the above approach is that folk running courses may get upset that someone else is offering (for a fee) accreditation around their course materials. (I always thought non-commercial could be a Bad Thing ;-) However, a couple of benefits come to mind.

Firstly, the institution offering the accreditation may pay to advertise on the site offering the course. (Yes, I know this might seem as if it’s a way for an institution to essentially outsource its course production and delivery, and in a way it is… But if open courses take off, and if they offer educational benefit, and if there’s value in proving to someone else you have taken an open course, and if HEIs don’t start offering certification around open courses, then someone else will. Such as an organisation like Pearson…)

Secondly, by accepting that participation in a course can be used as partial fulfilment of the requirements for the receipt of formal academic credit, the award offering body reflects some of its authority back onto the course, showing that the course has something of educational value to offer.

Isn’t the Audience Limited?
Open educational courses aren’t for everyone; they require some element of motivation on the part of the learner, and they are often best followed in a social way. At times they may lack structure, focusing instead on resource investigation activities, which can be hard for learners who prefer very heavily structured courses with linear narratives and a “teacher” leading from the front. But if you want to develop skills and a model of learning that help you exploit the power of the web, then open courses may help you on your way…

Conclusion
Err, that’s it… ;-)

Related: Massive Open Online Courses – All You Need to Know…

The “Most Accessible Media Player on the Web”?

A couple of weeks ago, a couple of tweets (maybe RTs of @ODIgovuk?) alerted me to the release of the ODI (HMG’s Office for Disability Issues) (flash based – how accessible is that?!*) accessible media player (see the player), which “supports customisable subtitles, audio description, British Sign Language, downloadable PDF transcripts”. In support of its credentials, the host website sports the RNIB’s Surf Right badge of approval.

*Ah: “Built in Flash, it provides full keyboard support for people who can’t use a mouse, is extremely well labelled, making all of its functions clear to screen reader users, and for those who don’t have Flash in their browsers, provides the same information in alternative formats.”

On hearing the announcement, my first thought was to ask whether or not anyone had let JISC know, for example by sending a press release at least to JISC TechDis, JISC’s advisory service on technology and inclusion, but I think that was probably taking the hope of joined-up-ness a step too far…;-)

I’ve not had a chance to play with the ODI player properly yet, nor am I sure what I should look for in terms of what makes this a properly accessible media player, so if you know of a review anywhere, please add a link in the comments…

Anyway, today, at the bottom of a post (Are waterfalls agile?) from Will Woods, one of the few “in-the-loop” folk in the OU who blogs hints, at least, about planned OU tech innovation, I saw this:

We’ve just started a venture to develop the OU Media Player for example which is going to create ‘the world’s most accessible media player’. It’s built using existing services but we’ll add the value to make it provide captioning and accessibility services and to link to all OU media materials on a variety of platforms including the VLE. This is a very small team working over the next five months in an agile way. I’ve got 100% confidence in its success because it’s a great team, everyone understands how important it is to the OU and they’re being given the freedom to build it iteratively, creatively and well, i.e. serving the OU’s mission in being “open and accessible”.

Useful… But just out of interest, how will it differ from the ODI’s “most accessible player available on the web”, and is there a critique from the OU perspective of the ODI player available anywhere?

PS Will’s post also picks up on a recent Computer Weekly interview with new OU CIO David Matthewman, which I don’t think I’ve blogged a link to before, but which is worth a read. No mention of a skunkworks team I heard mooted at one point, though… unlike the “right to skunkworks” teams that the Cabinet Office seems keen to chase: Cabinet Office Structural Reform Plan Monthly Implementation Update, January 2011 [pdf]: “1.12(v) Announce new open standards and procurement rules for ICT, including right for skunk works to be involved prior to launch of procurement [Not complete]”

PPS This post got me thinking, trivially, about the accessibility of the iPad: my gut reaction was that the lack of tactile feedback might make it inaccessible… Apparently not – see for example the RNIB media release when the iPad first came out. Folks in IET at the OU apparently also did a favourable evaluation review (hence (as @liamgh put it) the OU mention in this iPad advert?), though I’m not sure it’s on the web anywhere… (Thinks: with the OU taking on various national roles, might the release/publication of accessibility reviews (aka accessibility consumer reports?) relating to new technology, particularly in an educational context, be an appropriate Big Society kind of thing to do?;-)