
Rolling Your Own IT – Automating Multiple File Downloads

Yesterday, I caught up with a video briefing on Transforming IT from the OU’s Director of IT, recorded earlier this year (OU internal link which, being on Sharepoint, needs Microsoft authentication rather than OU single sign on?).

The video, in part, describes the 20 year history of some of the OU’s teaching related software services, which tended to be introduced piecemeal and which aren’t necessarily as integrated as they could be…

In the post Decision Support for Third Marking Significant Difference Double Marked Assessments, I mentioned part of the OU process for managing third marking.

Guidance provided for collecting scripts for third marking is something like this:

The original markers’ scores and feedback will be visible in OSCAR.

Electronically submitted scripts can be accessed in the eTMA system via this link: …

Please note the scripts can only be accessed via the EAB/admin tab in the eTMA system ensuring you add the relevant module code and student PI.

[My emphasis.]

Hmmm… OSCAR is accessed via a browser, and supports “app internal” links that display the overall work allocation, a table listing the students, along with their PIs, and links to various data views including the first and second marks table referred to in the post mentioned above.

The front end to the eTMA system is a web form that requests a course code and student PI, which then launches another web page listing the student’s submitted files, a confirmation code that needs to be entered in OSCAR to let you add third marks, and a web form that requires you to select a file download type from a drop down list with a single option, plus a button to download the zipped student files.

So that’s two web things…

To download multiple student files requires a process something like this:

So why not just have something on the OSCAR work allocation page that lets you select – or select all – the students, and download all the files or get all the confirmation codes?

Thinks… I could do that, sort of, over coffee… (I’ve tried to obfuscate details while leaving in place the general bits of code that could be reused elsewhere…)

First up, we need to login and get authenticated:

#Login
!pip3 install MechanicalSoup

import mechanicalsoup
import pandas as pd

USERNAME=''
PASSWORD=''
LOGIN_URL=''
FORM_ID='#'  #CSS selector for the login form, e.g. '#loginForm'
USERNAME_FIELD=''  #name of the username input in the form
PASSWORD_FIELD=''  #name of the password input in the form

def getSession():
    #Start a stateful session that holds on to cookies across requests
    browser = mechanicalsoup.StatefulBrowser()
    browser.open(LOGIN_URL)
    browser.select_form(FORM_ID)
    #Fill in and submit the login form
    browser[USERNAME_FIELD] = USERNAME
    browser[PASSWORD_FIELD] = PASSWORD
    browser.submit_selected()
    return browser

s=getSession()

Now we need a list of PIs. We could scrape these from OSCAR, but that’s a couple of extra steps, so it’s easier just to copy and paste the table from the web page for now:

#Get student PIs - copy and paste table from OSCAR for now
#Each pasted row looks like: CODE\tPI NAME\tMARKING_TYPE\tSTATUS

txt='''
CODE\tPI NAME\tMARKING_TYPE\tSTATUS
...
CODE\tPI NAME\tMARKING_TYPE\tSTATUS
'''

#Put that data into a pandas dataframe then pull out the PIs
from io import StringIO

df=pd.read_csv(StringIO(txt),sep='\t',header=None)
#The second column holds "PI NAME" pairs - split on the space and keep the PI
pids=[i[0] for i in df[1].str.split()]

We now have a list of student PIs, which we can iterate through to download the relevant files:

#Download the zip file for each student
import zipfile, io

def downloader(pid, outdir='etmafiles'):
  print('Downloading assessment for {}'.format(pid))
  #Make sure the download directory exists
  !mkdir -p {outdir}
  #Form values identifying the file type and the student's files
  payload = {FORM_ELEMENT1:FILETYPE, FORM_ELEMENT2: FILE_DETAILS(pid)}
  url=ETMA_DOWNLOAD_URL_PATTERN(pid)
  #Download the file...
  r=s.post(url,data=payload)

  #...and treat it as a zipfile
  z = zipfile.ZipFile(io.BytesIO(r.content))
  #Save a bit more time for the user by unzipping it too...
  z.extractall(outdir)

#Here's the iterator...
for pid in pids:
  try:
    downloader(pid)
  except Exception:
    print('Failed for {}'.format(pid))

We can also grab the “student page” from the eTMA system and scrape it for the confirmation code. (On the to do list: try to post the confirmation code back to OSCAR to authorise the upload of third marks, as well as auto-posting a list of marks and comments back.)

#Scraper for confirmation codes
def getConfirmationCode(pid):
  print('Getting confirmation code for {}'.format(pid))
  url=ETMA_STUDENT_PAGE(pid, ASSESSMENT_DETAILS)
  #Load the student page, then grab the parsed HTML
  s.open(url)
  p=s.get_current_page()

  #scrapy bit
  elements=p.find(WHATEVER)
  confirmation_code, pid=SCRAPE(elements)
  return [confirmation_code, pid]

codes=pd.DataFrame()

for pid in pids:
  try:
    tmp=getConfirmationCode(pid)
    # Add data to dataframe...
    codes = pd.concat([codes, pd.DataFrame([tmp], columns=['PI','Code'])])
  except Exception:
    print('Failed for {}'.format(pid))

codes
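And because the codes end up in a dataframe, saving them off for later reference is a one liner (the filename is made up):

#Keep a local copy of the confirmation codes
codes.to_csv('confirmation_codes.csv', index=False)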

So… yes, the systems don’t join up in the usual workflow, but it’s easy enough to hack together some glue as an end-user developed application: given that the systems are based on quite old-style HTML thinking, they are simple enough to scrape and treat as a de facto roll-your-own API.

Checking the time, it has taken me pretty much as long to put the above code together as it has taken to write this post and generate the block diagram shown above.

With another hour, I could probably learn enough about the new plotly Dash package (like R/shiny for python?) to create a simple browser-based app UI for it.
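For what it’s worth, here’s an untested sketch of the shape such an app might take – it reuses the pids list and downloader() function from above, and assumes Dash’s current API:

#A minimal Dash UI sketch - pick a student, click, download
from dash import Dash, dcc, html, Input, Output, State

app = Dash(__name__)

app.layout = html.Div([
    html.H3('eTMA downloader'),
    #Offer the scraped student PIs in a drop down list
    dcc.Dropdown(id='pid', options=[{'label': p, 'value': p} for p in pids]),
    html.Button('Download', id='go', n_clicks=0),
    html.Div(id='status')
])

@app.callback(Output('status', 'children'),
              Input('go', 'n_clicks'),
              State('pid', 'value'))
def do_download(n_clicks, pid):
    #Only fire the downloader once a student is picked and the button clicked
    if n_clicks and pid:
        downloader(pid)
        return 'Downloaded files for {}'.format(pid)
    return 'Select a student, then click Download'

app.run(debug=True)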

Of course, this isn’t enterprise grade for a digital organisation, where everything is form/button/link/click easy, but it’s fine for a scruffy digital org where you appropriate what you need and string’n’glue’n’gaffer tape let you get stuff done (and also prototype, quickly and cheaply, things that may be useful, without spending weeks and months arguing over specs and font styles).

Indeed, it’s the digital equivalent of the workarounds all organisations have, where you know someone who can hack a process, or a form, or get you that piece of information you need, using some quirky bit of arcane knowledge, or hidden backchannel, that comes from familiarity with how the system actually works, rather than how people are told it is supposed to work. (I suspect this is not what folk mean when they are talking about a digital first organisation, though?!;-)

And if it’s not useful? Well it didn’t take that much time to try it to see if it would be…

Keep on Tuttling…;-)

PS the block diagram above was generated using an online service, blockdiag. Here’s the code (need to check: could I assign labels to a variable and use those to cut down repetition?):

blockdiag {
  A [label="Work Allocation"];
  B [label="eTMA System"];
  C [label="Student Record"];
  D [label="Download"];
  DD [label="Confirmation Code"];
  E [label="Student Record"];
  F [label="Download"];
  FF [label="Confirmation Code"];
  G [shape="dots"];
  H [label="Student Record"];
  I [label="Download"];
  II [label="Confirmation Code"];

  OSCAR -> A -> B;

  B -> C -> D;
  C -> DD;

  B -> E -> F;
  E -> FF;
  B -> G;

  B -> H -> I;
  H -> II;
}

Is that being digital? Is that being cloud? Is that being agile (e.g. in terms of supporting maintenance of the figure?)?

Innovation Starts At Home…?

Mention was made a couple of times last week in the VC’s presentation to the OU about the need to be more responsive in our curriculum design and course production. At the moment it can take a team of up to a dozen academics over two years to put an introductory course together, that is then intended to last, without significant change, other than in the preparation of assessment material, for five years or more.

The new “agile” production process is currently being trialled using a new authoring tool, OpenCreate, available to a few select course teams as a partially complete “beta”. I think it is “cloud” based. And maybe it’s also promoting the new “digital first” strategy. (I wonder how many letters in the KPMG ABC bingo card consulting product the OU paid for, and how much per letter? Note: A may also stand for “analytics”.)

I asked if I could have a play with the OpenCreate tool, such as it is, last week, but was told it was still in early testing (so a good time to be able to comment, then?) and so, “no”. (So instead, I went back to one of the issues I’d raised a few days ago on somebody else’s project on Github, to continue helping with the testing of a feature suggestion; the suggestion has already been implemented and the issue closed as completed, making my life easier and hopefully improving the package too. Individuals know how to do agile. Organisations don’t. ;-))

So why would I want to play with OpenCreate now, while it’s still flaky? Partly because I suspect the team are working on a UI and have settled elements of the backend. For all the f**kwitted nonsense the consultants may have been spouting about agile, beta, cloud, digital solutions, any improvements are going to come from the way the users use the tools. And maybe the workarounds they find. And by looking at how the thing works, I may be able to explore other bits of the UI design space, and maybe even bits of the output space…

Years ago, the OU moved to an XML authoring route, defining an XML schema (OU-XML) that could be used to repurpose content for multiple output formats (HTML, epub, Word docx). By the by, these are all standardised document formats, which means other people also build tooling around them. The OU-XML document format was an internal standard. Which meant only the OU developed tools for it. Or people we paid. I’m not sure if, or how much, Microsoft were paid to produce the OU’s custom authoring extensions for Word that would output OU-XML, for example… Another authoring route was an XML editor (currently oXygen, I believe). OU-XML also underpinned OpenLearn content.

That said, OU-XML was a standard, so it was in principle possible for people who had knowledge of it to author tools around it. I played with a few myself, though they never generated much interest internally.

  • generating mind maps from OU/OpenLearn structured authoring XML documents: these provided the overview of a whole course and could also be used as a navigation surface (revisited here and here); I made these sorts of mindmaps available as an additional asset in the T151 short course, but they were never officially recognised;
  • I then started treating a whole set of OU-XML documents *as a database*, which meant we could generate *ad hoc* courses on a particular topic by searching for keywords across OpenLearn courses and then returning a mindmap constructed around matching components from different courses (Generating OpenLearn Navigation Mindmaps Automagically). Note this was all very crude and represented playtime. I’d have pushed it further if anyone internally had shown any interest in exploring this more widely.
  • I also started looking at ways of liberating assets and content, which meant we could perform OpenLearn searches over learning outcomes and glossary items. That is, take all the learning outcomes from OpenLearn docs and search into them to find units with learning outcomes on a particular topic. Or provide a “metaglossary” generated (for free) from glossary terms introduced across all OpenLearn materials. Note that I *really* wanted to do this as a cross-OU course content demo, but as the OU has become more digital, access to content has become less open. (You used to be able to look at complete OU print course materials in academic libraries. Now you need a password to access the locked down digital content; I suspect access expires for students after a period of time too; and it also means students can’t sell on their old course materials);
  • viewing OU-XML documents as a structured database meant we could also asset strip OpenLearn for images, providing a search tool to look up images related to a particular topic. (Internally, we are encouraged to reuse previously created assets, but the discovery problem – helping authors discover what previously created assets are available – has never really been addressed; I’m not sure the OU Digital Archive is really geared up for this, either?)
  • we could also extract links from courses and use them to power a course-based custom search engine. This wasn’t very successful at the course level (not enough links), but might have been more interesting across multiple courses;
  • a first proof of concept pass at a tool to export OU-XML documents from Google docs, so you could author documents using Google docs and then upload the result into the OU publishing system.

Something else that has been on my to do list for a long time is a set of templates to convert Rmd (Rmarkdown) and Jupyter notebook (ipynb) documents to OU-XML.
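I haven’t written those yet, but a crude sketch of the notebook side of the idea might look something like this – nbformat does the reading, and the XML element names here are illustrative placeholders rather than actual OU-XML:

#Sketch: map notebook cells onto simple XML-tagged chunks
#The element names are made up, NOT the real OU-XML schema
import nbformat
from xml.sax.saxutils import escape

nb = nbformat.read('notebook.ipynb', as_version=4)

chunks = []
for cell in nb.cells:
    if cell.cell_type == 'markdown':
        #A real template would need to convert the markdown to structured XML
        chunks.append('<Paragraph>{}</Paragraph>'.format(escape(cell.source)))
    elif cell.cell_type == 'code':
        chunks.append('<ProgramListing>{}</ProgramListing>'.format(escape(cell.source)))

with open('notebook.xml', 'w') as f:
    f.write('<Session>\n{}\n</Session>'.format('\n'.join(chunks)))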

So… if I could get to see the current beta OpenCreate tool, I might be able to see what document format authors were being encouraged to author into. I know folk often get the “woahh, too complicated…” feeling when reading OUseful.info blog posts*, but at the end of the day, whatever magic dreams folk have for using tech, it boils down to a few poor sods having to figure out how to do that using three things: code, document formats (which we might also view as data representations more generally) and transport mechanisms (things like http; and maybe we could also class things like database connections here). Transport moves stuff between stuff. Representations represent the stuff you want to move. Code lets you do stuff with the represented stuff, and also move it between other things that do black box transformations to it (for example, transforming it from one representation to another).

That’s it. (My computing colleagues might disagree. But they don’t know how to think about systems properly ;-)

If OpenCreate is a browser based authoring tool, the content stuff created by authors will be structured somehow, and possibly previewed somehow. There’ll also be a mechanism for posting the authored stuff into the OU backend.

If I know what (document) format the content is authored in, I can use that as a standard and develop my own demonstration authoring tools and routes around it on the input side. For example, a converter that converts Jupyter notebook, or Rmd, or Google docs authored content into that format.

If there is structure in the format (as there was in OU-XML), I can use that as a basis for exploring what might be done if we can treat the whole collection of OU authored course materials as a database and exploring what sorts of secondary products, or alternative ways of using that content, might be possible.

If the formats aren’t sorted yet, maybe my play would help identify minor tweaks that could make content more, or less, useful. (Of course, this might be a distraction.)

I might also be able to comment on the UI…

But is this likely to happen? Is it f**k, because the OU is an enterprise that’s sold corporate, enterprise IT thinking from muppets who only know “agile” (or is that “analytics”?), “beta”, “cloud” and “digital” as bingo terms that people pay handsomely for. And we don’t do any of them because nobody knows what they mean…

* So for example, in Pondering What “Digital First” and “University of the Cloud” Mean…, I mention things like “virtual machines” and “Docker” and servers and services. If you think that’s too technical, you know what you can do with your cloud briefings…

The OU was innovative because folk understood technologies of all sorts and made creative use of them. Many of our courses included emerging technologies that were examples of the technologies being taught in the courses. We ate the dogfood we were telling students about. Now we’ve put the dog down and just show students cat pictures given to us by consultants.

Programming, Coding & Digital Skills

I keep hearing myself in meetings talking about the “need” to get people coding, but that’s not really what I mean, and it immediately puts people off because I’m not sure they know what programming/coding is or what it’s useful for.

So here’s an example of the sort of thing I regularly do, pretty much naturally – automating simple tasks, a line or two at a time.

The problem was generating some data files containing weather data for several airports. I’d already got a pattern for the URL for the data file; now I just needed to find some airport codes (for airports in the capital cities of the BRICS countries) and grab the data into a separate file for each code:
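The original code was shared as a gist, not reproduced here, but a minimal sketch of the pattern might look like this – the URL pattern is a placeholder, and the ICAO airport codes are illustrative:

#Sketch of the pattern, not the original gist
#URL_PATTERN is a placeholder; the real one pointed at a weather data service
import requests

URL_PATTERN = 'https://example.com/weather/{code}.csv'

#Airport codes near BRICS capital cities (illustrative ICAO codes)
codes = ['SBBR', 'UUEE', 'VIDP', 'ZBAA', 'FAOR']

for code in codes:
    print('Fetching data for {}'.format(code))
    r = requests.get(URL_PATTERN.format(code=code))
    #Write each airport's data to its own file
    with open('weather_{}.csv'.format(code), 'w') as f:
        f.write(r.text)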

In other words – figuring out what steps I need to do to solve a problem, then writing a line of code to do each step – often separately – looking at the output to check it’s what I expect, then using it as the input to the next step. (As you get more confident, you can start to bundle several lines together.)

The print statements are a bit overkill – I added them as commentary…

On its own, each line of code is quite simple. There are lots of high level packages out there to make powerful things happen with a single command. And there are lots of high level data representations that make it easier to work with particular things. pandas dataframes, for example, allow you to work naturally with the contents of a CSV data file or an Excel spreadsheet. And if you need to work with maps, there are packages to help with those too. (So for example, as an afterthought I added a quick example to the notebook showing how to add markers for the airports to a map… I’m not sure if the map will render in the embed or the gist?) That code represents a recipe that can be copied and pasted and used with other datasets more or less directly.
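That quick example isn’t embedded here, but the gist of it is only a few lines; this sketch assumes the folium package (I don’t recall which mapping package the notebook actually used) and a couple of made-up coordinates:

#Drop a marker on a map for each airport
#folium assumed; locations illustrative
import folium

airports = {'SBBR': (-15.87, -47.92), 'ZBAA': (40.08, 116.58)}

m = folium.Map(location=[20, 60], zoom_start=2)
for code, (lat, lng) in airports.items():
    folium.Marker(location=[lat, lng], popup=code).add_to(m)
m.save('airports.html')  #or just display m in a notebook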

So when folk talk about programming and coding, I’m not sure what they mean by it. The way we teach it in computing departments sucks, because it doesn’t represent the sort of use case above: using a line of code at a time, each one a possible timesaver, to do something useful. Each line of code is a self-made tool to do a particular task.

Enterprise software development has different constraints to the above, of course, and more formalised methods for developing and deploying code. But the number of people who could make use of code – doing the sorts of things demonstrated in the example above – is far larger than the number of developers we’ll ever need. (If more folk could build their own single line tools, or work through tasks a line of code at a time, we may not need so many developers?)

So when it comes to talk of developing “digital skills” at scale, I think of the above example as being at the level we should be aspiring to. Scripting, rather than developer coding/programming (h/t @RossMackenzie for being the first to comment back with that mention). Because it’s within the reach of many people, and it allows them to start putting together their own single line code apps from the start, as well as developing more complex recipes, a line of code at a time.

And one of the reasons folk can become productive is because there are lots of helpful packages and examples of cribbable code out there. (Often, just one or two lines of code will fix the problem you can’t solve for yourself.)

Real programmers don’t write a million lines of code at a time – they often write a functional block – which may be just a line or a placeholder function – one block at a time. And whilst these single lines of code or simple blocks may combine to create a recipe that requires lots of steps, these are often organised in higher level functional blocks – which are themselves single steps at a higher level of abstraction. (How does the joke go? Recipe for world domination: step 1 – invade Poland etc.)

The problem solving process then becomes one of both top-down and bottom-up: what do I want to do; what are the high-level steps that would help me achieve that; and within each of those, can I code it as a single line, or do I need to break the problem into smaller steps?

Knowing some of the libraries that exist out there can help in this problem solving / decomposing the problem process. For example, to get Excel data into a data structure, I don’t need to know how to open a file, read in a million lines of XML, parse the XML, figure out how to represent that as a data structure, etc. I use the pandas.read_excel() function and pass it a filename.
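Which is to say, something like this (the filename is hypothetical):

import pandas as pd

#One call gets the spreadsheet contents into a dataframe;
#the file unzipping and XML parsing all happen behind the scenes
df = pd.read_excel('airport_weather.xlsx')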

If we want to start developing digital skills at scale, we need to get the initiatives out of the computing departments and into the technology departments, and science departments, and engineering departments, and humanities departments, and social science departments…

Academic Business Communications?

For several years, I’ve idly wondered whether the job ads of a particular company or institution provide some sort of evidence about the health of the organisation, its current strategy (in terms of long term appointments) and its tactics (short term appointments). Short term contract appointments might also reveal insights about current (or soon to be announced) projects, or even act as indicators that a project is in trouble (and hence requires more bodies being thrown at it). Whatever…

Looking at appointments across a sector might also give us some sort of insight into the current concerns of that sector. Identifying bellwether or leader institutions – ones that signal sector wide concerns by regularly being the first to advertise posts or roles that others then start to appoint to – may provide some sort of insight as to the direction a sector is heading. Again, whatever.

Whilst I haven’t been tracking HE jobs in general, I do subscribe to the OU jobs feed (for a list of other UK HEIs with jobs related RSS/Atom syndication feeds, see this UK HEI autodiscoverable RSS feed directory).

My gut feeling from skimming this feed is that the OU has been appointing IT related jobs like crazy over the last year or so (read into that what you may; high churn maybe? Or major IT restructuring?) and relatively few academic positions (from which we might conclude, as observers, either that the OU has a young/middle aged academic workforce, or that managing the size of the academic body through natural wastage is the order of the day). I think Google Reader will have been archiving the feed, so I guess I could try to run some sort of analysis over it. But that’s as maybe…

Anyway – today I spotted this ad: Strategic Communications Programme, Academic Reputation Manager, Communications (temporary contract for 24 months, £37,012-£44,116), reporting to the Head of Communications. Here’s the spec:

The post is formally based within The Open University Communications Unit, but the post holder will spend a significant amount of time working with academic staff and associate lecturers across the University’s seven faculties and two institutes, acting as a conduit for publicity, dissemination and impact across in the media and via the Universities’ advocates, students, alumni, staff and influential friends, making use of social media.

The post holder will report to the Head of Communications (Managing Editor) (and through him/her to the Director of Communications) and work closely with the Director of Marketing. There will be close working relationships with Communications colleagues in the Digital Engagement, Government Relations and External Affairs, Stakeholder and Media Relations teams. Specifically you will work closely with the Senior Manage Research Communications, to co-ordinate activity and avoid duplication. There are no line-management responsibilities associated with this post.

MAIN PURPOSE

• To lead and coordinate publicity activities across the University, ensuring an optimal and consistent approach is taken to maximise the dissemination and impact of our academic excellence.
• To raise external awareness of the profile and calibre of our academics and teaching staff with key target audiences.
• To raise internal awareness of the excellence and accomplishments of our academics and teaching staff across the OU’s faculties and institutes.
• To support the Director of Communications as an OU Ambassador in engaging on, and communicating, the OU Story.

MAIN RESPONSIBILITIES

• Develop and implement a new Academic Excellence Communications Strategy for the University based on a focused approach aimed at maximum impact on key opinion formers and decision-makers.
• Develop and maintain knowledge of key areas of OU academic excellence and publicise and disseminate news and information accordingly to target audiences, liaising with the media relations team as appropriate for high impact stories.
• Network across faculties, institutes and relevant service units to maximise news gathering, dissemination and impact.
• Commission and edit news stories for the bi-monthly staff enews and liaise with the PVC (Academic) to ensure that individual achievement is acknowledged with personal thanks and the best examples promoted to the Vice Chancellor for celebrating in his video addresses.
• Working with Digital Engagement, contribute to a pan-university approach to faculty and unit based research websites and social media activity.
• Manage publication of brochures and publicity materials for both web and print.
• Day–to-day quality control of all academic excellence materials, including academic excellence-related media releases and academic excellence elements of external and internal OU publications and websites.
• Contribute to the development of case studies for the OU’s Strategic Communications Programme focussed on acquiring new students and employer sponsors, including enhancing the impact of selected case studies in the run-up to submission.
• Support the development and implementation of stakeholder engagement/communications for specific high impact projects and initiatives.
• Create presentations on academic excellence for the PVC (Academic) and other senior staff and provide briefings and guidance for presentation opportunities.
• Manage high profile events aimed to raise the profile of key academics and the OU’s academic reputation as a whole, where there are significant communication opportunities (national workshops, international conferences, showcase events).
• Review academic staff web profiles and advise on raising the quality of these profiles for impact on external audiences such as potential students and the media.
• Work with Senior Manager Research Communications to develop the OU’s database of expertise as an effective means of maximising OU comment in the media (both proactively and in response to media enquiries).
• Contribute to development of the OU’s iTunes U and YouTube research portfolio.
• Liaise as appropriate with Digital Engagement, Open Broadcasting Unit and Marketing (e.g. for approval of advertisements).
• Coordinate academic excellence competition entries (e.g. for Times Higher awards)

OTHER GENERAL RESPONSIBILITIES:

• Understands and takes account of organisational aims and priorities to plan and set clear goals and deliver immediate and long term goals.

• Takes personal responsibility for effectively managing projects to achieve priorities, ensuring efficient use of resources to meet agreed delivery timescales and quality standards.

• Undertake such other duties as may be required from time-to-time by the Director of Communications, to build the reputation of the University.

ORGANISATIONAL RELATIONSHIPS:

• The post holder will be based in the Communications Unit but will also spend significant time working with colleagues across the OU faculties and institutes.

• The post holder will report to the Head of Communications (Managing Editor) (and through him/her to the Director of Communications) but will liaise closely with the Senior Manager, Research Communications, within the Communications Stakeholder Relations team.

• The post holder will work with other individuals, teams and units across the University where required.

So – profile building and celebration of academic achievements seem to be the order of the day, as well as getting OU comment into mainstream media? Thinking about OU content I share, most of it is generally on the basis of what I think is interesting, novel, “important”, quirky, or possibly of interest to one of the communities I believe I communicate into. But I don’t limit myself to sharing info about just OU activities… (The original naming of OUseful.info was inspired by a desire to share info that might be useful in an OU context, facing both outwards (linking to OU projects that were of interest), as well as inwards (bringing ideas from outside that might contribute internally to the development of the OU mission).)

The job description doesn’t mention the REF, but work also appears to be being commissioned to support that bundle of laughs at a data management level – REF Publications Linked Data:

Key tasks will include:
– The review with others of the existing Research Publication Facility (RPF);
– Design and development of agreed enhancements and additions to the existing system;
– Delivery of an agreed programme of enhancement/development;
– Maintenance and user-support of the live RPF system;
– Direct liaison with users during the REF preparation period, to handle and progress queries and issues etc. as they arise;
– The postholder will also be expected to devise and introduce additional features to the RPF should they be identified as REF Preparations proceed – e.g. improved MI reporting for the REF Coordination Team and the Institutional Research Review Team (IRRT);
– Undertake such other duties as may be required from time-to-time by appointed line/project managers in support of REF preparations and related systems.

The use of linked data to support Research and Scholarship is an exciting field of research development in its own right, and part of the postholder’s role will be to work in association, as directed, with select colleagues in KMi, the PVC’s Office and elsewhere to identify other relevant opportunities for using linked data in support of the Research and Scholarship agenda, where this is considered appropriate and workload allows. The postholder’s primary responsibility however, will be direct support of the OU’s REF submission.

The job ad also mentions that the role “will include in particular the modelling and exposure as linked data of newly identified data not already covered by the current datasets, the constant maintenance and update of existing data. The Project Officer will in particular integrate a team working in collaboration with the Digital Engagement and the Open Media Units [the Open Broadcasting Unit, as was…] of the Open University to create linked databased tools and systems related to improving the discoverability of open educational resources”. From which we maybe learn that the Digital Engagement Unit and the OMU are sponsoring the OU’s Linked Data effort? As for example further evidenced by this second Linked Data related job ad – Project Officer – Linked Data:

– linking and integrating information regarding the outcomes, impact and media appearance of research projects at the Open University;
– creating and making available new sets of data to support the connection between the Open University and external organisations;
– developing applications and tools to support the navigation in, search and reuse of content available at the Open University;
– improving how OU and external linked data is used by the OpenLearn website (open.edu/openlearn) to group relevant content and make recommendations to users;
– connect educational and research content with online services used by researchers and academics at the Open University;
– supporting the use of linked data in research projects;

A good example of what might be involved in that strand of work is suggested by the DiscOU (Discovering Open University Content from Other Online Resources) project.

Back on the jobs front, the Strategic Communications Programme is also appointing a couple of other positions at the moment – an Employer Engagement & Employability Manager “engag[ing] employers with the benefits of sponsoring staff on OU qualifications, and students with the impact an OU qualification can have on their career” and a Campaigns Manager (Social Media) “comfortable in the online and social media environment [who] will develop our reputation for thought leadership in areas of special interest to the University”. The Further Particulars for the Campaigns Manager go on:

Early priorities for the post will be to develop and implement the existing Social Media Content Strategy to respond to the needs of the Strategic Communications Programme (focussed on attracting more students and employers). In doing so the post holder will begin to develop The Open University’s place in public debate and position the University as a thought leader in areas of special concern. To do this, the post-holder will need to engage key academics and senior staff in the potential of social media as a tool to raise the profile for the University and themselves.

MAIN RESPONSIBILITIES

• Develop and implement a SCP Social Media Content Strategy for the University based on a focused approach aimed at maximum impact on prospective students, employer sponsors, key opinion formers and decision-makers.
• Develop and maintain knowledge of the OU’s areas of special concern, encourage debate, disseminate opinion and information accordingly to target audiences, liaising with the media relations team as appropriate for high-impact stories.
• Network across faculties, institutes and relevant service units to maximise engagement of relevant expertise and opinion gathering, to help you stimulate public debate, dissemination and impact.
• Contribute to the development and maintenance of the OU’s presence in Facebook, Twitter and LinkedIn to attract and inform target audiences.
• Working with Digital Engagement, contribute to a pan-university approach to social media activity.
• Work with the Senior Manager, Stakeholders and Ambassadors, on the development of our thought leadership event programme harnessing social media to increase our impact with this programme.
• Day-to-day quality control of all student and employer facing content in our primary Facebook, Twitter and LinkedIn presences.
• Work closely with the Senior Manager, Research Communications, to expand the reach of our impact case studies for the OU’s Research Excellence Framework submission.
• Support the development and implementation of stakeholder engagement/communications with key influencers in Social Media.
• Create and deliver presentations for staff training on the power of social media to help us strengthen our reputation for excellence and thought leadership, providing briefings and guidance for presentation opportunities.
• Work with academic staff to develop their social media profiles for impact on external audiences such as potential students and the media.
• Work with staff to optimize their text, audio, and video content and social media channels, evaluating existing content. Dependent on their abilities, this may include producing and editing digital content for them.
• Identify and disseminate digital content and social media best practices to the University community.
• Contribute to development of the OU’s iTunes U and YouTube portfolio and amplify the excellent content delivered into these environments.
• Liaise as appropriate with Digital Engagement, Open Media Unit and Marketing.
• Coordinate OU competition and league tables entries and amplify our success across social media and OU owned channels (e.g. for Times Higher awards)

Social media is definitely in-scope as a comms channel, then…?!

PS no time to go in to them here, but I also notice ads for a Digital Campaign Manager, a Digital Marketing Director, and a Research and Analysis Manager, all within the Open University Worldwide Ltd Business Development Unit. Apparently, “[t]he Open University has ambitious plans to grow the number of students and associated revenues from overseas. As part of the Open University Worldwide (OUW), the Digital Marketing Director will be accountable for the marketing strategy and delivery of the marketing plan targeted at both new and existing B2C overseas markets, the highly influential Research and Analysis Manager role will be accountable for a range of activities from providing market, competitor and regulatory analysis to shape market strategy, through to producing insight and analysis of day to day performance, the Digital Campaign Manager will be responsible for the delivery of the marketing campaigns targeting B2C overseas markets.”

As to the sorts of skills these roles require:

• Exceptional understanding of all areas of online marketing, including SEO, SEM, social media and eCRM, acquisition, retention, display, affiliates & partnerships.
• Extensive experience of web management and analytics including knowledge of content management systems, content change process, knowledge of establishing web analytics and implementing measurement tools.
• Extensive experience of managing digital agencies.
• Excellent record of success delivering ROI via innovative online marketing campaigns.
• Proven analytical skills and the ability to drive insight from consumer and market data.
• Innovative approach and understanding of how to build a brand and create online communities.

So… that’s the business of the academy then?!

Guardian Telly on Google TV… Is the OU There, Yet?

A handful of posts across several Guardian blogs brought my attention to the Guardian’s new Google TV app (eg Guardian app for Google TV: an introduction (announcement), Developing the Google TV app in Beta (developer notes), The Guardian GoogleTV project, innovation & hacking (developer reflection)). Launched for the US, initially, “[i]t’s a new way to view [the Guardian’s] latest videos, headlines and photo galleries on a TV.”

The OU has had a demo Google TV app for several months now, courtesy of ex-of-the-OU, now of MetaBroadcast, Liam Green-Hughes: An HTML5 Leanback TV webapp that brings SPARQL to your living room:

@liamgh's OU leanback TV app demo

[Try the demo here: OU Google TV App demo]

Liam’s app is interesting for a couple of reasons: first, it demonstrates how to access data – and then content – from the OU’s open Linked Data store (in a similar way, the Guardian app draws on the Guardian Platform API, I think?); secondly, it demonstrates how to use the Google TV templates to put a TV app together.

(It’s maybe also worth noting that the Google TV wasn’t Liam’s first crack at OU-TV – he also put together a Boxee app way back when: Rising to the Boxee developer challenge with an Open University app.)

As well as video and audio based course materials, seminar/lecture recordings, and video shorts (such as The History of the English Language in Ten Animated Minutes series (I couldn’t quickly find a good OU link?)), the OU also co-produces broadcast video with both the BBC (now under the OU-BBC “sixth agreement”) and Channel 4 (eg The Secret Life of Buildings was an OU co-pro).

Many of the OU/BBC co-pro programmes have video clips available on BBC iPlayer via the corresponding BBC programmes sites (I generate a quite possibly incomplete list through this hack – Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer (here’s the current clips feed – I really should redo this script in something like Scraperwiki…); as far as I know, there’s no easy way of getting any sort of list of series codes/programme codes for OU/BBC co-pros, let alone an authoritative and complete one). The OU also gets access to extra clips, which appear on programme related pages on one of the OpenLearn branded sites (OpenLearn), but again, there’s no easy way of navigating these clips, and, erm, no TV app to showcase them.

Admittedly, Google TV enabled TVs are still in the minority and internet TV is still to prove itself with large audiences. I’m not sure what the KPIs are around OU/BBC co-pros (or how much the OU gives the BBC each year in broadcast related activity?), but I can’t for the life of me understand why we aren’t engaging more actively in beta styled initiatives around second screen in particular, but also things like Google TV. (If you think of apps on internet TV platforms such as Google TV or Boxee as channels that you can programme linearly or as on-demand services, might it change folks’ attitude towards them?)

Note that I’m not thinking of apps for course delivery, necessarily… I’m thinking more of ways of making more of the broadcast spend, increasing its surface area/exposure, and (particularly in the case of second screen) enriching broadcast materials and providing additional academic/learning journey value. Second screen activity might also contribute to community development and brand enhancement through online social media engagement in an OU-owned and branded space parallel to the BBC space. Or it might not, of course…;-)

Of course, you might argue that this is all off-topic for the OU… but it isn’t if your focus is the OU’s broadcast activities, rather than formal education. If a fraction of the SocialLearn spend had gone on thinking about second screen applications, and maybe keeping Boxee/Google TV app development ticking over to see what insights it might bring about increasing engagement with broadcast materials, I also wonder if we might have started to think our way round to how second screen and leanback apps could also be used to support actual course delivery and drive innovation in that area?

PS two more things about the Guardian TV app announcement; firstly, it was brought to my attention through several different vectors (different blog subscriptions, Twitter); secondly, it introduced me to the Guardian beta minisite, which acts as an umbrella over/container for several of the Guardian blogs I follow… Now, where was the OU bloggers aggregated feed again? Planet OU wasn’t it? Another @liamgh initiative, I seem to remember…

PPS via a tweet from @barnstormed, I am reminded of something I keep meaning to blog about – OU Playlists on Youtube. For example, Digital Nepal or 60 Second Adventures in Thought, as well as The History of English in Ten Minutes. Given those playlists, one question might be: how might you build an app round them?!

PPPS via @paulbradshaw, it seems that the Guardian is increasingly into the content business, rather than just the news business: Guardian announces multimedia partnerships with prestigious arts institutions [doh! of course it is….!] In this case, “partnering with Glyndebourne, the Royal Opera House, The Young Vic, Art Angel and the Roundhouse the Guardian [to] offer all more arts multimedia content than ever before”. “Summits” such as the recent Changing Media Summit are also candidate content factory events (eg in the same way that TED, O’Reilly conference and music festival events generate content…)

OU on the Telly…

Ever since the Open University was founded, a relationship with the BBC has provided the OU with a route to broadcast through both television and radio. Some time ago, I posted a recipe for generating a page that showed current OU programmes on iPlayer (all rotted now…). Chatting to Liam last night, I started wondering about resurrecting this service, as well as pondering how I could easily begin to build up an archive of programme IDs for OU/BBC co-pros, so that whenever the fancy took me I could go to a current and comprehensive “OU on iPlayer” page and see what OU co-pro’d content was currently available to watch again.

Unfortunately, there doesn’t seem to be an obvious feed anywhere that gives access to this information, nor a simple directory page listing OU co-pros with links even to the parent series page or series identifier on the BBC site. (This would be lovely data to have in the OU’s open linked data store;-)

OU on the telly...

What caught my attention about this feed is that it’s focussed on growing audience around live broadcasts. This is fine if you’re tweeting added value* along with the live transmission and turning the programme into an event, but in general terms? I rarely watch live television any more, but I do watch a lot of iPlayer…

(* the Twitter commentary feed can then also be turned into expert commentary subtitles/captions, of course, using Martin Hawksey’s Twitter powered iPlayer subtitles recipe.)

There is also a “what’s on” feed available from OpenLearn (via a link – autodiscovery doesn’t seem to be enabled?), but it is rather horrible, and it doesn’t contain BBC programme/series IDs (and I’m not sure the linked-to pages necessarily do, either?)

OU openlearn whats on feed (broken)
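For what it’s worth, pulling the feed apart to see what is (or isn’t) in it only takes a few lines – this sketch assumes the feedparser package, with FEED_URL standing in for the OpenLearn feed address, which I haven’t linked here:

#Peek inside the "what's on" feed
#feedparser assumed; FEED_URL is a placeholder
import feedparser

FEED_URL = ''
feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    #Eyeball each entry for anything resembling a BBC programme/series ID
    print(entry.get('title'), entry.get('link'))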

So – what to do? In the short term, as far as my tinkering goes, nothing (holidays…:-) But I think with a nice feed available, we could make quite a nice little view over OU co-pro’d content currently on iPlayer, and also start to have a think about linking in expert commentary, as well as linking out to additional resources…

See also:
Augmenting OU/BBC Co-Pro Programme Data With Semantic Tags
Linked Data Without the SPARQL – OU/BBC Programmes on iPlayer [this actually provides a crude recipe for getting access to OU/BBC programmes by bookmarking co-pro’d series pages on delicious…]

PS from @liamgh: “Just noticed that Wikipedia lists both BBC & OU as production co e.g. en.wikipedia.org/wiki/The_Virtu… RH Panel readable with dbpedia.” Interesting… so we should be able to pull down some OU/BBC co-pros by a query onto DBPedia…
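Here’s a speculative sketch of that query (untested; it assumes the co-pro companies surface in DBpedia via the dbo:company property, and uses the SPARQLWrapper package):

#Speculative: look for works with both the BBC and the OU as production company
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper('http://dbpedia.org/sparql')
sparql.setQuery('''
SELECT DISTINCT ?programme ?label WHERE {
  ?programme dbo:company dbr:BBC ;
             dbo:company dbr:Open_University ;
             rdfs:label ?label .
  FILTER(lang(?label) = 'en')
}
''')
sparql.setReturnFormat(JSON)

for result in sparql.query().convert()['results']['bindings']:
    print(result['label']['value'])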

PPS also from Liam – a handy recipe for generating an HTML5 leanback UI for video content identified via a SPARQL query: An HTML5 Leanback TV webapp that brings SPARQL to your living room

So What’s Open Government Data Good For? Government and “Independent Advisers”, maybe?

Although I got an invite to today’s “Government Transparency: Opening Up Public Services” briefing, I didn’t manage to attend (though I’m rather wishing I had); I did, however, manage to keep up with what was happening through the #openuk hashtag commentary.

#openuk tweeps

It all kicked off with the Prime Minister’s Letter to Cabinet Ministers on transparency and open data, which sets out the roadmap for government data releases over the coming months in the areas of health, education, criminal justice, transport and public spending; it also sets the scene for the forthcoming Open Public Services White Paper (see also the public complement to that letter: David Cameron’s article in The Telegraph on transparency).

The Telegraph article suggests there will be a “profound impact” in four areas:

– First, it will enable choice, particularly for patients and parents. …
– Second, it will raise standards. All the evidence shows that when you give professionals information about other people’s performance, there’s a race to the top as they learn from the best. …
– Third, this information is going to help us mend our economy. To begin with, it’s going to save money. Already, the information we have published on public spending has rooted out waste, stopped unnecessary duplication and put the brakes on ever-expanding executive salaries. Combine that with this new information on the performance of our public services, and there will be even more pressure to get real value for taxpayers’ money.
– But transparency can help with the other side of the economic equation too – boosting enterprise. Estimates suggest the economic value of government data could be as much as £6 billion a year. Why? Because the possibilities for new business opportunities are endless. Imagine the innovations that could be created – the apps that provide up-to-date travel information; the websites that compare local school performance. But releasing all this data won’t just support new start-ups – it will benefit established industries too.

David Cameron’s article in The Telegraph on transparency

All good stuff… all good rhetoric. But what does that actually mean? What are people actually going to be able to do differently, Melody?

As far as I can tell, the main business models for making money on the web are:

sell the audience: the most obvious example of this is to sell adverts to the visitors of your site. The rate advertisers pay depends on the number of people who see the ads, and their specificity (different media attract different, possibly niche, audiences; if an audience is the one you’re particularly trying to target, you’re likely to pay more than you would for a general audience, in part because it means you don’t have to go out and find that focussed audience yourself). Another example is to sell information about the users of your site (for example, banks selling shopping data).

take a cut: so for example, take an affiliate fee, referral fee or booking fee for each transaction brokered through your site, or levy some other transaction cost.

Where data is involved, there is also the opportunity to analyse other people’s data and then sell analysis of that data back to the publishing organisations as consultancy. Or maybe use that data to commercial advantage in putting together tenders and approaches to public bodies?

When all’s said and done, though, the biggest potential is surely within government itself? By making data from one department or agency available, other departments or agencies will have easier access to it. Within departments and agencies too, open data has the potential to reduce friction and barriers to access, as well as opening up the very existence of data sets that may be being created in duplicate fashion across areas of government.

By consuming their own and each others’ open data, departments will also start to develop processes that improve the cleanliness and quality of data sets (for example, see Putting Public Open Data to Work…? and Open Data Processes – Taps, Query Paths/Audit Trails and Round Tripping; Library Location Data on data.gov.uk gives examples of how the same data can be released in several different (i.e. not immediately consistent) ways).

I’m more than familiar with the saying that “the most useful thing that can be done with your data will probably be done by someone else”, but if an organisation can’t find a way to make use of its own data, why should anyone else even try?! Especially if it means they have to go through the difficulty of cleaning the published data and preparing it for first use. By making use of open data as part of everyday government processes: a) we know the data’s good (hopefully!); b) cleanliness and inconsistency issues will be detected by the immediate publisher/user of the data; c) we know the data will have at least one user.

Finally, one other thing that concerns me is the extent to which “the public” actually want access to data in order to exercise choice. As far as I can tell, choice is often the enemy of contentment; choice can sow the seeds of doubt and inner turmoil when to all intents and purposes there is no choice. I live on an island with a single hospital and not the most effective of rural transport systems. I’d guess the demographics of the island skew old and poor. So being able to “choose” a hospital with performance figures better than the local one for a given procedure is quite possibly no choice at all if I want visitors, or to be able to attend the hospital as an outpatient.

But that’s by the by: because the real issues are that the data that will be made available will in all likelihood be summary statistic data, which actually masks much of the information you’d need to make an informed decision; and if there is any meaningful intelligence in the data, or its summary statistics, you’ll need to know how to interpret the statistics, or even just read the pretty graphs, in order to take anything meaningful from them. And therein lies a public education issue…

Maybe then, there is a route to commercialisation of public facing public data? By telling people the data’s there for you to make the informed choice, the lack of knowledge about how to use that information effectively will open up (?!) a whole new sector of “independent advisers”: want to know how to choose a good school? Ask your local independent education adviser; they can pay for training on how to use the monolithic, more-stats-than-you-can-throw-a-distribution-at one-stop education data portal and charge you to help you decide which school is best for your child. Want comforting when you have to opt for treatment in a hospital that the league tables say are failing? Set up an appointment with your statistical counsellor, who can explain to you that actually things may not be so bad as you fear. And so on…