OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Archive for the ‘Stirring’ Category

Mapping the Tesco Corporate Organisational Sprawl – An Initial Sketch

A quick sketch, prompted by Tesco Graph Hunting on OpenCorporates of how some of Tesco’s various corporate holdings are related based on director appointments and terminations:

The recipe is as follows:

- grab a list of companies that may be associated with “Tesco” by querying the OpenCorporates reconciliation API for tesco
- grab the filings for each of those companies
- trawl through the filings looking for director appointments or terminations
- store a row for each directorial appointment or termination, including the company name and the director.

You can find the scraper here: Tesco Sprawl Grapher

import scraperwiki, simplejson, urllib

import networkx as nx

#Keep the API key private - via http://blog.scraperwiki.com/2011/10/19/tweeting-the-drilling/
import os, cgi
try:
    qsenv = dict(cgi.parse_qsl(os.getenv("QUERY_STRING")))
    ockey=qsenv["OCKEY"]
except (KeyError, TypeError):
    ockey=''

rurl='http://opencorporates.com/reconcile/gb?query=tesco'
#note - the opencorporates api also offers a search:  companies/search
entities=simplejson.load(urllib.urlopen(rurl))

def getOCcompanyData(ocid):
    ocurl='http://api.opencorporates.com'+ocid+'/data'+'?api_token='+ockey
    ocdata=simplejson.load(urllib.urlopen(ocurl))
    return ocdata

#need to find a way of playing nice with the api, and not keep retrawling

def getOCfilingData(ocid):
    ocurl='http://api.opencorporates.com'+ocid+'/filings'+'?per_page=100&api_token='+ockey
    tmpdata=simplejson.load(urllib.urlopen(ocurl))
    ocdata=tmpdata['filings']
    print 'filings',ocid
    #print 'filings',ocid,ocdata
    #print 'filings 2',tmpdata
    while tmpdata['page']<tmpdata['total_pages']:
        page=str(tmpdata['page']+1)
        print '...another page',page,str(tmpdata["total_pages"]),str(tmpdata['page'])
        ocurl='http://api.opencorporates.com'+ocid+'/filings'+'?page='+page+'&per_page=100&api_token='+ockey
        tmpdata=simplejson.load(urllib.urlopen(ocurl))
        ocdata=ocdata+tmpdata['filings']
    return ocdata

def recordDirectorChange(ocname,ocid,ffiling,director):
    ddata={}
    ddata['ocname']=ocname
    ddata['ocid']=ocid
    ddata['fdesc']=ffiling["description"]
    ddata['fdirector']=director
    ddata['fdate']=ffiling["date"]
    ddata['fid']=ffiling["id"]
    ddata['ftyp']=ffiling["filing_type"]
    ddata['fcode']=ffiling["filing_code"]
    print 'ddata',ddata
    scraperwiki.sqlite.save(unique_keys=['fid'], table_name='directors', data=ddata)

def logDirectors(ocname,ocid,filings):
    print 'director filings',filings
    for filing in filings:
        if filing["filing"]["filing_type"]=="Appointment of director" or filing["filing"]["filing_code"]=="AP01":
            desc=filing["filing"]["description"]
            director=desc.replace('DIRECTOR APPOINTED ','')
            recordDirectorChange(ocname,ocid,filing['filing'],director)
        elif filing["filing"]["filing_type"]=="Termination of appointment of director" or filing["filing"]["filing_code"]=="TM01":
            desc=filing["filing"]["description"]
            director=desc.replace('APPOINTMENT TERMINATED, DIRECTOR ','')
            director=director.replace('APPOINTMENT TERMINATED, ','')
            recordDirectorChange(ocname,ocid,filing['filing'],director)

for entity in entities['result']:
    ocid=entity['id']
    ocname=entity['name']
    filings=getOCfilingData(ocid)
    logDirectors(ocname,ocid,filings)

The next step is to graph the result. I used a Scraperwiki view (Tesco sprawl demo graph) to generate a bipartite network connecting directors (either appointed or terminated) with companies and then published the result as a GEXF file that can be loaded directly into Gephi.

import scraperwiki
import urllib
import networkx as nx

import networkx.readwrite.gexf as gf

from xml.etree.cElementTree import tostring

scraperwiki.sqlite.attach('tesco_sprawl_grapher')
q = '* FROM "directors"'
data = scraperwiki.sqlite.select(q)

DG=nx.DiGraph()

directors=[]
companies=[]
for row in data:
    if row['fdirector'] not in directors:
        directors.append(row['fdirector'])
        DG.add_node(directors.index(row['fdirector']),label=row['fdirector'],name=row['fdirector'])
    if row['ocname'] not in companies:
        companies.append(row['ocname'])
        DG.add_node(row['ocid'],label=row['ocname'],name=row['ocname'])   
    DG.add_edge(directors.index(row['fdirector']),row['ocid'])

scraperwiki.utils.httpresponseheader("Content-Type", "text/xml")


writer=gf.GEXFWriter(encoding='utf-8',prettyprint=True,version='1.1draft')
writer.add_graph(DG)

print tostring(writer.xml)

Saving the output of the view as a gexf file means it can be loaded directly into Gephi. (It would be handy if Gephi could load files in from a URL, methinks?) A version of the graph, laid out using a force directed layout, with nodes coloured according to modularity grouping, suggests some clustering of the companies. Note that parts of the whole graph are disconnected.

In the fragment below, we see Tesco Property Nominees are only loosely linked to each other, and from the previous graphic, we see that Tesco Underwriting doesn’t share any recent director moves with any other companies that I trawled. (That said, the scraper did hit the OpenCorporates API limiter, so there may well be missing edges/data…)

And what is it with accountants naming companies after colours?! (It reminds me of sys admins naming servers after distilleries and Lord of the Rings characters!) Is there any sense in there, or is it arbitrary?

Written by Tony Hirst

April 12, 2012 at 3:56 pm

Tesco Graph Hunting on OpenCorporates

A quick lunchtime post on some thoughts around constructing corporate graphs around OpenCorporates data. To ground it, consider a search for “tesco” run on gb registered companies via the OpenCorporates reconciliation API.

{"result":[{"id":"/companies/gb/00445790", "name":"TESCO PLC", "type":[{"id":"/organization/organization","name":"Organization"}], "score":78.0, "match":false, "uri":"http://opencorporates.com/companies/gb/00445790"}, {"id":"/companies/gb/05888959", "name":"TESCO AQUA (FINCO1) LIMITED", "type":[{"id":"/organization/organization", "name":"Organization"}], "score":71.0, "match":false, "uri":"http://opencorporates.com/companies/gb/05888959"}, { ...

Some or all of these companies may or may not be part of the same corporate group. (That is, there may be companies in that list with Tesco in the name that are not part of the group of companies associated with a major UK supermarket.)

If we treat the companies returned in that list as one class of nodes in a graph, we can start to construct a range of graphs that demonstrate linkage between companies based on a variety of factors. For example, a matching registered address (a post office box in an offshore tax haven, say) suggests there may be at least a weak tie between companies:

(Alternatively, we might construct bipartite graphs containing company nodes and address nodes, for example, then collapse the graph about common addresses.)
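By way of illustration, here’s a minimal sketch of that collapse using networkx’s bipartite projection. The company names and addresses are made up for the sketch (not real OpenCorporates data), and it’s written in modern Python 3 rather than the Python 2 of the scrapers above:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Illustrative company/registered-address pairs (invented for the sketch)
pairs = [
    ("EXAMPLE HOLDINGS", "PO Box 1, George Town"),
    ("EXAMPLE FINCO", "PO Box 1, George Town"),
    ("EXAMPLE RETAIL", "1 High Street, Anytown"),
]
companies = {c for c, a in pairs}

# Bipartite graph: one node class for companies, one for addresses
B = nx.Graph()
B.add_edges_from(pairs)

# Collapse the graph about common addresses: two companies become
# directly linked if they share a registered address
G = bipartite.projected_graph(B, companies)
print(G.number_of_edges())  # prints 1: only the two PO Box companies link up
```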

Shared directors would be another source of linkage, although at the moment, I don’t think OpenCorporates publishes directors associated with UK companies (I suspect that data is still commercially licensed?). However, there is associated information available in the OpenCorporates database already…. For example, if we look at the various company filings, we can pick up records relating to director appointments and terminations?

By monitoring filings, we can then start to build up a record of directorial involvement with companies? From looking at the filings, it also suggests that it would make sense to record commencement and cessation dates for directorial appointments…

There may also be weak secondary evidence linking companies. For example, two companies that file trademarks using the same agent have a weak tie through that agent. (Of course, that agent may be acting for two completely independent companies.)

If we weight edges between nodes according to the perceived strength of a tie and then lay out the graph in a way that is sensitive to the number and weight of edge connections between company nodes, we may be able to start mapping out the corporate structure of these large, distributed corporations, either in network map terms, or maybe by mapping geolocated nodes based on registered addresses; and then we can start asking questions about why these distributed corporate entities are structured the way they are…
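A sketch of how that edge weighting might look in networkx. The tie types, weights and company names here are assumptions made up for illustration, not values from any real analysis:

```python
import networkx as nx

# Assumed tie strengths: treat a shared registered address as stronger
# evidence of linkage than a shared trademark agent
TIE_WEIGHTS = {"shared_address": 1.0, "shared_agent": 0.25}

# Illustrative evidence tuples: (company A, company B, tie type)
evidence = [
    ("EXAMPLE HOLDINGS", "EXAMPLE FINCO", "shared_address"),
    ("EXAMPLE HOLDINGS", "EXAMPLE RETAIL", "shared_agent"),
    ("EXAMPLE HOLDINGS", "EXAMPLE RETAIL", "shared_agent"),
]

G = nx.Graph()
for a, b, tie in evidence:
    w = TIE_WEIGHTS[tie]
    if G.has_edge(a, b):
        G[a][b]["weight"] += w  # several weak ties accumulate
    else:
        G.add_edge(a, b, weight=w)

# A force directed layout (Gephi's, or nx.spring_layout with the weight
# attribute) can then pull strongly tied companies closer together
print(G["EXAMPLE HOLDINGS"]["EXAMPLE RETAIL"]["weight"])  # prints 0.5
```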

PS note to self – OpenCorporates API limit with key: 1000/hr, 10k/day

Written by Tony Hirst

April 12, 2012 at 12:36 pm

Posted in Anything you want, Stirring


Autodiscoverable Feeds and UK HEIs (Again…)

It’s that time of year again when Brian’s banging on about IWMW, the Institutional[ised?] Web Managers’ Workshop, and hence that time of year again when he reminds me* about my UK HE Feed Autodiscovery app that trawls through various UK HEI home pages (the ones on .ac.uk, rather than the one you get by searching for a uni name in Google;-)

* that is, tells me the script is broken and, by implication, gently suggests that I should fix it…;-)

As ever, most universities don’t seem to be supporting autodiscoverable feeds (neither are many councils…), so here are a few thoughts about what feeds you might link to, and why…

- news feeds: the canonical example. News feeds can be used to pipe news around various university websites, and also syndicate content to any local press or hyperlocal news sites. If every UK HEI published a news feed that was autodiscoverable as such, it would be trivial to set up a UK universities aggregated newswire.

- research announcements: I was told that one reason for putting out press releases was simply to build up an institutional memory/archive of notable events. Many universities run research newsletters that remark on awarded grants. How about a “funded research” feed from each university detailing grant awards and other research funding? Again, at a national level, this could be aggregated to provide a research funding newswire, as well as contributing data to local archives of research funding success.

- jobs: if every UK HEI published a jobs/vacancies RSS feed, it would be trivial to build an aggregator and let people roll their own versions of jobs.ac.uk.

- events: universities contribute a lot to local culture through public talks and exhibitions. Make it easy for the local press and hyperlocal news sites to syndicate this info, and add events to their own aggregated “what’s on” calendars. (And as well as RSS, give ‘em an iCal feed for your events.)

- recent submissions to local repository: provide a feed listing recent submissions to the local research output/paper repository (and/or maybe a feed of the most popular downloads); if local feeds are your thing, the library quite possibly makes things like recent acquisition feeds available…

- YouTube uploads: you might as well add an autodiscoverable feed to your university’s recent uploads on YouTube. If nothing else, it contributes an informal ownership link to the web for folk who care about things like that.

- your university Twitter feed: if you’ve got one. I noticed Glasgow Caledonian linked to their Twitter feed through an autodiscoverable link on their university homepage.

- tenders: there’s a whole load of work going on in gov at the moment regarding transparency as it relates to procurement and tendering. So why not get open with your procurement and tendering data, and increase the chances of SMEs finding out what you’re tendering around. If the applications have to go through a particular process, no problem: link to the appropriate landing page in each feed item.

- energy data: releasing this data may well become a requirement in the not so far off future, so why not get ahead of the game, e.g. as Lincoln are starting to do (Lincoln U energy data)? If everyone was publishing energy data feeds, I’m sure DevCSI hackday folk would quickly roll together something like the aggregating service built by college student @issyl0 out of a Rewired State hack that pulls together UK gov department energy data: GovSpark

- XCRI-CAP course marketing data feeds: JISC is giving away shed loads of cash to support this, so pull your finger out and get the thing published.

- location data: got a KML feed yet? If not, why not? e.g. Innovations in Campus Mapping
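For what it’s worth, the autodiscovery check itself is straightforward: look for `<link rel="alternate">` elements with a feed MIME type in a page’s head. A minimal stdlib sketch (the sample HTML is made up, and a real crawler would fetch each homepage and resolve relative hrefs):

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collect autodiscoverable feed URLs from <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel", "").lower() == "alternate"
                and a.get("type") in FEED_TYPES and "href" in a):
            self.feeds.append(a["href"])

page = '''<html><head>
<link rel="alternate" type="application/rss+xml" href="/news/feed.rss">
<link rel="stylesheet" href="/style.css">
</head><body></body></html>'''

finder = FeedFinder()
finder.feed(page)
print(finder.feeds)  # prints ['/news/feed.rss']
```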

PS the backend of my RSS feed autodiscovery app (founded: 2008) is a Yahoo pipe. Just because, I thought I’d take half an hour out to try and build something related on Scraperwiki. The code is here: UK University Autodiscoverable RSS feeds. Please feel free to improve it, fork it, etc. University homepage URLs are identified by scraping a page on the Universities UK website, but I probably should use a feed from the JISC Monitoring Unit (e.g. getting UK University location/contact data).

PPS this could be handy for some folk – the code that runs the talks@cam events site: http://source.caret.cam.ac.uk/svn/projects/talks.cam/. (Thanks Laura:-) – does it do feeds nicely now?! Related: Keeping Up With Events, a quickly hacked app from my Arcadia project that (used to) aggregate Cambridge events feeds.)

Written by Tony Hirst

July 26, 2011 at 6:59 pm

Getting Access to University Course Code Data (or not… (yet…))

A couple of weeks or so ago, having picked up the TSO OpenUp competition prize for suggesting that it would be a Good Thing for UCAS/university course code data to be made available, I had a meeting with the TSO folk to chat over “what next?” The meeting was an upbeat one with a plan to get started as soon as possible with a scrape of the UCAS website… so what’s happened since…?

First up – a reading of the UCAS website Terms and Conditions suggests that scraping is a no-no…

6. Intellectual property rights
e. Copying, distributing or any use of the material contained on the website for any commercial purpose is prohibited.
f. You may not create a database by systematically downloading substantial parts of the website

(In the finest traditions of the web, you aren’t allowed to deep link into the site without permission either: 6.c Links to the website are not permitted, other than links to the homepage for your personal use, except with our prior written permission. Links to the website from within a frameset definition are not permitted except with our prior written permission.)

So, err, I guess my link to the terms and conditions breaks those terms and conditions? Oops…;-) Should I be sending them something like this do you think?

Dear enquiries@ucas.ac.uk,
As per your terms and conditions, (paragraph 6 c) please may I publish a link to your terms and conditions web page [ http://www.ucas.com/terms_and_conditions ] in a blog post I am writing that, in part, refers to your terms and conditions?
Luv'n'hugs,
tony

As a fallback, I put a couple of trial balloon FOI requests in to a couple of universities asking for the course names and UCAS course codes for courses offered in 2010/11, along with the search keywords associated with each course (doh! I did it again, deep linking into the UCAS site…)

PS Please may I also link to the page describing course search keywords [ http://www.ucas.com/he_staff/courses/coursesearchkeywords ] ?

The first request went to the University of Southampton, in part because I knew that they already publish chunks of the data (as data) as part of their #opensoton Open Data initiative. (This probably means I was abusing the FOI system, but a point maybe needed to be made…?!;-) The second request was put in to the University of Bristol.

The requests were of the form:

I would be grateful if you could send me in spreadsheet, machine readable electronic form or plain text a copy of the course codes, course titles and search keywords for each course as submitted to UCAS for the 2010-2011 (October 2010) student entry.

If possible, would you also provide HESA subject category codes associated with each course.

So how did I get on?

Bristol’s response was as follows:

On discussion with our Admissions and Student Information teams, it appears that the University does not actually hold this data – it is held on a UCAS database. UCAS are not currently subject to the Freedom of Information Act (they will be in due course) but it may be worth talking to them directly to see if they are willing to assist.

And Southampton’s FOI response?

Course codes and titles may be found here: http://www.soton.ac.uk/corporateservices/foi/request-66210-6124d691.pdf Keywords were not held by the University – you should inquire with UCAS (http://www.ucas.com). HESA subject category codes may be found here: http://www.hesa.ac.uk/index.php/content/view/1806/296/

So what did I learn?

  1. I don’t seem to have made it clear enough to Southampton that I wanted the 2-tuple (course code, HESA code) for each course. So how should I have asked for that data? (The response pointed me to the list of all HESA codes; what I wanted was, for each course code, the course code/HESA code pair.)
  2. Generalising from an example of one;-), there seems to be a disconnect between FOI and open data branches of organisations. In my ideal world, the FOI person (an advocate for the person making the request) would also be on good terms with the Open Data team in the organisation, if not a data wrangler themselves. For data requests, the FOI person would make sure the data is released as open data as part of the process of fulfilling the request and then refer the person making the request to the open data site (see also: Open Data Processes – Taps, Query Paths/Audit Trails and Round Tripping). Southampton have part of this process already – the course data is in a PDF on their site and I was referred to it. (Note that the PDF is not just any PDF – have a look at it! – rather than the spreadsheet, machine readable electronic form or plain text I requested, even though @cgutteridge had posted a link to the SPARQL opendata query for the course code/UCAS code information I’d requested as a reply to my FOI request on the WhatDoTheyKnow site.)
  3. Universities don’t necessarily have any record of the search keywords they associate with the courses they post on UCAS. The UCAS website suggests that (doh!) “[r]ecent analysis of unique IP address use of the UCAS Course Search indicates that the subject search is by far the most popular of the 3 search options currently available”, such that “[w]hen an applicant uses our Course Search facility to search for available courses, they can choose a keyword by which to search, known as the ‘subject search’.” Which is to say, universities have no local record of the terms they use to describe courses that are the primary way of discovering their courses on UCAS? Blimey… (I wonder how much universities spend on Google AdWords for advertising particular courses on their own course prospectus websites and how they go about selecting those terms?)
  4. Asking for a machine readable “data as data” response has no teeth at the current time. I don’t know if the Protection of Freedoms bill clause that “extends Freedom of Information rights by requiring datasets to be available in a re-usable format” will change this? It seems like it might?

    Where—
    (a) an applicant makes a request for information to a public authority in respect of information that is, or forms part of, a dataset held by the public authority, and
    (b) on making the request for information, the applicant expresses a preference for communication by means of the provision to the applicant of a copy of the information in electronic form, the public authority must, so far as reasonably practicable, provide the information to the applicant in an electronic form which is capable of re-use.

  5. So what next? UCAS is a charity that appears to be operated by, for, and on behalf of UK Higher Education (e.g. UCAS Directors’ Report and Accounts 2009). Whilst not FOIable yet, it looked set to become FOIable from October 2011 (Ministry of Justice: Greater transparency in Freedom of Information), though I haven’t been able to find the SI and commencement date that enact this…? If it does become FOIable, we may be able to get the data out that way (although memories of the battle between open data advocates and the Ordnance Survey come to mind…) Hopefully, though, we’ll be able to get the data open by more amicable means before then…:-)

    PS a couple of other things that I’ve been dipping into relating to this project. Firstly, the UCAS Business Plan 2009-2012 (doh!):

    PPS Please may I also link to your Corporate Business Plan 2009-2012 [ http://www.ucas.com/documents/corporate/corpbusplan09-12.pdf ]

    Secondly, the Cabinet Office’s “Better Choices: Better Deals” strategy document [PDF], which as well as its “MyData” right to personal data initiative, also encourages business to put their information (and data…) to work. Whether or not you agree that more information may help to make for better choices from potential students, or that comparison sites have a role to play in this, the UK government appears to believe it and looks set to support the development of businesses operating in this area. For example:

    Effective consumer choices are also important in the public sector – such as decisions about what and where to study.
    However, unlike in private markets, public services are generally:
    ● Free at the point of delivery, so prices do not give us clues about quality or popularity.
    ● Not motivated by profits, so there is little incentive to highlight differences and encourage switching.
    ● Supplied under a universal service obligation, such that they serve a particularly broad range of users, from the very informed to the highly vulnerable.
    In the same way that comparison and feedback sites have developed for private markets, some choice-tools have already emerged for public services. For example, parents and prospective students can use league tables to compare school and university performance, while patients can access websites comparing waiting times for treatments across different healthcare providers, and feedback from fellow consumers about the performance of a local GP practice. Their role is likely to become more important in future as public service markets are opened up and there is scope for further choice-tools to be developed [Better Choices: Better Deals, p. 32]

    If you’re looking to put a bid or business plan together based on using public data as a basis for comparison services, the Better Choices document has more than a few quotable sections;-)

    [Related: Course Detective metasearch/custom search across UK University prospectus websites]

Written by Tony Hirst

April 26, 2011 at 12:58 pm

Posted in Data, Stirring, Thinkses


Predictive Ads…? Or Email Address Targeted Advertising…?!

As I was getting increasingly annoyed by large flashing display ads in my feedreader this morning, the thought suddenly occurred to me: could Google serve me ads on third party sites based on my unread Gmail emails?

That is, as I check my feeds before my email in a morning, could I be seeing ads that foreshadow the content of the email I’ve been ignoring for way too long? Or could I receive ads that flag the content of my Priority Inbox messages?

Rules regarding sensitivity and privacy would have to be carefully thought through, of course. Here’s how they currently stand regarding contextual ads delivered in Gmail (More on Gmail and privacy: Targeted ads in Gmail):

By offering Gmail users relevant ads and information related to the content of their messages, we aim to offer users a better webmail experience. For example, if you and your friends are planning a vacation, you may want to see news items or travel ads about the destination you’re considering.

To ensure a quality user experience for all Gmail users, we avoid showing ads reflecting sensitive or inappropriate content by only showing ads that have been classified as “Family-Safe.” We also avoid targeting ads to messages about catastrophic events or tragedies. [Google's emphasis]

[See also: Ads in Gmail and your personal data Share Comment]

Not quite as future predictive as gDay™ with MATE™ that lets you “search tomorrow’s web today” and “[discover] content on the internet before it is created”, but almost…!

It’s also a step on the road to Eric Schmidt’s dream of providing you with results even before you search for them. (For a more recent interview, see Google’s Eric Schmidt predicts the future of computing – and he plans to be involved.)

Here’s another, more practical(?!) thought – suppose Google served me headers of Priority Inbox email messages that were also marked as urgent through Adwords ads, in a full-on attempt to try to attract my attention to “really important” messages?! “Flashmail” messages delivered through the Adwords network… (I can imagine at least one course manager who I suspect would try to contact me via ads when I don’t pick up my email! ;-)

Searching the internet of things may still be a little way off though….

PS thinking email address targeted ads (mailads?) through a bit more, here are a couple of ways of doing it that immediately come to mind. Suppose I want to target an ad at whoever@example.com:

1) Adwords could place that ad in my GMail sidebar. (I think they’d be unlikely to place ads within emails, even if clearly marked, because this approach has been hugely unpopular in the past (it also p****s me off in feeds); that said, Google has apparently started experimenting with (image based) display ads in gmail.)

2) Adwords could place the ad on a third party site if the Goog spots me via a cookie and sees I’m currently logged in to Google, for example, with the whoever@example.com email address.

As Facebook gets into the universal messaging game, email address based ad targeting would also work there?

PPS interesting – the best ads act as content, so maybe ads could be used to deliver linked content? Twitter promoted tweets – the AdWords for live news?. Which reminds me, I need to work up my bid for using something like AdWords to deliver targeted educational content.

Written by Tony Hirst

February 8, 2011 at 11:08 am

So What Do Universities Sell?

When I joined the OU as an academic over a decade ago, I spent my first 6 months or so asking everyone I met what it was the OU sold, only to be met with “go away, silly boy” sort of looks. (I still don’t know: courses/modules? degrees/qualifications? CPD products? consultancy? research interests, or capacity (though not development or innovation;-)?! If nothing else, the demographics of our paying customers have changed over that period (“Open University may be in its 40s – but students are getting younger“); but does that mean that what we’re selling has actually changed too? Who knows?!)

That universities are now businesses competing in a marketplace is undeniable, and that marketplace increasingly looks as if it is opening up to private enterprises (Publishing giant Pearson looks set to offer degrees) who are expected to talk up the ability to generate profits (rather than, err, building up reserves and new buildings;-). See for example Doug Clow’s piece on Apollo Group results – BPP and University of Phoenix where he starts to unpick Apollo Group’s reported financials. (It’s worth remembering where the profits are expected to come from, of course… e.g. Doug again: Tuition Fees and the costs of HE).

So what happens when the market hits the university? More and more marketing, maybe…?

Here’s a round-up of the latest OU job ads…

  • Director of Communications (£88,769 – £100,763): “The Open University has been providing life-changing learning experiences for over 40 years and now has 250,000 students. We make a major contribution to choice and innovation in higher education, social mobility and enriching the skills of the workforce through world class teaching and research.
    “The Director of Communications works closely with the Vice-Chancellor and Executive to develop and implement a communications strategy to support delivery of the corporate strategy, build and develop relationships with key external stakeholders, ensure consistent delivery of brand, protect and develop reputation and develop organisational culture.
    “This is a rare and exciting opportunity for an energetic and visionary person with a passion for education to drive communications activities to build our reputation as the world leader in flexible learning.”
  • Marketing Planning and Programme Manager, Marketing (B2B) (£46,510- £52,347): “The post has been created to assist the University to develop its marketing capacity specifically to the B2B Employer Engagement area. It will be essential to harness the energies of academic and academic related staff in the University’s Business Development units, service units and regions to develop a more effective marketing strategy. This will require planning, modelling, project management, influencing and networking skills of the highest order, and an ability to adapt leadership/management style to an academic context.”
  • Two Programme Communications Managers, Marketing (£36,715 – £43,840): “Working within a small team, you will be responsible for planning, developing and delivering a broad range of marketing acquisition or retention campaigns to meet student number targets.
    “The position requires a proven ability to develop & implement successful marcomms strategies that have the support of key stakeholders. The successful candidate will have a full mix of marketing experience, including a clear understanding of disciplines such as direct & digital marketing, advertising & event management to name a few. This role requires excellent communications & project management skills, ideally twinned with a strong commercial background.”
  • Web Assistant Producer Open Learn (Explore), Open Broadcasting Unit (£29,853 – £35,646): “Earlier this year the OU launched an updated public facing, topical news and media driven site. The site bridges the gap between BBC TV viewing and OU services and functions as the new ‘front door’ to Open Learn and all of the Open University’s open, public content. We are looking for a Web Assistant Producer with web production/editing skills.
    “You will work closely with a Producer, 2 Web Assistant Producers, the Head of Online Commissioning and many others in the Open University, as well as the BBC.”

And whilst the OU – like many other HEIs – is doing its utmost to keep recruitment of new academic staff to a minimum, and allowing natural wastage to reduce staffing further, it’s good to know that at least posts like the above count as academic related:

Academic related jobs at the OU

PS if you have any ideas about what it is that universities actually sell, please let me know in a comment…;-)

PPS Relevant to the above ads, and picking up on a couple of tweets I posted last week, I’m intrigued to know how university communications departments measure their impact? Presumably (despite being academic related) it’s not got a lot to do with being referenced in academic journals?;-) But how do they measure their impact? Answers in the comments, please…:-)

Written by Tony Hirst

January 6, 2011 at 11:47 am

Posted in Stirring

@SOU_Airport No ads, thanks, just info… A Flight Tracking Autoresponder Would Be Handy Though…

A few minutes or so ago, @sou_airport tweeted:

Welcome to new followers of Southampton Airport. Since the snow, followers have doubled and we will keep you up to date with news and offers

With recent news stories deploring the state of information provision, the occasional tweets from @sou_airport regarding the status of the airport have been handy…

- “Southampton Airport is to open from 06:30am today- some knock-on delays are expected due to the weather. Pls check with airlines for info.”

- “Southampton Airport currently has capacity on Flybe flights to Amsterdam, Paris, Dusseldorf and Brussels up to Christmas.”

If they’re just going to start tweeting ads and offers, though, I’m not interested, and will likely unfollow… Just because a company suddenly opens up a comms channel that folk sign up to doesn’t mean it needs to be a marketing channel – the payoff is in having fewer disgruntled passengers turning up to fly only to find cancelled flights and adding to the airport’s problems…

If they want to consign me to following a backwater channel, such as @sou_airport_status, that’s fine… Just don’t add noise to the signal if all i want is signal…

Something that would be quite handy would be an autoresponder. The Southampton website already has live flight arrival/departure info, and a form that lets you enter either a flight number or a departure/destination airport for arrivals and departures respectively.

Southampton airport

The “accessible” page provides a simpler view of the information, though the URL is not as friendly as it might be…:

http://www.southamptonairport.com/portal/controller/dispatcher.jsp?ChPath=Southampton^General^Flight%20information^Live%20flight%20departures

Southampton airport

The search URL is even more hostile:

http://www.southamptonairport.com/portal/site/southampton/template.PAGE/menuitem.eae22a7fd8fc683c63f0ec109328c1a0/?javax.portlet.begCacheTok=token&javax.portlet.endCacheTok=token&javax.portlet.tpst=f8a931aeea8d03f4b03f78109328c1a0&javax.portlet.prp_f8a931aeea8d03f4b03f78109328c1a0_flightRoute=leeds&javax.portlet.prp_f8a931aeea8d03f4b03f78109328c1a0_flightNumber=&javax.portlet.prp_f8a931aeea8d03f4b03f78109328c1a0_flightTerminal=

which looks like it requires the presence of tokens or dynamically created data.

Southampton airport flights board

So what might an autoresponder look like? Even if all we do is redirect the flight status information, we might imagine an exchange like the following:

@sou_airport_flightStatus BE173

would return something like:

@example BE173 dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED

or for BE3646:

@example BE3646 BERGERAC sch_dep 14:05 arr ESTIMATED 19:03
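A toy sketch of the parsing and formatting side of such an autoresponder. The flight table is invented for illustration; a real version would pull status from the live departures page and reply via the Twitter API:

```python
import re

# Invented flight statuses for illustration; a real responder would look
# these up from the airport's live flight information pages
FLIGHTS = {
    "BE173": "dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED",
    "BE3646": "BERGERAC sch_dep 14:05 arr ESTIMATED 19:03",
}

def status_reply(tweet_text, sender):
    """Build a reply for a tweet like '@sou_airport_flightStatus BE173'."""
    m = re.search(r"\b([A-Z]{2}\d{1,4})\b", tweet_text)
    if m and m.group(1) in FLIGHTS:
        return "@%s %s %s" % (sender, m.group(1), FLIGHTS[m.group(1)])
    return "@%s Sorry, I don't recognise that flight number" % sender

print(status_reply("@sou_airport_flightStatus BE173", "example"))
# → "@example BE173 dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED"
```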

Alternatively, the airport could just re-factor its (paid for) SMS service (which I think is operated by BAA, as it seems to be available for several other UK airports):

Flight status SMS

Flying Messenger – check flights by SMS
Need flight updates on the move? Flying Messenger lets you check flights at Southampton Airport by mobile phone text message (SMS).

How to use it
Text sou (for Southampton) plus your flight number to 82222.
So for example, if your flight number is BE1846, your text should read:
sou be1846
You’ll receive a reply giving the current status of your flight.

When to use it
This service is available up to 12 hours ahead of the flight’s scheduled time, and up to four hours afterwards. Requests must be sent on the scheduled date.
If you need to set up an alert in advance, please see Flying Messenger PLUS.

What it costs
Each Flying Messenger request costs 25p plus your network’s SMS message rate. If you’re using a pre-pay phone, you need to have enough credit to cover the cost of the service.

As to the amount of distress caused to folk traveling over the last few days, and the stress that several UK airports have been under because of travelers turning up for already cancelled flights, it amazes me that BAA aren’t willing to buy a bundle of free texts and offer a free SMS autoresponder information service…

Written by Tony Hirst

December 21, 2010 at 4:07 pm

Posted in Stirring
