TT381 Presentation – Open Data and Open Standards

Slides and resources relating to a presentation I just gave to students on the OU course TT381 Open source development tools: Open Standards and Open Data (presentation):

Here’s a delicious stack of related resources.

I’m wondering if I over-enthused about UK Government engagement with open data and open standards?! Ho hum, that’s my public service duty for the day if so…;-)

eSTeEM Conference Presentation – Making More of Structured Course Materials

A copy of the presentation I gave at the OU-eSTeEM conference (no event URL?) on generating custom course search engines and mining OU XML documents to generate course mindmaps (Making More of Structured Documents presentation; delicious stack/bookmark list of related resources):

Chatting to Jonathan Fine after the event, I picked up the phrase secondary products, which he used to describe things like course mindmaps that can be generated from the XML source files of OU course materials. From what I can tell, there isn’t much, if any, work going on in the way of finding novel ways of exploiting the structure of OU structured course materials, other than using them simply as a way of generating different presentational views of the course materials as a whole (that is, HTML versions, maybe mobile friendly versions, PDF versions). (If that’s not the case, please feel free to put me right in the comments:-)

One thing Jonathan has been scouring the documents for is evidence of mathematical content across the courses; he also mentioned a couple of ideas relating to access audits over the content itself, such as extracting figure headings, or image captions. (This reminded me of the OpenLearn XML processor (and redux) I first played with 4 years ago (sigh… and nothing’s changed… sigh….), which stripped assets by type from the first generation of OU XML docs). So on my to do list is to have a deeper look at the structure of OU XML, have a peek at what sorts of things might meaningfully (and easily;-) be extracted, and figure out two or three secondary products that can be generated as a result. Note that these products might be products for different audiences, at different times of the course lifecycle: tools for use by the course team or LTS during production (such as accessibility checks), products to support maintenance (there is already a link checker, but maybe there is more that can be done here?), products for students (such as the mindmap), products for alumni, products for OpenLearn views over the content, products to support “learning analytics”, and so on. (If you have any ideas of what forms the secondary products might take, or what structures/elements/entities you’d like to see mined from OU XML, please let me know via the comments. For an example of an OU XML doc, see here.)
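By way of a sketch of the sort of mining I have in mind, here’s what a first pass at an image caption audit might look like in Python – note that the element names (Figure, Caption) and the file name are my guesses at the OU XML schema rather than anything checked against a real document:

import xml.etree.ElementTree as ET

# Hypothetical OU XML file and element names - check against a real doc.
tree = ET.parse("ou_course_unit.xml")
for figure in tree.iter("Figure"):
    caption = figure.find("Caption")
    if caption is None or not (caption.text or "").strip():
        print("Figure with missing caption - possible accessibility issue")
    else:
        print(caption.text.strip())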

jibs.ac.uk AGM Keynote – Revisiting the 5 Laws…

I had the honour of being invited to talk at the JIBS User Group 20th Anniversary AGM yesterday, and as well as having a bit of a rant in the closing plenary about opening up and making internal reuse of data, and making FOI requests about SCONUL data*, I also gave this sideways take on Ranganathan’s Five Laws of Library Science for the current age (The Frictionless Library).

Amongst other things, the presentation sketches a possible project (that I think could make for a good workshop day) revisiting each of the laws in a network context using the various techniques of constitutional interpretation, and (briefly) revisits at least one of the notions of the Invisible Library (see also The Invisible Library (ILI, 2009), another meaningless set of slides…;-)

* Note to self: read up about the current HESA HE Information Landscape Project (Redesigning the higher education data and information landscape). Also check out the “KB+” JISC project (programme?) that will “develo[p] a shared community service that will improve the quality, accuracy, coverage and availability of data for the management, selection, licensing, negotiation, review and access of electronic resources for UK HE” (via @benshowers) and the Talis Aspire Community Edition (aggregated reading lists across several HEIs).

PS I’m working out how to make the slides a little bit more useful as a post hoc/legacy resource by posting them with a bit of context and commentary… But it may take a bit of time…

PPS on the way home, I listened to this Long Now Foundation seminar by Brewster Kahle on Universal Access to All Knowledge, which got me wondering about the extent to which University libraries are depositing resources into the Internet Archive..? There’s a nice piece at the end that makes the point that IPR is such that in terms of the digital record, there’s likely to be a gap in the timeline of archived content right around the 20th century…

PPPS as far as library futures go, here’s a loosely related Roadmapping TEL activity on “Ideas that influence the future of technology enhanced learning” that is currently running on Ideascale.

There were also several discussions during the day relating to information skills needs for 21st century librarians. Some of the ANCIL reports from the Arcadia project on a new information literacy curriculum may be of interest to JIBS members in this regard, I think? (Arcadia Project Report)

I think there’s a real need for librarians to help folk make sense of the wealth of data out there, and this in part requires a good understanding of network structures and organisations, not just a concentration on hierarchical models.

Hear (sic) also, for example, OU Vice Chancellor Martin Bean on ‘sensemaking’ and the role of the library from his 2010 ALT-C Keynote:

I think it’s also time to start seeing people as information and knowledge resources, as well as just texts…

Just Back From #DevXS

Just back home from #devXS, the first DevCSI student developer event held at the University of Lincoln, in which a shed load (literally!) of student developers gave up their weekend for a 24 hour code bash (and 2 minute Remembrance Sunday silence) on projects of their own design. Well done to all the teams for their hacks and apps – I’m guessing a list of prize winners will appear on the DevXS blog, but you can find a full list on the wiki.

It was really encouraging to see several teams hacking out apps and services around course code data – it’s just a shame that UCAS Terms and Conditions make it so hard for folk to find an open way in to getting hold of a national catalogue of course codes. In the same way that restrictions on UK postcode data held back grass roots development for way too long until recently, access to course code data – which UCAS could help out with – is really holding back the development of grass roots apps around course choice and selection…if crappy license conditions are respected of course… (is there an “in the public interest” defence that could be mounted against respecting such terms and conditions?!)

Here’s the overall winning app, from St Andrews’ Another Team: UUG: the Unofficial University Guide

Many congrats and thanks to the local organisers Alex Bilbie, Nick Jackson, Joss Winn, Jamie Mahoney and any others I may have omitted (apols…) as well as UKOLN’s DevCSI co-ordinator Mahendra Mahey. Great stuff, chaps:-)

PS FWIW, here are my slides from the presentation I gave at the event, as well as a hack I did along the way.

Appropriate IT – My ILI2011 Presentation

Here’s a copy of the slides from my ILI2011 presentation on Appropriate IT:

One thing I wanted to explore was, if discovery happens elsewhere, and the role of the librarian is no longer focussed on discovery related issues, where can library folk help out? Here’s where I think we need to start placing some attention: sensemaking, and knowing what’s possible (aka helping redistribute the future that is already around us;-) Allied with this is the idea that we need to make more out of using appropriate IT for particular tasks, as well as appropriating IT where we can to make our lives easier.

In part, sensemaking is turning the wealth of relevant data out there into something meaningful for the question or issue at hand, or the choice we have to make. My own dabblings with social network analysis are one approach I’m working on that helps me make sense of interest networks and social positioning within those networks, so I can get a feel for how those communities are structured and who the major actors are within them.

As far as knowing what’s possible goes, I think we have a real issue with “folk IT” knowledge. Most of us have a reasonable grasp of folk physics and folk psychology. That is, we have a reasonable common-sense model of how the world works at the human scale (let go of an apple, it falls to the floor), and we can generally read other people from their behaviour; but how well developed is “folk IT” knowledge? Given that the idea that you can search within a page in a wide variety of electronic documents, using ctrl-F as a keyboard shortcut to a “search within page/document” feature, is alien to most people, I think our folk understanding of IT is limited to the principle of “if you switch it off and on again it should start working again”.

Folk IT is also tied up with computational thinking, but at a practical, “human scale”. So here are a few ideas I think the librarians need to start pushing:

– the idea of a graph; it’s what the web’s based around, after all, and it also helps us understand social networks. If you think of your website as a graph, with edges representing links that connect nodes/pages together, and realise that your on-site homepage is whatever page someone lands on from a search engine or third party link, you soon start to realise that maybe your website is not as usefully structured as you thought… (there’s a sketch of this after the list);
– some sort of common sense understanding of the role that URLs/URIs play in the browser, along with the idea that URIs are readable and hackable, and may also say something about the way a website, or the resources it makes available, is organised;
– the notion of “View Source”, that allows you to copy and crib the work of others when constructing your own applications, along with the very idea that you might be able to build web pages yourself out of free standing components.
– the idea of document types and applications that can work all sorts of magic given documents of that type; the knowledge that an MP3 file works well with an audio player or audio editor, for example, or that a PNG or JPG encodes an image, along with more esoteric formats such as KML (paste a URL to a KML file into the search box of a Google Maps search and see what happens, for example…). Knowledge of the filetype/document type gives you some sort of power over it, and helps you realise what sorts of thing you can do with it… (except for things like PDF, for example, which is to all intents and purposes a “can’t do anything with it” filetype;-)
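Here’s the sketch of the “website as graph” idea promised above (in Python, using the networkx library; the page names are made up for illustration). Once the site is a graph, “what can a visitor actually reach from the page they landed on?” becomes a one-liner:

import networkx as nx

# A made-up site structure: each edge is a link between two pages.
site = nx.DiGraph()
site.add_edges_from([
    ("home", "about"), ("home", "courses"),
    ("courses", "course-x101"), ("courses", "course-y202"),
    ("course-x101", "reading-list"),
])

# If a search engine drops a visitor on "course-x101", that page is their
# homepage - and this is all of the site they can reach by following links:
print(sorted(nx.descendants(site, "course-x101")))  # ['reading-list']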

I also think an understanding of pattern based string matching, and what regular expressions allow you to do, would go a long way towards helping folk who ever have to manipulate text or text-based data files, at least in terms of letting them know that there are often better ways of cleaning up a text file automagically rather than having to repeat the same operation over and over again on each separate row in a file containing several thousand lines… They don’t need to know how to write the regular expression from the off, just that the sorts of operation regular expressions support are possible, and that someone will probably be able to show them how to do it…
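To give a flavour of that, here’s a minimal sketch in Python (the file name, line format and pattern are invented for the sake of the example): one substitution rule fixes every line in the file, however many thousand there are:

import re

# Suppose every line reads "Smith, John (2011)" and we want "John Smith, 2011".
with open("names.txt") as f:
    cleaned = [re.sub(r"^(\w+), (\w+) \((\d{4})\)", r"\2 \1, \3", line)
               for line in f]
print("".join(cleaned))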

Slides from OU Rise Library Analytics Workshop: Rambling about Visualisation

For what it’s worth, slides from my presentation yesterday… As ever, they’re largely pointless without commentary…

… and even with the commentary, it was all a bit more garbled than usual (I forgot to breathe, had no real idea in my own mind what I wanted to say, etc etc…)

On reflection, here’s what I took from thinking back about what I should have tried to say:

– my assumption is that folk who are interested in asking data related questions should feel as if they can actually work with the data itself (direct data manipulation); I appreciate this is already way off the mark for some people who want someone else to work the data and then just read reports about it – but then that means you can’t ask or discover your own questions about the data, just read answers (maybe) to questions that someone else has asked, presented in a way they decided;

– you need to feel confident in working with data files – or at least, you need to be prepared to have a go at working with data files! (Bear in mind that many of the blog posts I write are write ups – of a sort – of how to do something I didn’t know how to do a couple of hours before… The web usually has answers to most of the questions that I come up against – and if I can’t find the answers, I can often request them via things like Twitter or Stack Overflow…) This can range from using command line tools, to using applications that let you take data in using one format and get it out as another;

– different tools do different things; if you can get a dataset into a tool in the right way, it may be able to do magical things very very easily indeed…

– three tools that can do a lot without you having to know a lot (though you may have to follow a tutorial or two to pick up the method/recipe….or at least recognise a picture you like and a dataset whose shape you can replicate using your own data, and then the ability to see which bits you need to cut and paste into the command line…):

-=- Gephi: great for plotting networks and graphs. It can also be appropriated to draw line charts (if you can work out how to ‘join the dots’ in the data file by turning the line into a set of points connected by edges) or scatter plots (just load in nodes – no edges connecting them – and lay it out using Gephi’s geolayout tool which also lets you plot “rectilinear” plots based on x and y axis values; (I haven’t worked out a reliable way of working with CSV in Gephi – yet…); it’s amazing what you can describe as a graph when you put your mind to it…

-=- gnuplot: command line tool for plotting scatter plots and line graphs (eg from time series) using data stored in simple text file (e.g. TSV or CSV)

-=- R (and ggplot if you’re feeling adventurous and want “pretty”, nicely designed graphs out); another command line tool (I find RStudio helps) that again loads in data from a CSV file; R can generate statistical graphs very easily from the command line (it does the stats calculations for you given the raw data).

– Visual analytics/graphical data analysis is a process – you tease out questions and answers through directly manipulating the data and engaging with it in a visual way;

– when you see a visualisation you like, look at it closely: what do you see? Spending five mins or so looking at a Gestalt psychology/visual perception tutorial will give you all sorts of tricks and tips for constructing visualisations so that the structure your eye can detect jumps out at you;

– I think I may have confused folk talking about “dimensions”: what I meant was, how many columns could you represent in a given visualisation at the same time, if each data point corresponds to a single row in a data set (there’s a sketch of this after the list). So for example, if you have an x-y plot (2 dimensions), with different symbols (1 dimension) available for plotting the points, as well as different colours (1 dimension) and different possible sizes (1 dimension) for each symbol, along with a label (1 dimension) for each point, and maybe control over the size (1 dimension), colour (1 dimension) and even font (1 dimension) applied to the label, you might find you can actually plot quite a few columns/dimensions for each data point on your chart… Whether or not you can actually decipher it is another matter of course! My Gephi charts generally have 2 explicit dimensions (node size and colour), as well as making use of two spatial dimensions (x, y) to lay out points that are in some sense “close” to each other in network space. It’s worth remembering, though, that if you’re using a tool to engage in a conversation with a dataset as you try to get it to tell its story to you, it may not matter that the visualisation looks a mess to anyone else (a bit like an involved conversation may not make sense if someone else suddenly tries to join it). (Presentation graphics, on the other hand, are usually designed to communicate something that the data is trying to say to another person in a very explicit way.)

– working with data is a tactile thing… you have to be prepared to get your hands dirty…
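Here’s the sketch of that “dimensions” point promised above (Python/matplotlib, with made-up data): each column of a dataset gets mapped onto a different visual property of the same set of points:

import matplotlib.pyplot as plt

# Made-up data: each position across the lists is one data point (one row).
x      = [1, 2, 3, 4]          # dimension 1: horizontal position
y      = [10, 14, 9, 17]       # dimension 2: vertical position
size   = [40, 120, 60, 200]    # dimension 3: symbol size
colour = [0.1, 0.5, 0.7, 0.9]  # dimension 4: symbol colour
labels = ["a", "b", "c", "d"]  # dimension 5: a text label per point

plt.scatter(x, y, s=size, c=colour)
for xi, yi, label in zip(x, y, labels):
    plt.annotate(label, (xi, yi))
plt.show()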

My Presentation at OU Statistics Conference – Visualisation Tools for the Rest of Us

Slides from my presentation to the OU Visualisation and Presentation in Statistics conference earlier today… I will update this post with notes and links as and when I get round to it! In the meantime, you’ll have to use Google… (though other search engines are available). (Slides via Slideshare)

Just Do IT Yourself… My UKSG Presentation

As ever, the slides really need a commentary, but just in case they are useful – my slides from UKSG11:

(It’s short of links, but if you Google appropriate application names it should get you there…;-)

PS Following the presentation, I’ve had a couple of queries about Yahoo Pipes… Here’s a starter guide: Pipes Book – Imaginings. (See also: OUseful.info blog posts on Yahoo Pipes.)

And if you thought you knew what Google could do for you…. What can Google do for you?

TSO OpenUP Competition – Opening Up UCAS Data

Here’s the presentation I gave to the judging panel at the TSO OpenUp competition final yesterday. As ever, it doesn’t make sense without me talking, though I did add some notes in to the Powerpoint deck: Opening up UCAS Course Code Data

(I had hoped Slideshare would be able to use the notes as a transcript, but it doesn’t seem to do that, and I can’t see how to cut and paste the notes in by hand?:-(

A quick summary:

The “Big Idea” behind my entry to the TSO competition was a simple one – make UCAS course data (course code, title and institution) available as data. By opening up the data we make it possible for third parties to construct services and applications based around a complete data skeleton of all the courses offered for undergraduate entry through clearing in a particular year across UK higher education.
The data acts as scaffolding that can be used to develop consumer facing applications across HE (e.g. improved course choice applications) as well as support internal “vertical” activities within HEIs that may also be transferable across HEIs.
Primary value is generated from taking the course code scaffolding and annotating it with related data. Access to this dataset may be sold on in a B2B context via data platform services. Consumer facing applications with their own revenue streams may also be built on top of the data platform.
This idea makes data available that can potentially disrupt the current discovery model for course choice and selection in UK Higher Education (though in its current form, not university application or enrolment).

Here are the notes I doodled to myself in preparation for the pitch. Now the idea has been picked up, it will need tightening up and may change significantly! ;-) Which is to say – in this form, it is just my original personal opinion on the idea, and all ‘facts’ need checking…

  1. I thought the competition was as much about opening up the data as anything… So the original idea was simply that it would be really handy to have machine readable access to course code and course name information for UK HE courses from UCAS – which is presumably the closest thing we have to a national catalogue of higher education courses.

    But when selected to pitch the idea, it became clear that an application or two were also required, or at least some good business reasons for opening up this data…

    So here we go…

  2. UCAS is the clearing house for applying to university in the UK. It maintains a comprehensive directory of HE courses available in the UK.

    Postgraduate students and Open University students do not go through UCAS. Other direct entry routes to higher education courses may also be available.

    According to UCAS, in 2010, there were 697,351 applicants with 487,329 acceptances, compared with 639,860 applications and 481,854 acceptances in 2009. [ Slightly different figures in end of cycle report 2009/10? ]

    For convenience, hold in mind the thought that course codes could be to course marketing what postcodes are to geo-related applications… They provide a natural identifier that other things can be associated with.

    Associated with each degree course is a course code. UCAS course codes are also associated with JACS codes – Joint Academic Coding System identifiers – that relate to particular topics of study. “The UCAS course codes have no meaning other than “this course is offered by this institution for this application cycle”.” [link]

    “UCAS course code is 4 character reference which can be any combination of letters and numbers.

    Each course is also assigned up to three JACS (Joint Academic Coding System) codes in order to classify the course for *J purposes. The JACS system was introduced for 2002 entry, and replaced UCAS Standard Classification of Academic Subjects (SCAS). Each JACS code consists of a single letter followed by 3 numbers. JACS is divided into subject areas, with a related initial letter for each. JACS codes are allocated to courses for the *J return.

    The JACS system is used by the Higher Education Statistics Agency (HESA), and is the result of a joint UCAS-HESA subject code harmonization project.

    JACS is also used by UK institutions to identify the subject matter of programmes and modules. These institutions include the Department for Innovation, Universities and Skills (DIUS), the Home Office and the Higher Education Funding Council for England (HEFCE).”

    Keywords: up to 10 keywords are allocated to each course from a restricted list of just over 4,500 valid keywords.
    “Main keyword: This is generally a broad subject category, usually expressed as a single word, for example ‘Business’.
    Suggested keyword (SUG): Where a search on a main keyword identifies more than 200 courses, the Course Search user is prompted to select from a set of secondary keywords or phrases. These are the more specific ‘Suggested keywords’ attached to the courses identified. For example, ‘Business Administration’ is one of a range of ‘Suggested keywords’ which could be attached to a Business course (there are more than 60 others to choose from). A course in Business Administration would typically have this as the ‘Suggested keyword’, with ‘Business’ as the main keyword.
    However, if a course only has a ‘Suggested keyword’ and not a related ‘Main keyword’, the course will not be displayed in any search under the ‘Main keyword’ alone.

    Single subject: Main keywords can be ticked as ‘Single subject’. This means that the course will be displayed by a keyword search on the subject, when the user chooses the ‘single subject’ option below. You may have a maximum of two keywords indicated as single subjects per course.”

    “Between January and March 2010, approximately 600,000 unique IP addresses access the UCAS course code search function. During the same time period, almost 5 million unique IP addresses accessed the UCAS subject search function.” [link]

    “New courses from 2012 will be given UCAS codes that should not be used for subject classification purposes. However, all courses will still be assigned up to three individual JACS3 codes based on the subject content of the course.

    An analysis of unique IP address activity on the UCAS Course Search has shown that very few searches are conducted using the course code, compared to the subject search function. UCAS Courses Data Team will be working to improve the subject search and course keywords over the coming year to enable potential applicants to accurately find suitable courses.” [link]

    Course code identifiers have an important role to play within university administrations, for example in marshalling resources around a course, although they are not used by students. (On the other hand, students may have a familiarity with module codes.) Course codes identify courses that are the subject of quality assessment by the QAA. To a certain extent, a complete catalogue of course codes allows third parties to organise offerings based around UK higher education degrees in a comprehensive way and link in to the UCAS application procedure.

  3. If released as open data, and particularly as Linked Open Data, the course data can be used to support:
    – the release of horizontal data across the UK HE sector by HEIs, such as course catalogue information;
    – vertical scaffolding within an institution for elaboration by module codes, which in turn may be associated with module descriptions, reading lists, educational resources, etc.;
    – the development across HE of services supporting student choice – for example “compare the uni” type services.
  4. At the moment the data is siloed inside UCAS behind a search engine with unfriendly session based URLs and a poor results UI. Whilst it is possible to scrape or crowd-source course code information, such ad hoc collection mechanisms run the danger of being far from complete, which means that bias may be introduced into the collection as a side effect of the collection method.
  5. Making the data available via an API or Linked Data store makes it easier for third parties to build course related services of whatever flavour – course comparison sites, badging services, resource recommendation services. The availability of the data also makes it easier for developers within an institution to develop services around course codes that might be directly transferable to, or scaleable across, other institutions.
  6. What happens if the API becomes writeable? An appropriately designed data store, and corresponding ingest routes, might encourage HEIs to start releasing the course data themselves in a more structured way.

    XCRI is JISC’s preferred way of doing this, and I think there has been some lobbying of HEFCE from various JISC projects, but I’m not sure how successful it’s been?

  7. Ultimately, we might be able to aggregate data from locally maintained data stores. Course marketing becomes a feature of the Linked Data cloud.

    There’s also the context of the data reporting burden on HEIs, reporting to Professional, Statutory and Regulatory Bodies (PSRBs).

    Reconciliation with HESA Institution and campus identifiers, as well as the JISCMU API and Guardian Datablog Rosetta Stone spreadsheet

    By hosting course code data, and using it as scaffolding within a Linked Data cloud around HE courses, a valuable platform service can be made available to HEIs as well as commercial operations designed to support student choice when it comes to selecting an appropriate course and university.

  8. Several recent JISC projects have started to explore the release of course related activity data on the one hand, and Linked Data approaches to enterprise wide data management on the other. What is currently lacking is a national data-centric view over all HEI course offerings. UCAS has that data.

    Opening up the data facilitates rapid innovation projects within HEIs, and makes it possible for innovators within an HEI to make progress on projects that span across course offerings even if they don’t have easy access to that data from their own institution.

  9. Consumer services are also a possibility. As HEIs become more businesslike, treating students as customers, and paying customers at that, we might expect to see the appearance of university course comparison sites.

    CompareTheUni has had a holding page up for months – but will it ever launch? Uni&Books crowd sources module codes and associated reading links. Talis Aspire is a commercial reading list system that associates resources with module codes.

  10. Last year, I pulled together a few separate published datasets, threw them into Google Fusion Tables, then plotted the results. The idea was that you could chart research ratings against student satisfaction, or dropout rates against academic pay. [link]

    Guardian Datablog picked up the post, and I still get traffic from there on a daily basis… [link]

  11. The JISC MOSAIC Library data challenge saw Huddersfield University open up book loans data associated with course codes – so you could map between courses and books, and vice versa (“People who studied this course borrowed this book”, “this book was referred to by students on this course”)

    One demonstrator I built used a bookmarklet to annotate UCAS course pages with a link to a resource page showing what books had been borrowed by students on that course at Huddersfield University. [link]

  12. Enter something like compare the uni, but data driven, and providing aggregated views over data from universities and courses.
  13. To set the scene, the site needs to be designed with a user in mind. I see a 16-17 year old, slouching on the sofa, TV on with the most partial of attention being paid to it, laptop or tablet to hand and the main focus of attention. Facebook chat and a browser are grabbing attention on screen, with occasional distractions from the TV and mobile phone.
  14. The key is course data – this provides a natural set of identifiers that span the full range of clearing based HE course offerings in the UK and allows third parties to build services on this basis (there’s a sketch of what such a record might look like at the end of this list).

    The course codes also provide hooks against which it may be possible to deploy mappings across skills frameworks, e.g. SFIA in the IT world. The course codes will also have associated JACS subject code mappings and UCAS search terms, which in turn may provide weak links into other domains, such as the world of books, using vocabularies such as the Library of Congress Subject Headings and Dewey classification codes.

  15. Further down the line, if we can start to associate module codes with course codes, we can start to develop services to support current students, or informal learners, by hooking in educational resources at the module level.
  16. Marketing can go several ways. For the data platform, evangelism into the HE developer community may spark innovation from within HEIs, most likely under the auspices of JISC projects. Platform data services may also be marketed to third party developers and innovators/entrepreneurs.

    Services built on top of the data platform will need to be marketed to the target audience using appropriate channels. Specialist marketers such as Campus Group may be appropriate partners here.

  17. The idea pitched is disruptive in that one of the major competitors, at least at first, is UCAS. However, if UCAS retains its unique role in university application and clearing, then UCAS will still play an essential, and heavily trafficked, role in undergraduate student applications to university. Course discovery and selection will, however, move away from the UCAS site towards services that better meet the needs of potential applicants. One might then imagine UCAS becoming a B2B service that acts as intermediary between student choice websites and universities, or even entertain a scenario in which UCAS is disintermediated and some other clearing mechanism instituted between universities and potential-student facing course choice portals.
  18. According to UCAS, between January and March 2010 “almost 5 million unique IP addresses accessed the UCAS subject search function” [link]. In each of the last couple of years, the annual application/acceptance numbers have been of the order of approx 500,000 students intake per year, from around 600,000 applicants. If 10% of applicants each generate £5, that’s £300k pa; £10 from 20% of intake is £1M pa; £50 each from 40% is £10M. I haven’t managed to find out what the acquisition cost of a successful applicant is, or the advertising budget allocated to an undergraduate recruitment marketing campaign, but there are 200 or so HE institutions (going by the number of allocated HESA institution codes).

    For the platform business – e.g. a business model based around selling queries on linked/aggregated/mapped datasets. If you imagine a query returning results with several attributes, each result is a row and each attribute is a column. If you allow free access to x thousand query cells returned a day, and then charge for cells above that limit, you:
    – encourage wider innovation around your platform: let people run narrow queries or broad queries, and license the data for folk to use in their own datastores/augmented with their own triples;
    – generate revenue that scales on a metered basis according to usage;
    – offer additional analytics that get your tracking script into third party web pages, helping train your learning classifiers, which makes the platform more valuable.

    For a consumer facing application – e.g. a course choice site for potential applicants is the easiest to imagine:
    – Short term: the model would be advertising (e.g. course/uni ads), plus affiliate fees on book sales for first year books? Second hand books market, e.g. via Facebook Marketplace?
    – Medium term: affiliate fee for prospectus application/fulfilment;
    – Long term: affiliate fee for course registration.

  19. At the end of the day, if the data describing all the HE courses available in the UK is available as data, folk will be able to start to build interesting things around it…
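To make the “course codes as scaffolding” idea from point 14 a bit more concrete, here’s a sketch of the sort of record an opened-up course dataset might expose – the field names and values are my own invention for illustration, not a real UCAS API:

import json

# A hypothetical course record - my guess at a useful shape, not a real API.
course = {
    "institution": "H-0123",         # e.g. a HESA institution identifier
    "ucas_course_code": "G400",      # 4-character UCAS course code
    "title": "BSc Computer Science",
    "jacs_codes": ["I100"],          # up to three JACS subject codes
    "keywords": ["Computing", "Software Engineering"],
}

# Third parties could then annotate this scaffolding - reading lists, fees,
# satisfaction scores - keyed on (institution, ucas_course_code).
print(json.dumps(course, indent=2))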

Google Apps as a Mashup Environment – Slides from #guug11

FWIW, here are the slides from my presentation on “Mashing Up Google Apps” at the excellent Google Apps UK User Group (#guug11), as hosted by Martin Hamilton at Loughborough University yesterday.

The “mashup environment” diagram was generated using a desktop version of Graphviz, but it can also be generated using the Google Chart Tools Graphviz chart, as in the example below:

google apps mashup environment

Here’s the “source code” for that image:

digraph googApps {

GoogleSpreadsheet [shape=Msquare]
GoogleCalendar [shape=Msquare]
GoogleMail [shape=Msquare]
GoogleDocs [shape=Msquare]
CSV [shape=diamond]
JSON [shape=diamond]
HTML [shape=diamond]
XML [shape=diamond]
GoogleAppsScript [shape=diamond]
"[GoogleVizDataAPI]" [shape=diamond]
"<GoogleForm>" [shape=doubleoctagon]
"<GoogleGadgets>" [shape=doubleoctagon]
"<GoogleVizDataCharts>" [shape=doubleoctagon]
"<GoogleMaps>" [shape=doubleoctagon]

CSV->URL
HTML->URL
XML->URL
event->GoogleAppsScript
GoogleAppsScript->"<GoogleMaps>"
GoogleAppsScript->GoogleMail
GoogleAppsScript->GoogleCalendar
GoogleAppsScript->GoogleSpreadsheet
GoogleSpreadsheet->GoogleAppsScript
GoogleAppsScript->GoogleDocs
GoogleSpreadsheet->JSON
email->GoogleMail
GoogleMail->email
GoogleDocs->GoogleAppsScript
GoogleCalendar->GoogleAppsScript
"<GoogleForm>"->event
event->GoogleSpreadsheet
time->event
"<GoogleForm>"->GoogleSpreadsheet
URL->GoogleSpreadsheet
GoogleSpreadsheet->"[GoogleVizDataAPI]"
"[GoogleVizDataAPI]"->"<GoogleVizDataCharts>"
GoogleSpreadsheet->"<GoogleGadgets>"
}
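And if you don’t have Graphviz to hand, the same DOT source can (I think) be passed to the Google Chart Tools service as a URL parameter – a sketch in Python, with the DOT trimmed for brevity; do check that the cht=gv chart type is still supported before relying on it:

import urllib.parse

dot = 'digraph googApps { GoogleSpreadsheet -> "[GoogleVizDataAPI]" }'  # trimmed
url = "https://chart.googleapis.com/chart?cht=gv&chl=" + urllib.parse.quote(dot)
print(url)  # requesting this URL should return the rendered graph as an image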

And finally, here’s a snapshot of the hashtag community around the event as of mid-morning yesterday:

#guug11 twitter echo chamber

Node colour is related to the total number of followers, and node size to betweenness centrality.
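(If you want to reproduce that sort of node sizing, betweenness centrality is a one-liner with the networkx library – a sketch, with a made-up follower graph:)

import networkx as nx

# A made-up "who follows whom" graph, just for illustration.
g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("b", "d")])
print(nx.betweenness_centrality(g))  # scale node sizes by these values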