OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Archive for the ‘Thinkses’ Category

Confused About Transparency

with 6 comments

[Thinkses in progress - riffing around the idea that transparency is not reporting. This is all a bit confused atm...]

UK Health Secretary Jeremy Hunt was on BBC Radio 4’s Today programme today talking about a new “open and honest reporting culture” for UK hospitals. Transparency, it seems, is about publishing open data, or at least, putting crappy league tables onto websites. I think: not….

The fact that a hospital has “a number” of mistakes may or may not be interesting. As with most statistics, there is little actual information in a single number. As the refrain on the OU/BBC co-produced numbers programme More or Less goes, ‘is it a big number or a small number?’. The information typically lies in the comparison with other numbers, either across time or across different entities (for example, comparing figures across hospitals). But comparisons may also be loaded. For a fair comparison we need to normalise numbers – that is, we need to put them on the same footing.

[A tweet from @kdnuggets comments: 'The question to ask is not - "is it a big number or a small number?", but how it compares with other numbers'. The sense of the above is that such a comparison is always essential. A score of 9.5 in a test is a large number when the marks are out of ten, a small one when out of one hundred. Hence the need for normalisation, or some other basis for generating a comparison.]

XKCD: heatmap

The above cartoon from web comic XKCD illustrates this well, with its comment on how reporting raw numbers on a map often just produces a population map. If town A, with a population of 1 million, has a causal incidence [I made that phrase up: I mean, the town somehow causes the incidence of X at that rate] of some horrible X of 1% (that is, 10,000 people get it as a result of living in town A), and town B, with a population of 50,000, has a causal incidence of 10% (that is, 5,000 people get X), a simple numbers map would make you fearful of living in town A, but you’d likely be worse off moving to town B.
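
By way of a worked example of that normalisation point, here is a minimal Python sketch using the made-up town figures above – the raw counts point at town A, the normalised rates at town B:

# Raw counts vs normalised rates for the two hypothetical towns above
towns = {
    "Town A": {"population": 1000000, "cases": 10000},
    "Town B": {"population": 50000, "cases": 5000},
}

for name, town in towns.items():
    rate = town["cases"] / town["population"]
    print(name, "cases:", town["cases"], "rate:", "{:.0%}".format(rate))

# Town A cases: 10000 rate: 1%
# Town B cases: 5000 rate: 10%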

Sometimes a single number may appear to be meaningful. I have £2.73 in my pocket so I have £2.73 to spend when I go to the beach. But again, there is a need for comparison here. £2.73 needs to be compared against the price of things it can purchase to inform my purchasing decisions.

In the opendata world, it seems that just publishing numbers is taken as transparency. But that’s largely meaningless. Even being able to compare numbers year on year, or month on month, or hospital on hospital, is largely meaningless, even if those comparisons can be suitably normalised. It’s largely meaningless because it doesn’t help me make sense of the “so what?” question.

Transparency comes from seeing how those numbers are used to support decision making. Transparency comes from seeing how this number was used to inform that decision, and why it influenced the decision in that way.

Transparency comes from unpacking the decisions that are “evidenced” by the opendata, or other data not open, or no data at all, just whim (or bad policy).

Suppose a local council spends £x thousands on an out-of-area placement several hundred miles away. This may or may not be expensive. We can perhaps look at other placement spends and see that the one hundreds of miles away appears to offer good value for money (it looks cheap compared to other placements; which maybe begs the question of why those other placements are being used if pure cost is a factor). The transparency comes from knowing how the open data contributed to the decision. In many cases, it will be impossible to be fully transparent (i.e. to fully justify a decision based on opendata) because there will be other factors involved, such as a consideration of sensitive personal data (clinical decisions based around medical factors, for example).

So what if there are z mistakes in a hospital, for league table purposes – although one thing I might care about is how z is normalised to provide a basis of comparison with other hospitals in a league table. Because league tables, sort orders, and normalisation make the data political. On the other hand – maybe I absolutely do want to know the number z – and why is it that number? (Why is it not z/2 or 2*z? By what process did z come into being? (We have to accept, unfortunately, that systems tend to incur errors. Unless we introduce self-correcting processes. I absolutely loved the idea of error-correcting codes when I was first introduced to them!) And knowing z, how does that inform the decision making of the hospital? What happens as a result of z? Would the same response be prompted if the number was z-1, or z/2? Would a different response be in order if the number was z+1, or would nothing change until it hit z*2? In this case the “comparison” comes from comparing the different decisions that would result from the number being different, or the different decisions that can be made given a particular number. The meaning of the number then becomes aligned to the different decisions that are taken for different values of that number. The number becomes meaningful in relation to the threshold values set on that variable when it comes to triggering decisions.)
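
To make that last point concrete, here is a minimal sketch (the threshold values and responses are entirely made up) of how the same number z only takes on meaning through the decisions it triggers:

# Hypothetical decision thresholds: the meaning of z comes from the
# boundaries it crosses, not from the raw count itself
def response_to_error_count(z, review_at=10, escalate_at=20):
    if z >= escalate_at:
        return "escalate: trigger an external review"
    elif z >= review_at:
        return "investigate: schedule an internal review"
    return "monitor: no change to current practice"

for z in (5, 10, 19, 20):
    print(z, "->", response_to_error_count(z))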

Transparency comes not from publishing open data, but from being open about decision making processes and possibly the threshold values or rates of change in indicators that prompt decisions. In many cases the detail of the decision may not be fully open for very good reason, in which case we need to trust the process. Which means understanding the factors involved in the process. Which may in part be “evidenced” through open data.

Going back to the out of area placement – the site hundreds of miles away may have been decided on by a local consideration, such as the “spot price” of the service provision. If financial considerations play a part in the decision making process behind making that placement, that’s useful to know. It might be unpalatable, but that’s the way the system works. But it begs the question – does the cost of servicing that placement (for example, local staff having to make round trips to that location, opportunity cost associated with not servicing more local needs incurred by the loss of time in meeting that requirement) also form part of the financial consideration made during the taking of that decision? The unit cost of attending a remote location for an intervention will inevitably be higher than attending a closer one.

If financial considerations are part of a decision, how “total” is the consideration of the costs?

That is a very real part of the transparency consideration. To a certain extent, I don’t care that it costs £x for spot provision y. But I do want to know that finance plays a part in the decision. And I also want to know how the finance consideration is put together. That’s where the transparency comes in. £50 quid for an iPhone? Brilliant. Dead cheap. Contract £50 per month for two years. OK – £50 quid. Brilliant. Or maybe £400 for an iPhone and a £10 monthly contract for a year. £400? You must be joking. £1250 or £520 total cost of ownership? What do you think? £50? Bargain. #ffs
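
The arithmetic behind that little rant, spelled out as a quick sketch using the prices quoted above:

# Total cost of ownership: handset price plus the contract over its term
def total_cost(handset, monthly, months):
    return handset + monthly * months

print(total_cost(50, 50, 24))   # 1250 - the "cheap" iPhone
print(total_cost(400, 10, 12))  # 520  - the "expensive" one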

Transparency comes from knowing the factors involved in a decision. Transparency comes from knowing what data is available to support those decisions, and how the data is used to inform those decisions. In certain cases, we may be able to see some opendata to work through whether or not the evidence supports the decision based on the criteria that are claimed to be used as the basis for the decision making process. That’s just marking. That’s just checking the working.

The transparency bit comes from understanding the decision making process and the extent to which the data is being used to support it. Not the publication of the number 7 or the amount £43,125.26.

Reporting is not transparency. Transparency is knowing the process by which the reporting informs and influences decision making.

I’m not sure that “openness” of throughput is a good thing either. I’m not even sure that openness of process is a Good Thing (because then it can be gamed, and turned against the public sector by private enterprise). I’m not sure at all how transparency and openness relate? Or what “openness” actually applies to? The openness agenda creeps (as I guess I am proposing here in the context of “openness” around decision making) and I’m not sure that’s a good thing. I don’t think we have thought openness through and I’m not sure that it necessarily is such a Good Thing after all…

What I do think we need is more openness within organisations. Maybe that’s where self-correction can start to kick in, when the members of an organisation have access to its internal decision making procedures. Certainly this was one reason I favoured openness of OU content (eg Innovating from the Inside, Outside) – not for it to be open, per se, but because it meant I could actually discover it and make use of it, rather than it being siloed and hidden away from me in another part of the organisation, preventing me from using it elsewhere in the organisation.

Written by Tony Hirst

June 24, 2014 at 9:59 am

Posted in Thinkses


Tracking Changes in IPython Notebooks?

with 3 comments

Managing suggested changes to the same set of docs, along with comments and observations, from multiple respondents is one of the challenges any organisation whose business is largely concerned with the production of documents has to face.

Passing shared/social living documents by reference rather than value, so that folk don’t have to share multiple physical copies of the same document, each annotated separately, is one way. Tools like track changes in word processor docs, wiki page histories, or git diffs, are another.

All documents have an underlying representation – web pages have HTML, word documents have whatever XML horrors lie under the hood, IPython notebooks have JSON.

Change tracking solutions like git show differences to the raw representation, as in this example of a couple of changes made to a (raw) IPython notebook:

Track changes in github

Notebooks can also be saved in a non-executable form that includes previously generated cell outputs as HTML, but again a git view of the differences would reveal changes at the HTML code level, rather than the rendered HTML level. (Tracked changes also include ‘useful’ ones, such as changes to cell contents, and (at a WYSIWYG level at least) irrelevant ‘administrative’ value changes such as changes to hash values recorded in the notebook source JSON.)

Tracking changes in a WYSIWYG display shows the changes at the rendered, WYSIWYG level, as for example this demo of a track changes CKEditor plugin demonstrates [docs]:

lite - ck editor track changes

However, the change management features are typically implemented through additional metadata/markup added to the underlying representation:

lite changes src

For the course we’re working on at the moment, we’re making significant use of IPython notebooks, requiring comments/suggested changes from multiple reviewers over the same set of notebooks.

So I was wondering – what would it take to have an nbviewer style view in something like github that could render WYSIWYG track changes style views over a modified notebook in just cell contents and cell outputs?
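
As a rough sketch of a first step in that direction – comparing two versions of a notebook at the cell level rather than at the raw JSON level – something like the following might do (it assumes nbformat 4 style JSON with a top-level "cells" list, older notebooks nest cells inside "worksheets", and the file names are hypothetical):

import json
import difflib

def cell_sources(path):
    # Pull out just the source text of each cell from the notebook JSON
    with open(path) as f:
        nb = json.load(f)
    return ["".join(cell.get("source", [])) for cell in nb.get("cells", [])]

def diff_notebooks(old_path, new_path):
    old, new = cell_sources(old_path), cell_sources(new_path)
    # A unified diff over cell contents, ignoring outputs and metadata entirely
    return "\n".join(difflib.unified_diff(old, new, "original", "revised", lineterm=""))

# print(diff_notebooks("notebook_v1.ipynb", "notebook_v2.ipynb"))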

This SO thread maybe touches on related issues: Using IPython notebooks under version control.

A similar principle would work the same for HTML too, of course. Hmm, thinks… are there any git previewers for HTML that log edits/diffs at the HTML level but then render those diffs at the WYSIWYG level in a traditional track changes style view?

Hmm… I wonder if a plugin for Atom.io might do this? (Anyone know if atom.io can also run as a service? Eg could I put it onto a VM and then access it through localhost:ATOMIOPORT?)

PS also on the change management thing in IPython Notebooks, and again something that might make sense in a git context, is the management of ‘undo’ features in a cell.

IPython notebooks have a powerful cell-by-cell undo feature that works at least during a current session (if you shut down a notebook and then restart it, I assume the cell history is lost?). [Anyone know a good link describing/summarising the history/undo features of IPython Notebooks?]

I’m keen for students to take ownership of notebooks and try things out within them, but I’m also mindful that sometimes they may make repeated changes to a cell, lose the undo history for whatever reason, and then want to reset the cell to the “original” contents, for some definition of “original” (such as the version that was issued to the learner by the instructor, or the version the learner viewed at their first use of the notebook).

One solution is for students to duplicate each notebook before they start to work on it so they have an original copy. But this is a bit clunky. I just want an option to reveal a “reset” button by each cell and then be able to reset it. Or perhaps, in line with the other cell operations, reset a specific highlighted cell, reset all cells, or reset all cells above or below a selected cell.

Written by Tony Hirst

June 5, 2014 at 9:16 am

Posted in OU2.0, Thinkses


Open Data, Transparency, Fan-In and Fan-Out

with one comment

In digital electronics, the notions of fan in and fan out describe, respectively, the number of inputs a gate (or, on a chip, a pin) can handle, or the number of output connections it can drive. I’ve been thinking about this notion quite a bit, recently, in the context of concentrating information, or data, about a particular service.

For example, suppose I want to look at the payments made by a local council, as declared under transparency regulations. I can get the data for a particular council from a particular source. If we consider each organisation that the council makes a payment to as a separate output (that is, as a connection that goes between that council and the particular organisation), the fan out of the payment data gives the number of distinct organisations that the council has made a declared payment to.

One thing councils do is make payments to other public bodies who have provided them with some service or other. This may include other councils (for example, for the delivery of services relating to out of area social care).

Why might this be useful? If we aggregate the payments data from different councils, we can set up a database that allows us to look at all payments from different councils to a particular organisation, (which may also be a particular council, which is obliged to publish its transaction data, as well as a private company, which currently isn’t). (See Using Aggregated Local Council Spending Data for Reverse Spending (Payments to) Lookups for an example of this. I think startup Spend Network are aggregating this data, but they don’t seem to be offering any useful open or free services, or data collections, off the back of it. OpenSpending has some data, but it’s scattergun in what’s there and what isn’t, depending as it does on volunteer data collectors and curators.)

The payments incoming to a public body from other public bodies are therefore available as open data, but not in a generally, or conveniently, concentrated way. The fan in of public payments is given by the number of public bodies that have made a payment to a particular body (which may itself be a public body or may be a private company). If the fan in is large, it can be a major chore searching through the payments data of all the other public bodies trying to track down payments to the body of interest.
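
In pandas terms, given an aggregated payments table with payer and payee columns, fan out and fan in are just distinct counts in each direction. A quick sketch (the column names and figures are made up):

import pandas as pd

# Hypothetical aggregated payments data: one row per declared payment
payments = pd.DataFrame([
    {"payer": "Council A", "payee": "EvilCo Ltd", "amount": 12000},
    {"payer": "Council A", "payee": "Council B", "amount": 45000},
    {"payer": "Council B", "payee": "EvilCo Ltd", "amount": 8000},
    {"payer": "Council C", "payee": "EvilCo Ltd", "amount": 27000},
])

# Fan out: how many distinct organisations each payer pays
fan_out = payments.groupby("payer")["payee"].nunique()

# Fan in: how many distinct payers each payee receives money from
fan_in = payments.groupby("payee")["payer"].nunique()

print(fan_out)
print(fan_in)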

Whilst I can easily discover fan out payments from a public body, I can’t easily discover the originators of fan in public payments to a body, public or otherwise. Except that I could possibly FOI a public body for this information (“please send me a list of payments you have received from these bodies…”).

As more and more public services get outsourced to private contractors, I wonder if those private contractors will start to buy services off the public providers? I may be able to FOI the public providers for their receipts data (any examples of this, successful or otherwise?), but I wouldn’t be able to find any publicly disclosed payments data from the private provider to the public provider.

The transparency matrix thus looks something like this:

  • payment from public body to public body: payment disclosed as public data, receipts available from analysis of all public body payment data (and receipts FOIable from receiver?)
  • payment from public body to private body: payment disclosed as public data; total public payments to private body can be ascertained by inspecting payments data of all public bodies. Effective fan-in can be increased by setting up different companies to receive payments and make it harder to aggregate total public monies incoming to a corporate group. (It would be useful if private companies had to disclose: a) total amount of public monies received from any public source, exceeding some threshold; b) individual payments above a certain value from a public body)
  • payment from private body to public body: receipt FOIable from public body? No disclosure requirement on private body? Private body can effectively reduce fan out (that is, easily identified concentration of outgoing payments) by setting up different companies through which payments are made.
  • payment from private body to private body: no disclosure requirements.

I have of course already wondered Do We Need Open Receipts Data as Well as Open Spending Data?. My current take on this would perhaps argue in favour of requiring all bodies, public or private, that receive more than £25,000, for example, in total per financial year from a particular corporate group* to declare all the transactions (over £500, say) from that body. A step on the road towards that would be to require bodies that receive more than a certain amount of receipts summed from across all public bodies to be subject to FOI at least in respect of payments data received from public bodies.

* We would need to define a corporate group somehow, to get round companies setting up EvilCo Public Money Receiving Company No. 1, EvilCo Public Money Receiving Company No. 2354 Ltd, etc, each of which only ever invoices up to £24,999. There would also have to be a way of identifying payments from the same public body but made through different accounts (for example, different local council directorates).

Whilst this would place a burden on all bodies, it would also start to level out the asymmetry between public body reporting and private body reporting in the matter of publicly funded transactions. At the moment, private company overheads for delivering subcontracted public services are less than public body overheads for delivering the same services when it comes to, for example, transparency disclosures, placing the public body at a disadvantage compared to the private body. (Note that things may be changing, at least in the FOI stakes… See for example the latter part of Some Notes on Extending FOI.)

One might almost think the government was promoting transparency of public services gleefully in the expectation that, as their privatisation agenda moves on, a decreasing proportion of service providers will actually have to make public disclosures. Again, this asymmetry would make for unfair comparisons between service providers based on publicly available data if only data from public body providers of public services, rather than private providers of tendered public services, had to be disclosed.

So the take home, which has got diluted somewhat, is the proposal that the joint notions of fan in and fan out, when it comes to payment/receipts data, may be useful when it comes to helping us think about how easy it is to concentrate data/information about payments to, or from, a particular body, and how policy can be defined to shine light where it needs shining.

Comments?

Written by Tony Hirst

April 17, 2014 at 7:58 pm

Posted in Open Data, Thinkses

A Nudge Here, A Nudge There, But With Meaning..

A handful of posts caught my attention yesterday around the whole data thang…

First up, a quote on the New Aesthetic blog: “the state-of-the-art method for shaping ideas is not to coerce overtly but to seduce covertly, from a foundation of knowledge”, referencing an article on Medium: Is the Internet good or bad? Yes. The quote includes mention of an Adweek article (this one? Marketers Should Take Note of When Women Feel Least Attractive; see also a response and the original press release) that “noted that women feel less attractive on Mondays, and that this might be the best time to advertise make-up to them.”

I took this as a cautionary tale about the way in which “big data” qua theoryless statistical models based on the uncontrolled, if large, samples that make up “found” datasets (to pick up on a phrase used by Tim Harford in Big data: are we making a big mistake? [h/t @schmerg et al]) can be used to malevolent effect. (Thanks to @devonwalshe for highlighting that it’s not the data we should blame (“the data itself has no agency, so a little pointless to blame … Just sensitive to tech fear. Shifts blame from people to things.”) but the motivations and actions of the people who make use of the data.)

Which is to say – there’s ethics involved. As an extreme example, consider the possible “weaponisation” of data, for example in the context of PSYOP – “psychological operations” (are they still called that?) As the New Aesthetic quote, and the full Medium article itself, explain, the way in which data models allow messages to be shaped, targeted and tailored provides companies and politicians with a form of soft power that encourage us “to click, willingly, on a choice that has been engineered for us”. (This unpicks further – not only are we modelled so that the prompts are issued to us at an opportune time, but the choices we are provided with may also have been identified algorithmically.)

So that’s one thing…

Around about the same time, I also spotted a news announcement that Dunnhumby – an early bellwether of how to make the most of #midata consumer data – has bought “advertising technology” firm Sociomantic (press release): “dunnhumby will combine its extensive insights on the shopping preferences of 400 million consumers with Sociomantic’s intelligent digital-advertising technology and real-time data from more than 700 million online consumers to dramatically improve how advertising is planned, personalized and evaluated. For the first time, marketing content can be dynamically created specifically for an individual in real-time based on their interests and shopping preferences, and delivered across online media and mobile devices.” Good, oh…

A post on the Dunnhumby blog (It’s Time to Revolutionise Digital Advertising) provides further insight about what we might expect next:

We have decided to buy the company because the combination of Sociomantic’s technological capability and dunnhumby’s insight from 430m shoppers worldwide will create a new opportunity to make the online experience a lot better, because for the first time we will be able to make online content personalised for people, based on what they actually like, want and need. It is what we have been doing with loyalty programs and personalised offers for years – done with scale and speed in the digital world.

So what will we actually do to make that online experience better for customers? First, because we know our customers, what they see will be relevant and based on who they are, what they are interested in and what they shop for. It’s the same insight that powers Clubcard vouchers in the UK which are tailored to what customers shop for both online and in-store. Second, because we understand what customers actually buy online or in-store, we can tell advertisers how advertising needs to change and how they can help customers with information they value. Of course there is a clear benefit to advertisers, because they can spend their budgets only where they are talking to the right audience in the right way with the right content at the right time, measuring what works, what doesn’t and taking out a lot of guesswork. The real benefit though must be to customers whose online experience will get richer, simpler and more enjoyable. The free internet content we enjoy today is paid for by advertising, we just want to make it advertising offers and content you will enjoy too.

This needs probing further – are Dunnhumby proposing merging data about actual shopping habits in physical and online store with user cookies so that ads can be served based on actual consumption? (See for example Centralising User Tracking on the Web. How far has this got, I wonder? Seems like it may be here on mobile devices? Google’s New ‘Advertising ID’ Is Now Live And Tracking Android Phones — This Is What It Looks Like. Here’s the Android developer docs on Advertising ID. See also GigaOm on As advertisers phase out cookies, what’s the alternative?, eg in context of “known identifiers” (like email addresses and usernames) and “stable identifiers” (persistent device or browser level identifiers).)

That’s the second thing…

For some reason, it’s all starting to make me think of supersaturated solutions

PS FWIW, the OU/BBC co-produced Bang Goes the Theory (BBC1) had a “Big Data” episode recently – depending on when you read this, you may still be able to watch it here: Bang Goes the Theory – Series 8 – Episode 3: Big Data

Written by Tony Hirst

April 4, 2014 at 11:40 am

Posted in Thinkses


Mixing Stuff Up

Remember mashups? Five years or so ago they were all the rage. At their heart, they provided ways of combining things that already existed to do new things. This is a lazy approach, and one I favour.

One of the key inspirations for me in this idea of combinatorial tech, or tech combinatorics, is Jon Udell. His Library Lookup project blew me away with its creativity (the use of bookmarklets, the way the project encouraged you to access one IT service from another, the use of “linked data” – common/core-canonical identifiers – to bridge services and leverage or enrich one from another, and so on) and was the spark that fired many of my own doodlings. (Just thinking about it again excites me now…)

As Jon wrote on his blog yesterday (Shiny old tech) (my emphasis):

What does worry me, a bit, is the recent public conversation about ageism in tech. I’m 20 years past the point at which Vinod Khosla would have me fade into the sunset. And I think differently about innovation than Silicon Valley does. I don’t think we lack new ideas. I think we lack creative recombination of proven tech, and the execution and follow-through required to surface its latent value.

Elm City is one example of that. Another is my current project, Thali, Yaron Goland’s bid to create the peer-to-peer web that I’ve long envisioned. Thali is not a new idea. It is a creative recombination of proven tech: Couchbase, mutual SSL authentication, Tor hidden services. To make Thali possible, Yaron is making solid contributions to Thali’s open source foundations. Though younger than me, he is beyond Vinod Khosla’s sell-by date. But he is innovating in a profoundly important way.

Can we draw a clearer distinction between innovation and novelty?

Creative recombination.

I often think of this in terms of appropriation (eg Appropriating Technology, Appropriating IT: innovative uses of emerging technologies or Appropriating IT: Glue Steps).

Or repurposing, a form of reuse that differs from the intended original use.

Openness helps here. Open technologies allow users to innovate without permission. Open licensing is just part of that open technology jigsaw; open standards another; open access and accessibility a third. Open interfaces accessed sideways. And so on.

Looking back over archived blog posts from five, six, seven years ago, the web used to be such fun. An open playground, full of opportunities for creative recombination. Now we have Facebook, where authenticated APIs give you access to local social neighbourhoods, but little more. Now we have Google using link redirection and link pollution at every opportunity. Services once open are closed according to economic imperatives (and maybe scaling issues; maybe some creative recombinations are too costly to support when a network scales). Maybe my memory of a time when the web was more open is a false memory?

Creative recombination, ftw.

PS just spotted this (Walking on custard), via @plymuni. If you don’t see why it’s relevant, you probably don’t get the sense of this post!

Written by Tony Hirst

April 3, 2014 at 9:21 am

Visualising Pandas DataFrames With IPythonBlocks – Proof of Concept

A few weeks ago I came across IPythonBlocks, a Python library developed to support the teaching of Python programming. The library provides an HTML grid that can be manipulated using simple programming constructs, presenting the outcome of the operations in a visually meaningful way.

As part of a new third level OU course we’re putting together on databases and data wrangling, I’ve been getting to grips with the python pandas library. This library provides a dataframe based framework for data analysis and data-styled programming that bears a significant resemblance to R’s notion of dataframes and vectorised computing. pandas also provides a range of dataframe based operations that resemble SQL style operations – joining tables, for example, and performing grouping style summary operations.

One of the things we’re quite keen to do as a course team is identify visually appealing ways of illustrating a variety of data manipulating operations; so I wondered whether we might be able to use ipythonblocks as a basis for visualising – and debugging – pandas dataframe operations.

I’ve posted a demo IPython notebook here: ipythonblocks/pandas proof of concept [nbviewer preview]. In it, I’ve started to sketch out some simple functions for visualising pandas dataframes using ipythonblocks blocks.

For example, the following minimal function finds the size and shape of a pandas dataframe and uses it to configure a simple block:

from ipythonblocks import BlockGrid

def pBlockGrid(df):
    # Use the dataframe's shape (rows, columns) to size the grid
    (y,x)=df.shape
    return BlockGrid(x,y)

We can also colour individual blocks – the following example uses colour to reveal the different datatypes of columns within a dataframe:

ipythonblocks pandas type colour
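
A function along the lines of the one behind that figure might look something like this (the helper name and palette are my own, and the dtype-to-colour mapping is purely illustrative):

import numpy as np
import pandas as pd
from ipythonblocks import BlockGrid

# Illustrative palette keyed by column dtype
DTYPE_COLOURS = {
    np.dtype("int64"): (200, 0, 0),
    np.dtype("float64"): (0, 0, 200),
    np.dtype("object"): (0, 200, 0),
}

def pTypeBlockGrid(df, default=(150, 150, 150)):
    (y, x) = df.shape
    grid = BlockGrid(x, y, fill=default)
    for col_index, dtype in enumerate(df.dtypes):
        colour = DTYPE_COLOURS.get(dtype, default)
        for row_index in range(y):
            block = grid[row_index, col_index]
            block.red, block.green, block.blue = colour
    return grid

pTypeBlockGrid(pd.DataFrame({"a": [1, 2], "b": [0.5, 1.5], "c": ["x", "y"]}))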

A more elaborate function attempts to visualise the outcome of merging two data frames:

ipythonblocks pandas demo

The green colour identifies key columns, the red and blue cells data elements from the left and right joined dataframes respectively, and the black cells NA/NaN cells.
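
A rough sketch of how such a function might be put together, under the simplifying assumption that the non-key column names of the two frames don't clash (the colours follow the scheme just described):

import pandas as pd
from ipythonblocks import BlockGrid

def pMergeBlockGrid(left, right, on):
    keys = [on] if isinstance(on, str) else list(on)
    merged = left.merge(right, on=keys, how="outer")
    (y, x) = merged.shape
    grid = BlockGrid(x, y)
    left_only = set(left.columns) - set(keys)
    for col_index, col in enumerate(merged.columns):
        for row_index in range(y):
            if pd.isnull(merged.iloc[row_index, col_index]):
                colour = (0, 0, 0)        # NA/NaN cells
            elif col in keys:
                colour = (0, 200, 0)      # key columns
            elif col in left_only:
                colour = (200, 0, 0)      # cells from the left dataframe
            else:
                colour = (0, 0, 200)      # cells from the right dataframe
            block = grid[row_index, col_index]
            block.red, block.green, block.blue = colour
    return grid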

One thing I started wondering about that I have to admit quite excited me (?!;-) was whether it would be possible to extend the pandas dataframe itself with methods for producing ipythonblocks visual representations of the state of a dataframe, or the effect of dataframe based operations such as .concat() and .merge() on source dataframes.

If you have any comments on this approach, suggestions for additional or alternative ways of visualising dataframe transformations, or thoughts about how to extend pandas dataframes with ipythonblocks style visualisations of those datastructures and/or the operations that can be applied to them, please let me know via the comments:-)

PS some thoughts on a possible pandas interface:

  • DataFrame().blocks() to show the blocks
  • .cat(blocks=True) and .merge(blocks=True) to return (df, blocks)
  • DataFrame().blocks(blockProperties={}) and eg .merge(blocks=True, blockProperties={})
  • blockProperties: showNA=True|False, color_base=(), color_NA=(), color_left=(), color_right=(), color_gradient=[] (eg for a .cat() on many dataframes), colorView=structure|datatypes|missing (the colorView reveals the datatypes of the columns, the structure origins of cells returned from a .merge() or .cat(), or a view of missing data (reveal NA/NaN etc over a base color)), colorTypes={} (to set the colors for different datatypes)

Written by Tony Hirst

March 26, 2014 at 11:37 pm

So Is This Guerrilla Research?

A couple of days ago I delivered a workshop with Martin Weller on the topic of “Guerrilla Research”.

guerrilapdf

The session was run under the #elesig banner, and was the result of an invitation to work through the germ of an idea that was a blog post Martin had published in October 2013, The Art Of Guerrilla Research.

In that post, Martin had posted a short list of what he saw as “guerrilla research” characteristics:

  1. It can be done by one or two researchers and does not require a team
  2. It relies on existing open data, information and tools
  3. It is fairly quick to realise
  4. It is often disseminated via blogs and social media

Looking at these principles now, as in, right now, as I type (I don’t know what I’m going to write…), I don’t necessarily see any of these as defining, at least, not without clarification. Let’s reflect, and see how my fingers transcribe my inner voice…

In the first case, a source crowd or network may play a role in the activity, so maybe it’s the initiation of the activity that only requires one or two people?

Open data, information and tools help, but I’d gear this more towards pre-existing data, information and tools, rather than necessarily open: if you work inside an organisation, you may be able to appropriate resources that are not open or available outside the organisation, and may even have limited access within the organisation; you may have to “steal” access to them, even; open resources do mean that other people can engage in the same activity using the same resources, though, which provides transparency and reproducibility; open resources also make inside, outside activities possible.

The activity may be quick to realise, sort of: I can quickly set a scraper going to collect data about X, and the analysis of the data may be quick to realise; but I may need the scraper to run for days, or weeks, or months; more of a qualifier, I think, is that the activity only requires a relatively small number of relatively quick bursts of activity.

Online means of dissemination are natural, because they’re “free”, immediate, and have potentially wide reach; but I think an email to someone who can, or a letter to the local press, or an activity that is its own publication, such as a submission to a consultation in which the responses are all published, could count too.

Maybe I should have looked at those principles a little more closely before the workshop…;-) And maybe I should have made reference to them in my presentation. Martin did, in his.

PS WordPress just “related” this back to me, from June, 2009: Guerrilla Education: Teaching and Learning at the Speed of News

Written by Tony Hirst

March 21, 2014 at 8:44 am

Posted in OU2.0, Thinkses


Time to Drop Calculators in Favour of Notebook Programming?

With the UK national curriculum for schools set to include a healthy dose of programming from September 2014 (Statutory guidance – National curriculum in England: computing programmes of study) I’m wondering what the diff will be on the school day (what gets dropped if computing is forced in?) and who’ll be teaching it?

A few years ago I spent way too much time engaged in robotics related school outreach activities. One of the driving ideas was that we could use practical and creative robotics as a hands-on platform in a variety of curriculum context: maths and robotics, for example, or science and robotics. We also ran some robot fashion shows – I particularly remember a two(?) day event at the Quay Arts Centre on the Isle of Wight where a couple of dozen or so kids put on a fashion show with tabletop robots – building and programming the robots, designing fashion dolls to sit on them, choosing the music, doing the lights, videoing the show, and then running the show itself in front of a live audience. Brilliant.

On the science side, we ran an extended intervention with the Pompey Study Centre, a study centre attached to the Portsmouth Football Club, that explored scientific principles in the context of robot football. As part of the ‘fitness training’ programme for the robot footballers, the kids had to run scientific experiments as they calibrated and configured their robots.

The robot platform – mechanical design, writing control programmes, working with sensors, understanding interactions with the real world, dealing with uncertainty – provided a medium for creative problem solving that could provide a context for, or be contextualised by, the academic principles being taught from a range of curriculum areas. The emphasis was very much on learning by doing, using an authentic problem solving context to motivate the learning of principles in order to be able to solve problems better or more easily. The idea was that kids should be able to see what the point was, and rehearse the ideas, strategies and techniques of informed problem solving inside the classroom that they might then be able to draw on outside the classroom, or in other classrooms. Needless to say, we were disrespectful of curriculum boundaries and felt free to draw on other curriculum areas when working within a particular curriculum area.

In many respects, robotics provides a great container for teaching pragmatic and practical computing. But robot kit is still pricey and if not used across curriculum areas can be hard for schools to afford. There are also issues of teacher skilling, and the set-up and tear-down time required when working with robot kits across several different classes over the same school day or week.

So how is the new computing curriculum teaching going to be delivered? One approach that I think could have promise if kids are expected to used text based programming languages (which they are required to do at KS3) is to use a notebook style programming environment. The first notebook style environment I came across was Mathematica, though expensive license fees mean I’ve never really used it (Using a Notebook Interface).

More recently, I’ve started playing with IPython Notebooks (“ipynb”; for example, Doodling With IPython Notebooks for Education).

(Start at 2 minutes 16 seconds in – I’m not sure that WordPress embeds respect the time anchor I set. Yet another piece of hosted WordPress crapness.)

For a history of IPython Notebooks, see The IPython notebook: a historical retrospective.

Whilst these can be used for teaching programming, they can also be used for doing simple arithmetic, calculator style, as well as simple graph plotting. If we’re going to teach kids to use calculators, then maybe:

1) we should be teaching them to use “found calculators”, such as on their phone, via the Google search box, in those two-dimensional programming surfaces we call spreadsheets, using tools such as WolframAlpha, etc;

2) maybe we shouldn’t be teaching them to use calculators at all? Maybe instead we should be teaching them to use “programmatic calculations”, as for example in Mathematica, or IPython Notebooks?
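
By way of a trivial example of the kind of “programmatic calculation” I mean – the sort of working that takes a couple of notebook cells rather than a calculator, and that can be annotated, re-run and tweaked (a sketch; the savings figures are made up):

# Calculator-style working, but repeatable and annotatable
deposit = 150
monthly_saving = 20
months = list(range(0, 25))
balance = [deposit + monthly_saving * m for m in months]

print(balance[12])  # balance after a year: 390

# ...and the same working, plotted
import matplotlib.pyplot as plt
plt.plot(months, balance)
plt.xlabel("months")
plt.ylabel("balance")
plt.show()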

Maths is a tool and a language, and notebook environments, or other forms of (inter)active, executable worksheets that can be constructed and/or annotated by learners, experimented with, and whose exercises can be repeated, provide a great environment for exploring how to use and work with that language. They’re also great for learning how the automated execution of mathematical statements can allow you to do mathematical work far more easily than you can do by hand. (This is something I think we often miss when teaching kids the mechanics of maths – they never get a chance to execute powerful mathematical ideas with computational tool support. One argument against using tools is that kids don’t learn to spot when a result a calculator gives is nonsense if they don’t also learn the mechanics by hand. I don’t think many people are that great at estimating numbers, even across orders of magnitude, with the maths that they have learned to do by hand, so I don’t really rate that argument!)

Maybe it’s because I’m looking for signs of uptake of notebook ideas, or maybe it’s because it’s an emerging thing, but I noticed another example of notebook working again today, courtesy of @alexbilbie: reports written over Neo4J graph databases submitted to the Neo4j graph gist winter challenge. The GraphGist how to guide looks like they’re using a port of, or extensions to, an IPython Notebook, though I’ve not checked…

Note that IPython notebooks have access to the shell, so other languages can be used within them if appropriate support is provided. For example, we can use R code in the IPython notebook context.
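
For example, with the rpy2 package installed alongside R (an assumption on my part), a fragment of R can be evaluated from Python, or run directly in a notebook cell via the %%R cell magic that rpy2 provides – a minimal sketch:

import rpy2.robjects as robjects

# Evaluate a fragment of R and pull the result back into the Python context
result = robjects.r("summary(c(1, 2, 3, 4))")
print(result)

# In a notebook, the cell-magic route would be:
#   %load_ext rpy2.ipython
#   %%R
#   summary(c(1, 2, 3, 4))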

Note that interactive, computational and data analysis notebooks are also starting to gain traction in certain areas of research under the moniker “reproducible research”. An example I came across just the other day was The Dataverse Network Project, and an R package that provides an interface to it: dvn – Sharing Reproducible Research from R.

In much the same way that I used to teach programming as a technique for working with robots, we can also teach programming in the context of data analysis. A major issue here is how we get data into and out of a programming environment in a seamless way. Increasingly, data sources hosted online are presenting APIs (programmable interfaces) with wrappers that provide a nice interface to a particular programming language. This makes it easy to use a function call in the programming language to pull data into the programme context. Working with data, particularly when it comes to charting data, provides another authentic hook between maths and programming. Using them together allows us to present each as a tool that works with the other, helping answer the question “but why are we learning this?” with the response “so now you can do this, see this, work with this, find this out”, etc. (I appreciate this is quite a utilitarian view of the value of knowledge…)
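
A sketch of the kind of thing I mean – one line to pull a dataset into a dataframe from a URL or API wrapper, one line to chart it (the URL and column names here are entirely made up):

import pandas as pd

# Hypothetical CSV endpoint exposed by a data publisher
url = "https://example.org/api/jsa_claimants.csv"

df = pd.read_csv(url)                   # data in...
df.plot(x="month", y="claimant_count")  # ...chart out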

But how far can we go in terms of using “raw”, but very powerful, computational tools in school? The other day, I saw this preview of the Wolfram Language:

There is likely to be a cost barrier to using this language, but I wonder: why shouldn’t we use this style of language, or at least the notebook style of computing, in KS3 and 4? What are the barriers (aside from licensing cost and machine access) to using such a medium for teaching computing in context (in maths, in science, in geography, etc)?

Programming puritans might say that notebook style computing isn’t real programming… (I’m not sure why, but I could imagine they might… erm… anyone fancy arguing that line in the comments?!:-) But so what? We don’t want to teach everyone to be a programmer, but we do maybe want to help them realise what sorts of computational levers there are, even if they don’t become computational mechanics?

Written by Tony Hirst

February 26, 2014 at 12:38 pm

Posted in Infoskills, Thinkses

Data Textualisation – Making Human Readable Sense of Data

A picture may be worth a thousand words, but whilst many of us may get a pre-attentive gut reaction reading from a data set visualised using a chart type we’re familiar with, how many of us actually take the time to read a chart thoroughly and maybe verbalise, even if only to ourselves, what the marks on the chart mean, and how they relate to each other? (See How fertility rates affect population for an example of how to read a particular sort of chart.)

An idea that I’m finding increasingly attractive is the notion of data textualisation (or data textualization for the US-English imperialistic searchbots). That is, the generation of mechanical text from data tables so we can read words that describe the numbers – and how they relate – rather than looking at pictures of them or trying to make sense of the table itself.

Here’s a quick example of the sort of thing I mean – the generation of this piece of text:

The total number of people claiming Job Seeker’s Allowance (JSA) on the Isle of Wight in October was 2781, up 94 from 2687 in September, 2013, and down 377 from 3158 in October, 2012.

from a data table that can be sliced like this:

slicing nomis JSA figures

In the same way that we make narrative decisions when it comes to choosing what to put into a data visualisation, as well as how to read it (and how the various elements displayed in it relate to each other), so we make choices about the textual, or narrative, mapping from the data set to the text version (that is, the data textualisation) of it. When we present a chart or data table to a reader, we can try to influence their reading of it in a variety of ways: by choosing the sort order of bars on a bar chart, or rows in a table, for example; or by highlighting one or more elements in a chart or table through the use of colour, font, transparency, and so on.

The actual reading of the chart or table is still largely under the control of the reader, however, and may be thought of as non-linear in the sense that the author of the chart or table can’t really control the order in which the various attributes of the table or chart, or relationships between the various elements, are encountered by the reader. In a linear text, however, the author retains a far more significant degree of control over the exposition, and the way it is presented to the reader.

There is thus a considerable amount of editorial judgement put into the mapping from a data table to text interpretations of the data contained within a particular row, or down a column, or from some combination thereof. The selection of the data points and how the relationships between them are expressed in the sentences formed around them directs attention in terms of how to read the data in a very literal way.

There may also be a certain amount of algorithmic analysis used along the way as sentences are constructed from looking at the relationships between different data elements (“up 94” is a representation (both in the sense of rep-resentation and re-presentation) of a month on month change of +94, while “down 377” is generated mechanically from a year on year comparison).

Every cell in a table may be a fact that can be reported, but there are many more stories to be told by comparing how different data elements in a table stand in relation to each other.

The area of geekery related to this style of computing is known as NLG – natural language generation – but I’ve not found any useful code libraries (in R or Python, preferably…) for messing around with it. (The JSA example above was generated using R as a proof of concept around generating monthly press releases from ONS/nomis job figures.)
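
For what it’s worth, here is a minimal Python sketch of the sort of mechanical sentence templating I have in mind, using the JSA figures from the example above (purely illustrative):

def change_phrase(current, previous):
    # Express a month on month or year on year difference as a directional phrase
    diff = current - previous
    if diff == 0:
        return "unchanged"
    return "{} {}".format("up" if diff > 0 else "down", abs(diff))

def jsa_sentence(area, month, prev_month, year, count, prev_month_count, prev_year_count):
    template = ("The total number of people claiming Job Seeker's Allowance (JSA) "
                "on {area} in {month} was {count}, "
                "{mom} from {prev_month_count} in {prev_month}, {year}, "
                "and {yoy} from {prev_year_count} in {month}, {last_year}.")
    return template.format(
        area=area, month=month, prev_month=prev_month, year=year, last_year=year - 1,
        count=count, prev_month_count=prev_month_count, prev_year_count=prev_year_count,
        mom=change_phrase(count, prev_month_count),
        yoy=change_phrase(count, prev_year_count))

print(jsa_sentence("the Isle of Wight", "October", "September", 2013, 2781, 2687, 3158))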

PS why “data textualisation”, when we can consider even graphical devices as “texts” to be read? I considered “data characterisation” in the sense of turning data into characters, but characterisation is a more general term. Data narration was another possibility, but those crazy Americans patenting everything that moves might think I was “stealing” ideas from Narrative Science. Narrative Science (as well as Data2Text and Automated Insights etc. (who else should I mention?)) are certainly interesting but I have no idea how any of them do what they do. And in terms of narrating data stories – I think that’s a higher level process than the mechanical textualisation I want to start with. Which is not to say I don’t also have a few ideas about how to weave a bit of analysis into the textualisation process too…

Written by Tony Hirst

November 18, 2013 at 4:36 pm

MOOC Busting: Personal Googalytics…

Reading Game Analytics: Maximizing the Value of Player Data earlier this morning (which I suggest might be a handy read if you’re embarking on a learning analytics project…) I was struck by the mention of “player dossiers”. A Game Studies article from 2011 by Ben Medler – Player Dossiers: Analyzing Gameplay Data as a Reward – describes them as follows:

Recording player gameplay data has become a prevalent feature in many games and platform systems. Players are now able to track their achievements, analyze their past gameplay behaviour and share their data with their gaming friends. A common system that gives players these abilities is known as a player dossier, a data-driven reporting tool comprised of a player’s gameplay data. Player dossiers presents a player’s past gameplay by using statistical and visualization methods while offering ways for players to connect to one another using online social networking features.

Which is to say – you can grab your own performance and achievement data and then play with it, maybe in part to help you game the game.

The Game Analytics book also mentioned the availability of third party services built on top of game APIs that let third parties build analytics tools for users that are not otherwise supported by the game publishers.

Hmmm…

What I started to wonder was – are there any services out there that allow you to aggregate dossier material from different games to provide a more rounded picture of your performance as a gamer, or maybe services that homologate dossiers from different games to give overall rankings?

In the learning analytics space, this might correspond to getting your data back from a MOOC provider, for example, and giving it to a third party to analyse. As a user of a MOOC platform, I doubt that you’ll be allowed to see much of the raw data that’s being collected about you; I’m also wary that institutions that sign up to MOOC platforms will also get screwed by the platform providers when it comes to asking for copies of the data. (I suggest folk signing their institutions up to MOOC platforms talk to their library colleagues, and ask how easy it is for them to get data (metadata, transaction data, usage data etc etc) out of the library system vendors, and what sort of contracts got them into the mess they may admit to being in.)

(By the by, again the Game Analytics book made a useful distinction – that of viewing folk as customers, (i.e. people you can eventually get money from), or as players of the game (or maybe in MOOC land, learners). Whilst you may think of yourself as a player (learner), what they really want to do is develop you as a customer. In this respect, I think one of the great benefits of the arrival of MOOCs is that it allows us to see just how we can “monetise” education and let’s us talk freely and, erm, openly, in cold hard terms about the revenue potential of these things, and how they can be used as part of a money making/sales venture, without having to pretend to talk about educational benefits, which we’d probably feel obliged to do if we were talking about universities. Just like game publishers create product (games) to make money, MOOCspace is about businesses making money from education. (If it isn’t, why is venture capital interested?))

Anyway, all that’s all by the by, not just the by the by bit: this was just supposed to be a quick post, rather than a rant, about how we might do a little bit to open up part of the learning analytics data collection process to the community. (The technique generalises to other sectors…) The idea is built on appropriating a technology that many website publishers use to collect data, the third party service that is Google Analytics (eg from 2012, 88% of Universities UK members use Google Analytics on their public websites). I’m not sure how many universities use Google Analytics to track VLE activity though? Or how many MOOC operators use Google Analytics to track activity on course related pages? But if there are some, I think we can grab that data and pop it into a communal data pool; or grab that data into our own Google Account.

So how might we do that?

Almost seven years ago now – SEVEN YEARS! – in a post entitled They Stole OUr Learning Environment – Now We’re Stealing It Back, I described a recipe for customising a VLE (virtual learning environment – the thing that MOOC operators are reimagining and will presumably start (re)selling back to educational institutions as “Cloud based solutions”) – by injecting a panel that allowed you to add your own widgets from third party providers. The technique relied on a browser extension that allowed you to write your own custom javascript programmes that would be injected into the page just before it finished loading. In short, it used an extension that essentially allowed you to create your own additional extensions within it. It was an easy way of writing browser extensions.

That’s all a rather roundabout way of saying we can quite easily write extensions that change the behaviour of a web page. (Hmm… can we do this for mobile devices?) So what I propose – though I don’t have time to try it and test it right now (the rant used up the spare time I had!) – is an extension that simply replaces the Google Analytics tracking code with another tracking code:

- either a “common” one, that pools data from multiple individuals into the same Google Analytics account;
- or a “personal” one, that lets you collect all the data that the course provider was using Google Analytics to collect about you.

(Ideally the rewrite would take place before the tracking script is loaded? Or we’d have to reload the script with the new code if the rewrite happens too late? I’m not sure how the injection/replacement of the original tracking code with the new one actually takes place when the extension loads?)

Another “advantage” of this approach is that you hijack the Google Analytics data so it doesn’t get sent to the account of the person whose site you’re visiting. (Google Analytics docs suggest that using multiple tracking codes is “not supported”, though this doesn’t mean it can’t be done if you wanted to just overload the data collection – i.e. let the publisher collect the data to their account, and you just grab a copy of it too…)

(An alternative, cruder, approach might be to create an extension that purges Google Analytics code within a page, and then inject your own Google Analytics scripts/code. This would have the downside of not incorporating the instrumentation that the original page publisher added to the page. Hmm.. seems I looked at this way back when too… Collecting Third Party Website Statistics (like Yahoo’s) with Google Analytics.)

All good fun, eh? And for folk operating cMOOCs, maybe this represents a way of tracking user activity across multiple sites (though to mollify ethical considerations, tracking/analytics code should probably only be injected onto whitelisted course related domains, or users presented with a “track my activity on this site” button…?)

Written by Tony Hirst

August 28, 2013 at 3:01 pm

Posted in Stirring, Thinkses

