OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education

Archive for the ‘Infoskills’ Category

Progress Tracking Google Docs as Tasks?

As part of a new course I’m working on, the course team has been making use of shared Google docs for working up the course proposal and “D0” (zero’th draft; key topics to be covered in each of the weekly sessions). Although the course production hasn’t been approved yet, we’ve started drafting the actual course materials, with an agreement to share them for comment via Google docs.

The approach I’ve taken is to create a shared folder with the rest of the course team, and set up documents for each of the weekly sessions I’ve taken the lead on.

TM351 files

The documents in this folder are all available to other members of the course team – for reference and/or comment – at any time, and represent the “live”/most current version of each document I’m working on. I suspect that others in the course team may take a more cautious approach, only sharing a doc when it’s in a suitable state for handover – or at least, comment – but that’s fine too. My docs can of course be used that way as well – no-one has to look at them until I do “hand them over” for full comment at the end of the first draft stage.

But what if others, such as the course team chair or course manager, do want to keep a check on progress over the coming weeks?

The file listing shown above doesn’t give a lot away about the state of each document – not even a file size, only when it was last worked on. So it struck me that it might be useful to have a visual indicator (such as a horizontal progress bar) of the progress on each document, so that someone looking at the listing would know whether there was any point opening a document to have a look inside at all…

…because at the current time, a lot of the docs are just stubs, identifying tasks to be done.

Progress could be measured by proxy indicators, such as file size, “page count” equivalent, or line count. In these cases, the progress meter could be updated automatically. Additional insight could be provided by associating a target line count or page length metadata element, providing additional feedback to the author about progress with respect to that target. If a document exceeds the planned length, the progress meter should carry on going, possibly with a different colour denoting the overrun.
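As a very rough illustration of the kind of thing I mean – a toy sketch only, not wired into Google Drive or Docs in any way, with invented word counts and an invented target – a word-count-against-target progress bar might behave something like this, with the bar carrying on past 100% to flag an overrun:

def progress_bar(word_count, target_words, width=20):
    """Render a text progress bar for word_count against a target length."""
    pct = 100.0 * word_count / target_words
    filled = int(round(width * min(pct, 100) / 100))
    bar = "#" * filled + "-" * (width - filled)
    # mark any overrun beyond the target with extra '+' characters
    bar += "+" * int(round(width * max(pct - 100, 0) / 100))
    return "[{0}] {1:.0f}%".format(bar, pct)

# e.g. a weekly session doc with a notional 5000 word target
for words in (250, 2600, 5400):
    print(progress_bar(words, target_words=5000))
# [#-------------------] 5%
# [##########----------] 52%
# [####################++] 108%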

There are a couple of problems at least with this approach – documents that are being worked on may blend scruffy working notes along with actual “finished” text; several versions of the same paragraph may exist as authors try out different approaches, all adding to the line count. Long copied chunks from other sources may be in the text as working references, and so on.

So how about an additional piece of metadata for docs tagged as “task” type, in which a user can set a quick progress percentage estimate (a slider widget would make this easy to update) that is displayed as a bar on the file listing? Anyone checking the folder could then – at a glance – see which docs were worth looking at based on progress within the document-as-task. (Of course, having metadata available also opens up the possibility of additional mission-creep features – rulesets for generating alerts when a doc hits a particular percentage completion, for example.)

I’m not looking for more project management tools to take time away from a task, but in this case I think the simple addition of a “progress” metadata element could weave an element of project management support into this sort of workflow. (Changing the title of the doc would be another way – eg adding “(20% done)” to the title…)

Thinks: hmm, I’m procrastinating, aren’t I? I should really be working on one of those docs… ;-)

Written by Tony Hirst

February 7, 2014 at 10:55 am

Posted in Infoskills

Using One Programming Language In the Context of Another – Python and R

Over the last couple of years, I’ve settled into using R and python as my languages of choice for doing stuff:

  • R, because RStudio is a nice environment, I can blend code and text using R markdown and knitr, ggplot2 and Rcharts make generating graphics easy, and reshapers such as plyr make wrangling with data relatively easy(?!) once you get into the swing of it… (though sometimes OpenRefine can be easier…;-)
  • python, because it’s an all round general purpose thing with lots of handy libraries, good for scraping, and a joy to work with in iPython notebook…

Sometimes, however, you know – or remember – how to do one thing in one language that you’re not sure how to do in another. Or you find a library that is just right for the task at hand, but it’s in a different language to the one you’re working in, and routing the data out and back again can be a pain.

How handy it would be if you could make use of one language in the context of another? Well, it seems as if we can (note: I haven’t tried any of these recipes yet…):

Using R inside Python Programs

Whilst python has a range of plotting tools available for it, such as matplotlib, I haven’t found anything quite as expressive as R’s ggplot2 (there is a python port of ggplot underway but it’s still early days, and the syntax, as well as the functionality, is still far from complete as compared to the original [though not as far as it was, given the recent update;-)]). So how handy would it be to be able to throw a pandas data frame, for example, into an R data frame and then use ggplot to render a graphic?

The Rpy and Rpy2 libraries support exactly that, allowing you to run R code within a python program. For an example, see this Example of using ggplot2 from IPython notebook.
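For what it’s worth, a minimal sketch of the rpy2 route might look something like the following – untried, as per the note above, with an invented data frame and output filename, and bearing in mind that the pandas conversion helpers have different names in different rpy2 versions (py2ri in older releases, py2rpy in rpy2 3.x):

import pandas as pd
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri

pandas2ri.activate()  # enable pandas <-> R data.frame conversion

df = pd.DataFrame({"x": range(10), "y": [v * v for v in range(10)]})
robjects.globalenv["df"] = pandas2ri.py2rpy(df)  # py2ri() on older rpy2

# hand the data frame over to R and let ggplot2 do the plotting
robjects.r("""
library(ggplot2)
g <- ggplot(df, aes(x = x, y = y)) + geom_line()
ggsave("rpy2_ggplot_demo.png", g)
""")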

There also seems to be some magic help for running R in iPython notebooks, and some experimental integration work going on in pandas: pandas: rpy2 / R interface.
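The magic route looks even simpler – again untried here. The first notebook cell loads the extension that ships with rpy2 (it was called rmagic in older IPython releases, before it moved into rpy2 as rpy2.ipython); the second pushes a pandas DataFrame df into R with the -i flag and plots it there:

%load_ext rpy2.ipython

%%R -i df
library(ggplot2)
ggplot(df, aes(x = x, y = y)) + geom_line()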

(See also: ggplot2 in Python: A major barrier broken.)

Using python Inside R

Whilst one of the things I often want to do in python is plot R style ggplots, one of the hurdles I often encounter in R is getting data in, in the first place. For example, the data may come from a third party source that needs screenscraping, or via a web API that has a python wrapper but not an R one. Python is my preferred tool for writing scrapers, so is there a quick way I can add a python data grabber into my R context? It seems as if there is: rPython, though the way code is included looks rather clunky and Windows support appears to be moot. What would be nice would be for RStudio to include some magic, or be able to support python-based chunks…

(See also: Calling Python from R with rPython.)

(Note: I’m currently working on the production of an Open University course on data management and use, and I can imagine the upset about overcomplicating matters if I mooted this sort of blended approach in the course materials. But this is exactly the sort of pragmatic use that technologists use code for – as a tool that comes to hand and that can be used quickly and relatively efficiently in concert with other tools, at least when you’re working in a problem solving (rather than production) mode.)

Written by Tony Hirst

January 22, 2014 at 12:11 pm

Posted in Infoskills, Rstats


So You Want to be a Data Journalist? Current Opportunities

Every so often I do a round up of job openings in different areas. This is particularly true around year end, as I look at my dwindling salary (no more increments, ever, and no hope of promotion, …) and my overall lack of direction, and try to come up with some sort of resolution to play with during the first few weeks of the year.

The data journalism phrase has been around for some time now (was it really three and a half years ago since the Data Driven Journalism Round Table at the European Journalism Centre? FFS, what have I been doing for the last three years?! :-( ) and it seems to be maturing a little. We’ve had the period of shiny, shiny web apps requiring multiskilled development teams and designers working with the hacks to produce often confusing, “wtf am I supposed to be doing now?!” interactives, and things seem to be becoming a little more embedded… Perhaps…

My reading (as an outsider) is that there is now more of a move towards developing some sort of data skillbase that allows journalists to do “investigative” sorts of things with data, often using very small data sets or concise summary datasets. To complement this, there seems to be some sort of hope that visually appealing charts can be used to hook eyeballs into a story (rather than pushing eyeballs away) – Trinity Mirror’s Ampp3d (as led by Martin Belam) is a good example of this, as is the increasing(?) use of the DataWrapper library.

From working with the School of Data, as well as a couple of bits of data journalism not-really-training with some of the big news groups, I’ve come to realise there is probably some really basic, foundational work to be done in the way people think (or don’t think) about data. For example, I don’t believe that people in general read charts. I think they may glance at them, but they don’t try to read them. They have no idea what story they tell. Given a line chart that plots some figure over time, how many people ride the line to get a feel for how it really changed?

Hans Rosling famously brings data alive with his narrative commentary around animated development data charts, including bar charts…

But if you watch the video with the sound off, or just look at the final chart, do you have the feeling of being told the same story? Can you even retell yourself the story by looking at the chart? And how about if you look at another bar chart? Can you use any of Hans Rosling’s narrative or rhetorical tricks to help you read through those?

(The rhetoric of data (and the malevolent arts of persuasion) is something I want to ponder in more depth next year, along with the notion of data aesthetics and the theory of forms given a data twist.)

Another great example of narrated data storytelling comes from Kurt Vonnegut as he describes the shapes of stories:

Is that how you read a line chart when you see one?

One thing about the data narration technique is that it is based around the construction of a data trace. There is a sense of anticipation about where the line will go next, and uncertainty as to what sort of event will cause the line to move one way or another. Looking back at a completed data chart, what points do we pick from it that we want to use as events in our narration or reading of it? (The lines just connect the points – they are processional in the way they move us from one point of interest to the next, although the gradient of the line may provide us with ideas for embellishing or decorating the story a little.)

It’s important to make art because the people that get the most out of art are the ones that make it. It’s not … You know there’s this idea that you go to a wonderful art gallery and it’s good for you and it makes you a better person and it informs your soul, but actually the person who’s getting the most out of any artistic activity is the person who makes it because they’re sort of expressing themselves and enjoying it, and they’re in the zone and you know it’s a nice thing to do. [Grayson Perry, Reith Lectures 2013, Lecture 2, Q&A [transcript, PDF], audio]

In the same way, the person who gets the most out of a chart is the person who constructed it. They know what they left in and what they left out. They know why the axes are selected as they are, why elements are coloured or sized as they are. They know the question that led up to the chart and the answers it provides to those questions. They know where to look. Like an art critic who reads their way round a painting, they know how to read one or many different stories from the chart.

The interactives that appeared during the data journalism wave from a couple of years ago sought to provide a playground for people to play with data and tell their own stories with it. But they didn’t. In part because they didn’t know how to play with data, didn’t know how to use it in a constructive way as part of a narrative (even a made-up, playful narrative). And in part this comes back to not knowing how to read – that is, recover stories from – a chart.

It is often said that a picture paints a thousand words, but if the picture tells a thousand word story, how many of us try to read that thousand word story from each picture or chart? Maybe we need to use a thousand words as well as the chart? (How many words does Hans Rosling use? How many, Kurt Vonnegut?)

When producing a chart that essentially represents a summary of a conversation we have had with a dataset, it’s important to remember that for someone looking at the final chart, it might not make as much sense in the absence of the narrative that was used to construct it. Edward de Bono’s constructed illustrations help us read the final image through recalling his narrative. But if we just look at a “completed” sketch from one of his talks, it will probably be meaningless.

One of the ideas that works for me when I reflect on my own playing with data is that it is a conversation. Meaning is constructed through the conversation I have with a dataset, and the things it reveals when I pose particular questions to it. In many cases, these questions are based on filtering a dataset, although the result may be displayed in many ways. The answers I get to a question inform the next question I want to ask. Questions take the form of constructing this chart as opposed to that chart, though I am free to ask the same question in many slightly different ways if the answers don’t appear to be revealing of anything.

It is in this direction – of seeing data as a source that can be interviewed and coaxed into telling stories – that I sense elements of the data journalism thang are developing. This leads naturally into seeing data journalism skills as core investigative style skills that all journalists would benefit from. (Seeing things as data allows you to ask particular sorts of question in very particular ways. Being able to cast things into a data form (as for example in Creating Data from Text – Regular Expressions in OpenRefine), so that they become amenable to data-style queries, is the next idea I think we need to get across…)

So what are the jobs that are out at the moment? Here’s a quick round-up of some that I’ve spotted…

  • Data editor (Guardian): “develop and implement a clear strategy for the Data team and the use of data, numbers and statistics to generate news stories, analysis pieces, blogs and fact checks for The Guardian and The Observer.
    You will take responsibility for commissioning and editing content for the Guardian and Observer data blogs as well as managing the research needs of the graphics department and home and foreign news desks. With day-to-day managerial responsibility for a team of three reporters / researchers working on the data blog, you will also be responsible for data analysis and visualisation: using a variety of specialist software and online tools, including Tableau, ARCGis, Google Fusion, Microsoft Access and Excel”

Perpetuating the “recent trad take” on data journalism, viewed as gonzo journalist hacker:

  • Data Journalist [Telegraph Media Group]: “[S]ource, sift and surface data to find and generate stories, assist with storytelling and to support interactive team in delivering data projects.
    “The Data Journalist will sit within the Interactive Data team, and will work with a team of designers, web developers and journalists on data-led stories and in developing innovative interactive infographics, visualisations and news applications. They will need to think and work fast to tackle on-going news stories and tight deadlines.
    “Applicants should have a portfolio of relevant work and bylines on data-led stories and/or interactive graphics. The role will include mentoring and training opportunities, but candidates should feel confident working with HTML/CSS, Javascript, PHP and MySQL, even if they are not writing code themselves. Experience of writing scrapers and using statistical software (e.g. R) is desired, but not essential.
  • One of the most exciting opportunities that I can see around data related publishing is in new workflows and minimising the gap between investigative tools and published outputs. This one seems to me a bit risky, in that it looks so conservative when it comes to getting data outputs actually published:
    Designer [Trinity Mirror]: “Trinity Mirror’s pioneering data unit is looking for a first-class designer to work across online and print titles. … You will be a whizz with design software – such as Illustrator, Photoshop and InDesign – and understand the principles of designing infographics, charts and interactives for the web. You will also be able to design simple graphical templates for re-purposing around the group.
    “You should have a keen interest in current affairs and sport, and be familiar with – and committed to – the role data journalism can play in a modern newsroom.”
  • [Trinity Mirror]: “Can you take an API feed and turn it into a compelling gadget which will get the whole country talking?
    “Trinity Mirror’s pioneering data unit is looking for a coder/developer to help it take the next step in helping shape the future of journalism. …
    “You will be able to create tools which automatically grab the latest data and use them to create interactive, dynamically-updated charts, maps and gadgets across a huge range of subjects – everything from crime to football. …
    “The successful candidate will have a thorough knowledge of scraping techniques, ability to manage a database using SQL, and demonstrable ability in at least one programming language.”

But there is hope about the embedding of data skills as part of everyday journalistic practice:

  • Culture reporter (Guardian): “We are looking for a Culture Reporter to generate stories and cover breaking news relating to Popular Culture, Film and Music … Applicants should also have expertise with digital tools including blogging, social media, data journalism and mobile publishing.”
  • Investigations Correspondent [BBC Newsnight]: “Reporting to the Editor, the Investigations Correspondent will contribute to Newsnight by producing long term investigations as well as sometimes contributing to big ongoing stories. Some investigations will take months, but there will also be times when we’ll need to dig up new lines on moving the stories in days.
    “We want a first rate reporter with a proven track record of breaking big stories who can comfortably work across all subject areas from politics to sport. You will be an established investigative journalist with a wide range of contacts and sources as well as having experience with a range of different investigative approaches including data journalism, Freedom Of Information (FOI) and undercover reporting.”
  • News Reporter, GP [Haymarket Medical Media]: “GP is part of Haymarket Medical Media, which also produces MIMS, Medeconomics, Inside Commissioning, and mycme.com, and delivers a wide range of medical education projects. …
    “Ideally you will also have some experience of data journalism, understand how social media can be used to enhance news coverage and have some knowledge of multimedia journalism, including video and blogs.”
  • Reporter, ENDS Report [Haymarket]: “We are looking for someone who has excellent reporting and writing skills, is enthusiastic and able to digest and summarise in depth documents and analysis. You will also need to be comfortable with dealing with numbers and statistics and prepared to sift through data to find the story that no one else spots.
    “Ideally you will have some experience of data journalism, understand how social media can be used to enhance news coverage.”

Are there any other current ones I’m missing?

I think the biggest shift we need is to get folk treating data as a source that responds to a particular style of questioning. Learning how to make the source comfortable and get it into a state where you can start to ask it questions is one key skill. Knowing how to frame questions so that you discover the answers you need for a story is another. Choosing which bits of the conversation you use in a report (if any – maybe the conversation is akin to a background chat?) is yet another.

Treating data as a source also helps us think about how we need to take care with it – how not to ask leading questions, how not to get it to say things it doesn’t mean. (On the other hand, some folk will undoubtedly force the data to say things it never intended to say…)

“If you torture the data enough, nature will always confess” [Ronald Coase]

[Disclaimer: I started looking at some medical data for Haymarket.]

Written by Tony Hirst

December 19, 2013 at 7:54 pm

More Digital Traces…

Via @wilm, I notice that it’s time again for someone (this time at the Wall Street Journal) to have written about the scariness that is your Google personal web history (the sort of thing you probably have to opt out of if you sign up for a new Google account, if other recent opt-in-by-default settings are anything to go by…)

It may not sound like much, but if you do have a Google account, and your web history collection is not disabled, you may find your emotional response to seeing months or years of your web/search history archived in one place surprising… Your Google web history.

Not mentioned in the WSJ article were some of the games that the Chrome browser gets up to. @tim_hunt tipped me off to a nice (if technically detailed, in places) review by Ilya Grigorik of some of the design features of the Chrome browser, and some of the tools built in to it: High Performance Networking in Chrome. I’ve got various pre-fetching tools switched off in my version of Chrome (tools that allow Chrome to pre-emptively look up web addresses and even download pages pre-emptively*) so those tools didn’t work for me… but it was interesting to look at chrome://predictors/ to see which keystrokes I type are good predictors of the web pages I visit…

chrome predictors

* By the by, I started to wonder whether webstats get messed up to any significant extent by Chrome pre-emptively prefetching pages that folk never actually look at…?

In further relation to the tracking of traffic we generate from our browsing habits, as we access more and more web/internet services through satellite TV boxes, smart TVs, and catchup TV boxes such as Roku or NowTV, have you ever wondered about how that activity is tracked? LG Smart TVs logging USB filenames and viewing info to LG servers describes not only how LG TVs appear to log the things you do view, but also the personal media you might view, and in principle can phone that information home (because the home for your data is a database run by whatever service you happen to be using – your data is midata is their data).

there is an option in the system settings called “Collection of watching info:” which is set ON by default. This setting requires the user to scroll down to see it and, unlike most other settings, contains no “balloon help” to describe what it does.

At this point, I decided to do some traffic analysis to see what was being sent. It turns out that viewing information appears to be being sent regardless of whether this option is set to On or Off.

you can clearly see that a unique device ID is transmitted, along with the Channel name … and a unique device ID.

This information appears to be sent back unencrypted and in the clear to LG every time you change channel, even if you have gone to the trouble of changing the setting above to switch collection of viewing information off.

It was at this point, I made an even more disturbing find within the packet data dumps. I noticed filenames were being posted to LG’s servers and that these filenames were ones stored on my external USB hard drive.

Hmmm… maybe it’s time I switched out my BT homehub for a proper hardware firewalled router with a good set of logging tools…?

PS FWIW, I can’t really get my head round how evil on the one hand, or damp squib on the other, the whole midata thing is turning out to be in the short term, and what sorts of involvement – and data – the partners have with the project. I did notice that a midata innovation lab report has just become available, though to you and me it’ll cost 1500 squidlly diddlies so I haven’t read it: The midata Innovation Opportunity. Note to self: has anyone got any good stories to say about TSB supporting innovation in micro-businesses…?

PPS And finally, something else from the Ilya Grigorik article:

The HTTP Archive project tracks how the web is built, and it can help us answer this question. Instead of crawling the web for the content, it periodically crawls the most popular sites to record and aggregate analytics on the number of used resources, content types, headers, and other metadata for each individual destination. The stats, as of January 2013, may surprise you. An average page, amongst the top 300,000 destinations on the web is:

- 1280 KB in size
- composed of 88 resources
- connects to 15+ distinct hosts

Let that sink in. Over 1 MB in size on average, composed of 88 resources such as images, JavaScript, and CSS, and delivered from 15 different own and third-party hosts. Further, each of these numbers has been steadily increasing over the past few years, and there are no signs of stopping. We are increasingly building larger and more ambitious web applications.

Is it any wonder that pages take so long to load on a mobile phone off the 3G network, and that you can soon eat up your monthly bandwidth allowance?!

Written by Tony Hirst

November 21, 2013 at 12:37 am

Posted in Infoskills, Paranoia, privacy


Asking Questions of Data Contained in a Google Spreadsheet Using a Basic Structured Query Language

There is an old saying along the lines of “give a man a fish and you can feed him for a day; teach a man to fish and you’ll feed him for a lifetime”. The same is true when you learn a little bit about structured query languages… In the post Asking Questions of Data – Some Simple One-Liners, we can see how the SQL query language could be used to ask questions of an election related dataset hosted on Scraperwiki that had been compiled by scraping a “Notice of Poll” PDF document containing information about election candidates. In this post, we’ll see how a series of queries constructed along very similar lines can be applied to data contained within a Google spreadsheet using the Google Chart Tools Query Language.

To provide some sort of context, I’ll stick with the local election theme, although in this case the focus will be on election results data. If you want to follow along, the data can be found in this Google spreadsheet – Isle of Wight local election data results, May 2013 (the spreadsheet key is 0AirrQecc6H_vdEZOZ21sNHpibnhmaEYxbW96dkNxZGc).

IW Poll spreadsheet

The data was obtained from a dataset originally published by the OnTheWight hyperlocal blog, then shaped and cleaned in OpenRefine using a data wrangling recipe similar to the one described in A Wrangling Example With OpenRefine: Making “Oven Ready Data”.

To query the data, I’ve popped up a simple query form on Scraperwiki: Google Spreadsheet Explorer

Google spreadsheet explorer

To use the explorer, you need to:

  1. provide a spreadsheet key value and optional sheet number (for example, 0AirrQecc6H_vdEZOZ21sNHpibnhmaEYxbW96dkNxZGc);
  2. preview the table headings;
  3. construct a query using the column letters;
  4. select the output format;
  5. run the query.

So what sort of questions might we want to ask of the data? Let’s build some up.

We might start by just looking at the raw results as they come out of the spreadsheet-as-database: SELECT A,D,E,F

Simple query
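As an aside, if you’d rather run these queries from code than via the explorer form, something along the following lines should do the trick in python – a sketch only, assuming the spreadsheet is published to the web and that the Google Visualization API “tq” endpoint accepts the tqx=out:csv output flag (the exact URL pattern may have changed since this post was written):

import io
import urllib.parse
import urllib.request

import pandas as pd

KEY = "0AirrQecc6H_vdEZOZ21sNHpibnhmaEYxbW96dkNxZGc"  # spreadsheet key used in this post
QUERY = "SELECT A,D,E,F"  # the "raw results" query from above

# build the gviz query URL, asking for CSV back rather than the default JSON-ish response
url = ("https://docs.google.com/spreadsheets/d/{key}/gviz/tq"
       "?tqx=out:csv&tq={tq}").format(key=KEY, tq=urllib.parse.quote(QUERY))

with urllib.request.urlopen(url) as response:
    df = pd.read_csv(io.StringIO(response.read().decode("utf-8")))

print(df.head())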

We might then want to look at each electoral division seeing the results in rank order: SELECT A,D,E,F WHERE E != 'NA' ORDER BY A,F DESC

Results in order

Let’s bring the spoiled vote count back in: SELECT A,D,E,F WHERE E != 'NA' OR D CONTAINS 'spoil' ORDER BY A,F DESC (we might equally have said OR D = 'Papers spoilt').

Papers spoilt included

How about doing some sums? How does the league table of postal ballot percentages look across each electoral division? SELECT A,100*F/B WHERE D CONTAINS 'Postal' ORDER BY 100*F/B DESC

Postal Turnout

Suppose we want to look at the turnout. The “NoONRoll” column B gives the number of people eligible to vote in each electoral division, which is a good start. Unfortunately, using the data in the spreadsheet we have, we can’t do this for all electoral divisions – the “votes cast” is not necessarily the number of people who voted, because some electoral divisions (Brading, St Helens & Bembridge and Nettlestone & Seaview) returned two candidates (which meant people voting were each allowed to cast up to and including two votes; the number of people who voted was in the original OnTheWight dataset). If we bear this caveat in mind, we can run the numbers for the other electoral divisions though. The Total votes cast is actually the number of “good” votes cast – the turnout was actually the Total votes cast plus the Papers spoilt. Let’s start by calculating the “good vote turnout” for each ward, rank the electoral divisions by turnout (ORDER BY 100*F/B DESC), label the turnout column appropriately (LABEL 100*F/B 'Percentage') and format the results ( FORMAT 100*F/B '#,#0.0') using the query SELECT A, 100*F/B WHERE D CONTAINS 'Total' ORDER BY 100*F/B DESC LABEL 100*F/B 'Percentage' FORMAT 100*F/B '#,#0.0'

Good vote turnout

Remember, the first two results are “nonsense” because electors in those electoral divisions may have cast two votes.

How about the three electoral divisions with the lowest turnout? SELECT A, 100*F/B WHERE D CONTAINS 'Total' ORDER BY 100*F/B ASC LIMIT 3 LABEL 100*F/B 'Percentage' FORMAT 100*F/B '#,#0.0' (Note that the order of the arguments – such as where to put the LIMIT – is important; the wrong order can prevent the query from running…)

Worst 3 by turnout

The actual turnout (again, with the caveat in mind!) is the total votes cast plus the spoilt papers. To calculate this percentage, we need to sum the total and spoilt contributions in each electoral division and divide by the size of the electoral roll. To do this, we need to SUM the corresponding quantities in each electoral division. Because multiple (two) rows are summed for each electoral division, we find the size of the electoral roll in each electoral division as SUM(B)/COUNT(B) – that is, we add the roll size up across the two rows and divide by the number of rows we added over. The query (without tidying) starts off looking like this: SELECT A,SUM(F)*COUNT(B)/SUM(B) WHERE D CONTAINS 'Total' OR D CONTAINS 'spoil' GROUP BY A

Summing rows in a group
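(For comparison – and purely as an aside, since the query language is the point of this post – the same actual-turnout calculation in pandas might look something like the sketch below. The division names and numbers are invented stand-ins for columns A, B, D and F, so treat it as indicative only.)

import pandas as pd

# invented stand-ins for columns A (division), B (NoONRoll), D (description), F (votes)
df = pd.DataFrame({
    "division":    ["Division A"] * 2 + ["Division B"] * 2,
    "NoONRoll":    [2790, 2790, 3020, 3020],
    "description": ["Total votes cast", "Papers spoilt"] * 2,
    "votes":       [1200, 14, 1010, 9],
})

rows = df[df["description"].str.contains("Total|spoil", case=False)]
turnout = rows.groupby("division").apply(
    # sum the Total + spoilt votes, divide by the (repeated) electoral roll size
    lambda g: 100 * g["votes"].sum() / (g["NoONRoll"].sum() / len(g))
)
print(turnout.round(1).sort_values(ascending=False))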

In terms of popularity, who were the top 5 candidates in terms of people receiving the largest number of votes? SELECT D,A, E, F WHERE E!='NA' ORDER BY F DESC LIMIT 5

Top 5 by votes cast

How about if we normalise these numbers by the number of people on the electoral roll in the corresponding areas – SELECT D,A, E, F/B WHERE E!='NA' ORDER BY F/B DESC LIMIT 5

Top 5 as percentage on roll

Looking at the parties, how did the sum of their votes across all the electoral divisions compare? SELECT E,SUM(F) where E!='NA' GROUP BY E ORDER BY SUM(F) DESC

Votes by party

How about if we bring in the number of candidates who stood for each party, and normalise by this to calculate the average “votes per candidate” by party? SELECT E,SUM(F),COUNT(F), SUM(F)/COUNT(F) where E!='NA' GROUP BY E ORDER BY SUM(F)/COUNT(F) DESC

Average votes per candidate

To summarise then, in this post we have seen how we can use a structured query language to interrogate the data contained in a Google Spreadsheet, essentially treating the Google Spreadsheet as if it were a database. The query language can also be used to perform a series of simple calculations over the data to produce a derived dataset. Unfortunately, the query language does not allow us to nest SELECT statements in the same way we can nest SQL SELECT statements, which limits some of the queries we can run.

Written by Tony Hirst

May 15, 2013 at 10:00 am

Ephemeral Citations – When Presentations You Have Cited Vanish from the Public Web

A couple of months ago, I came across an interesting slide deck reviewing some of the initiatives that Narrative Science have been involved with, including the generation of natural language interpretations of school education grade reports (I think: some natural language take on an individual’s academic scores, at least?). With MOOC fever in part focussing on the development of automated marking and feedback reports, this represents one example of how we might take numerical reports and dashboard displays and turn them into human readable text with some sort of narrative. (Narrative Science do a related thing for reports on schools themselves – How To Edit 52,000 Stories at Once.)

Whenever I come across a slide deck that I think may be in danger of being taken down (for example, because it’s buried down a downloads path on a corporate workshop promoter’s website and has CONFIDENTIAL written all over it) I try to grab a copy of it, but this presentation looked “safe” because it had been on Slideshare for some time.

Since I discovered the presentation, I’ve been recommending it to various folk, particularly slides 20-22(?), which refer to the educational example. Trying to find the slidedeck today, a websearch failed to turn it up, so I had to go sniffing around to see if I had mentioned a link to the original presentation anywhere. Here’s what I found:

no narrative science slideshow

The Wayback machine had grabbed bits and pieces of text, but not the actual slides…

wayback narrative science

Not only did I not download the presentation, I don’t seem to have grabbed any screenshots of the slides I was particularly interested in… bah:-(

For what it’s worth, here’s the commentary:

Introduction to Narrative Science — Presentation Transcript

We Transform Data Into Stories and Insight… In Seconds
Automatically, Without Human Intervention and at a Significant Scale
To Help Companies: Create New Products, Improve Decision-Making, Optimize Customer Interactions
Customer Types: Media and Publishing, Data Companies, Business Reporting
How Does It Work? The Data / The Facts / The Angles / The Structure / Stats / Tests / Calls / The Narrative / Language / Completed Text. Our technology platform, Quill™, is a powerful integration of artificial intelligence and data analytics that automatically transforms data into stories.
The following slides are examples of our work based upon a simple premise: structured data in, narrative out. These examples span several domains, including Sports Journalism, Financial Reporting, Real Estate, Business Intelligence, Education, and Marketing Services.
Sports Journalism: Big Ten Network – Data In
Sports Journalism: Big Ten Network – Narrative
Financial Journalism: Forbes – Data In
Financial Journalism: Forbes – Narrative
Short Sale Reporting: Data Explorers – JSON Input
Short Sale Reporting: Data Explorers – Overview. North America Consumer Services Short Interest Update: There has been a sharp decline in short interest in Marriott International (MAR) in the face of an 11% increase in the company’s stock price. Short holdings have declined nearly 14% over the past month to 4.9% of shares outstanding. In the last month, holdings of institutional investors who lend have remained relatively unchanged at just below 17% of the company’s shares. Investors have built up their short positions in Carnival (CCL) by 54.3% over the past month to 3.1% of shares outstanding. The share price has gained 8.3% over the past week to $31.93. Holdings of institutional investors who lend are also up slightly over the past month to just above 23% of the common shares in issue by the company. Institutional investors who make their shares available to borrow have reduced their holdings in Weight Watchers International (WTW) by more than 26% to just above 10% of total shares outstanding over the past month. Short sellers have also cut back their positions slightly to just under 6% of the market cap. The price of shares in the company has been on the rise for seven consecutive days and is now at $81.50.
Sector Reporting: Data Explorers – JSON Input
Sector Reporting: Data Explorers – Overview. Thursday, October 6, 2011 12:00 PM: HEALTHCARE MIDDAY COMMENTARY: The Healthcare (XLV) sector underperformed the market in early trading on Thursday. Healthcare stocks trailed the market by 0.4%. So far, the Dow rose 0.2%, the NASDAQ saw growth of 0.8%, and the S&P500 was up 0.4%. Here are a few Healthcare stocks that bucked the sector’s downward trend. MRK (Merck & Co Inc.) erased early losses and rose 0.6% to $31.26. The company recently announced its chairman is stepping down. MRK stock traded in the range of $31.21 – $31.56. MRK’s volume was 86.1% lower than usual with 2.5 million shares trading hands. Today’s gains still leave the stock about 11.1% lower than its price three months ago. LUX (Luxottica Group) struggled in early trading but showed resilience later in the day. Shares rose 3.8% to $26.92. LUX traded in the range of $26.48 – $26.99. Luxottica Group’s early share volume was 34,155. Today’s gains still leave the stock 21.8% below its 52-week high of $34.43. The stock remains about 16.3% lower than its price three months ago. Shares of UHS (Universal Health Services Inc.) are trading at $32.89, up 81 cents (2.5%) from the previous close of $32.08. UHS traded in the range of $32.06 – $33.01…
Real Estate: Hanley Wood – Data In
Real Estate: Hanley Wood – Narrative
BI: Leading Fast Food Chain – Data In
BI: Leading Fast Food Chain – Store Level Report. January Promotion Falling Behind Region: The launch of the bagels and cream cheese promotion began this month. While your initial sales at the beginning of the promotion were on track with both your ad co-op and the region, your sales this week dropped from last week’s 142 units down to 128 units. Your morning guest count remained even across this period. Taking better advantage of this promotion should help to increase guest count and overall revenue by bringing in new customers. The new item with the greatest growth opportunity this week was the Coffee Cake Muffin. Increasing your sales by just one unit per thousand transactions to match sales in the region would add another $156 to your monthly profit. That amounts to about $1872 over the course of one year.
Education: Standardized Testing – Data In
Education: Standardized Testing – Study Recommendations
Marketing Services & Digital Media: Data In
Marketing Services & Digital Media: Narrative

Bah…:-(

PS Slideshare appears to have a new(?) feature – Saved Files – that keeps a copy of files you have downloaded. Or does it? If I save a file and someone deletes it, will only an empty shell remain in my “Saved Files” list?

Written by Tony Hirst

May 7, 2013 at 5:35 pm

Posted in Infoskills

