Trackbacks, Tweetbacks and the Conversation Graph, Part I

Whenever you write a blog post that contains links to other posts, and that is maybe in turn linked to from other blog posts, how can you keep track of where your blog post sits “in the wider scheme of things”?

In Trackforward – Following the Consequences with N’th Order Trackbacks, I showed a technique for tracking the posts that link to a particular URI, the posts that link to those posts and so on, suggesting a way of keeping track of any conversational threads that are started by a particular post. (This is also related to OUseful Info: Trackback Graphs and Blog Categories.)

In this post, I’ll try to generalise that thinking a little more to see if there’s anything we might learn by exploring that part of the “linkgraph” in the immediate vicinity of a particular URI. I’m not sure where this will go, so I’ve built in the possibility of spreading this thought over several posts.

So to begin with, imagine I write a post (POST) that contains links to three other posts (POST1, POST2, POST3). (Graphs are plotted using Ajax/Graphviz.)

In turn, two posts (POSTA, POSTB) might link back to my post:

So by looking at the links from my post to other posts, and looking at trackbacks to my post (or using the link: search limit applied to the URI of my post on a search engine) I can locate my post in its immediate “link neighbourhood”:
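For what it’s worth, the link neighbourhood sketched above is easy enough to describe in Graphviz’s DOT language. Here’s a minimal Python sketch (the POST* names are just the placeholders used above, and the function name is my own):

```python
# Build a DOT description of the link neighbourhood around a post:
# outbound links (POST -> POST1..3) and trackbacks (POSTA/B -> POST).
outbound = ["POST1", "POST2", "POST3"]
trackbacks = ["POSTA", "POSTB"]

def neighbourhood_dot(post, outbound, trackbacks):
    lines = ["digraph linkgraph {"]
    lines += ['  "%s" -> "%s";' % (post, o) for o in outbound]
    lines += ['  "%s" -> "%s";' % (t, post) for t in trackbacks]
    lines.append("}")
    return "\n".join(lines)

print(neighbourhood_dot("POST", outbound, trackbacks))
```

Feeding the resulting DOT text to Graphviz (or an Ajax/Graphviz service) plots the neighbourhood graph.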

Now it might be that I want to track the posts that refer to posts that referred to my post (which is what the trackforward demo explored).

You might also be interested in seeing what else the posts that have referred to my original post have linked to:

Another possibility is tracking posts that refer to posts that I referred to:

It might be that one of those posts also refers to my post:

So what…? Well, I need to take a break now – more in a later post…

See also: Tweetbacks, a beta service that provides a trackback-like service from tweets that reference a particular URL.

PS and also BackType and BackTweets

From Sketch-Up to Mock Up…

I know Christmas is over for another year, but how much would you like a tool that lets you create 3D drawings with ease, import those drawings as models into an interactive 3D world, and then maybe print out your designs as physical 3D models using a 3D printer?

For those of you who haven’t come across SketchUp, it’s a 3D design/drawing package published by Google that allows you to construct simple three-dimensional models with very little training, and far more involved models with a bit of practice (for some example tutorial videos showing just how easy it is to use, see 3D Modeling with SketchUp).

Part of the attraction of SketchUp is the ready integration of SketchUp models into Google Earth – so anything you design in SketchUp can be viewed within that environment. This feature provided part of the rationale for my pitch to the “Show Us a Better Way” call last year on 3D Planning applications. The idea there was that planning applications to local authorities might come with 3D plans that could be viewed using a geo-interface – so at a glance I’d be able to see markers for planning applications on the Isle of Wight, for example, and then zoom in to see them in more detail (specifically, 3D detail – because not everybody knows how to “read” a 2D plan, right?!;-)

An educational extension to this idea imagined school pupils in a DT lesson creating 3D models based on current planning applications in their locale, and then having these marked according to how well they corresponded to the original 2D planning application drawings. High-scoring models (produced in a timely fashion) could then be made available “for real” within the planning consultation exercise.

(SketchUp is already being used in the real-world by companies like Simplified Building Concepts, who solicit user-designs of constructions assembled from a particular range of tubular building components.)

Another attractive feature of SketchUp is the ability to import models into the 3D Cobalt virtual world, as shown in this tutorial video – Using Google 3D Warehouse to Build Cobalt & Edusim Virtual Worlds (obtained via Cobalt – Edusim Quick Start Tutorials):

(Cobalt/Open Croquet rely on the user running an instance of a world in a client on their own computer, and then optionally connecting to other people who are also running Open Croquet on their computers.)

For more on the educational use of Cobalt, and other 3D worlds, visit the Cobalt/Edusim Group.

As well as viewing models in virtual worlds, it’s also possible to “print out” scale models using 3D-printing technology, as Sweet Onion Creations describe. For example, the following video shows how to print out a 3D model from the Google SketchUp 3D Warehouse.

Oh yes, did I mention SketchUp is also scriptable – so you can write code to create your models? Google SketchUp Ruby API (e.g. architecture related Ruby plugins).

Interactive Photos from Obama’s Inauguration

Now the dust has settled from last week’s US Presidential inauguration, I thought I’d have a look around for interactive photo exhibits that recorded the event. (I’ll maintain a list here if and when I find anything else to add.)

So here’s what I found…

Time Lapse photo (Washington Post)

Satellite Image of the National Mall (Washington Post)

A half-metre resolution satellite image over Washington taken around the time of the inauguration.

You can also see this GeoEye image in Google Earth.

Gigapixel Photo of the Inauguration (David Bergman)

Read more about how this photo was taken here: How I Made a 1,474-Megapixel Photo During President Obama’s Inaugural Address.

Interactive Panorama From the Crowds (New York Times)

PhotoSynth collage (CNN)

I suppose the next thing to consider is this: what sort of mashup is possible using these different sources?!;-)

[PS If you find any more interactive photo exhibits with a similar grandeur of scale, please add a link in a comment to this post:-)]

Barriers to Open Availability of Information? IW Planning Committee Audio Recordings

Chatting to Simon Perry of the Ventnor Blog over a pint at the Yarbridge Inn last week, he mentioned that recordings of the Isle of Wight Council Planning Committee meetings were available on the Isle of Wight Council website, albeit in an obfuscated and hard-to-find way (you know the sort of thing: a full stop or a 1×1.gif is used as the link text and the anchor has {text-decoration: none};-).

So I thought it might be interesting to see how easy it would be to plot the recordings on a map, “locating” the recording of each application at the place where the planned changes would actually take place. The idea being, of course, that a map based index makes it easier to find information about planning applications in your own locale.

The recipe I had in mind was something like the following:

– scrape a list of recordings, along with the location each one referred to, and make it available as JSON or RSS feed;
– take the feed, geocode it as required, and hook it into a “geopodcast” map, (that is, a pre-existing application that would take a geocoded podcast feed and display it on a map, letting you click on a marker and play the audio file located at that point).
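As a sketch of what I had in mind for the first step, here’s some hypothetical Python (the example recording, URL and address are all made up) showing the shape of the scraped list and the JSON feed it would be published as:

```python
import json

# Step 1: a scraped list of recordings (in reality this would come from
# parsing the committee minutes); each entry pairs an audio URL with the
# address of the planning application it refers to.
recordings = [
    {"title": "Application TCP/12345",
     "audio": "http://example.com/planning/recording1.mp3",
     "address": "High Street, Ventnor, Isle of Wight"},
]

# Step 2: emit the list as a JSON feed that a geocoding step (e.g. a
# Yahoo pipe) could consume, geocode and plot on a map.
def as_json_feed(items):
    return json.dumps({"items": items}, indent=2)

print(as_json_feed(recordings))
```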

Easy, right? A half hour job, I thought…

Hmmm…maybe not:

– the only place I could find links to the audio files was in the minutes of the relevant committee meeting, minutes that are only published as PDF documents; and PDF scraping is not something I know how to do (yet…?!);
– a quick search around turned up no obvious geopodcast plotting maps.

(Maybe the 4ip-funded AudioBoo project, which lets users record and share audio from their mobile phones, will also spin off an easy-to-use “geopodcast map plotter”…? (Let’s also hope AudioBoo gets more traction than audiotagger did!))

So here’s what I ended up doing instead for a proof of concept. Firstly, from the Planning Committee webpage, I opened one of the minutes PDFs, and cut and pasted the details of a planning decision into a Yahoo pipe to create a test feed. (The title of the application links to the audio recording of its consideration – so you can check the veracity of the minutes if you want to…)

Here’s the pipe:

A couple of things to note:
– the planning committee minutes don’t make it easy to get a geocoded position for the application; I ended up using the Post Office postcode finder to get a postcode from the address stated in the application, so that I could get a reasonable fix on the location with the (rather crappy) Yahoo pipes geocoder block.

– the URIs of the audio files are very long and truly horrible; when I was testing embed codes for various media players, they would occasionally choke on the URI (possibly because I wasn’t escaping or encoding it when I should have been?); anyway, a simple fix was to just get a minified version of the URI and pass that to the audio player (the is.gd block is one I found that will minify a supplied URI).
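If you’re doing the minification outside of Pipes, the is.gd “simple format” API can be called with a single escaped URL parameter. Here’s a Python sketch (I’m assuming the create.php endpoint from the is.gd API docs; note that properly escaping the long URI is exactly what avoids the choking problem mentioned above):

```python
import urllib.parse

# Build an is.gd "simple format" API request for minifying a long URI;
# the API returns the shortened URL as plain text.
def isgd_request(long_url):
    return ("https://is.gd/create.php?format=simple&url=" +
            urllib.parse.quote(long_url, safe=""))

# To actually fetch the short URL you would then do something like:
#   import urllib.request
#   short = urllib.request.urlopen(isgd_request(long_url)).read().decode()
print(isgd_request("http://example.com/a long/horrible?uri"))
```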

So that’s the pipe.

For the map, I pulled out the code from my (now broken…) geotwitterous app, and added the ability to display an embedded audio player in marker pop-up box:

Here’s the demo: GeoAudio demo: IW planning applications.

For the future? An obvious next step would be to just cut details from planning committee minutes and paste them into a Google spreadsheet, then take a CSV output into a pipe, geocode it, and pass it into the map. But that’s for another day…

In the longer term, pulling together all the relevant documents associated with a planning application (maybe using the Planning Alerts API?) into a single interface would be handy. (I’d also love to see kids in local schools and the local college doing some practical ICT/DT CAD work generating 3D SketchUp versions of planning applications so they can be viewed in Google Earth;-)

Glanceable Committee Memberships with Treemaps

A quickie post, this one, to complement a post from a long time ago where I plotted out – as a network – the links between people who served on the same committee on the Isle of Wight Council (Visualising CoAuthors in Open Repository Online Papers, Part 3, half way through the post).

In this case, I trawled the Isle of Wight Council committees to populate the rows of a spreadsheet with column headings “Committee Name” and “Councillor”.
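By way of illustration, here’s a fragment of Python that flattens a membership listing into those two-column rows, ready to paste into Many Eyes (the committee and councillor names are placeholders, not real data):

```python
import csv, io

# Flatten committee membership into the two-column
# "Committee Name","Councillor" rows used for the Many Eyes upload.
memberships = {
    "Planning Committee": ["Cllr A", "Cllr B"],
    "Audit Committee": ["Cllr B", "Cllr C"],
}

def membership_rows(memberships):
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["Committee Name", "Councillor"])
    for committee, members in sorted(memberships.items()):
        for m in members:
            w.writerow([committee, m])
    return buf.getvalue()

print(membership_rows(memberships))
```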

Pasting the results into Many Eyes gives an IW Council membership dataset that can be easily visualised. So for example, here’s a glanceable treemap showing the membership of each committee:

The search tool adds yet another dimension to the visualisation, in this case allowing us to pick out the various committees the searched for named individual sits on.

Here’s a glanceable treemap showing the committees each councillor is a member of:

It strikes me that if the search tool supported Boolean expressions, such as AND and OR (maybe with each term being realised by a different coloured bounding box?), it would be possible to explore the variation – or similarity – in the make-up of different committees. On the first treemap, this approach would make it obvious which committees the same groups of people were sitting on.

And why would we want to do this? To identify potential clashes of interest, maybe, or a lack of variation in the composition of different committees that might, ideally, be independent of each other?

PS Hmm, I suppose you could use a similar visualisation to look at the distribution of named directors across FTSE 100 companies and their subsidiaries, suppliers and competitors, for example? ;-) Does anyone have simple lists of such information in a spreadsheet anywhere?;-)

What Are JISC’s Funding Priorities?

I’ve just got back home from a rather wonderful week away at the JISC Developer Happiness Days (dev8D), getting a life (of a sort?!;-) so now it’s time to get back to the blog…

My head’s still full of things newly learned from the last few days, so while I digest it, here’s a quick taster of something I hope to dabble a little more with over the next week for the developer decathlon, along with the SplashURL.net idea (which reminds me of my to do list…oops…)

A glimpse of shiny things to do with JISC project data (scraped from Ross’s Simal site… [updated simal url]; see also: Prod).

Firstly, a Many Eyes tag cloud showing staffing on projects by theme:

Secondly, a Many Eyes pie chart showing the relative number of projects by theme:

As ever, the data may not be that reliable/complete, because I believe it’s a best effort scrape of the JISC website. Now if only they made their data available in a nice way???;-)

Following a session in the “Dragon’s Den” – where Rachel Bruce told me that these charts might be used for good as well as, err, heckling, I guess; Mark van Harmalen suggested that I should probably pay at least lip service to who the potential users might be; and Jim Downing suggested that I could do something similar for research council projects – I also started having a play with data pulled from the EPSRC website.

So for example, here’s a treemap showing current EPSRC Chemistry programme area grants >2M UKP by subprogramme area:

And if you were wondering who got the cash in the Chemistry area, here’s a bubble chart showing projects held by named PIs, along with their relative value:

If you try out the interactive visualisation on Many Eyes, you can hover over each person bubble to see what projects they hold and how much they’re worth:

PS thanks to Dave Flanders and all at JISC for putting the dev8D event on and managing to keep everything running so smoothly over the week:-) Happiness 11/10…

Many Eyes Wiki Dashboard – Online Visualisation Tools That Feed From Online Data Sources

Aren’t blog comments wonderful things? Today, I learned from a comment by Nicola on Visualising Financial Data In a Google Spreadsheet Motion Chart that Many Eyes can now be used to visualise live data via Many Eyes Wikified.

Wikified has apparently been in beta for a month or two (somehow I missed it…) but it was launched as a public service earlier this week: Many Eyes Wikified now open to the public:

Many Eyes Wikified is a “remix” of Many Eyes, using a wiki markup syntax to enable you to easily edit datasets and lay out visualizations side-by-side.

It also functions just like a normal wiki: you can collaboratively edit pages, add explanations or documentation to your visuals, see a page’s edit history, and revert changes.

Unlike a normal wiki, you can embed content from your blog or other data source within Wikified and visualize it. You can also embed the content you make in Wikified elsewhere, just like you can in Many Eyes.

I have to admit to hitting a few, err, issues with Many Eyes wikified whilst playing with it on an old Mac, but the promise is just, like, awesome, dude…

So what’s in store?

First up, you can add data to a page by simply copying and pasting a CSV table into it. So far, so Many Eyes – except that the page where you paste your content is actually a wiki page – so you can have all sorts of explanatory text in the page as well.

What’s really useful, though – and something I’ve been wanting for some time – is the ability to pull live data into the wiki page from another online source.

So far I’ve only tried pulling in CSV data from a Google spreadsheet, but as that seems to work okay, I assume pulling in CSV data from a Yahoo! pipe, or DabbleDB database should work too.

(I’m not sure if Many Eyes Wikified will pull in other data types too, such as TSV? Please add a comment to this post if you find out…)

Once you have a data page defined, you can call on that data from a visualisation within another page. This is where I hit a wobbly… I could create a page, and get a stub for the visualisation okay:

And I got the link that let me fire up the visualisation editor:

And I even got the viz editor:

You’ll notice that the data table has been pulled in, with the ability to set the data type for each column, and a toolbar is provided that lets you select the desired Many Eyes style visualisation type – with no typing and no programming required…:-)

However, when I tried to change the visualisation type on my 10.4 OS/X Mac, I just got thrown back to the Wikified home page…:-(

Anyway – the promise is there, and from examples like Nicola’s dashboard, it seems as if other people have been coping fine with the visualisation editor…

…which brings me neatly to the idea of Wikified dashboards…

Many Eyes Wikified allows you to define “dashboards”, which are essentially URI path namespaces within which you can collect a series of separate pages. I’m not sure if you can assert ownership or edit privileges over dashboards, though? At the moment, it looks as if all pages are editable by anyone, in true public/open wiki style…

So to sum up, Many Eyes visualization tools are now available as endpoints for wholly online data mashups. May the fun begin…

Experiments in Displaying Google Form/Survey Results in Many Eyes

A couple of weeks ago, I posted a workaround for Creating Your Own Results Charts for Surveys Created with Google Forms. With the release of Many Eyes Wikified, it’s now possible to power Many Eyes visualisations from online data (e.g. as described in Many Eyes Wiki Dashboard – Online Visualisation Tools That Feed From Online Data Sources).

So I was wondering – would it be possible to just pull data from the results spreadsheet for a Google form, and visualise it directly in Many Eyes without having to do any results processing on the spreadsheet side?

First step – find a form. I created a test one some time ago, doodling ideas for a mobile survey form, which contains some data, so that’s a start: Demo Mobile User Form.

Second step – get the results file as CSV: Mobile survey results:

Hmm – Many Eyes Wikified doesn’t see the columns…???

It is OK with different subsets, though… e.g. this one:

(Note that I can’t seem to specify “to end of column” in the Google spreadsheet CSV export? e.g. setting the range to A1:J doesn’t work:-( So I need to define an arbitrary final row…)

Trying out the visualisations on this data, I can sort of get the text cloud visualisation to work:

Unfortunately, in many of the chart types, there doesn’t seem to be the ability to plot a count of particular results(?).

For example, in the bubble chart, I can’t seem to plot bubble size as a count of the results in each results category? (Would I expect to be able to do that…? Hmmm… I think so…?!) Instead, I can only plot size according to data values in one of the numerical columns?

In many cases, in order to plot sensible visualisations that process and display the form results data, I need to be able to count the occurrence of different results classes within a results column. A count option is available in the Matrix chart, but not in many of the other visualisation types?

There’s also the issue that many of the results contain multiple items; so for example, in answer to the question “What do you use your mobile phone for?” we might get the answers Voice calls, Text Messaging/SMS, Web search, Maps/directions, Camera (stills) (selected from a drop down list on the original form).

What would be really nice would be the ability to specify a delimiter/separator to split out the different results in a particular column, then let Many Eyes enumerate the different possible answer choices in that column, and count on each one. So for example, I’d like to select a bubble chart based on the column “What do you use your mobile phone for?” and have Many Eyes identify the different segments, (Voice Calls, Web Search etc), count the occurrence of each of those and plot each segment as a bubble, with size proportional to the counted occurrence of the segment in the results.
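As a sketch of the sort of processing I mean, here’s how that split-and-count step might look in Python (the responses below are made-up examples using the answer options mentioned above):

```python
from collections import Counter

# Split a multiple-answer survey column on a delimiter and count how
# often each answer occurs - the processing step Many Eyes doesn't
# currently do for us.
responses = [
    "Voice calls, Text Messaging/SMS, Web search",
    "Voice calls, Camera (stills)",
    "Maps/directions, Web search",
]

def count_answers(column, sep=","):
    counts = Counter()
    for cell in column:
        counts.update(a.strip() for a in cell.split(sep))
    return counts

print(count_answers(responses))
```

Each counted segment could then be plotted as a bubble, with size proportional to the count.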

In the meantime, I suppose it’s always possible to process the results in the spreadsheet as demonstrated in Creating Your Own Results Charts for Surveys Created with Google Forms and then just export the CSV of the particular question results tables to Many Eyes Wikified? Or alternatively, design questions that work nicely when the raw results are passed to Many Eyes Wikified?

Simple Embeddable Twitter Map Mashup

Earlier today, I was pondering the Digital Planet Listeners’ map on open2.net and the #digitalplanet Twitter feed we have running alongside it:

and I started to wonder whether there was a simple way of generating an embeddable map showing the location of people tweeting given a Twitter feed.

I couldn’t find one offhand, so here’s a simple pipe that will do the job: Simple Tweetmap pipe.

Here’s how it works: start off by grabbing a feed from Twitter, such as a Twitter search feed.

Using a Twitter feed URL as an input to the pipe, grab the feed and then find the Twitter username of each individual from the user’s Twitter URL. So for example, map http://twitter.com/psychemedia onto psychemedia.

We now call on another pipe that calls the Twitter API to get personal details for each user who has a Tweet in the feed.

Here’s how that embedded pipe works: Twitter location pipe (it should really be called “Twitter User Details” pipe).

First, construct a URI that asks the Twitter API for the user details associated with a particular Twitter username (e.g. using the construction http://twitter.com/users/show/USERNAME.json), then pull the data back as a JSON feed. Finally, just make sure only a single record is returned (there should only be one anyway).
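The two steps of the embedded pipe – extracting the username from the profile URL and constructing the API URI – can be sketched in a few lines of Python (the users/show construction is the one given above):

```python
# Map a Twitter profile URL to a username, then build the users/show
# API request for that user's details.
def username_from_url(profile_url):
    return profile_url.rstrip("/").rsplit("/", 1)[-1]

def user_details_url(username):
    return "http://twitter.com/users/show/%s.json" % username

u = username_from_url("http://twitter.com/psychemedia")
print(user_details_url(u))  # http://twitter.com/users/show/psychemedia.json
```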

So the embedded pipe passes back an annotation to the original feed with user details. One of the user details is the user’s location – so let’s geocode it:

Sending the output of the location finder to item.y:location allows the pipe to produce well-formed KML and GeoRSS output that can be displayed on a map, such as the Yahoo! Pipes output preview map:

We can grab the KML URL from the More Options output and display the feed in an embeddable Google map in the normal way (simply enter the KML URI in the Google maps search box and hit Search):

If you want to embed the map in your own page, just grab the embed code…

To summarise the hack, here’s a quick review of the whole pipe:

So now if you want to plot where people who have tagged their tweets in a particular way are tweeting from, you can :-)

Next step is to persuade the open2 team to start archiving appropriately tagged tweets and displaying them on a cluster map view over time :-) We could maybe even link in a timeline using something like TimeMap, the MIT Simile timeline and Google Maps integration library…?

HEFCE Grant Funding, in Pictures

Somehow earlier today I managed to pop open a tab in my browser pointing to the HEFCE funding allocation spreadsheets for 2009/2010 (maybe from Twitter? It’s been one of those days where losing track has been the norm!): HEFCE Core funding/operation (Allocation of funds, Recurrent grants for 2009-10).

So I thought, like you do, how much nicer it would have been if they’d published the data in a visualisation environment… So here’s the HEI data, republished in some Many Eyes Wikified pages:

And here are some sample interactive visualisations you can use to explore the data (click through to get to the actual interactive demo):

There’s a full list of demo thumbnails available on this Wikified page: HEFCE Viz Test.

Feel free to create your own pages/discussion around the charts (it is a wiki, after all).

In order to pull the data into your own wiki page, use the following “data include” commands in your wiki page (one for each visualisation; the visualisation page name mustn’t contain any spaces (I think??)):

You’ll notice I was a little careless in naming the three data pages, which consequently have inconsistent URIs.

Enjoy! … and don’t forget, you can create your own wiki pages using the data, and add text/discussion into them too.