Given that workshops at ILI2012 last a day (10 till 5), I thought I’d better start prepping the workshop I’m delivering with Martin Hawksey at this year’s Internet Librarian International early… W2 – Appropriating IT: innovative uses of emerging technologies:
Are you concerned that you are not maximising the potential of the many tools available to you? Do you know your mash-ups from your APIs? How are your data visualisation skills? Could you be using emerging technologies more imaginatively? What new technologies could you use to inspire, inform and educate your users? Learn about some of the most interesting emerging technologies and explore their potential for information professionals.
The workshop will combine a range of presentations and discussions about emerging information skills and techniques with some practical ‘makes’ to explore how a variety of free tools and applications can be appropriated and plugged together to create powerful information handling tools with few, if any, programming skills required.
– Visualisation tools
– Maps and timelines
– Data wrangling
– Social media hacks
– Screenscraping and data liberation
– Data visualisation
(If you would like to join in with the ‘makes’, please bring a laptop)
I have some ideas about how to fill the day – and I’m sure Martin does too – but I thought it might be worth asking what any readers of this blog might be interested in learning about in a little more detail, using slightly easier, starting-from-scratch baby steps than I usually post.
My initial plan is to come up with five or six self-contained elements that can also be loosely joined, structuring the day something like this:
- Opening, and an example of the sort of thing you’ll be able to do by the end of the day – no prior experience required, hand-held walkthroughs all the way; intros from the floor, along with what folk expect to get out of the day/want to be able to do by the end of it (h/t @briankelly in the comments; of course, if folks’ expectations differ from what we had planned…;-). As well as demoing how to use the tools, we’ll also discuss why you might want to do these things, some of the strategies involved in working out how to do them given what you already know, and how to find out/work out how to do them if you don’t…
- The philosophy of “appropriation”, “small pieces, lightly joined”, “minimum viability”, and “why Twitter, blogs and Stack Overflow are Good Things”;
- Visualising Data – because it’s fun to start playing straight away…
- Google Fusion Tables – visualisations and queries
- Google visualisation API/chart components
Payoff: generate some charts and dashboards using pre-provided data (any ideas what data sets we might use…? At least one should have geo-data for a simple mapping demo…)
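As an aside on what the chart components actually consume: the Google Visualisation API expects data shaped as a DataTable (column descriptors plus typed rows). Here’s a minimal Python sketch – not part of the workshop materials, and using made-up data – of shaping a few rows into that JSON layout:

```python
import json

# A few made-up rows: (country, value) pairs for a simple bar chart.
rows = [("UK", 12), ("France", 9), ("Germany", 15)]

# Shape the data into the DataTable JSON layout the Google Visualisation
# API charts consume: a list of column descriptors plus a list of row objects.
data_table = {
    "cols": [
        {"id": "country", "label": "Country", "type": "string"},
        {"id": "count", "label": "Count", "type": "number"},
    ],
    "rows": [{"c": [{"v": country}, {"v": value}]} for country, value in rows],
}

print(json.dumps(data_table, indent=2))
```

The point being: once you can get your data into that shape, any of the chart components will render it for you.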
- — Morning coffee break? —
- Data scraping:
- Google spreadsheets – import CSV, import HTML table;
- Google Refine – import XLS, import JSON, import XML
- (Briefly) – note the existence of other scraper tools, incl. Scraperwiki, and how they can be used
Payoff: scrape some data and generate some charts/views… Any ideas what data to use? For the JSON, I thought about finishing with a grab of Twitter data, to set up after lunch…
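The spreadsheet import formulas do the fetching and parsing for you; as a rough, stdlib-only Python illustration of the sort of thing they deliver, here’s a sketch that parses a small inline CSV fragment (made-up data, standing in for the result of a web fetch) into rows ready for charting:

```python
import csv
import io

# Made-up CSV fragment standing in for the result of a web fetch
# (a spreadsheet importData() call would hand back much the same shape).
raw = """library,visits
Central,1200
Branch A,340
Branch B,515"""

reader = csv.DictReader(io.StringIO(raw))
records = [(row["library"], int(row["visits"])) for row in reader]

# Sort descending by visits, ready to feed a simple bar chart.
records.sort(key=lambda r: r[1], reverse=True)
print(records)
```

The same parse-then-reshape pattern underlies the Refine imports too, just with XLS/JSON/XML in place of CSV.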
- — Lunch? —
- (Social) Network Analysis with Gephi
- Visually analyse Twitter data and/or Facebook data grabbed using Google Refine and/or TAGSExplorer
- Wikipedia graphing using DBPedia
- Other examples of how to think in graphs…
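To give a feel for what the graph tools are doing under the hood, here’s a small pure-Python sketch (made-up follower data) that builds a directed edge list and counts in-degree – the measure that makes densely-followed accounts stand out in a TAGSExplorer-style view:

```python
from collections import Counter

# Made-up "follower -> followed" edges: the shape of data a Twitter grab
# via Google Refine or TAGSExplorer might hand over to Gephi.
edges = [
    ("alice", "bob"),
    ("carol", "bob"),
    ("dave", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
]

# In-degree: how many incoming edges each node has.
in_degree = Counter(followed for _, followed in edges)

for node, degree in in_degree.most_common():
    print(node, degree)
```

Thinking in graphs mostly means spotting that your data *is* an edge list like this, whatever it superficially looks like.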
- The scary session…
- Working with large data files – examples of some simple text processing command line tools
- Data cleansing and shaping – Google Refine, for the most part, including the use of reconciliation; additional examples based on regular expressions in a text editor, Google spreadsheets as a database, Stanford Data Wrangler, and R…
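Refine’s clustering is worth a moment’s unpacking: its default “fingerprint” method keys each value by lowercasing, stripping punctuation and sorting the unique tokens, so near-duplicate strings collide on the same key. A rough stdlib Python sketch of that keying idea (made-up column values):

```python
import re
from collections import defaultdict

def fingerprint(value):
    """Rough sketch of a Refine-style fingerprint key:
    lowercase, strip punctuation, sort the unique tokens."""
    tokens = re.split(r"\W+", value.strip().lower())
    return " ".join(sorted(set(t for t in tokens if t)))

# Made-up messy column values.
values = ["Open University", "open university.", "University, Open", "OU Library"]

clusters = defaultdict(list)
for v in values:
    clusters[fingerprint(v)].append(v)

for key, members in clusters.items():
    print(key, "->", members)
```

The first three values collide on one key, leaving two clusters – which is exactly the view Refine presents for you to merge or leave alone.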
- — Afternoon coffee break? —
- Writing Diagrams – examples referring back to Gephi, mentioning Graphviz, then looking at R/ggplot2, finishing with R’s googleVis library as a way of generating Google Visualisation API Charts…
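“Writing diagrams” really is just writing text: here’s a hypothetical Python sketch that emits Graphviz DOT markup from an edge list – paste the output into any Graphviz renderer and you get the picture:

```python
def edges_to_dot(edges, name="g"):
    """Emit Graphviz DOT text for a directed graph from (source, target) pairs."""
    lines = ["digraph %s {" % name]
    for src, tgt in edges:
        lines.append('  "%s" -> "%s";' % (src, tgt))
    lines.append("}")
    return "\n".join(lines)

# Made-up example: a tiny link graph.
dot = edges_to_dot([("pageA", "pageB"), ("pageB", "pageC")])
print(dot)
```

The Gephi, ggplot2 and googleVis examples all follow the same pattern: describe the picture as data or text, let the tool do the drawing.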
- Wrap up – review of the philosophy, showing how it was applied throughout the exercises; maybe a multi-step mashup as a final demo?
Requirements: we’d need good wifi/network connections; also, it would help if participants pre-installed – and checked the set-up of: a) a Google account; b) a modern browser (standardising on Google Chrome might be easiest?); c) Google Refine; d) Gephi (which may also require the installation of a Java runtime, e.g. on a new-ish Mac); e) R; f) RStudio and a raft of R libraries (ggplot2, plyr, reshape, RCurl, stringr, googleVis); g) a good text editor (I use TextWrangler on a Mac); h) command-line tools (Windows machines);
Throughout each session, participants will be encouraged to identify datasets or IT workflow issues they encounter at work and discuss how the ideas presented in the workshop may be appropriated for use in those contexts…
Of course, this is all subject to change (I haven’t asked Martin how he sees the day panning out yet;-), but it gives a flavour of my current thinking… So: what sorts of things would you like to see? And would you like to book any of the sessions for a workshop at your place…?!;-)
7 thoughts on “ILI2012 Workshop Prep – Appropriating IT: innovative uses of emerging technologies”
Thanks for sharing this. A suggestion I’d make is that after you’ve introduced yourselves and described the aims of the session, ask the participants to say what they’d like to get from the session (possibly discussed in small groups before sharing with everyone). That will help you get a feel for their interests and level of experience and should make it less intimidating for the participants if they feel the session is beyond their levels of expertise.
A while ago I was asked to do a session on cleaning up some OAI-PMH data in Google Refine. I could walk through the Jorum data again (more show-and-tell than hands-on, to avoid participants doing a local install of Refine).
I also helped Natalie Pollecutt with an interesting recipe where she used Google Spreadsheets to scrape Wikipedia’s ‘Today’s featured article’ and then used the result to search their library catalogue for related resources. The result was emailed, but there’s maybe a way to wrap in a visualisation.
Will ponder some more
Some example tutorials we’ve done which your readers might want to comment on in terms of relevance/interest:
Dev8Ed: Slides and video of short version of hacking stuff together with Google Spreadsheets
Using Google Spreadsheets Like a Database – The QUERY Formula
Generating Twitter Wordclouds in R (Prompted by an Open Learning Blogpost) https://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/
Notes on generating live wordclouds from Yahoo Pipes using D3.js
Looking up Images Trademarked By Companies Using OpenCorporates and Google Refine
Exploring UKOER/JORUM via OAI with Google Refine and visualising with Gource [day 11]
Gephi – Network Graphs
Visualising Twitter Friend Connections Using Gephi: An Example Using the @WiredUK Friends Network https://blog.ouseful.info/2011/07/07/visualising-twitter-friend-connections-using-gephi-an-example-using-wireduk-friends-network/
NodeXL equivalent http://mashe.hawksey.info/2011/09/twitter-network-analysis-and-visualisation-ii-nodexl/
Using Google Spreadsheets to combine Twitter and Google Analytics data to find your top content distributors http://mashe.hawksey.info/2012/03/combine-twitter-and-google-analytics-data-to-find-your-top-content-distributors/
Although some participants may value learning about simple development and processing environments such as Yahoo Pipes and Google Refine, it might also be useful to highlight some simple shrink-wrapped solutions. So perhaps as a lead-in to introducing TAGS it might be useful to show some of the commercial equivalents (e.g. Twubs for Twitter archiving, and even Klout, 20ft.net, etc. for Twitter analytics). You can mention the limitations of such services and then describe how TAGS may address those limitations (e.g. not spamming one’s followers as 20ft.net does) or suffer from the same ones.
Comments are closed.