Meeting/Workshop Amplification at DMU

How many times have you been to a meeting or a workshop within your institution where group discussions result in flip charts and posters that are used as part of a “reporting back” activity, and then are taken away at the end of the day for who knows what reason?

flipchart

Way back when, in a real-time computing course I think, I was introduced to the notion of an “atomic transaction”. As Wikipedia succinctly puts it: “atomicity: a property of database transactions which are guaranteed to either completely occur, or have no effects.” Now I’m not saying that meetings completely occur and have no effects, but many of them do seem to be atomic in that what happens in the meeting stays in the meeting, to paraphrase another well-known saying…

In a handful of recent posts, I’ve started thinking about how we can soften the boundaries of meetings so that they can become part of a wider – and ongoing – “conversation”, rather than being activities that are located in a very specific time and place (e.g. Amplified Meetings and Participatory Deliberation…, Using WriteToReply to Publish Committee Papers and Backchannel Side Effects – Personal Meeting Notes).

That is, there are now several ways in which we can widen the availability of papers and discussions, both in terms of time (extending the period over which participants can draw on and contribute back to meeting resources) and reach (i.e. making it possible for more people to contribute).

Examples of how we might do this include:

– annotating documents using commenting platforms such as WriteToReply and JISCPress;
– capturing backchannel comments and interlacing them with meeting reports or using them as video or audio captions.

Anyway, earlier today I spotted a great example of the use of a commenting platform to extend the life of a workshop via a tweet from @josswinn pointing to a new site at DMU – First meeting.

Commentable documents at DMU

This document summarises the outcomes from discussions in the first DUALL engagement meeting on July 1st 2010 and offers a set of recommendations for the design of an ICT reporting tool. It is not detailed set of minutes but rather aims to present the broad overview of discussion. The full presentation from the meeting is available below. There was an extremely good representation from both the IESD and the Faculty of Technology. For the group discussion it was decided to break into two groups, based on departmental basis so as to allow for discussion on the detailed requirement of each area to be sub-metered.

This document has been published so that you can comment on the outcome of the meeting in detail. Each paragraph can be directly responded to and threaded discussions can occur around each paragraph. To leave a comment, simply click on the speech bubble next to the paragraph.

A few things to note:

– the document is published using the digress.it theme on a local installation of WordPress at DMU;
– the document is published on the public web – although it could equally have been published behind the DMU authentication layer (i.e. “on the intranet”);
– the documents are viewable, and commentable on, by anyone (I think? Though I believe comments could also be limited to people who log in to the blog, e.g. using DMU credentials or single sign-on, so comments could presumably be restricted to DMU folk if required);
– this opening up of discussion particularly around the IT area should be heartwarming for Brian Kelly at least, who’s been trying to get institutional web managers to share via web team blogs (e.g. Revisiting Web Team Blogs); maybe they should also be sharing policy discussions…?!
– exploring the use of new ICT systems to discuss ICT is a Good Thing and an Appropriate Thing. For example, on WriteToReply, the Cabinet Office have been keen to publish several of their documents (e.g. Government ICT Strategy, Government Open Source Action Plan).

If any other institutions have started exploring the use of the digress.it theme and the WriteToReply approach to document publishing, please add a link below :-)

Using Twitter Lists to Define Custom Search Engines

A long time ago, I used to play with search engines all the time, particularly in the context of bounded search (that is, search over a particular set of web pages or web domains, e.g. Search Hubs and Custom Search at ILI2007). Although I’m not at IWMW this year, I can’t not have an IWMW-related tinker, so here’s a quick play around with IWMW-related twittering folk…

To start with, let’s have a look at the IWMW Twitter account:

IWMW lists

We see there are several twitter lists associated with the account, including one for participants…

Looking around the IWMW10 website, I also spy a community area, with a Google Custom search engine that searches over institutional web management blogs that @briankelly, I presume, knows about:

Institutional Web Management blogs search engine

It seems a bit of a pain to manage though… “Please contact Brian Kelly if you would like your blog to be included in this list of blogs which are indexed”

Ever one to take the lazy approach, I wondered whether we could create a useful search engine around the URLs disclosed on the public Twitter profile page of folk listed on the various IWMW Twitter lists. The answer is “not necessarily”, because the URLs folk have posted on their Twitter profiles seem to point all over the place, but it’s easy enough to demonstrate the raw principle.

So here’s the recipe:

– find a Twitter list with interesting folk on it;
– use the Twitter API to grab the list of members on a list;
– the results include profile information of everyone on the list – including the URL they specified as a home page in their profile;
– grab the URLs and generate an annotations file that can be used to import the URLs into a Google Custom Search Engine;
– note that the annotations file should include a label identifier that specifies which CSE should draw on the annotations:

Google CSE config
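To give a flavour of what that means, here’s roughly what the generated annotations file looks like (the URLs are made up, and yourCSELabelHere stands in for the label identifier you get from the Google CSE control panel):

<GoogleCustomizations>
	<Annotations>
		<Annotation about="example.com/*" score="1">
			<Label name="yourCSELabelHere"/>
		</Annotation>
		<Annotation about="anotherexample.org/somepage.html" score="1">
			<Label name="yourCSELabelHere"/>
		</Annotation>
	</Annotations>
</GoogleCustomizations>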

Once the file is uploaded, you should have a custom search engine built around the URLs that the folk on the Twitter list have revealed in their Twitter profiles (here’s my IWMW Participants CSE (list date: 12:00 12/7/10)).

Note that to create sensibly searchable URLs, I used the heuristics:

– if page URL is example.com or example.com/, search on example.com/*
– by default, if page is example.com/page.foo, just search on that page.
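In other words, the tidy-up applied to each profile URL amounts to something like this (a standalone sketch; the function name and example URLs are mine, just for illustration):

def cse_pattern(url):
    #strip the protocol, then decide whether this is a bare domain or a specific page
    u = url.replace("http://", "")
    if "/" not in u:
        #bare domain, e.g. example.com
        return u + "/*"
    if u.count("/") == 1 and u.endswith("/"):
        #domain with a trailing slash, e.g. example.com/
        return u + "*"
    #a specific page, e.g. example.com/page.foo, searched as-is
    return u

print(cse_pattern("http://example.com"))           # example.com/*
print(cse_pattern("http://example.com/"))          # example.com/*
print(cse_pattern("http://example.com/page.foo"))  # example.com/page.foo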

I used Python (badly!;-) and the tweepy library to generate my test CSE annotations feed:

import tweepy

#these are the keys you would normally use with OAuth
consumer_key=''
consumer_secret=''

#these are the special keys for single user apps from http://dev.twitter.com/apps
#as described in http://dev.twitter.com/pages/oauth_single_token
#select your app, then My Access Token from the sidebar
key=''
secret=''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(key, secret)
api = tweepy.API(auth)

#this identifier is the identifier of the Google CSE you want to populate
cseLabelFromGoogle=''

listowner='iwmw'
tag='iwmw10participant'

f=open(tag+'listhomepages.xml','w')

cse=cseLabelFromGoogle

f.write("<GoogleCustomizations>\n\t<Annotations>\n")

#use the Cursor object so we can iterate through the whole list
for un in tweepy.Cursor(api.list_members, owner=listowner, slug=tag).items():
    if type(un) is tweepy.models.User:
        l = un.url
        if l:
            #apply the URL heuristics described above: bare domains are searched
            #as domain/*, specific pages are searched as-is
            l = l.replace("http://", "")
            if "/" not in l:
                l = l + "/*"
            elif l.count("/") == 1 and l.endswith("/"):
                l = l + "*"
            #write out a CSE annotation for the URL, labelled with the CSE identifier
            f.write("\t\t<Annotation about=\"" + l + "\" score=\"1\">\n")
            f.write("\t\t\t<Label name=\"" + cse + "\"/>\n")
            f.write("\t\t</Annotation>\n")

f.write("\t</Annotations>\n</GoogleCustomizations>")

f.close()

(Here’s the code as a gist, with tweaks so it runs with OAuth.)

Running this code generates a file (iwmw10participantlisthomepages.xml, in this case) that contains Google Custom Search annotations for a particular Google CSE, based around the URLs declared in the public Twitter profiles of the people on a particular list. This file can then be uploaded to the Google CSE environment and used to help configure a bounded search engine.

So what does this mean? It means that if you have identified a set of people sharing a particular set of interests using a Twitter list, it’s easy enough to generate a custom search engine around the web pages or domains they have declared in their Twitter profiles.