Backchannel Side Effects – Personal Meeting Notes

So at some point yesterday, @eingang tweeted:

… how can I quickly get a list of all my tweets from #ouconf10?

and picked up a response from @mhawksey after I suggested that the conference tweets were archived on Twapperkeeper:

@Eingang twapperkeeper can filter by user as well. Here are your tweets for #ouconf10 http://bit.ly/cuqFVn

That is, use a URL of the form:

http://twapperkeeper.com/hashtag/eventHashtag?&l=numberOfResults&from_user=twitterUsername

The numberOfResults parameter (e.g. 10, 25) sets how many tweets are displayed: the most recently sent ones (I think, in the default case?) from @twitterUsername using the hashtag #eventHashtag.

Handy…
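For what it’s worth, here’s a minimal Python sketch that builds a URL of that form. The parameter names (l, from_user) are just lifted from the example above, so treat it as a template rather than documented Twapperkeeper behaviour:

def twapperkeeper_user_archive_url(event_hashtag, twitter_username, num_results=25):
    # Build a Twapperkeeper archive URL filtered to one user's hashtagged tweets,
    # following the http://twapperkeeper.com/hashtag/... pattern shown above.
    return ("http://twapperkeeper.com/hashtag/{tag}?&l={n}&from_user={user}"
            .format(tag=event_hashtag, n=num_results, user=twitter_username))

print(twapperkeeper_user_archive_url("ouconf10", "eingang", 10))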

One of the things I’ve found myself using hashtags for is blatant self-promotion: annotating issues raised in conference events with links to related resources that I am either aware of, or have authored…

As with @eingang, sometimes I feel it would be handy to be able to use an archive of the tweets I’ve made around a particular event as a set of crude notes for the event… and it seems that Twapperkeeper offers just that sort of service:-)

PS this also puts me in mind of several other things. Firstly, tweeting around an event generates collateral damage – if you can grab your tweets around a hashtag, you’ve got a free set of memory prompts/notes from the event. Secondly, the ability to grab this feed and then repurpose it has some similarities to the way in which someone can comment on WriteToReply or JISCPress documents and get a personal feed out of just their comments (e.g. see also Document/Comment Interlacing with Digress.it in this regard). Thirdly, and a little more tangentially, if we can ever generate a list of event related comments/tweets that contain links, they represent an ideal source for a “discovered” custom search engine.

PPS Here’s another thought… if we can anchor a tweet to a meeting discussion paper (e.g. via a URL to a particular paragraph of a particular meeting paper that has been posted to something like WriteToReply), or a tweet to the relative time of a recorded event (either video or audio), we can then annotate the original document or recording with tweeted notes (cf. Document/Comment Interlacing with Digress.it or JISC10 Conference Keynotes with Twitter Subtitle). Hmmm… there’s a thought… @mhawksey’s iTitle and uTitle are a bit like a Twitter version of Livescribe, aren’t they..?! (Which reminds me: must look at what Livescribe API offers.)

PPPS Hmmm – maybe we could flip the use of tweeted links as annotations to a document around, and instead annotate a twitter feed that links to e.g. unique paragraphs in something like WriteToReply with those paragraphs. For an early corollary to this, see Pivotwitter, where I describe a recipe for annotating tweeted links with commentaries from people who have bookmarked those links on delicious.

Google Charts Now Plot Functions

I didn’t get round to posting this at the time it was announced, but as I’ve got a few posts on a similar theme already (e.g. RESTful Image Generation – When Text Just Won’t Do) I think it’s worth a quick post for continuity, if nothing else: Google Charts now supports TeX images and formula plotting (i.e. provide an equation and it will give you back an image of the formula plotted out); there’s also an interactive Google Charts Playground that I hadn’t seen before…

You can also add labels to images…

So for example, take this URL:

http://chart.apis.google.com/chart
?cht=lc
&chd=t:-1|15,45
&chs=250x150&chco=FF0000,000000
&chfd=0,x,0,11,0.1,sin(x)*50%2B50
&chxt=x,y
&chm=c,00A5C6,0,110,10|a,00A5C6,0,60,10

and it delivers this:

Google chart demo

The following is also plotted:

google chart formula plot

from this URL:

http://chart.apis.google.com/chart
?cht=lxy&chs=250x250&chd=t:0|0|0
&chxs=0,ff0000,12,0,lt|1,0000ff,10,1,lt
&chfd=0,x,0,360,1.9,sin(4*x)*40%2b50|1,y,0,360,1.9,cos(6*y)*40%2b50
&chf=c,lg,90,FFFF00,0,FF9933,1&chco=006699
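If you want to play with the function plots without hand-editing URLs, here’s a rough Python sketch that reassembles the first example above from its parts (I’ve dropped the chm markers for brevity). The chfd layout – series index, variable, start, stop, step, function – is just as it appears in that URL, and the data series is the same dummy one:

from urllib.parse import quote

def function_plot_url(func="sin(x)*50+50", start=0, stop=11, step=0.1,
                      size="250x150", colours="FF0000,000000"):
    # chfd=seriesIndex,variable,start,stop,step,function (as in the example URL);
    # quote() escapes the "+" in the function as %2B but leaves (), * and , alone.
    chfd = "0,x,{0},{1},{2},{3}".format(start, stop, step, quote(func, safe="()*,"))
    params = ["cht=lc",
              "chd=t:-1|15,45",        # dummy data series, as in the example
              "chs=" + size,
              "chco=" + colours,
              "chfd=" + chfd,
              "chxt=x,y"]
    return "http://chart.apis.google.com/chart?" + "&".join(params)

print(function_plot_url())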

On the TeX front, this URL:
http://chart.apis.google.com/chart
?cht=tx
&chl=x%20=%20%5Cfrac%7B-b%20%5Cpm%20%5Csqrt%20%7Bb%5E2-4ac%7D%7D%7B2a%7D

delivers:
google TeX chart demo

Getting the escaping right can be a bit of a pain, but the interactive playground makes things slightly easier:

Google chart playground

(You still need to escape things like “+” signs, though…)
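The escaping is easy enough to script, too; a small Python helper along these lines (a sketch around the cht=tx pattern above, nothing official) percent-encodes the raw TeX string for you:

from urllib.parse import quote

def tex_chart_url(tex):
    # Render a TeX formula as an image via the cht=tx chart type;
    # quote() percent-encodes spaces, braces, backslashes and "+" signs.
    return "http://chart.apis.google.com/chart?cht=tx&chl=" + quote(tex, safe="")

print(tex_chart_url(r"x = \frac{-b \pm \sqrt {b^2-4ac}}{2a}"))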

Open Course Production

Following a chat with Mark Surman of the Mozilla Foundation a week or two ago, I’ve been pondering a possible “flip” between:

a) the production of course materials as part of a (closed) internal process, primarily for use within a (closed) course in a particular institution, and then released under an open license (such as a Creative commons license); and

b) the production of course materials in the open that are then:

i) pulled into the institution for use within a (closed) course; or

ii) used (or not) to support self-directed learning towards an assessment only award.

In the OU, the course production model can see a team of several academics (supported by a course manager, media project manager, editor, picture researcher, rights chasers, developers, artists, et al.) take several years to produce a course that will then be presented for between five and ten years. In addition, handover of course materials may take place up to a year before the first presentation of the course. Course units are typically drafted by individual authors, and then passed to the rest of the course team for comment and critical reading. Typically, materials will pass through at least two drafts before final handover.

(After a little digging, and the help of @ostephens, I managed to track down some reports on how course production was managed in the early years of the OU: Course Production: Some Basic Problems, Course Production: Activities and Activity Networks, Course Production: Planning and Scheduling, Course Production: The Problem of Assessment, though I haven’t had chance to read them yet…)

For the OU short course T151 Digital Worlds, the majority of the course team authored content was published as it was being written on a public WordPress blog (Digital Worlds Uncourse Blog); in the current version of the course, students are referred to that public content from within the VLE. (Note that the copyright and licensing of content on the public blog is left deliberately vague!)

Although the Digital Worlds content was written by a single author (me;-), the model was intended to support at the very least a team blog approach, or a distributed blog network authoring approach. Rather than authors writing large chunks of text and then passing them for comment to other course team members, the blogged approach encourages authors to: a) read along with what others are producing; b) create short chunks of material (500-800 words, typical blog post length) on a particular topic (probably linked to other posts on the topic) that are convenient to study in a single study session or interstitial learning break (cf. @lorcand on Interstitial reading); c) link out to related resources; d) act as a focus for trackbacks (passive related resource discovery) and comments that might influence the direction taken in future blog posts.

The use of WordPress as the blogging platform was deliberate, in part because of the wide support WordPress offers for RSS/Atom feed generation. By linking between posts, as well as tagging and categorising posts appropriately, a structure emerges that offers many different possible pathways through the content. RSS feeds with everything means that it’s then relatively straightforward to republish different pathways apparently as linear runs of content elsewhere, if required (e.g. as in an edufeedr environment, perhaps?)
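As a minimal sketch of that republishing idea (using the feedparser library; the category slug below is made up purely for illustration), pulling one category feed from the WordPress blog and listing the posts oldest-first gives you a crude linear run of content:

import feedparser  # pip install feedparser

# WordPress exposes per-category feeds at /category/<slug>/feed/ as standard;
# the slug used here is hypothetical.
feed_url = "http://digitalworlds.wordpress.com/category/game-design/feed/"
feed = feedparser.parse(feed_url)

# Feed entries come newest-first, so reverse them to read as a linear run.
for entry in reversed(feed.entries):
    print(entry.published, "-", entry.title)
    print("  ", entry.link)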

Authoring content in a public forum – ideally under an open content license – means that content becomes available for re-use even as it is being drafted. By opening up comments, feedback can be solicited that allows content to be improved by updating blog posts, if necessary, as well as identifying topics or clarifications that can be addressed in separate backlinking blog posts. By opening up the production process, we make it far more likely that others will contribute to that process, helping shape and influence the content, than if we expect them to take openly licensed content as a large chunk and then produce openly licensed derived works as a result (i.e. forks?!)

In short: maybe we shouldn’t just be releasing content created in a closed process as Open Educational Resources (OERs); rather, we should be producing it in public using an open source production model?

As Cameron Neylon suggests in a critique of academic research publishing (It’s not information overload, nor is it filter failure: It’s a discovery deficit):

It is very easy to say there is too much academic literature – and I do. But the solution which seems to be becoming popular is to argue for an expansion of the traditional peer review process. To prevent stuff getting onto the web in the first place. This is misguided for two important reasons. Firstly it takes the highly inefficient and expensive process of manual curation and attempts to apply it to every piece of research output created. This doesn’t work today and won’t scale as the diversity and sheer number of research outputs increases tomorrow. Secondly it doesn’t take advantage of the nature of the web. The way to do this efficiently is to publish everything at the lowest cost possible, and then enhance the discoverability of work that you think is important. We don’t need publication filters, we need enhanced discovery engines. Publishing is cheap, curation is expensive whether it is applied to filtering or to markup and search enhancement.

Filtering before publication worked and was probably the most efficient place to apply the curation effort when the major bottleneck was publication. Value was extracted from the curation process of peer review by using it to reduce the costs of layout, editing, and printing through simply printing less. But it created new costs, and invisible opportunity costs where a key piece of information was not made available. Today the major bottleneck is discovery. …

The problem we have in scholarly publishing is an insistence on applying this print paradigm publication filtering to the web alongside an unhealthy obsession with a publication form, the paper, which is almost designed to make discovery difficult. If I want to understand the whole argument of a paper I need to read it. But if I just want one figure, one number, the details of the methodology then I don’t need to read it, but I still need to be able to find it, and to do so efficiently, and at the right time.

Currently scholarly publishers vie for the position of biggest barrier to communication. The stronger the filter the higher the notional quality. But being a pure filter play doesn’t add value because the costs of publication are now low. The value lies in presenting, enhancing, curating the material that is published.

And so on… (read the whole thing).

Maybe we need to think about educational materials in a similar way? By creating the materials in the open, we start to identify what the good stuff is, as well as being able to benefit from direct and relevant feedback from people who are interested in the topic because they discovered it by looking for it, or at least something like it. (For educators, if they think they are helping shape content, for example through commenting on it, they may be more likely to link back to it and direct their students to it because they have a stake in it, albeit weakly and possibly indirectly.)

In response to a call I put out on Twitter last night for links to work relating to the use of open source production models in course development, @mweller suggested that Andreas Meiszner‘s PhD work may be relevant here? “My PhD research is aimed at investigating the impact of the organizational structure and operational organization on ICT enriched education by conducting a comparative study between FLOSS (Free / Libre Open Source Software) communities and Higher Education Institutions (HEIs). This work will conduct a comparative study between FLOSS communities and HEIs. The primary unit of analysis is (i.) the organizational structure of FLOSS communities and HEIs, (ii.) the operational organization of FLOSS communities and HEIs and (iii.) the learning process, outcome and environment in FLOSS communities and HEIs.”

(These are also relevant, I think? OSS-Watch briefings on Community source vs open source and The community source development model.)

By placing content out in the open, we also provide a stepping stone towards producing “assessment only” courses. By decoupling the teaching/learning content from the assessment, we can offer assessment only products (such as derivatives of the OU’s APEL containers, maybe?) that assess students based on their informal study of our open materials. (I’m not sure if any courses are yet assessing students who have studied materials placed on OpenLearn?) Once mechanisms are in place for writing robust assessments under the assumption that students will have been drawing at least in part on the study of open OU materials, we can maybe start to be more flexible in assessing students who have made use of other OERs (or indeed, any resources that they have been able to use to further their understanding of a topic).

Just by the by, it’s also worth noting that the decoupling of assessment from teaching at the degree level is in the air at the moment (e.g. New universities could teach but not test for degrees, says Vince Cable) …

Related: an old and confused post about what happens when content on the inside is opened up to the outside so that folk from the inside can work on it on the outside using all their skills from the inside but not having to adhere to any of its constraints… Innovating from the Inside, Outside

Dazed and Confused…

So via several twittering sources, today I learn that:

– ‘the government now says Facebook will be its “primary channel” for communicating with the public about spending cuts’ [BBC News: Ministers turn to Facebook users for cuts suggestions]

As @adrianshort pointed out, “You’ll need a Facebook account to vote in a few years time. Whatever happened to the public web?”, which reminded me of something I skimmed on Technology Review earlier this week (The Government Has an Online Identity Plan for You):

the U.S. government is hoping to step in and improve the state of online identity management. In a draft recently posted online, the Department of Homeland Security outlined a possible National Strategy for Trusted Identities in Cyberspace–a document that suggests how the government could facilitate a system for managing identities. The system could be used not only by government sites such as the Internal Revenue Service, but by other websites, including commercial ones.

The draft document does not suggest creating a national ID card or government-mandated Internet identity system. Instead it proposes a way to combine existing online identity technologies to create a simpler, more privacy-conscious identity system, without the government taking control of the whole thing.

… The draft suggests starting with accounts that users might already have, like those from Google or Facebook. …

If the UK Gov would rather go for a physical ID card by proxy, there’s always the Tesco Clubcard of course (“Tesco now has 16 million active Clubcard holders in the UK, compared to 11.7 million people who have a Barclaycard” [Tesco Clubcard signs up one million customers since relaunch]). Or your mobile phone; even the under 8s have mobile phones…

And the second thing from my Twitter feeds?

– “Money-saving plans to separate teaching from examining in higher education are to be outlined by the business secretary, Vince Cable. The proposals would allow new institutions to teach students for degrees that would be then awarded by prestige universities. … All universities would be offered the opportunity to teach to an externally set, globally recognised exam. One by-product of this would be the emergence of a new breed of private universities.” [Guardian: New universities could teach but not test for degrees, says Vince Cable]

Hmm… maybe I should repitch my idea for a qualification verification webservice so employers three or four years down the line can stand a chance of checking whether job applicants’ degrees are valid or not. (QVS could also play really nicely with Facebook, as it happens…;-) [The QVS doc was a blue sky pitch to the SocialLearn project a couple of years ago, that spawned a small internal project reviewing how a less ambitious service might be used to simplify internal OU processes. The doc linked to above is an edited version of the original draft doc, that was itself revised for presentation to the SocialLearn steering group. The views contained within it barely reflect my own views, let alone those of my employer.]

The move to decouple teaching from assessment using a national HE exam (good for Pearson via EdExcel, methinks?!) is something that might help us make the case internally for some assessment only versions of courses… More about that in a future post, but now I’m going to have another quick peek at the wires to see what else has happened in the last couple of hours!

Amplified Meetings and Participatory Deliberation…

So according to the Guardian (David Cameron tells civil servants he wants to ‘turn government on its head’, via @neillyneil), it seems that:

[David Cameron] told a civil service conference in London that he wants to replace what he described as “the old system of bureaucratic accountability” with a democratic accountability “to the people, not the government machine”.

As part of that, every government department will be required to publish structural reform plans setting out how they will put “people in charge, not politicians”.

So I’m wondering… maybe there are certain boards and committees that might benefit from opening their processes to public view, not just for transparency but also so that folk who are interested (and maybe qualified) can contribute too…? After all, select committees formally solicit views from witnesses called to present evidence to the committee; so why not also require other committees to do the same, although in a more casual way? I started doodling some ideas on this topic in a blog post yesterday (Using WriteToReply to Publish Committee Papers. Is an Active Role for WTR in Meetings Also Possible?), essentially around the idea that by opening up committee papers before a meeting, comments could be solicited from the interested, and optionally drawn on in the meeting itself.

(I’ve never really understood why the business of meetings is required to take place in a particular location at a particular time…?)

By amplifying the business of a committee, both before and after its meetings, the members of the committee also get to draw on the combined wisdom of whoever happens to be following the business of the committee, or who is interested in it, if they wish; and by opening up the closed walls of the committee, we allow the potential for participatory deliberation of the matters at hand.

After all, why should it only be conferences that get amplified online?

Mulling Over an Idea for Hashtag Community Maturity Profiles

A couple of weeks ago, I started cobbling together some clunky scripts to collate network data files from lists of people twittering with a particular hashtag (First Glimpses of the OUConf10 Hashtag Community). I’ve got a Twapperkeeper key now, so the next step is to pull archived hashtagged tweets from there to generate my hashtaggers list, and then use that data as the basis for pulling in friends and followers links for particular individuals from the Twitter API.

One thing I’d like to start pulling together is a set of tools for providing network and backchannel analysis around hashtag communities. Andy Powell has already published Summarizr, a site that summarises hashtag activity from a Twapperkeeper archive:

Summarizr

So what else might we look for?

Mulling over my own Personal Twitter Networks in Hashtag Communities, the metrics I report include:

– Number of hashtaggers [Ngalaxy]
– Hashtaggers as followers (‘hashtag followers’) [Gfollowers]
– Hashtaggers as friends (‘hashtag friends’) [Gfriends]
– Hashtagger followers not friended (‘serfs’) [Gserfs]
– Hashtagger friends not following (‘slebs’) [Gslebs]
– Hashtaggers not friends or followers (‘the hashtag void’) [Gvoid]
– Reach into hashtag community [Greach=Gfollowers/Ngalaxy]
– Reception of hashtag community: the proportion of the hashtag community that are followed by (i.e. are friends of) the named individual [Greception=Gfriends/Ngalaxy]
– Hashtag void (normalised) [Normvoid=Gvoid/Ngalaxy]
– Total personal followers: the total number of followers of the named individual [Nfollowers]
– Total personal friends: the total number of friends of the named individual [Nfriends]
– Hashtag community dominance of personal reach: the extent to which the hashtag community dominates the set of people who follow the named individual, [Domreach=Gfollowers/Nfollowers]
– Hashtag community dominance of personal reception: the extent to which the set of the named individual’s friends is dominated by members of the hashtag community, [Domreception=Gfriends/Nfriends]

Anyway, it strikes me that calculating those measures as means (and standard deviations) across all the members of the network, along with more traditional social network analysis measures such as centrality or clustering, might help identify different signatures relating to the maturity of different hashtag communities (for example, the extent to which they are just forming, or the extent to which they have largely saturated in terms of members knowing each other).
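Not as a definitive implementation, but here’s a rough Python sketch of how a few of those per-person measures, and their community-wide means and standard deviations, might be computed once you have each hashtagger’s friend and follower ID sets (I’ve excluded each individual from their own “galaxy”, which is one possible reading of the definitions above):

from statistics import mean, pstdev

def personal_metrics(user, hashtaggers, friends, followers):
    # hashtaggers: set of user ids; friends/followers: dicts mapping id -> set of ids
    galaxy = hashtaggers - {user}
    g_followers = followers[user] & galaxy        # hashtaggers who follow this user
    g_friends = friends[user] & galaxy            # hashtaggers this user follows
    g_void = galaxy - g_followers - g_friends     # no connection either way
    n = len(galaxy) or 1
    return {"Greach": len(g_followers) / n,
            "Greception": len(g_friends) / n,
            "Normvoid": len(g_void) / n}

def community_signature(hashtaggers, friends, followers):
    # Mean and (population) standard deviation of each metric across the community
    rows = [personal_metrics(u, hashtaggers, friends, followers) for u in hashtaggers]
    return {k: (mean(r[k] for r in rows), pstdev(r[k] for r in rows)) for k in rows[0]}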

These metrics might also change over the course of an event being discussed via a particular hashtag.

Who Owes Whom? A Handful of Links Relating to International Debt…

Reading the Sunday papers over the last couple of weekends, and the ongoing saga of how much debt the UK is in, I’ve kept asking myself the question “who is the UK government actually in debt to?”

The answer, in the simplest form, is (I think) the holders of “gilts”: interest-bearing bonds sold by the government to whoever’s willing to buy them, such as pension funds or sovereign funds.

But then the question arises: who owns these bonds, and how much are these bond owners in hock to other bond holders and, in the case of sovereign funds, other countries? That is, could we in principle do a great debt cancelling exercise based on doing the sums around: A owes B x and C y; B owes A p and C q; and C owes A l and B m; to see who actually owes whom what if all the debts were settled? (This omits things like different repayment rates, the extent to which countries holding each other’s debt plays a role in mediating exchange rates, and so on.)
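Purely as a toy illustration of that netting sum (made-up numbers, and ignoring all the complications just mentioned), the arithmetic is simple enough:

from collections import defaultdict

# debts[debtor][creditor] = amount the debtor owes; figures are invented
debts = {"A": {"B": 5, "C": 3},
         "B": {"A": 2, "C": 7},
         "C": {"A": 4, "B": 1}}

net = defaultdict(float)
for debtor, owed in debts.items():
    for creditor, amount in owed.items():
        net[debtor] -= amount     # money owed out
        net[creditor] += amount   # money owed in

for party, balance in sorted(net.items()):
    print(party, "is net", "owed" if balance >= 0 else "owing", abs(balance))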

I haven’t got very far with this, but I have found a few starting points (I think) as to who owes whom in the most general of terms, so I thought I’d just link to them here for the sake of convenience and rediscoverability.

My starting point: a BBC report on Who owns the UK’s debt?

From there, I ended up finding loosely related data at:
National Statistics – UK Accounts
UK Debt Management Office Quarterly Review
Bank for International Settlements

Hopefully next weekend I’ll actually get round to building something… (then again, if it’s kite flying weather, maybe not…;-)