Trackbacks, Tweetbacks and the Conversation Graph, Part I

Whenever you write a blog post that contains links to other posts, and is maybe in turn linked to from other blog posts, how can you keep track of where your blog post sits “in the wider scheme of things”?

In Trackforward – Following the Consequences with N’th Order Trackbacks, I showed a technique for tracking the posts that link to a particular URI, the posts that link to those posts and so on, suggesting a way of keeping track of any conversational threads that are started by a particular post. (This is also related to OUseful Info: Trackback Graphs and Blog Categories.)

In this post, I’ll try to generalise that thinking a little more to see if there’s anything we might learn by exploring that part of the “linkgraph” in the immediate vicinity of a particular URI. I’m not sure where this will go, so I’ve built in the possibility of spreading this thought over several posts.

So to begin with, imagine I write a post (POST) that contains links to three other posts (POST1, POST2, POST3). (Graphs are plotted using Ajax/Graphviz.)

In turn, two posts (POSTA, POSTB) might link back to my post:

So by looking at the links from my post to other posts, and looking at trackbacks to my post (or using the link: search limit applied to the URI of my post on a search engine) I can locate my post in its immediate “link neighbourhood”:
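
The graphs themselves were plotted with Ajax/Graphviz; by way of a concrete illustration, here’s a minimal sketch (in Python, emitting plain Graphviz DOT) of that immediate link neighbourhood, using the placeholder post names from the text rather than real URIs.

```python
# Minimal sketch of the "immediate link neighbourhood" described above, emitted
# as Graphviz DOT. POST/POST1/POSTA etc. are the placeholder names from the text.

outlinks = ["POST1", "POST2", "POST3"]   # posts my post links to
trackbacks = ["POSTA", "POSTB"]          # posts that link back to my post

dot_lines = ["digraph neighbourhood {"]
dot_lines += [f'  "POST" -> "{p}";' for p in outlinks]
dot_lines += [f'  "{p}" -> "POST";' for p in trackbacks]
dot_lines.append("}")

print("\n".join(dot_lines))  # paste the output into any Graphviz renderer
```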

Now it might be that I want to track the posts that refer to posts that referred to my post (which is what the trackforward demo explored).

You might also be interested in seeing what else the posts that have referred to my original post have linked to:

Another possibility is tracking posts that refer to posts that I referred to:

It might be that one of those posts also refers to my post:
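
To pull those various expansions together, here’s a hedged sketch of how the neighbourhood might be grown outwards a hop at a time. The outbound_links() and inbound_links() helpers are hypothetical: the first stands in for parsing the links out of a post, the second for trackbacks or a search engine “link:” query – neither is a real API.

```python
# Rough sketch of growing the conversation graph around a seed post, order by
# order. outbound_links() and inbound_links() are hypothetical callables supplied
# by the caller; nothing here depends on a particular service.

def expand_neighbourhood(seed_uri, outbound_links, inbound_links, depth=2):
    """Collect (source, target) link edges around seed_uri, up to `depth` hops out."""
    edges, frontier, seen = set(), {seed_uri}, {seed_uri}
    for _ in range(depth):
        next_frontier = set()
        for uri in frontier:
            for target in outbound_links(uri):    # posts this post refers to
                edges.add((uri, target))
                next_frontier.add(target)
            for source in inbound_links(uri):     # posts that refer to this post
                edges.add((source, uri))
                next_frontier.add(source)
        frontier = next_frontier - seen
        seen |= frontier
    return edges
```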

So what…? I need to take a break now – more in a later post…

See also: Tweetbacks, a beta service that provides a trackback-like service for tweets that reference a particular URL.

PS and also BackType and BackTweets

So Google Loses Out When It Comes to Realtime Global Events?

There have been quite a few posts around lately commenting on how Google is missing out on real-time web traffic (e.g. Sorry Google, You Missed the Real-Time Web! and Why Google Must Worry About Twitter). Stats just out showing search’n’twitter activity during Obama’s inauguration yesterday show how…

First up, Google’s traffic slump:

And then Twitter’s traffic peak:

And how did I find out? Via this:

which led to the posts containing the traffic graphs shown above.

And I got a link to that tweet from Adam Gurri, who was responding to a tweet I’d made about the Google traffic post… (which in turn was a follow-up to a tweet I posted asking whether anyone had “turned on a TV to watch the US presidential inauguration yesterday?”)

Adam also pointed me to this nice observation:

And here’s how a few people responded when asked how they watched the event (I watched on the web):

So, web for video, broadcast radio for audio and Twitter for text. And TV for, err… time for change, maybe?

Change the Law to Fit the Business Model

I’ve just been watching the following video from the Open Rights Group (ORG) on copyright extension, and realised something…

[Via Ray]

…something that’s probably obvious to anyone who lobbies against this sort of thing (extending copyright of works to prevent them from entering the public domain), but came like a doodah out of the whatsit to me…

The companies that are lobbying for copyright extension built their business models around the idea that “our artists’ work is in copyright, so we can exploit it like this and this and that.”

But as these companies are now getting on a bit, that’s not true any more. They need a business model built around the idea that “we are purveyors of in and out-of-copyright material”.

[As prompted by a clarification request from @mweller: the industry’s business model is broken in other ways, of course, not least the changing costs of reproduction and distribution. My “insight” was limited to a realisation (that works for me;-) that lobbying for copyright extension is the industry’s attempt to protect a revenue stream built on an incorrect assumption – i.e. the existence of a never-ending revenue stream from an ever-growing in-copyright back catalogue, or maybe the assumption that growth in new release sales would make up for the loss of copyright-based revenues from older stock? That’s probably not right though, is it? It’s probably more a blend of complacency – living off the fat of Beatles and Cliff Richard early-recording revenues, not being able to develop the desired level of new artist revenues, and the dreadful realisation that large amounts of moneymaking product are about to go out-of-copyright, whereas you might once have expected the long-term sales value of that product to dwindle over time? Like it did with Shakespeare… Err…?]

[Cf. also software companies, where the value-generating life of a piece of software only extends as far as the current, and maybe previous, version, rather than version 1.0a of a product that’s now at version 10? Though thinking through something Alma H said to me yesterday, in a slightly different context, I guess if the media companies followed the lead of the software industry, they’d just delete/remix/re-release the same old songs and keep refreshing the copyright with every new “release”!]

But that’s too hard to imagine – so it’s easier to lobby for changes in the law and keep the same business model ticking over.

Cf. also academia and library sectors, which were built around the idea that access to high quality information and knowledge was scarce. Oops…

Getting Bits to Boxes

Okay – here’s a throwaway post for the weekend – a quick sketch of a thought experiment that I’m not going to follow through in this post, though I may do in a later one…

  • The setting: “the box” that sits under the TV.
  • The context: the box stores bits that encode video images that get played on the TV.
  • The thought experiment: what’s the best way of getting the bits you want to watch into the box?

That is, if we were starting now, how would we architect a bit delivery network using any or all of the following:

1) “traditional” domestic copper last mile phone lines (e.g. ADSL/broadband);
2) fibre to the home;
3) digital terrestrial broadcast;
4) 3G mobile broadband;
4.5) femtocells, hyperlocal, domestic mobile phone base stations that provide mobile coverage within the home or office environment, and use the local broadband connection to actually get the bits into the network; femtocells might be thought of as the bastard lovechild of mobile and fixed line telephony!
5) digital satellite broadcasts (sort of related: Please Wait… – why a “please wait” screen sometimes appears for BBC red button services on the Sky box…).

Bear in mind that “the box” is likely to have a reasonable sized hard drive that can be used to cache, say, 100 hrs of content alongside user defined recordings.

All sorts of scenarios are allowed – operators like BT or Sky “owning” a digital terrestrial channel; the BBC acting as a “public service ISP”, with a premium rate BBC license covering the cost of a broadband landline or 3G connection; Amazon having access to satellite bursts for a couple of hours a day; and so on…

Hybrid return paths are possible too – the broadband network, SMS text messages, a laptop on your knee or – more likely – an iPhone or web capable smartphone in your hand, and so on. Bear in mind that the box is likely to be registered with an online/web based profile, so you can change settings on the web that will be respected by the box.

If you want to play the game properly, you might want to read the Caio Review of Barriers to investment in Next Generation Broadband first.

PS If this thought experiment provokes any thoughts in you, please share them as a comment to this post:-)

So What Else Are You Doing At The Moment?

I was intending not to write any more posts this year, but this post struck a nerve – What’s Competing for Internet Users’ Attention? (via Stephen’s Lighthouse) – so here’s a quick “note to self” about something to think about during my holiday dog walks…:

What else are our students doing whilst “studying” their course materials?

Here’s what some US respondents declared when surveyed about what else they were doing whilst on the internet:

A potentially more interesting thing to think about though is a variation of this:

In particular, the question: what other media do you consume whilst you are using OU course materials?

And then – thinking on from this – do we really think – really – that contemporary course materials should be like books? Even text books? Even tutorial-in-print, SAQ filled books?

Newspapers are realising that newsprint in a broadsheet format is not necessarily the best way to physically package content any more (and I have a gut feeling that the physical packaging does have some influence on the structure, form and layout of the content you deliver). Tabloid and Berliner formats now dominate the physical aspect of newspaper production, and online plays are increasingly important.

OU study guides tend to come either as large format books or A4 soft cover bindings with large internal margins for note taking. Now this might be optimal for study, but the style is one that was adopted in part because of financial concerns, as well as pedagogical ones, several decades ago.

http://flickr.com/photos/54459164@N00/
“what arrived in the post today” – Johnson Cameraface

As far as I know, the OU don’t yet do study guides as print-on-demand editions (at least, not as a rule, except when we get students to print out PDF copies of course materials;-). Print runs are large, batch job affairs that create stock that needs warehousing for several years of course delivery.

So I wonder – if we took the decision today about how to deliver print material, would the ongoing evolution of the trad-format be what we’d suggest? Or do we need an extinction event? The above image shows an example of a recent generation of print materials – which represents an evolution of trad-OU study guides. But if we were starting now, is this where we’d choose to start from? (It might be that it is – I’m just asking…;-)

One other trad-OU approach to course design was typically to try to minimise the need for students to go outside the course materials (one of the personas we consider taking each course is always a submariner, who only has access to their OU course materials) but I’m not sure how well this sits any more.

Now I can’t remember the last time I read a newspaper whilst at home and didn’t go online at least once whilst doing so, either to follow a link or check something related, and I can’t remember the last non-fiction book I read that didn’t also act as a jumping off point – often “at the point of reading” – for several online checks and queries.

So here’s a niggle that I’m going to try to pin down over the holidays. To what extent should our course materials be open-ended and uncourse-like, compared to tightly scoped with a single, strong and unwavering narrative that reflects the academic author’s teaching line through a set of materials?

The “this is how it is”, single linear narrative model is easier for the old guard to teach, easier to assess, and arguably easier to follow as a student. It’s tried, trusted, and well proven.

The uncourse is all over the place, designed in part to be sympathetic to study moments in daily rituals (e.g. feed reading) or interstitial time (see Interstitial Publishing: A New Market from Wasted Time for more on this idea). The uncourse is ideally invisible, integrated into life.

The trad. OU course is a traditional board game, neatly packaged, well-defined, self-contained. The uncourse is an Alternate Reality Game.

(Did you see what I just did, there?;-)

And as each day goes by, I appreciate a little more that I don’t think the traditional game is a good one to be in, any more… Because the point about teaching is to help people become independent learners. And for all the good things about books (and I have thousands of them), developing skills for bookish learning is not necessarily life-empowering any more…

[Gulp… where did that come from?!]

Decoding Patents – An Appropriate Context for Teaching About Technology?

A couple of nights ago, as I was having a rummage around the European Patent Office website, looking up patents by company to see what the likes of Amazon, Google, Yahoo and, err, Technorati have been filing recently, it struck me that IT and engineering courses might be able to use patents in a similar way to the way that Business Schools use Case Studies as a teaching tool (e.g. Harvard Business Online: Undergraduate Course Materials)?

This approach would seem to offer several interesting benefits:

  • the language used in patents is opaque – so patents can be used to develop reading skills;
  • the ideas expressed are likely to come from a commercial research context; with universities increasingly tasked with taking technology transfer more seriously, looking at patents situates theoretical understanding in an application area, as well as providing the added advantage of transferring knowledge into the ivory tower, too, and maybe influencing curriculum development as educators try to keep up with industrial inventions;-)
  • many patents locate an invention within both a historical context and a systemic context;
  • scientific and mathematical principles can be used to model or explore ideas expressed in a patent in more detail, and in the “situated” context of an expression or implementation of the ideas described within the patent.

As an example of how patents might be reviewed in an uncourse blog context, see one of my favourite blogs, SEO by the SEA, in which Bill Slawski regularly decodes patents in the web search area.

To see whether there may be any mileage in it, I’m going to keep an occasional eye on patents in the web area over the next month or two, and see what sort of response they provoke from me. To make life easier, I’ve set up a pipe to scrape the search results for patents issued by company, so I can now easily subscribe to a feed of new patents issued by Amazon, or Yahoo, for example.

You can find the pipe here: European Patent Office Search by company pipe.
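
The pipe itself lives in Yahoo Pipes, but for the record, here’s roughly the shape of what it does, sketched in Python. The search URL and the markup selectors below are placeholders rather than the real esp@cenet endpoint, so treat this as an illustration of the idea, not working scraper code.

```python
# Rough shape of the "patents by company" pipe: fetch a search results page,
# pull out the result titles and links, and re-emit them as a simple RSS feed.
# SEARCH_URL and the CSS selectors are placeholders - substitute the real
# search URL and markup before expecting this to run against anything live.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://example.org/patent-search?applicant={company}"  # placeholder

def patent_feed(company):
    html = requests.get(SEARCH_URL.format(company=company)).text
    soup = BeautifulSoup(html, "html.parser")
    items = []
    for result in soup.select(".result"):                 # placeholder selector
        title = result.select_one(".title").get_text(strip=True)
        link = result.select_one("a")["href"]
        items.append(f"<item><title>{title}</title><link>{link}</link></item>")
    return ("<rss version='2.0'><channel>"
            f"<title>New patents: {company}</title>{''.join(items)}"
            "</channel></rss>")

print(patent_feed("Amazon"))
```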

I’ve also put several feeds into an OPML file on Grazr (Web2.0 new patents), and will maybe look again at the styling of my OPML dashboard so I can use that as a display surface (e.g. Web 2.0 patents dashboard).

Immortalising Indirection

So it seems that Downes is (rightly) griping again;-) this time against “the whims of corporate software producers (that’s … why I use real links in th[e OLDaily] newsletter, and not proxy links such as Feedburner – people using Feedburner may want to reflect on what happens to their web footprint should the service disappear or start charging)”.

I’ve been thinking about this quite a bit lately, although more in the context of the way I use TinyURLs, and other URL shortening services, and about what I’d do if they ever went down…

And here’s what I came up with: if anyone hits the OUseful.info blog (for example) via a TinyURL or Feedburner redirect, I’m guessing that the server will see something to that effect in the request header? If that is the case, then just like WordPress will add trackbacks to my posts when other people link to them, it would be handy if it would also keep a copy of the TinyURLs etc. that linked there. Then at least I’d be able to do a search on those TinyURLs to look for people linking to my pages that way?
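
I don’t know what WordPress actually records here, but the general shape of the idea might look something like the following sketch (Flask is used purely as a stand-in server framework; whether a shortener redirect really passes a usable referrer through is exactly the open question above).

```python
# Hypothetical sketch: if a request arrives with a Referer header pointing at a
# known URL-shortening service, remember it alongside the page it landed on.
# This is not a WordPress plugin - just the general shape of the idea, and it
# assumes the shortener redirect passes a Referer through (it may well not).
from flask import Flask, request

app = Flask(__name__)
SHORTENER_DOMAINS = ("tinyurl.com", "feeds.feedburner.com", "bit.ly")
short_link_log = []  # in practice this would be persisted somewhere

@app.before_request
def remember_short_links():
    referer = request.headers.get("Referer", "")
    if any(domain in referer for domain in SHORTENER_DOMAINS):
        short_link_log.append((referer, request.path))
```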

Just in passing, I note that the Twitter search engine has a facility to preview shortened URLs (at least, URLs shortened with TinyURL):

I wonder whether they are keeping a directory of these, just in case TinyURL were to disappear?
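
Failing that, it’s easy enough to keep your own directory by resolving short links as you come across them – a minimal sketch, assuming the shortener issues ordinary HTTP redirects (the short link below is made up).

```python
# Minimal sketch of building a short URL -> long URL directory by following
# redirects. Assumes the shortening service responds with standard HTTP redirects.
import requests

def expand_short_url(short_url):
    """Return the URL a shortened link ultimately redirects to."""
    resp = requests.head(short_url, allow_redirects=True, timeout=10)
    return resp.url

directory = {}
for short in ["http://tinyurl.com/example"]:   # hypothetical short link
    directory[short] = expand_short_url(short)
print(directory)
```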

Visual Controls for Spreadsheets

Some time ago, Paul Walk remarked that “Yahoo Pipes [might] do for web development what the spreadsheet did for non-web development before it (Microsoft Excel has been described as the most widely used Integrated Development Environment)”. After seeing how Google spreadsheets could be used as part of quick online mashup at the recent Mashed Library, Paul revised this observation along the lines of “the online spreadsheet [might] do for web development what the spreadsheet did for non-web development before it”.

In An Ad Hoc Youtube Playlist Player Gadget, Via Google Spreadsheets, I showed how a Google gadget can be used as a container for arbitrary Javascript code that can be used to process the contents of one or more Google spreadsheet cells, which, combined with the ability to pull in XML content from a remote location into a spreadsheet in real time, suggests that there is a lot more life in the spreadsheet than one might previously have thought.

So in a spirit of “what if” I wonder whether there is an opportunity for spreadsheets to take the next step towards being a development platform for the web in the following ways:

  • by offering support for a visual controls API (cf. the Google Visualization API), which would provide a set of visual controls – sliders, calendar widgets and so on – that could directly change the state of a spreadsheet cell. I don’t know if the Google spreadsheet gadgets have helper functions that already support the ability to write, or change, cell values, but the Google GData spreadsheet API does support updates (e.g. updating cells and updating rows). Just like the Visualization API lets you visually chart the contents of a set of cells, a visual controls API could provide visual interfaces for writing and updating cell values. So if anyone from the Lazyweb is listening, any chance of a trivial demo showing how to use something like a YUI slider widget within a Google spreadsheet gadget to update a spreadsheet cell? Or maybe a video type, that would take the URL of a media file, or the splash page URL for a video on something like YouTube, and automatically create a player/popup player for the video if you select it? Or similarly, an audio player for an MP3 file? Or a slideshow widget for a set of image file cells?
  • “Rich typed” cells; for example, Pamela Fox showed how to use a map gadget in Google spreadsheets to geocode some spreadsheet location cells (Geocoding with Google Spreadsheets (and Gadgets)), so how would it be if we could define a location type cell which actually had a couple of other cells associated with it in “another dimension” that were automatically populated with latitude and longitude values, based on a geocoding of the location entered into that “location type” cell?
  • “real cell relative” addressing; I don’t really know much about spreadsheets, so I don’t know whether such a facility already exists, but is it possible to “really relatively reference” one cell from another? For example, could I create a formula along the lines of ={-1,-1}*{-1,0} that would take the cell “left one and up one” ({-1, -1}) and multiply it by the contents of the cell “left one” ({-1, 0})? So e.g. if I paste the formula into C3, it performs the calculation B2*B3? (See the sketch below.)
  • Rich typed cells could go further, and automatically pop up an appropriate visual control if the cell is typed that way (e.g. a “slider controlled value”); a date type cell might launch a calendar control when you try to edit it, for example.
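
Just to pin down what I mean by the “real relative” formula in the third bullet, here’s a toy sketch of how ={-1,-1}*{-1,0} might be evaluated relative to the cell it’s pasted into; the {column offset, row offset} notation is entirely made up for the sake of the example, not a feature of any real spreadsheet.

```python
# Toy interpreter for the made-up "real relative" formula ={-1,-1}*{-1,0}:
# offsets are {column offset, row offset} relative to the cell the formula is
# pasted into, so pasted into C3 it evaluates B2 * B3.
import re

def eval_relative_formula(formula, row, col, get_cell):
    """get_cell(row, col) returns the numeric value of a cell (1-indexed)."""
    offsets = re.findall(r"\{(-?\d+),\s*(-?\d+)\}", formula)
    product = 1
    for dc, dr in offsets:                      # {col offset, row offset}
        product *= get_cell(row + int(dr), col + int(dc))
    return product

# e.g. with B2=4 and B3=5, pasting the formula into C3 (row 3, column 3):
sheet = {(2, 2): 4, (3, 2): 5}                  # (row, col) -> value; B is column 2
print(eval_relative_formula("={-1,-1}*{-1,0}", 3, 3, lambda r, c: sheet[(r, c)]))  # 20
```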

PS for my thoughts on reinventing email, see Sending Wikimail Messages in Gmail ;-)

Open Content Anecdotes

Reading Open Content is So, Like, Yesterday just now, the following bits jumped out at me:

Sometimes– maybe even most of the time– what I find myself needing is something as simple as a reading list, a single activity idea, a unit for enrichment. At those times, that often-disparaged content is pure gold. There’s a place for that lighter, shorter, smaller content… one place among many.

I absolutely agree that content is just one piece of the open education mosaic that is worth a lot less on its own than in concert with practices, context, artifacts of process, and actually– well, you know– teaching. Opening content up isn’t the sexiest activity. And there ain’t nothin’ Edupunk about it. But I would argue that in one way if it’s not the most important, it’s still to be ranked first among equals. Not just for reasons outlined above, but because for the most part educators have to create and re-create anew the learning context in their own environment. Artifacts from the processes of others– the context made visible– are powerful and useful additions that can invigorate one’s own practice, but I still have to create that context for myself, regardless of whether it is shared by others or not. Content, however, can be directly integrated and used as part of that necessary process. When all is said and done, neither content nor “context” stand on their own particularly well.

For a long time now, I’ve been confused about what ‘remixing’ and ‘reusing’ open educational content means in practical terms that will see widespread, hockey stick growth in the use of such material.

So here’s where I’m at… (err, maybe…?!)

Open educational content at the course level: I struggle to see the widespread reuse of courses, as such; that is, one institution delivering another’s course. If someone from another institution wants to reuse our course materials (pedagogy built in!), we license it to them, for a fee. And maybe we also run the assessment, or validate it. It might be that some institutions direct their students to a pre-existing, open ed course produced by another institution where the former institution doesn’t offer the course; maybe several institutions will hook up together around specialist open courses so they can offer them to small numbers of their own students in a larger, distributed cohort, and as such gain some mutual benefit from bringing the cohort up to a size where it works as a community, or where it becomes financially viable to provide an instructor to lead students through the material.

For individuals working through a course on their own, it’s worth bearing in mind that most OERs released by “trad” HEIs are not designed as distance education materials, created with the explicit intention that they are studied by an individual at a remote location. The distance education materials we create at the OU often follow a “tutorial-in-print” model, with built-in pacing and “pedagogical scaffolding” in the form of exercises and self-assessment questions. Expecting widespread consumption of complete courses by individuals is, I think, unlikely. As with a distributed HEI cohort model, it may be that groups of individuals will come together around a complete course, and maybe even collectively recruit a “tutor”, but again, I think this could only ever be a niche play.

The next level of granularity down is what would probably have been termed a “learning object” not very long ago, and is probably called something like an ‘element’ or ‘item’ in a ‘learning design’, but which I shall call instead a teaching or learning anecdote (i.e. a TLA ;-); be it an exercise, a story, an explanation or an activity, it’s a narrative something that you can steal, reuse and repurpose in your own teaching or learning practice. And the open licensing means that you know you can reuse it in a fair way. You provide the context, and possibly some customisation, but the original narrative came from someone else.

And at the bottom is the media asset – an image, video, quote, or interactive that you can use in your own works, again in a fair way, without having to worry about rights clearance. It’s just stuff that you can use. (Hmmm, I wonder: if you think about a course as a graph, a TLA is a fragment of that graph (a set of nodes connected by edges), and a node (and maybe even an edge?) is an asset?)

The finer the granularity, the more likely it is that something can be reused. To reuse a whole course maybe requires that I invest hours of time in that single resource. To reuse a “teaching anecdote”, exercise or activity takes minutes. To drop a video or an image into my teaching means I can use it for a few seconds to illustrate a point, and then move on.

As educators, we like to put our own spin on the things we teach; as learners, viewed from a constructivist or constructionist stance, we bring our own personal context to what we are learning about. The commitment required to teach, or follow, a whole course is a significant one. The risk associated with investing a large amount of attention in that resource is not trivial. But reusing an image, or quoting someone else’s trick or tip, that’s low risk… If it doesn’t work out, so what?

For widespread reuse of the smaller open ed fragments, then, we need to be able to find them quickly and easily. A major benefit of reuse is that a reused component allows you to construct your story more quickly, because you can find readymade pieces to drop into it. But if the pieces are hard to find, then it becomes easier to create them yourself. The bargain is something like this:

if (quality of resource x fit with my story / time spent looking for that resource) < (quality of resource x fit with my story / time spent creating that resource), then I’m probably better off creating it myself…

(The “fit with my story” is the extent to which the resource moves my teaching or learning on in the direction I want it to go…)
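
Or, as a back-of-the-envelope sketch with entirely made-up numbers:

```python
# Back-of-the-envelope version of the reuse bargain above: compare the value per
# unit time of finding an existing resource against creating your own. All the
# numbers are invented; "fit" is how far the resource moves my story on in the
# direction I want it to go.
def better_to_create(quality_found, fit_found, search_time,
                     quality_made, fit_made, creation_time):
    value_of_finding = (quality_found * fit_found) / search_time
    value_of_creating = (quality_made * fit_made) / creation_time
    return value_of_creating > value_of_finding

# e.g. a good-enough existing diagram that takes 30 minutes to track down, versus
# a perfect-fit one I could draw myself in 20 minutes:
print(better_to_create(0.8, 0.7, 30, 0.7, 1.0, 20))  # True: probably quicker to make it
```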

And this is possibly where the ‘we need more OERs’ argument comes in; we need to populate something – probably a search engine – with enough content so that when I make my poorly formed query, something reasonable comes back; and even if the results don’t turn up the goods with my first query, the ones that are returned should give me the clues – and the hope – that I will be able to find what I need with a refinement or two of my search query.

I’m not sure if there is a “flickr for diagrams” yet (other than flickr itself, of course), maybe something along the lines of O’Reilly’s image search, but I could see that being a useful tool. Similarly, a deep search tool into the slides on slideshare (or at least the ability to easily pull out single slides from appropriately licensed presentations).

Now it might be that any individual asset is only reused once or twice; and that any individual TLA is only used once or twice; and that any given course is only used once or twice; but there will be more assets than TLAs (because assets can be disaggregated from TLAs), and more TLAs than courses (because TLAs can be disaggregated from courses), so the “volume reuse” of assets summed over all assets might well generate a hockey stick growth curve?

In terms of attention – who knows? If a course consumes 100x as much attention as a TLA, and a TLA consumes 10x as much attention as an asset, maybe it will be the course level open content that gets the hockey stick in terms of “attention consumption”?
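
Again as a back-of-the-envelope sketch – made-up counts, but keeping the 100x/10x attention ratios from above – both stories can hold at once: assets win on raw reuse volume while courses may still win on total attention consumed.

```python
# Made-up numbers, just to show how the two growth stories could both be true:
# far more assets get reused a little, while each course use soaks up far more
# attention. Ratios follow the 100x/10x figures in the text; everything else is guessed.
courses, tlas, assets = 10, 100, 1000                  # assume ~10 TLAs/course, ~10 assets/TLA
reuses_per_item = {"course": 2, "tla": 3, "asset": 5}  # guessed reuse counts
attention_per_use = {"course": 1000, "tla": 10, "asset": 1}

volume = {"course": courses * reuses_per_item["course"],
          "tla": tlas * reuses_per_item["tla"],
          "asset": assets * reuses_per_item["asset"]}
attention = {k: volume[k] * attention_per_use[k] for k in volume}

print(volume)     # {'course': 20, 'tla': 300, 'asset': 5000} - assets win on reuse volume
print(attention)  # {'course': 20000, 'tla': 3000, 'asset': 5000} - courses win on attention
```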

PS being able to unlock things at the “asset” level is one of the reasons why I don’t much like it when materials are released just as PDFs. For example, if a PDF is released as CC non-derivative, can I take a screenshot of a diagram contained within it and just reuse that? Or the working through of a particular mathematical proof?

PS see also “Misconceptions About Reuse”.

On Writing “Learning Content” in the Cloud

A couple of weeks ago, I posted about an experiment looking at the “mass authoring” of a book on Processing (2.0 1.0, and a Huge Difference in Style).

Darrel Ince, who’s running the experiment, offered to post a public challenge for me to produce 40,000 words as an adjunct to the book using my own approach… I declined, partly because I’m not sure what I really had in mind would work to produce 40,000 words of “book adjunct”, and partly because I don’t know what my approach would be (and I don’t have the time to invest in finding out at the moment, more’s the pity :-( )…

Anyway, here’s some of my further thinking on the whole “mass authoring experiment”…

Firstly, three major things came to my mind as ‘issues’ with the process originally suggested for the ‘mass authoring experiment’ – two related to the technology choice, the third to the production model.

To use an application such as Google docs, or even a wiki, to write a book in a sense respects the structure of the book. Separate documents represent separate chapters, or sections, and multiple authors can have access to the document. If “version control” is required – that is, if separate discrete drafts are required – then separate documents can be spawned for each draft. Alternatively, if the process is one of continual refinement, each chapter can evolve in a single document, potentially authored, edited, critically read and commented on by several people.

There are quite a few books out there that have been written by one or two people round a blog, but there the intent was to create posts that acted as tasters or trial balloons for content and get feedback from the community relating to it. John Battelle’s book on search (Dear Blog: Today I Worked on My Book), and the Groundswell book (7 ways the Web makes writing a book better & faster) are prime examples of this. “The Googlization of Everything” is another, and is in progress at the moment (Hi. Welcome to my book.).

The Google Hacks book I contributed a single hack to (;-) used a separate Google docs document for each hack, as described in Writing a Book in Google Docs. (In part, the use of Google docs as the authoring environment was a ‘medium is the message’ hack!) There the motivation was to author a ‘trad’ book in a new environment – and it seemed to work okay.

In each case, it’s worth remembering that the motivation of the authors was to write a book book, as with the mass authoring experiment, so in that sense it will provide another data point to consider in the “new ways of authoring books” landscape.

The second technology choice issue was the medium chosen for doing the code development. In a book book, intended for print, you necessarily have to refer the reader to a computer in order for them to run the code – offline or online doesn’t really come into it. But if you are writing for online delivery, then there is the option of embedding interactive code development activities within the text, using something like Obsessing, for example. Potentially, Obsessing, and even the processing.js library, might be pretty unstable, which would provide for an unsatisfactory learning experience for a novice working through the materials (“is my code broken or is the environment broken?”), but with use and a community around it, either the original developer might be motivated to support the libraries, or someone else might be minded to provide maintenance and ongoing development to support an engaged and contributory audience. After all, having a community finding bugs and testing fixes for you is one of the reasons people put time into their open code.

The other major issue I had was with respect to the structuring and organising of the “book”. If you want to play to network strengths in recruiting authors, critical readers, editors and testers, I’m not sure that providing a comprehensively broken down book structure is necessarily the best model? At its worst, this is just farming out word creation to “word monkeys” who need to write up each structural element until they hit the necessary word count (that may be a little harsh, but you maybe get the gist of what I’m trying to say?). The creativity that comes from identifying what needs to go into a particular section, and how it relates to other sections, is, in the worst case, denied to the author.

In contrast, if you provide a book stub wiki page as a negotiation environment and then let “the community” create further stub pages identifying possible book topics, then the ‘outline’ of the book – or the topics that people feel are important – would have had more play – and more sense of ownership would belong with the community.

A more ‘natural’ way of using the community, to my mind, would be to explore the issue of a ‘distributed uncourse’ in a little more detail, and see how a structure could emerge from a community of bloggers cross referencing each other through posts, comments and trackbacks – Jim Groom’s UMW edu-publishing platform or D’Arcy Norman’s UCalgary Blogs platform are examples of what a hacked-off-the-shelf solution might look like to support this “within” an institution?

The important thing is that the communities arise from discovering a shared purpose. Rather than being given a set of explicit tasks to do, the community identifies what needs doing and then does it. Scott Leslie recently considered another dimension to this problem, in considering how “getting a community off the shelf” is a non-starter: Planning to Share versus Just Sharing.

It strikes me that the “mass authoring” experiment is trying to source and allocate resource to perform a set of pre-defined tasks, rather than allowing a community to grow organically through personal engagement and identify meaningful tasks that need to be completed within that community – that is, allowing the tasks to be identified on an ‘as required’ basis, or as itches that occur that come to need scratching?

The output of an emergent community effort would potentially be non-linear and would maybe require new ways of being read, or new ways of having the structure exposed to the reader? I tried to explore some of these issues as they came to mind when I was writing the Digital Worlds uncourse blog:


(though it probably doesn’t make a lot of sense without me talking to it!)

As part of the challenge, I was advised that I would need about 16 authors. I’m really intrigued about how this number was arrived at. On the basis of productivity (circa 2,500 words per person, assuming a 40,000 word deliverable)? When I was doing the uncourse posts, my gut feeling was that an engaging 500-800 word blog post might get say a handful of 50-200 word comments back, and possibly even a link back from another blog post. But what does that mean in terms of word count and deliverables?

Another issue that I had with taking the ‘recruit from cold’ approach, were I to take up the challenge, is that there is potentially already a community around Resig’s processing.js library, the Obsessing interactive editor for it, and Processing itself.

For example, there are plenty of resources already out in the wild to support Processing (e.g. at the Processing.org website) that might just need some scaffolding or navigation wrapped around them in order to make a “processing course” (copyright and license restrictions allowing, of course…)? So why not use them? (cf. Am I missing the point on open educational resources? and Content Is Infrastructure.) Of course, if the aim was to manufacture a “trad book” according to a prespecified design, this approach may not be appropriate, compared to seeing the structure of the “unbook” arise as engagement in an emergent and ongoing conversation – the next chapter is the next post I read or write on the topic.

From my own experience of Digital Worlds, I wrote a post or two a day for maybe 12 weeks, and then the flow was broken. That required maybe 2-4 hours a day commitment, learning about the topics, tinkering with ideas, seeing what other conversations were going on. It was time consuming, and the community I was engaging with (in terms of people commenting and emailing me) was quite small. Playing a full role in a larger community is more time consuming still, and is maybe one downside to managing an effective community process?

The idea behind the experiment – of looking for new ways to author content – is a good one, but for me the bigger question is to find new ways of reading and navigating content that already exists, or that might emerge through conversation. If we assume the content is out there, how can we aggregate it into sensible forms, or scaffold it so that it is structured in an appropriate way for students studying a particular “course”? If the content is produced through conversation, then does it make sense to talk about creating a content artefact that can be picked up and reused? Or is the learning achieved through the conversation, and should instructor interventions, in the form of resource discovery and conducting behaviour, maybe, replace the “old” idea of course authoring?

In terms of delivering content that is authored in a distributed fashion on a platform such as the UMW WPMU platform, I am still hopeful that a “daily feed” widget that produces one or more items per day from a “static blog” according to a daily schedule, starting on the day the reader subscribes to the blog, will be one way of providing pacing for linearised, feed-powered content. (I need to post the WP widget we had built to do this, along with a few more thoughts about a linear, feed-powered publishing system built to service it.)

For example, if you define a static feed – maybe one that replays a blog conversation – then maybe this serves as an artefact that can be reused by other people down the line, and maybe you can post in your own blog posts in “relative time”. I have lots of half-formed ideas about a platform that could support this, e.g. on WPMU, but it requires a reengineering (I think), or at least a reimagining, of the whole trackback and commenting engine (you essentially have to implement a notion of sequence rather than time…).
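
The widget we had built was a WordPress one, but for what it’s worth, here’s the rough shape of the “sequence rather than time” logic, sketched in Python with invented post titles: replay the items of a static feed at a rate of one a day, counted from the day this particular reader subscribed, ignoring the original publication dates.

```python
# Sketch of "sequence rather than time": serve items from a static feed at a
# rate of one per day, counted from the day a given reader subscribed. Original
# publication dates are ignored; only the order of the items matters.
from datetime import date

def items_visible_today(feed_items, subscribed_on, today=None, per_day=1):
    """feed_items is an ordered list (oldest first) from the 'static blog'."""
    today = today or date.today()
    days_elapsed = (today - subscribed_on).days
    return feed_items[: max(0, (days_elapsed + 1) * per_day)]

# e.g. a reader who subscribed three days ago sees the first four daily items:
posts = ["Week 1 intro", "First exercise", "A worked example", "Reflection", "Week 2 intro"]
print(items_visible_today(posts, subscribed_on=date(2009, 1, 19), today=date(2009, 1, 22)))
```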

(To see some related examples of “daily feeds”, see this ‘daily feeds’ bookmark list.)

So to sum up what has turned out to be far too long a post? Maybe we need to take some cues from this:

and learn to give up some of the control we strive for, and allow our “students” to participate a little more creatively?

See also: Learning Outcomes – again. It strikes me that predefining the contents of the book is like an overkill example of predefining learning outcomes written to suit the needs of a “course author”, rather than the students…?