Amazon “Edge Services” – Digital Manufacturing

When is a web service not a web service? When it’s an edge service, maybe?

Last night I was pondering the Amazon proposition, which at first glance broadly seems to break down into two parts: the retail business and the web services business.

The retail bit splits down further: physical goods and digital downloads, shipped by Amazon; and marketplace goods, where products from other retailers are listed (using Amazon ecommerce webservices, I guess) and Amazon takes a cut from each sale.

It was while I was looking at the digital downloads that the idea of “edge services” came to mind – web services that result in physical world actions (if you’re familiar with the Terminator movies, think: “Skynet manufacturing”;-) [It seems an appropriate phrase has already been coined: direct digital manufacturing (DDM) – “the process of going directly from an electronic digital representation of a part to the final product [for example] via additive manufacturing”. See also “Digital Manufacturing — Bridging Imagination and Manufacturing“.]

But first let’s set the scene: just what is Amazon up to in the digital download space?

Quite a lot, as it happens – here’s what they offer directly under the Amazon brand, for example (on the Amazon.com domain):
Amazon MP3 Downloads store – a DRM-free music downloads site;
Amazon Video on Demand Store – for movie and TV downloads;
Amazon e-books and docs – download e-books and electronic documents (“eDocs”);
the Kindle store – if you haven’t heard about it already, Kindle is Amazon’s consumer electronics play, an e-book reader with wi-fi connectivity and a direct line back to the Amazon store;
and, just this week (which is what prompted this post initially), Amazon bought up Reflexive, a company that among other things is in the online game distribution business.

And although it doesn’t quite fit into the “digital download” space, don’t forget the person-in-the-machine product Amazon Mechanical Turk, a web service for farming out piece work to real people.

But that’s not all – here, for example, are the companies that I know about that are in the Amazon Group of Companies:
IMDb – The Internet Movie Database (which apparently is now streaming movies and TV programmes for free);
Audible – audio book downloads;
Booksurge – book printing on-demand (just by-the-by, in the UK, Amazon’s Milton Keynes fulfilment centre is about to go into the PoD business (press release));
CreateSpace – PoD plus, I guess? Create print-on-demand books, DVDs and CDs, backed up by online audio and video distribution services.

(Amazon also own Shelfari, a site for users to organise and manage their own online bookshelves, and have a stake in LibraryThing, another service in the same vein, through the acquisition of second-hand, rare and out-of-print book retailer Abebooks.)

UPDATE: And they’ve just bought the Stanza e-book reader.

So here’s where it struck me: Amazon is increasingly capable of turning digital bits into physical stuff. This is good for warehousing, of course – the inventory in a PoD-driven distribution service is blanks, not one or two copies of as many long tail books as you can fit in the warehouse – though of course the actual process of PoD is possibly a huge bottleneck. And it takes Amazon from retailer to manufacturer? Or to a retailer with an infinite inventory?

If this is part of the game plan, then maybe we can expect Amazon to buy up the following companies (or companies like them) over the next few months:
MOO.com – personalised business card printing, that’s also moving into more general card printing. Upload your photos (or import them from services like flickr) and then print ’em out… photobox does something similar, though it maybe prints onto a wider range of products than MOO currently does?
Spreadshirt – design (and sell) your own printed T-shirts;
Ponoko, or Shapeways – upload your CAD plans and let their 3D printers go to work fabricating your design;
Partybeans – personalised “candy boxes”. Put your own image on a tin containing your favourite sweets:-)

(For a few more ideas, see Money on Demand: New Revenue Streams for Online Content Publishers.)

That said, Amazon built up its retail operation based on reviews and recommendations (“people who bought this, also bought that”). The recommendation engine was (is) one way of surfacing long tail products to potential purchasers. And I’m not convinced that the long tail rec engine will necessarily work on ‘user-generated’ content (although maybe it will scale across to that?!). But if you run an inventoryless operation, does it matter?! Because maybe you can resell uploaded, user-contributed content to friends and family anyway, and make several sales for the price of one upload that way?
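By way of illustration, here’s a minimal sketch of the co-occurrence counting that underpins a “people who bought this also bought that” feature – the basket data is made up, and this is just the textbook idea, not a claim about how Amazon’s engine actually works:

```python
from collections import defaultdict
from itertools import permutations

# Toy order history: each basket is a set of product IDs (invented data).
baskets = [
    {"book-a", "book-b", "dvd-x"},
    {"book-a", "book-b"},
    {"book-b", "dvd-x", "cd-y"},
]

# Count how often each pair of products is bought together.
cooccurrence = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for bought, also_bought in permutations(basket, 2):
        cooccurrence[bought][also_bought] += 1

def recommend(product, n=3):
    """Return the n products most often bought alongside `product`."""
    counts = cooccurrence[product]
    return sorted(counts, key=counts.get, reverse=True)[:n]

print(recommend("book-a"))  # ['book-b', 'dvd-x']
```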

Or maybe they’ll move into franchising POD and fab machines, and scale up manufacturing that way? One thing I keep noticing at conferences and events is that coffee increasingly comes in Starbucks-labelled dispensers (Starbucks – For Business). So maybe we’ll start seeing Amazon-branded POD and fab machines in our libraries, bookstores and catalogue shops? (Espresso, anyone? Blackwell brews up Espresso: “Blackwell is introducing an on-demand printer the Espresso Book Machine to its 60-store chain after signing an agreement with US owner On Demand Books.“) Also on the coffee front: brand your latte.

A few further thoughts:
– if Amazon is deliberately developing a digital manufacturing capacity to supplement its retail operation (and find ways of reducing stock levels of “instanced products”, i.e. particular books or particular DVDs), then is the next step moving into the design and user-contributed content business? Like photo-sharing, or video editing..? How’s the Yahoo share price today, I wonder?! ;-)
– Amazon starts (privacy restrictions allowing, and if it doesn’t already do so) to use services like Shelfari and IMDb (through its playlists) to feed its recommendation engine, and encourages the growth of consumer-curated playlists; will it have another go at pushing the Your Media Library service, or will it happily exploit verticals like IMDb and Shelfari?
– will companies running on Amazon webservices that are offering “edge services” start to become acquisition targets? After all, if they’re already running on Amazon infrastructure, that makes integration easier, right? Because the Amazon website itself is built on top of those services (and is itself actually a presentation layer for lots of loosely coupled web services already?) (And if you go into conspiracy mode, was the long term plan always to use Amazon webservices as a way of fostering external business innovation that might then be bought up and rolled up into Amazon itself?!)

There’s a book in there somewhere, I think?!

PS another riff on services at the edge, AWS satellite ground stations: Instead of building your own ground station or entering into a long-term contract, you can make use of AWS Ground Station on an as-needed, pay-as-you-go basis. You can get access to a ground station on short notice in order to handle a special event: severe weather, a natural disaster, or something more positive such as a sporting event. If you need access to a ground station on a regular basis to capture Earth observations or distribute content world-wide, you can reserve capacity ahead of time and pay even less. AWS Ground Station is a fully managed service. You don’t need to build or maintain antennas, and can focus on your work or research.

[Dec 2020] Or how about this? Amazon Monitron, "an end-to-end system that uses machine learning (ML) to detect abnormal behavior in industrial machinery, enabling you to implement predictive maintenance and reduce unplanned downtime [using] Monitron Sensors to capture vibration and temperature data". So: Amazon start to automate manufacturing analytics, use that, in part, to bootstrap the development of their own (additive) manufacturing processes, initially for producing own-brand lines (whose adoption is identified from analysis of their marketplace sales data) but then rolled out as a service and then sold as industrial equipment (built in part, RepRap-style, from their own manufacturing lines).

I wonder if Jeff Bezos (or Elon Musk…) has ever read this 1981 NASA-funded research report: A SELF-REPLICATING, GROWING LUNAR FACTORY.

Time for a TinyNS?

In a comment to Printing Out Online Course Materials With Embedded Movie Links, Alan Levine suggests: “I’d say you are covered for people lacking a QR reader device since you have the video URL in print; about all you could [do] is run through some process that generates a shorter link” [the emphasis is mine].

I suspect that URL shortening services have become increasingly popular because of the rise of the blog killing (wtf?!) microblogging services, but they’ve also been used for quite some time in magazines and newspapers. And making use of them in (printed out) course materials might also be a handy thing to do. (Assessing the risks involved in using such services is the sort of thing Brian Kelly may well have posted about somewhere; but see also towards the end of this post.)

Now anyone who knows me knows that my mobile phone is a hundred years old and won’t go anywhere near the interweb (though I can send short emails through a free SMS2email gateway I found several years ago!). So I don’t know if the browsers in smart phones can do this already… but it seems to me a really useful feature for a mobile browser would be something like the Mozilla/Firefox smart keywords.

Smart keywords are essentially bookmarks that are invoked by typing a keyword in the browser address bar and hitting return – the browser will then take you to the desired URL. Think of it like a URL “keyboard shortcut”…

One really nice feature of smart keywords is that they can handle an argument… For example, here’s a smart keyword I have defined in my browser (Flock, which is built from the Firefox codebase).

Given a TinyURL (such as http://tinyurl.com/6nf2z) all I need to type into my browser address bar is t 6nf2z to go there.
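The mechanics are simple enough to sketch: a smart keyword is essentially a URL template with a placeholder that the argument gets substituted into. Here’s a minimal Python sketch of that expansion step (the keyword table is made up for illustration):

```python
# Each smart keyword maps to a URL template; %s is replaced by the argument,
# mirroring how Firefox/Flock smart keywords work.
KEYWORDS = {
    "t": "http://tinyurl.com/%s",
    "g": "http://www.google.com/search?q=%s",
}

def expand(address_bar_input):
    """Turn 't 6nf2z' into 'http://tinyurl.com/6nf2z'."""
    keyword, _, arg = address_bar_input.partition(" ")
    template = KEYWORDS.get(keyword)
    return template % arg if template else address_bar_input

print(expand("t 6nf2z"))  # http://tinyurl.com/6nf2z
```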

Which would seem like a sensible thing to be able to do in a browser on a mobile device… (maybe you already can? But how many people know how to do it, if so?)

(NB To create a TinyURL for the page you’re currently viewing at the click of a button, it’s easiest to use something like the TinyURL bookmarklet.)

Now one of the problems with URL shortening services is that you become reliant on the short URL provider to decode the shortened URL and redirect you to the intended “full length” URL. The relationship between the actual URL and the shortened URL is arbitrary, which is where the problem lies – the shortened URL is not a “lossless compressed” version of the original URL, it’s effectively the assignment of a random code that can be used to look up the full URL in a database owned by the short URL service provider. Cf. the scheme used by services like delicious, which generate an “MD5 hash” of a URL – a key that anyone can recompute from the original URL, and that can (usually!) be looked up to recover that URL (see Pivotal Moments… (pivotwitter?!) for links to Yahoo pipes that decode both TinyURLs and delicious URL encodings).
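To make the distinction concrete, here’s a rough sketch of both mechanisms – asking the shortening service for its redirect target, versus recomputing a delicious-style MD5 key from the full URL. It assumes the third-party requests library, and the example URLs are just illustrative:

```python
import hashlib
import requests

def resolve_short_url(short_url):
    """Ask the shortening service where a short URL points, without following it.
    TinyURL (like most shorteners) answers with an HTTP 301 and a Location header."""
    resp = requests.head(short_url, allow_redirects=False)
    return resp.headers.get("Location")

def delicious_hash(url):
    """delicious-style identifier: an MD5 hash *of* the URL. Note it is one-way -
    you can look a URL up by its hash, but you cannot decode the hash itself."""
    return hashlib.md5(url.encode("utf-8")).hexdigest()

print(resolve_short_url("http://tinyurl.com/6nf2z"))
print(delicious_hash("http://ouseful.info/"))
```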

So this got me thinking – what would a “TinyNS” resolution service look like, sitting one level above DNS resolution, the domain name resolution service that takes you from a human-readable domain name (e.g. http://www.open.ac.uk) to an IP (Internet Protocol) address (something like 194.66.152.28)?

Could (should) we set up trusted parties to mirror the mapping of shortened URL codes from the different URL shortening services (TinyURL, bit.ly, is.gd and so on) and provide distributed resolution of these short form URLs, just in case the original services go down?
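A client for such a distributed resolver might look something like the minimal sketch below – the mirror addresses are entirely hypothetical (no such trusted parties exist), and again it assumes the requests library:

```python
import requests

# Hypothetical mirrors of the TinyURL code-to-URL mapping; the mirror URLs
# are invented for illustration.
RESOLVERS = [
    "http://tinyurl.com/%s",
    "http://mirror-one.example.org/tinyurl/%s",
    "http://mirror-two.example.org/tinyurl/%s",
]

def resolve(code):
    """Try each resolver in turn until one can expand the short code."""
    for template in RESOLVERS:
        try:
            resp = requests.head(template % code, allow_redirects=False, timeout=5)
            location = resp.headers.get("Location")
            if location:
                return location
        except requests.RequestException:
            continue  # service down - fall through to the next mirror
    return None
```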

Innovation in Online Higher Education

In an article in the Guardian a couple of days ago – UK universities should take online lead – it was reported that “UK universities should push to become world leaders in online higher education”, with universities secretary John Denham “likely to call” for the development of a “global Open University in the UK”. (Can you imagine how well that call went down here?;-)

Anyway, the article gave me a heads-up about the imminent publication of a set of reports to feed into a Debate on the Future of Higher Education being run out of the Department for Innovation, Universities and Skills.

The reports cover

The “World leader in elearning” report (properly titled “On-line Innovation in Higher Education“), by Professor Sir Ron Cooke, is the only one I’ve had a chance to skim through so far, so here are some of the highlights from it for me…

HE and the research funding bodies should continue to support and promote a world class ICT infrastructure and do more to encourage the innovative exploitation of this infrastructure through … a new approach to virtual education based on a corpus of open learning content

Agreed – but just making more content available under an open license won’t necessarily mean that anyone will use this stuff… free content works when there’s an ecosystem around it capable of consuming that content, which means confusion about rights, personal attitudes towards reuse of third party material, and a way of delivering and consuming that material all need to be worked on.

The OERs “[need] to be supported by national centres of excellence to provide quality control, essential updating, skills training, and research and development in educational technology, e-pedagogy and educational psychology”.

“National Centres of Excellence”? Hmmm… I’d rather that networked communities had a chance of taking this role on. Another centre of excellence is another place to not read the reports from… Distributed (or Disaggregated) Centres of Excellence I could maybe live with… The distributed/disaggregated model is where the quality – and resilience – comes in. The noise the distributed centre would have to cope with because it is distributed, and because its “nodes” are subject to different local constraints, means that the good will out. Another centralised enclave (black hole, money sink, /dev/null) is just another silo…

“[R]evitalised investment into e-infrastructures” – JISC wants more money…

[D]evelopment of institutional information strategies: HEIs should be encouraged and supported to develop integrated information strategies against their individual missions, which should include a more visionary and innovative use of ICT in management and administration

I think there’s a lot of valuable data locked up in HEIs, and not just research data; data about achievement, intent and successful learning pathways, for example. Google has just announced a service where it can track flu trends, which is “just the first launch in what we hope will be several public service applications of Google Trends in the future”. Google extracts value from search data and delivers services built on mining that data. So in a related vein, I’ve been thinking for a bit now about how HEIs should be helping alumni extract ongoing value from their relationship with their university, rather than just giving them 3 years of content, then tapping them every so often with a request to “donate us a fiver, guv?” or “remember us? We made you who you are… So don’t forget us in your will”. (I once had a chat with some university fundraisers who try to pull in bequests… vultures, all of ’em ;-)

“It is however essential that central expenditure on ICT infrastructure (both at the national level through JISC and within institutions in the form of ICT services and libraries) are maintained.” – JISC needs more cash. etc etc. I won’t mention any more of these – needless to say, similar statements appear every page or two… ;-)

“The education and research sectors are not short of strategies but a visionary thrust across the UK is lacking” – that’s because people like to do their own thing, in their own place, in their own way. And retain “ownership” of their ideas. And they aren’t lazy enough…;-) I’d like to see people trying to mash-up and lash-up the projects that are already out there…

the library as an institutional strategic player is often overlooked because the changes and new capabilities in library services over the past 15 years are not sufficiently recognised

Academic Teaching Library 2.0 = Teaching University 2.0 – discuss… The librarians need to get over their hang-ups about information (the networked, free text search environment is different – get over it, move on, and make the most of it…;-) and the academics need to get their heads round the fact that the content that was hard to access even 20 years ago is now googleable; academics are no longer the only gateways to esoteric academic content – get over it, move on, and make the most of it…;-)

Growth in UK HE can come from professional development, adult learning etc. but might be critically dependent on providing attractive educational offerings to this international market.

A different model would be to encourage some HEIs to make virtual education offerings aimed at the largely untapped market of national and overseas students who cannot find (or do not feel comfortable finding) places in traditional universities. This approach can exploit open educational resources but it would be naïve to expect all HEIs to contribute open education resources if only a few exploit the potential offered. All HEIs should be enabled to provide virtual education but a few exemplar universities should be encouraged (the OU is an obvious candidate).

Because growth in business is good, right? (err….) and HE is a business, right? (err….) And is that a recommendation that the OU become a global online education provider?

A step change is required. To exploit ICT it follows that UK HEIs must be flexible, innovative and imaginative.

Flexible… innovative… imaginative…

ICT has greatly increased and simplified access by students to learning materials on the Internet. Where, as is nearly universal in HE, this is coupled with a Virtual Learning Environment to manage the learning process and to provide access to quality materials there has been significant advances in distance and flexible learning.

But there is reason to believe this ready access to content is not matched by training in the traditional skills of finding and using information and in “learning how to learn” in a technology, information and network-rich world. This is reducing the level of scholarship (e.g. the increase in plagiarism, and lack of critical judgement in assessing the quality of online material). The Google and Facebook generation are at ease with the Internet and the world wide web, but they do not use it well: they search shallowly and are easily content with their “finds”. It is also the case that many staff are not well skilled in using the Internet, are pushed beyond their comfort zones and do not fully exploit the potential of Virtual Learning Environments; and they are often not able to impart new skills to students.

The use of Web 2.0 technologies is greatly improving the student learning experience and many HEIs are enhancing their teaching practices as a result. That a large majority of young people use online tools and environments to support social interaction and their own learning represents an important context for thinking about new models of delivery.

It’s all very well talking about networked learners, but how does the traditional teacher and mode of delivery and assessment fit into that world? I’m starting to think the educator role might well be fulfilled by the educator as “go to person” for a topic, but what we’re trying to achieve with assessment still confuses the hell out of me…

Open learning content has already proved popular…

A greater focus is needed on understanding how such content can be effectively used. Necessary academic skills and the associated online tutoring and support skills need to be fostered in exploiting open learning content to add value to the higher education experience. It is taken for granted in the research process that one builds on the work of others; the same culture can usefully be encouraged in creating learning materials.

Maybe if the materials were co-created, they would be more use? We’re already starting to see people reusing slides from presentations that people they know and converse with (either actively, by chatting, or passively, by ‘just’ following) have posted to Slideshare. It’d be interesting to know just how the rate of content reuse on Slideshare compares with the rate of reuse in the many learning object repositories? Or how image reuse from flickr compares with reuse from learning object repositories? Or how video reuse from Youtube compares with reuse from learning object repositories? Or how resource reuse from tweeting a link or sharing a bookmark compares with reuse from learning object repositories?

…”further research”… yawn… (and b******s;-) More playing with, certainly ;-) Question: do you need a “research question” if you or your students have an itch you can scratch…? We need a more playful attitude, not more research… What was that catchphrase again? “Flexible… innovative… imaginative…”

A comprehensive national resource of freely available open learning content should be established to provide an “infrastructure” for broadly based virtual education provision across the community. This needs to be curated and organised, based on common standards, to ensure coherence, comprehensive coverage and high quality.

Yay – another repository… lots of standards… maybe a bit of SOAP? Sigh…

There is also growing pressure for student data transfer between institutions across the whole educational system, requiring compliance with data specifications and the need for interoperable business systems.

HEIs should consider how to exploit strategically the world class ICT infrastructure they enjoy, particularly by taking an holistic approach to information management and considering how to use ICT more effectively in the management of their institution and in outreach and employer engagement activities.

There’s huge amount of work that needs doing there, and there may even be some interesting business opportunities. But I’m not allowed to talk about that…

ICT is also an important component in an institution’s outreach and business and community engagement activities. This is not appreciated by many HEIs. Small and medium enterprise (SME) managers need good ICT resources to help them deliver their learning needs. Online resources and e-learning are massively beneficial to work based learning. Too little is being done to exploit ICT in HE in this area although progress is being made.

I’ve started trying to argue – based on some of the traffic coming into my email inbox – that OUseful.info actually serves a useful purpose in IT skills development in the “IT consultancy” sector. OUseful.info can be a bit of a hard read at times, but I’m not necessarily trying to show SMEs how to solve their problems – this blog is my notebook, right? – though at times I do try to reach the people who go into SMEs, and hopefully give them a few ideas that they can make (re)use of in particular business contexts.

Okay – that was a bit longer and a bit more rambling than I’d anticipated… if you want to read the report, it’s at On-line Innovation in Higher Education. There’s also a discussion blog available at The future of Higher Education: On-Line Higher Education Learning.

Just by the by, here are a couple more reports I haven’t linked to before on related matters:

It’s just a shame there’s no time to read any of this stuff ;-) Far easier to participate in the debate in a conversational way, either by commenting on, or tracking back to, The future of Higher Education: On-Line Higher Education Learning.

PS here’s another report, just in… Macarthur Study: “Living and Learning with New Media: Summary of Findings from the Digital Youth Project”

On Writing “Learning Content” in the Cloud

A couple of weeks ago, I posted about an experiment looking at the “mass authoring” of a book on Processing (2.0 1.0, and a Huge Difference in Style).

Darrel Ince, who’s running the experiment, offered to post a public challenge for me to produce 40,000 words as an adjunct to the book using my own approach… I declined, partly because I’m not sure that what I really had in mind would work to produce 40,000 words of “book adjunct”, and partly because I don’t know what my approach would be (and I don’t have the time to invest in finding out at the moment, more’s the pity :-( ).

Anyway, here’s some of my further thinking on the whole “mass authoring experiment”…

Firstly, three major things came to my mind as ‘issues’ with the process originally suggested for the ‘mass authoring experiment’ – two related to the technology choice, the third to the production model.

To use an application such as Google docs, or even a wiki, to write a book in a sense respects the structure of the book. Separate documents represent separate chapters, or sections, and multiple authors can have access to the document. If “version control” is required – that is, if separate discrete drafts are required – then separate documents can be spawned for each draft. Alternatively, if the process is one of continual refinement, each chapter can evolve in a single document, potentially authored, edited, critically read and commented on by several people.

There are quite a few books out there that have been written by one or two people round a blog, but there the intent was to create posts that acted as tasters or trial balloons for content and get feedback from the community relating to it. John Battelle’s book on search (Dear Blog: Today I Worked on My Book), and the Groundswell book (7 ways the Web makes writing a book better & faster) are prime examples of this. “The Googlization of Everything” is another, and is in progress at the moment (Hi. Welcome to my book.).

The Google Hacks book I contributed a single hack to (;-) used separate Google docs documents for each hack, as described in Writing a Book in Google Docs. (In part, the use of Google docs as the authoring environment was a ‘medium is the message’ hack!) There the motivation was to author a ‘trad’ book in a new environment – and it seemed to work okay.

In each case, it’s worth remembering that the motivation of the authors was to write a book book, as with the mass authoring experiment, so in that sense it will provide another data point to consider in the “new ways of authoring books” landscape.

The second technology choice issue was the medium chosen for doing the code development. In a book book, intended for print, you necessarily have to refer the reader to a computer in order for them to run the code – offline or online doesn’t really come into it. But if you are writing for online delivery, then there is the option of embedding interactive code development activities within the text, using something like Obsessing, for example. Potentially, Obsessing, and even the processing.js library, might be pretty unstable, which would provide for an unsatisfactory learning experience for a novice working through the materials (“is my code broken or is the environment broken?”), but with use and a community around it, either the original developer might be motivated to support the libraries, or someone else might be minded to provide maintenance and ongoing development and support an engaged and contributory audience. After all, having a community finding bugs and testing fixes for you is one of the reasons people put time into their open code.

The other major issue I had was with respect to the structuring and organising of the “book”. If you want to play to network strengths in recruiting authors, critical readers, editors and testers, I’m not sure that providing a comprehensively broken down book structure is necessarily the best model? At its worst, this is just farming out word creation to “word monkeys” who need to write up each structural element until they hit the necessary word count (that may be a little harsh, but you maybe get the gist of what I’m trying to say?). The creativity that comes from identifying what needs to go into a particular section, and how it relates to other sections, is, in the worst case, denied to the author.

In contrast, if you provide a book stub wiki page as a negotiation environment and then let “the community” create further stub pages identifying possible book topics, then the ‘outline’ of the book – or the topics that people feel are important – would have had more play – and more sense of ownership would belong with the community.

A more ‘natural’ way of using the community, to my mind, would be to explore the issue of a ‘distributed uncourse’ in a little more detail, and see how a structure could emerge from a community of bloggers cross referencing each other through posts, comments and trackbacks – Jim Groom’s UMW edu-publishing platform or D’Arcy Norman’s UCalgary Blogs platform are examples of what a hacked-off-the-shelf solution might look like to support this “within” an institution?

The important thing is that the communities arise from discovering a shared purpose. Rather than being given a set of explicit tasks to do, the community identifies what needs doing and then does it. Scott Leslie recently considered another dimension to this problem, in considering how “getting a community off the shelf” is a non-starter: Planning to Share versus Just Sharing.

It strikes me that the “mass authoring” experiment is trying to source and allocate resource to perform a set of pre-defined tasks, rather than allowing a community to grow organically through personal engagement and identify meaningful tasks that need to be completed within that community – that is, allowing the tasks to be identified on an ‘as required’ basis, or as itches that occur that come to need scratching?

The output of an emergent community effort would potentially be non-linear and would maybe require new ways of being read, or new ways of having the structure exposed to the reader? I tried to explore some of these issues as they came to mind when I was writing the Digital Worlds uncourse blog:


(though it probably doesn’t make a lot of sense without me talking to it!)

As part of the challenge, I was advised that I would need about 16 authors. I’m really intrigued about how this number was arrived at. On the basis of productivity (circa 2,500 words per person, assuming a 40,000 word deliverable?). When I was doing the uncourse posts, my gut feeling was that an engaging 500-800 word blog post might get say a handful of 50-200 word comments back, and possibly even a link back from another blog post. But what does that mean in terms of word count and deliverables?

Another issue I had with taking the ‘recruit from cold’ approach, were I to take up the challenge, is that there is potentially already a community around Resig’s processing.js library, the Obsessing interactive editor for it, and Processing itself.

For example, there are plenty of resources already out in the wild to support Processing (e.g. at the Processing.org website) that might just need some scaffolding or navigation wrapped around them in order to make a “processing course” (copyright and license restrictions allowing, of course…)? So why not use them? (cf. Am I missing the point on open educational resources? and Content Is Infrastructure.) Of course, if the aim was to manufacture a “trad book” according to a prespecified design, this approach may not be appropriate, compared to seeing the structure of the “unbook” arise as engagement in an emergent and ongoing conversation – the next chapter is the next post I read or write on the topic.

From my own experience of Digital Worlds, I wrote a post or two a day for maybe 12 weeks, and then the flow was broken. That required maybe 2-4 hours a day commitment, learning about the topics, tinkering with ideas, seeing what other conversations were going on. It was time consuming, and the community I was engaging with (in terms of people commenting and emailing me) was quite small. Playing a full role in a larger community is more time consuming still, which is maybe one downside of managing an effective community process?

The idea behind the experiment – of looking for new ways to author content – is a good one, but for me the bigger question is to find new ways of reading and navigating content that already exists, or that might emerge through conversation. If we assume the content is out there, how can we aggregate it into sensible forms, or scaffold it so that it is structured in an appropriate way for students studying a particular “course”? If the content is produced through conversation, then does it make sense to talk about creating a content artefact that can be picked up and reused? Or is the learning achieved through the conversation, and should instructor interventions, in the form of resource discovery and conducting behaviour, maybe replace the “old” idea of course authoring?

In terms of delivering content that is authored in a distributed fashion on a platform such as the UMW WPMU platform, I am still hopeful that a “daily feed” widget that produces one or more items per day from a “static blog” according to a daily schedule, starting on the day the reader subscribes to the blog, will be one way of providing pacing for linearised, feed-powered content. (I need to post the WP widget we had built to do this, along with a few more thoughts about a linear feed-powered publishing system built to service it.)

For example, if you define a static feed – maybe one that replays a blog conversation – then maybe this serves as an artefact that can be reused by other people down the line, and maybe you can post your own blog posts in “relative time”. I have lots of half-formed ideas about a platform that could support this, e.g. on WPMU, but it requires a reengineering (I think), or at least a reimagining, of the whole trackback and commenting engine (you essentially have to implement a notion of sequence rather than time…).

(To see some related examples of “daily feeds”, see this ‘daily feeds’ bookmark list.)
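To make the “relative time” idea concrete, here’s a minimal sketch of the scheduling logic such a daily feed widget might use – the items and dates are invented, and this is not the WP widget mentioned above:

```python
from datetime import date

# A "static blog" as a list of items tagged with a day offset from the start
# of the conversation, rather than a wall-clock date. Titles are invented.
STATIC_FEED = [
    (0, "Welcome post"),
    (1, "First exercise"),
    (3, "Follow-up discussion"),
    (7, "Week one round-up"),
]

def items_for(subscriber_start, today=None):
    """Replay the feed in 'relative time': each subscriber sees the item
    scheduled for day N exactly N days after *they* subscribed."""
    today = today or date.today()
    elapsed = (today - subscriber_start).days
    return [title for offset, title in STATIC_FEED if offset == elapsed]

# A subscriber who signed up three days ago gets the day-3 item today.
print(items_for(date(2008, 11, 10), today=date(2008, 11, 13)))
```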

So to sum up what has turned out to be far too long a post? Maybe we need to take some cues from this:

and learn to give up some of the control we strive for, and allow our “students” to participate a little more creatively?

See also: Learning Outcomes – again. It strikes me that predefining the contents of the book is like an overkill example of predefining learning outcomes written to suit the needs of a “course author”, rather than the students…?

Open Content Anecdotes

Reading Open Content is So, Like, Yesterday just now, the following bits jumped out at me:

Sometimes– maybe even most of the time– what I find myself needing is something as simple as a reading list, a single activity idea, a unit for enrichment. At those times, that often-disparaged content is pure gold. There’s a place for that lighter, shorter, smaller content… one place among many.

I absolutely agree that content is just one piece of the open education mosaic that is worth a lot less on its own than in concert with practices, context, artifacts of process, and actually– well, you know– teaching. Opening content up isn’t the sexiest activity. And there ain’t nothin’ Edupunk about it. But I would argue that in one way if it’s not the most important, it’s still to be ranked first among equals. Not just for reasons outlined above, but because for the most part educators have to create and re-create anew the learning context in their own environment. Artifacts from the processes of others– the context made visible– are powerful and useful additions that can invigorate one’s own practice, but I still have to create that context for myself, regardless of whether it is shared by others or not. Content, however, can be directly integrated and used as part of that necessary process. When all is said and done, neither content nor “context” stand on their own particularly well.

For a long time now, I’ve been confused about what ‘remixing’ and ‘reusing’ open educational content means in practical terms that will see widespread, hockey stick growth in the use of such material.

So here’s where I’m at… (err, maybe…?!)

Open educational content at the course level: I struggle to see the widespread reuse of courses as such – that is, one institution delivering another’s course. If someone from another institution wants to reuse our course materials (pedagogy built in!), we license it to them, for a fee. And maybe we also run the assessment, or validate it. It might be that some institutions direct their students to a pre-existing, open ed course produced by another institution where the former institution doesn’t offer the course; maybe several institutions will hook up together around specialist open courses so they can offer them to small numbers of their own students in a larger, distributed cohort, and as such gain some mutual benefit from bringing the cohort up to a size where it works as a community, or where it becomes financially viable to provide an instructor to lead students through the material.

For individuals working through a course on their own, it’s worth bearing in mind that most OERs released by “trad” HEIs are not designed as distance education materials, created with the explicit intention that they are studied by an individual at a remote location. The distance education materials we create at the OU often follow a “tutorial-in-print” model, with built-in pacing and “pedagogical scaffolding” in the form of exercises and self-assessment questions. Expecting widespread consumption of complete courses by individuals is, I think, unlikely. As with a distributed HEI cohort model, it may be that groups of individuals will come together around a complete course, and maybe even collectively recruit a “tutor”, but again, I think this could only ever be a niche play.

The next level of granularity down is what would probably have been termed a “learning object” not very long ago, and is probably called something like an ‘element’ or ‘item’ in a ‘learning design’, but which I shall call instead a teaching or learning anecdote (i.e. a TLA ;-); be it an exercise, a story, an explanation or an activity, it’s a narrative something that you can steal, reuse and repurpose in your own teaching or learning practice. And the open licensing means that you know you can reuse it in a fair way. You provide the context, and possibly some customisation, but the original narrative came from someone else.

And at the bottom is the media asset – an image, video, quote, or interactive that you can use in your own works, again in a fair way, without having to worry about rights clearance. It’s just stuff that you can use. (Hmmm, I wonder: if you think about a course as a graph, a TLA is a fragment of that graph (a set of nodes connected by edges), and a node (and maybe even an edge?) is an asset?)
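To make the graph analogy concrete, here’s a toy sketch – the course structure and labels are invented, and a real implementation would presumably use a proper graph library:

```python
# A course as a graph: nodes are assets, edges are the narrative links
# between them, and a TLA is a connected fragment of that graph.
course = {
    "video-intro": ["exercise-1"],
    "exercise-1":  ["reading-1", "quiz-1"],
    "reading-1":   ["quiz-1"],
    "quiz-1":      [],
}

def extract_tla(graph, nodes):
    """Pull out a teaching/learning anecdote: the subgraph induced by `nodes`."""
    keep = set(nodes)
    return {n: [m for m in graph[n] if m in keep] for n in graph if n in keep}

# Reuse just the exercise-plus-quiz fragment in another context.
print(extract_tla(course, ["exercise-1", "quiz-1"]))
# {'exercise-1': ['quiz-1'], 'quiz-1': []}
```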

The finer the granularity, the more likely it is that something can be reused. To reuse a whole course maybe requires that I invest hours of time on that single resource. To reuse a “teaching anecdote”, exercise or activity takes minutes. To drop a video or an image into my teaching means I can use it for a few seconds to illustrate a point, and then move on.

As educators, we like to put our own spin on the things we teach; as learners viewed from a constructivist or constructionist stance, we bring our own personal context to what we are learning about. The commitment required to teach, or follow, a whole course is a significant one. The risk associated with investing a large amount of attention in that resource is not trivial. But reusing an image, or quoting someone else’s trick or tip, that’s low risk… If it doesn’t work out, so what?

For widespread reuse of the smaller open ed fragments, then, we need to be able to find them quickly and easily. A major benefit of reuse is that a reused component allows you to construct your story quicker, because you can find ready-made pieces to drop into it. But if the pieces are hard to find, then it becomes easier to create them yourself. The bargain is something like this:

if (quality of resource × fit with my story / time spent looking for that resource) < (quality of resource × fit with my story / time spent creating that resource), then I’m probably better off creating it myself…

(The “fit with my story” is the extent to which the resource moves my teaching or learning on in the direction I want it to go…)
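Written out as a toy function (with the inequality the right way round – you create when the value-per-hour of making beats the value-per-hour of searching), the bargain might look like this; all the scores are invented:

```python
def better_to_create(quality_found, fit_found, search_time,
                     quality_made, fit_made, create_time):
    """The reuse bargain: compare value-per-hour of finding a resource
    against value-per-hour of making it yourself."""
    value_of_searching = (quality_found * fit_found) / search_time
    value_of_creating = (quality_made * fit_made) / create_time
    return value_of_creating > value_of_searching

# A mediocre fit found after a long search loses to a quick home-made diagram.
print(better_to_create(quality_found=8, fit_found=0.5, search_time=2.0,
                       quality_made=6, fit_made=0.9, create_time=1.0))  # True
```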

And this is possibly where the ‘we need more‘ OERs argument comes in; we need to populate something – probably a search engine – with enough content so that when I make my poorly formed query, something reasonable comes back; and even if the results don’t turn up the goods with my first query, the ones that are returned should give me the clues – and the hope – that I will be able to find what I need with a refinement or two of my search query.

I’m not sure if there is a “flickr for diagrams” yet (other than flickr itself, of course) – maybe something along the lines of O’Reilly’s image search – but I could see that being a useful tool. Similarly, a deep search tool into the slides on Slideshare (or at least the ability to easily pull out single slides from appropriately licensed presentations) could be handy.

Now it might be that any individual asset is only reused once or twice; and that any individual TLA is only used once or twice; and that any given course is only used once or twice; but there will be more assets than TLAs (because resources can be disaggregated from TLAs), and more TLAs than courses (because TLAs can be disaggregated from courses), so the “volume reuse” of assets summed over all assets might well generate a hockey stick growth curve?

In terms of attention – who knows? If a course consumes 100x as much attention as a TLA, and a TLA consumes 10x as much attention as an asset, maybe it will be the course level open content that gets the hockey stick in terms of “attention consumption”?

PS being able to unlock things at the “asset” level is one of the reasons why I don’t much like it when materials are released just as PDFs. For example, if a PDF is released as CC non-derivative, can I take a screenshot of a diagram contained within it and just reuse that? Or the working through of a particular mathematical proof?

PPS see also “Misconceptions About Reuse”.

Visual Controls for Spreadsheets

Some time ago, Paul Walk remarked that “Yahoo Pipes [might] do for web development what the spreadsheet did for non-web development before it (Microsoft Excel has been described as the most widely used Integrated Development Environment)”. After seeing how Google spreadsheets could be used as part of quick online mashup at the recent Mashed Library, Paul revised this observation along the lines of “the online spreadsheet [might] do for web development what the spreadsheet did for non-web development before it”.

In An Ad Hoc Youtube Playlist Player Gadget, Via Google Spreadsheets, I showed how a Google gadget can be used as a container for arbitrary Javascript code that can be used to process the contents of one or more Google spreadsheet cells, which, combined with the ability to pull in XML content from a remote location into a spreadsheet in real time, suggests that there is a lot more life in the spreadsheet than one might previously have thought.

So in a spirit of “what if” I wonder whether there is an opportunity for spreadsheets to take the next step towards being a development platform for the web in the following ways:

  • by offering support for a visual controls API (cf. the Google Visualization API) which would provide a set of visual controls – sliders, calendar widgets and so on – that could directly change the state of a spreadsheet cell. I don’t know if the Google spreadsheet gadgets have helper functions that already support the ability to write, or change, cell values, but the Google GData spreadsheet API does support updates (e.g. updating cells and Updating rows). Just like the visualization API lets you visually chart the contents of a set of cells, a visual controls API could provide visual interfaces for writing and updating cell values. So if anyone from the Lazyweb is listening, any chance of a trivial demo showing how to use something like a YUI slider widget within a Google spreadsheet gadget to update a spreadsheet cell? Or maybe a video type that would take the URL of a media file, or the splash page URL for a video on something like Youtube, and automatically create a player/popup player for the video if you select it? Or similarly, an audio player for an MP3 file? Or a slideshow widget for a set of image file cells?
  • “Rich typed” cells; for example, Pamela Fox showed how to use a map gadget in Google spreadsheets to geocode some spreadsheet location cells (Geocoding with Google Spreadsheets (and Gadgets)), so how would it be if we could define a location type cell which actually had a couple of other cells associated with it in “another dimension” that were automatically populated with latitude and longitude values, based on a geocoding of the location entered into a “location type” cell?
  • “real cell relative” addressing; I don’t really know much about spreadsheets, so I don’t know whether such a facility already exists, but is it possible to “really relatively reference” one cell from another? For example, could I create a formula along the lines of ={-1,-1}*{-1,0} that would take the cell “left one and up one” ({-1,-1}) and multiply it by the contents of the cell “left one” ({-1,0})? So e.g. if I paste the formula into C3, it performs the calculation B2*B3? (There’s a minimal sketch of this idea after the list.)
  • Rich typed cells could go further, and automatically pop up an appropriate visual control if the cell is typed that way (e.g. a “slider controlled value”); a date type cell might launch a calendar control when you try to edit it, for example?
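Here’s that sketch – a toy resolver that rewrites the invented {col,row} offset syntax into ordinary A1 references; as far as I know, no spreadsheet actually supports this syntax:

```python
import re

def resolve_relative(formula, cell):
    """Rewrite a 'real relative' formula like '={-1,-1}*{-1,0}' into ordinary
    A1 references, relative to the cell it is pasted into."""
    col = ord(cell[0].upper()) - ord("A")   # 'C' -> 2 (single-letter columns only)
    row = int(cell[1:])                     # 'C3' -> row 3

    def to_a1(match):
        dc, dr = int(match.group(1)), int(match.group(2))
        return chr(ord("A") + col + dc) + str(row + dr)

    return re.sub(r"\{(-?\d+),\s*(-?\d+)\}", to_a1, formula)

print(resolve_relative("={-1,-1}*{-1,0}", "C3"))  # =B2*B3
```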

PS for my thoughts on reinventing email, see Sending Wikimail Messages in Gmail ;-)

Immortalising Indirection

So it seems that Downes is (rightly) griping again;-) this time against “the whims of corporate software producers (that’s … why I use real links in th[e OLDaily] newsletter, and not proxy links such as Feedburner – people using Feedburner may want to reflect on what happens to their web footprint should the service disappear or start charging)”.

I’ve been thinking about this quite a bit lately, although more in the context of the way I use TinyURLs, and other URL shortening services, and about what I’d do if they ever went down…

And here’s what I came up with: if anyone hits the OUseful.info blog (for example) via a TinyURL or feedburner redirect, I’m guessing that the server will see something to that effect in the request headers? If that is the case, then just like WordPress will add trackbacks to my posts when other people link to them, it would be handy if it would also keep a copy of the TinyURLs etc. that linked there. Then at least I’d be able to do a search on those TinyURLs to look for people linking to my pages that way?
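As a thought experiment, the logging side might look like the minimal WSGI sketch below. One caveat: with a plain 301 redirect the browser usually reports the page the link appeared on, not the shortener, as the Referer – so this might only ever catch preview or interstitial pages. The shortener domain list is illustrative:

```python
# Watch incoming requests for Referer headers from known URL-shortening
# domains and stash them, trackback-style.
SHORTENER_DOMAINS = ("tinyurl.com", "bit.ly", "is.gd")
seen_short_links = set()

def app(environ, start_response):
    referer = environ.get("HTTP_REFERER", "")
    if any(domain in referer for domain in SHORTENER_DOMAINS):
        seen_short_links.add(referer)  # keep it, like a trackback
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```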

Just in passing, I note that the Twitter search engine has a facility to preview shortened URLs (at least, URLs shortened with TinyURL):

I wonder whether they are keeping a directory of these, just in case TinyURL were to disappear?

Decoding Patents – An Appropriate Context for Teaching About Technology?

A couple of nights ago, as I was having a rummage around the European Patent Office website, looking up patents by company to see what the likes of Amazon, Google, Yahoo and, err, Technorati have been filing recently, it struck me that IT and engineering courses might be able to use patents in a similar way to the way that Business Schools use Case Studies as a teaching tool (e.g. Harvard Business Online: Undergraduate Course Materials)?

This approach would seem to offer several interesting benefits:

  • the language used in patents is opaque – so patents can be used to develop reading skills;
  • the ideas expressed are likely to come from a commercial research context; with universities increasingly tasked with taking technology transfer more seriously, looking at patents situates theoretical understanding in an application area, as well as providing the added advantage of transferring knowledge into the ivory tower, too, and maybe influencing curriculum development as educators try to keep up with industrial inventions;-)
  • many patents locate an invention within both a historical context and a systemic context;
  • scientific and mathematical principles can be used to model or explore ideas expressed in a patent in more detail, and in the “situated” context of an expression or implementation of the ideas described within the patent.

As an example of how patents might be reviewed in an uncourse blog context, see one of my favourite blogs, SEO by the SEA, in which Bill Slawski regularly decodes patents in the web search area.

To see whether there may be any mileage in it, I’m going to keep an occasional eye on patents in the web area over the next month or two, and see what sort of response they provoke from me. To make life easier, I’ve set up a pipe to scrape the search results for patents issued by company, so I can now easily subscribe to a feed of new patents issued by Amazon, or Yahoo, for example.

You can find the pipe here: European Patent Office Search by company pipe.
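Consuming the pipe’s output programmatically is then just a couple of lines with feedparser – the pipe ID and the company parameter name below are placeholders, to be read off the pipe’s “Get as RSS” link:

```python
import feedparser

# Subscribe to the pipe's RSS output; substitute the real pipe ID.
PIPE_RSS = ("http://pipes.yahoo.com/pipes/pipe.run"
            "?_id=YOUR_PIPE_ID&_render=rss&company=Amazon")

feed = feedparser.parse(PIPE_RSS)
for entry in feed.entries:
    print(entry.title, entry.link)
```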

I’ve also put several feeds into an OPML file on Grazr (Web2.0 new patents), and will maybe look again at the styling of my OPML dashboard so I can use that as a display surface (e.g. Web 2.0 patents dashboard).

So What Else Are You Doing At The Moment?

I was intending not to write any more posts this year, but this post struck a nerve – What’s Competing for Internet Users’ Attention? (via Stephen’s Lighthouse) – so here’s a quick “note to self” about something to think about during my holiday dog walks…:

What else are our students doing whilst “studying” their course materials?

Here’s what some US respondents declared when surveyed about what else they were doing whilst on the internet:

A potentially more interesting thing to think about though is a variation of this:

In particular, the question: what other media do you consume whilst you are using OU course materials?

And then – thinking on from this – do we really think – really – that contemporary course materials should be like books? Even text books? Even tutorial-in-print, SAQ filled books?

Newspapers are realising that newsprint in a broadsheet format is not necessarily the best way to physically package content any more (and I have a gut feeling that the physical packaging does have some influence on the structure, form and layout of the content you deliver). Tabloid and Berliner formats now dominate the physical aspect of newspaper production, and online plays are increasingly important.

OU study guides tend to come either as large format books or A4 soft cover bindings with large internal margins for note taking. Now this might be optimal for study, but the style is one that was adopted in part because of financial concerns, as well as pedagogical ones, several decades ago.

“what arrived in the post today” – Johnson Cameraface: http://flickr.com/photos/54459164@N00/

As far as I know, the OU don’t yet do study guides as print-on-demand editions (at least, not as a rule, except when we get students to print out PDF copies of course materials;-). Print runs are large, batch job affairs that create stock that needs warehousing for several years of course delivery.

So I wonder – if we took the decision today about how to deliver print material, would the ongoing evolution of the trad-format be what we’d suggest? Or do we need an extinction event? The above image shows an example of a recent generation of print materials – which represents an evolution of trad-OU study guides. But if we were starting now, is this where we’d choose to start from? (It might be that it is – I’m just asking…;-)

One other trad-OU approach to course design was typically to try to minimise the need for students to go outside the course materials (one of the personas we consider when designing each course is a submariner, who only has access to their OU course materials), but I’m not sure how well this sits any more.

Now I can’t remember the last time I read a newspaper whilst at home and didn’t go online at least once whilst doing so, either to follow a link or check something related, and I can’t remember the last non-fiction book I read that didn’t also act as a jumping off point – often “at the point of reading” – for several online checks and queries.

So here’s a niggle that I’m going to try to pin down over the holidays. To what extent should our course materials be open ended and uncourse-like, compared to tightly scoped with a single, strong and unwavering narrative that reflects the academic author’s teaching line through a set of materials?

The “this is how it is”, single linear narrative model is easier for the old guard to teach, easier to assess, and arguably easier to follow as a student. It’s tried, trusted, and well proven.

The uncourse is all over the place, designed in part to be sympathetic to study moments in daily rituals (e.g. feed reading) or interstitial time (see Interstitial Publishing: A New Market from Wasted Time for more on this idea). The uncourse is ideally invisible, integrated into life.

The trad. OU course is a traditional board game, neatly packaged, well-defined, self-contained. The uncourse is an Alternate Reality Game.

(Did you see what I just did, there?;-)

And as each day goes by, I appreciate a little more that I don’t think the traditional game is a good one to be in, any more… Because the point about teaching is to help people become independent learners. And for all the good things about books (and I have thousands of them), developing skills for bookish learning is not necessarily life-empowering any more…

[Gulp… where did that come from?!]

Getting Bits to Boxes

Okay – here’s a throwaway post for the weekend – a quick sketch of a thought experiment that I’m not going to follow through in this post, though I may do in a later one…

  • The setting: “the box” that sits under the TV.
  • The context: the box stores bits that encode video images that get played on the TV.
  • The thought experiment: what’s the best way of getting the bits you want to watch into the box?

That is, if we were starting now, how would we architect a bit delivery network using any or all of the following:

1) “traditional” domestic copper last mile phone lines (e.g. ADSL/broadband);
2) fibre to the home;
3) digital terrestrial broadcast;
4) 3G mobile broadband;
4.5) femtocells – hyperlocal, domestic mobile phone base stations that provide mobile coverage within the home or office environment, and use the local broadband connection to actually get the bits into the network; femtocells might be thought of as the bastard lovechild of mobile and fixed line telephony!
5) digital satellite broadcasts (sort of related: Please Wait… – why a “please wait” screen sometimes appear for BBC red button services on the Sky box…).

Bear in mind that “the box” is likely to have a reasonable sized hard drive that can be used to cache, say, 100 hrs of content alongside user defined recordings.
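As a back-of-envelope check on what “100 hrs of content” means for that hard drive, here’s the arithmetic with some assumed (not measured) bitrates:

```python
# Rough sizing of a 100-hour cache; bitrates are ballpark assumptions.
HOURS = 100
BITRATES_MBPS = {"SD (~MPEG-2)": 4, "HD (~H.264)": 8}

for label, mbps in BITRATES_MBPS.items():
    gigabytes = mbps * 3600 * HOURS / 8 / 1000  # Mbit/s -> GB (decimal)
    print(f"{label}: ~{gigabytes:.0f} GB")
# SD: ~180 GB, HD: ~360 GB - i.e. a single consumer hard drive.
```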

All sorts of scenarios are allowed – operators like BT or Sky “owning” a digital terrestrial channel; the BBC acting as a “public service ISP”, with a premium rate BBC license covering the cost of a broadband landline or 3G connection; Amazon having access to satellite bursts for a couple of hours a day; and so on…

Hybrid return paths are possible too – the broadband network, SMS text messages, a laptop on your knee or – more likely – an iPhone or web capable smartphone in your hand, and so on. Bear in mind that the box is likely to be registered with an online/web based profile, so you can change settings on the web that will be respected by the box.

If you want to play the game properly, you might want to read the Caio Review of Barriers to investment in Next Generation Broadband first.

PS If this thought experiment provokes any thoughts in you, please share them as a comment to this post:-)