
OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education and data journalism. Snarky and sweary to anyone who emails to offer me content for the site.

Category: Radical Syndication

Course Management and Collaborative Jupyter Notebooks via SageMathCloud (now CoCalc)

Prompted by a joint course module team to look at options surrounding a “virtual computing lab” to support a couple of new level 1 (first year equivalent) IT and computing courses (they should know better?!;-), I had another scout around and came across SageMathCloud, which at first glance looks to be just magical :-)

SageMathCloud is an open source, cloud hosted system [code]; the free plan allows users to log in with social media credentials and create their own account space:


Once you’re in, you have a project area in which you can define different projects:

I’m guessing that projects could be used by learners to split out different projects within a course, or perhaps use a project as the basis for a range of activities within a course.

Within a project, you have a file manager:


The file manager provides a basis for creating application-linked files; of particular interest to me is the ability to create Jupyter notebooks…


Jupyter Notebooks

Notebook files are opened into a tab. Multiple notebooks can be open in multiple tabs at the same time (though this may start to hit performance limits on the server? pandas dataframes, for example, are held in memory, and the SMC default plan could mean memory limits get hit if you try to hold too much data in memory at once?)


Notebooks are autosaved regularly – and a time slider that allows you to replay and revert to a particular version is available, which could be really useful for learners? (I’m not sure how this works – I don’t think it’s a standard Jupyter offering? I also imagine that the state of the underlying Python process gets dislocated from the notebook view if you revert? So cells would need to be rerun?)


Collaboration

Several users can collaborate on a project. I created another me by creating an account using a different authentication scheme (which leads to a name clash – and I think an email clash – but SMC manages to disambiguate the different identities).


As soon as a collaborator is added to a project, they share the project and the files associated with the project.


Live collaborative editing is also possible. If one me updates a notebook, the other me can see the changes happening – so a common notebook file is being updated by each client/user (I was typing in the browser on the right with one account, and watching the live update in the browser on the left, authenticated using a different account).


Real-time chatrooms can also be created and associated with a project – they look as if they might persist the chat history too?


Courses

The SageMathCloud environment seems to have been designed by educators for educators. A project owner can create a course around a project and assign students to it.

(It looks as if students can’t be collaborators on a project, so when I created a test course, I uncollaborated with my other me and then added my other me as a student.)


A course folder appears in the project area of the student’s account when they are enrolled on a course. A student can add their own files to this folder, and these can be inspected by the course administrator.


A course administrator can also add one or more of their other project folders, by name, as assignment folders. When an assignment folder is added to a course and assigned to a student, the student can see that folder, and its contents, in their corresponding course folder, where they can then work on the assignment.


The course administrator can then collect a copy of the student’s assignment folder and its contents for grading.


The marker opens the folder collected from the student, marks it, and may add feedback as annotations to the notebook files, returning the marked assignment to the student – where it appears in another “graded” folder, along with the grade.


Summary

At first glance, I have to say I find this whole thing pretty compelling.

In an OU context, it’s easy enough imagining that we might sign up a cohort of students to a course, and then get them to add their tutor as a collaborator who can then comment – in real time – on a notebook.

A tutor might also hold a group tutorial by creating their own project and then adding their tutor group students to it as collaborators, working through a shared notebook in real time as students watch on in their own notebooks, with students perhaps directing contributions back in response to a question from the tutor.

(I don’t think there is an audio channel available within SMC, so that would have to be managed separately? [UPDATE: seems there is some audio support – via William Stein, “if you click on the chat to the right of most file types (e.g., make a .md file), then there is a video camera, and if you click on that, you can broadcast yourself to other viewers of the file”.])

Wishlist

So what else would be nice? I’ve already mentioned audio collaboration, though that’s not essential and could be easily managed by other means.

For a course like TM351, it would be nice to be able to create a composition of linked applications within a project – for example, it would be nice to be able to start a PostgreSQL or MongoDB server linked to the Jupyter server so that notebooks could interact directly with a DBMS within a project or course setting. I also note that the IPython kernel being used appears to be the 2.7 version, and wonder how easy it is to tweak the settings on the back-end, or via an administration panel somewhere, to enable other Jupyter kernels?
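By way of illustration, here’s a minimal sketch of the sort of notebook/database hook-up I have in mind, assuming a PostgreSQL server had somehow been linked in to the project environment (the connection details and table name are invented for the purposes of the example):

import psycopg2
import pandas as pd

# Illustrative connection details - these assume a PostgreSQL service
# has been made available alongside the Jupyter server somehow
conn = psycopg2.connect(host="localhost", port=5432, dbname="tm351test",
                        user="student", password="student")

# Pull the result of a query straight into a pandas dataframe
df = pd.read_sql("SELECT * FROM demotable LIMIT 10;", conn)
conn.close()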

I also wonder how easy it would be to add in other applications that are viewable through a browser, such as OpenRefine or RStudio?

In terms of how the backend works, I wonder if the Sandstorm.io encapsulation would be useful (e.g. in the context of Why doesn’t Sandstorm just run Docker apps?) compared to a simpler Docker container model, if that is indeed what is being used?

Author Tony Hirst | Posted on November 24, 2015 / October 18, 2018 | Categories: OU2.0, Radical Syndication, Rstats | 1 Comment on Course Management and Collaborative Jupyter Notebooks via SageMathCloud (now CoCalc)

An R-chitecture for Reproducible Research/Reporting/Data Journalism

It’s all very well publishing a research paper that describes the method for, and results of, analysing a dataset in a particular way, or a news story that contains a visualisation of an open dataset, but how can you do so transparently and reproducibly? Wouldn’t it be handy if you could “View Source” on the report to see how the analysis was actually done, or how the visualisation was actually created from an original dataset? And furthermore, how about if the actual chart or analysis results were created directly as a result of executing the script that “documents” the process used?

As regular readers will know, I’ve been dabbling with R – and the RStudio environment – for some time, so here’s a quick review of how I think it might fit into a reproducible research, data journalism or even enterprise reporting process.

The first thing I want to introduce is one of my favourite apps at the moment, RStudio (source on github). This cross platform application provides a reasonably friendly environment for working with R. More interestingly, it integrates with several other applications:

  1. RStudio offers support for the git version control system. This means you can save R projects and their associated files to a local, git controlled directory, as well as managing the synchronisation of the local directory with a shared project on Github. Library support also makes it a breeze to load in R libraries directly from github.
  2. R/RStudio can pull in data from a wide variety of sources, mediated by a variety of community developed R libraries. So for example, CSV and XML files can be sourced from a local directory, or a URL; the RSQLite library provides an interface to SQLite; RJSONIO makes it easy to work with JSON files; wrappers also exist for many online APIs (twitteR for Twitter, for example, RGoogleAnalytics for Google Analytics, and so on).
  3. RStudio provides built in support for two “literate programming” style workflows. Sweave allows you to embed R scripts in LaTeX documents and then compile the documents to a final PDF format that includes the outputs from/results of executing the embedded scripts. (So if the script produces a table of statistical results based on an analysis of an imported data set, the results table will appear in the final document. If the script is used to generate a visual chart, the chart image will appear in the final document.) The raw script “source code” that is executed by Sweave can also be embedded explicitly in the final PDF, so you can see the exact script that was used to create the reported output (stats tables of results, or chart images, etc). If writing LaTeX is not really your thing, RMarkdown allows you to write Markdown scripts and again embed executable R code, along with any outputs directly derived from executing that code (see the minimal sketch after this list). Using the knitr library, the RMarkdown+embedded R code can be processed to produce an HTML output bundle (HTML page + supporting files (image files, javascript files, etc)). Note that if the R code uses something like the googleVis R library to generate interactive Google Visualisation Charts, knitr will package up the required code into the HTML bundle for you. And if you’d rather generate an HTML5 slidedeck from your RMarkdown, there’s always Slidify (eg check out Christopher Gandrud’s course “Introduction to Social Science Data Analysis” – Slidify: Things are coming together fast, example slides and “source code”).
  4. In a recent addition, RStudio now integrates with RPubs.com, which means 1-click publishing of RMarkdown/knitr’d HTML to a hosted website is possible. Presumably, it wouldn’t be too hard to extend RStudio so that publication to other online environments could be supported. (Hmm, thinks… could RStudio support publication using Github pages maybe, or something more general, such as SWORD/Atom Publishing?!) Other publication routes have also been demonstrated – for example, here’s a recipe for publishing to WordPress from R.
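To make the RMarkdown workflow mentioned in item 3 a little more concrete, here’s a minimal sketch of what an RMarkdown source file might look like (the chunk is just a placeholder that summarises one of R’s built-in datasets):

A bit of Markdown narrative describing the analysis...

```{r summarydemo}
# An embedded R code chunk: when the file is knitted, this code
# and the summary table it produces both appear in the output
summary(cars)
```

Run that through knitr and you get an HTML page containing the narrative, the code and the rendered output, all generated from the single source file.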

Oh, and did I mention that as well as running cross-platform on the desktop, RStudio can also be run as a service and accessed via a web browser? So for example, I can log into a version of RStudio running on one of OU/KMi’s servers and access it through my browser…

Here’s a quick doodle of how I see some of the pieces hanging together. I had intended to work on this a little more, but I’ve just noticed the day’s nearly over, and I’m starting to flag… But as I might not get a chance to work on this any more for a few days, here it is anyway…

PS I guess I should really have written and rendered the above diagram using R, and done a bit of dogfooding by writing this post in RMarkdown to demonstrate the process, but I didn’t… The graph was actually rendered from a .dot source file using Graphviz. Here’s the source, so if you want to change the model, you can… (I’ve also popped the script up as a gist):

digraph G {

	subgraph cluster_1 {
		Rscript -> localDir;
		localDir -> Rscript;
		Rscript -> Sweave;
		Sweave -> TeX;
		TeX -> PDF [ label = "laTeX"]
		Rscript -> Rmarkdown;
		RCurl -> Rscript;
		Rmarkdown -> HTML [ label = "knitr" ];
		Rmarkdown -> localDir;
		Sweave -> localDir;
		label = "Local machine/\nServer";
		
		RJSONIO -> Rscript;
		XML -> Rscript;
		RSQLite -> Rscript;
		SQLite -> RSQLite;
		subgraph cluster_2 {
			XML;
			RJSONIO;
			RCurl;
			RSQLite;
			label = "Data sourcing";
		}
	}
	OnlineCSV -> RCurl;
	
	ThirdPartyAPI -> RJSONIO;
	ThirdPartyAPI -> XML;
	ThirdPartyAPI -> RCurl;
	
	
	localDir -> github [ label = "git" ];
	github -> localDir;
	HTML -> RPubs;
}

PS This is related, and very relevant – Melbourne R user group presentation: Video: knitr, R Markdown, and R Studio: Introduction to Reproducible Analysis. And this: New Tools for Reproducible Research with R

PPS See also: Data Reporting with knitr and Open Research Data Processes: KMi Crunch – Hosted RStudio Analytics Environment

Author Tony Hirst | Posted on July 15, 2012 / September 12, 2012 | Categories: Infoskills, OU2.0, Radical Syndication, Rstats | 4 Comments on An R-chitecture for Reproducible Research/Reporting/Data Journalism

Using GetTheData to Organise Your Data/API FAQs?

It’s generally taken as read that folk hate doing documentation*. This is as true of documenting data and APIs as it is of code. I’m not sure if anyone has yet done a review of “what folk want from published datasets” (JISC? It’s probably worth a quick tender call…?), but there have certainly been a few reports around what developers are perceived to expect of an API and its associated documentation and community support (e.g. UKOLN’s JISC Good APIs Management Report and API Good Practice reports, and their briefing docs on APIs).

* this is one reason why I think bloggers such as myself, Martin Hawksey and Liam Green Hughes offer a useful service: we do quick demos and getting started walkthroughs of newly launched services, demonstrating their application in a “real” context…

At a recent technical advisory group meeting in support of the Resource Discovery Taskforce UK Discovery initiative (which is aiming to improve the discoverability of information resources through the publication of appropriate metadata, and hopefully a bit of thought towards practical SEO…) I suggested that a Q and A site might be in order to support developer activities: content is likely to be relevant, pre-SEOd (blending naive language questions with technical answers), and maintained and refreshed by the community:-)

In much the same way that JISCPress arose organically from the ad hoc initiative between myself and Joss Winn that was WriteToReply, I suggested that the question and answer site with a focus on data that I set up with Rufus Pollock might provide a running start for a UK Discovery Q&A site: GetTheData.

API connections to OSQA, the codebase that underpins GetTheData, are still lacking, but there are mechanisms for syndicating content via RSS feeds (for example, it’s easy enough to get a feed of tagged questions out, or questions and answers relating to a particular search query); which is to say – we could pull ukdiscovery tagged questions and answers in to the UK Discovery website developers’ area.

Another issue relates to whether or not developers would actually engage in the asking and answering of questions around UK Discovery technical issues. Something I’ve been mulling over is the extent to which GetTheData could actually be used to provide Q&A styled support documentation for published data or data APIs, concentrating a wide range of data related Q&A content on GetTheData (and hence helping build community/activity through regularly refreshed content and a critical mass of active users) and then syndicating specific content to a publisher’s site.

So for example: if a data/api publisher wants to use GetTheData as a way of supporting their documentation/FAQ effort, we could set them up as an admin and allow them rights over the posting and moderation of questions and answers on the site. (Under the current permissions model, I think we’d have to take it on trust that they wouldn’t mess with other bits of the site in a reckless or malevolent way…;-)

API/data publishers could post FAQ style questions on GetTheData and provide canned, accepted (“official”) answers. Of course, the community could also submit additional answers to the FAQs and, if they improve on the official answer, have them promoted to accepted answers. Through syndication feeds, maybe using a controlled tag filtered through a question submitter filter (i.e. filtering questions by virtue of who posted them), it would be possible to get “maintained” lists of questions out of GetTheData that could then be pulled in via an RSS feed into a third party site – such as the FAQ area of a data/API publisher’s website.
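(As a sketch of what that syndication step might look like on the consuming side – note that the feed URL pattern and the author field here are guesses for illustration rather than anything checked against OSQA:)

import feedparser

# Hypothetical tag feed URL - OSQA's actual feed patterns may differ
feed = feedparser.parse("http://getthedata.org/feeds/tag/ukdiscovery/")

# Keep just the "official" FAQ questions, filtered by who submitted them
official = [e for e in feed.entries if e.get("author") == "publisherAccount"]
for entry in official:
    print(entry.title, entry.link)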

Additional activity (i.e. community sourced questions and answers) around the data/API on GetTheData could also be selectively pulled in to the official support site. (We may also be able to pull out the lists of people who are active around a particular tag???) In the medium term, it might also be possible to find a way of supporting remote question submission that could be embedded on the API/data site…

If any data/API publishers would like to explore how they might be able to use GetTheData to power FAQ areas of their developer/documentation sites, please get in touch:-)

And if anyone has comments about the extent to which GetTheData, or OSQA, either is or isn’t appropriate for discovery.ac.uk, please feel free to air them below…:-)

Author Tony Hirst | Posted on June 20, 2011 | Categories: Anything you want, Radical Syndication | Tags: getthedata, osqa, rdtf, ukdiscovery | 2 Comments on Using GetTheData to Organise Your Data/API FAQs?

Paragraph Level Search Results on WordPress Using Digress.it and Yahoo Pipes

One of the many RSS related feature requests I put in when we were working on the JISCPress project was the ability to get a page level RSS feed out in which each paragraph was represented as a separate item in the page feed.

WordPress already delivers a single item RSS feed for each page containing just the substantive content of the page (i.e. the content without the header, footer and sidebar fluff), which means you can do things like this, but what I wanted was for the paragraphs on each page to be atomised as separate feed elements.

Eddie implemented support for this, but I didn’t do anything with it at the time, so here’s an example of just why I thought it might be handy – paragraph level search.

At the moment, searching a document on WriteToReply returns page level results – that is, you get a list of search results detailing the pages on which the search term(s) appear. As you might expect with WordPress, we can get access to these results as a feed by shoving feed in the URI, like this:
https://ouseful.wordpress.com/feed?s=test

Paragraph level feeds, as implemented in the Digress.it WordPress theme we were developing, are keyed by URLs of the form:
http://writetoreply.org/legaldeposit/feed/paragraphlevel/annex-c-online-content-to-be-published/#56

That is:
http://writetoreply.org/DOCNAME/feed/paragraphlevel/PAGENAME/#PARA_NUMBER

So can you guess what I’m gonna do yet…?

First of all, grab the search feed for a particular query on a particular document into a Yahoo Pipe:

Rewrite the URI of each page linked to in the results feed as the full fat, itemised paragraph feed for the page, and emit those items (that is, replace each original search results item with the set of paragraph items from that page).

The next step is to filter those paragraph feed items for just the paragraphs that contain the original search terms:

We need to rewrite the link because (at the time of writing) the page paragraphs feed doesn’t link to each paragraph; it links to the parent page (a bug report has been made;-)

You can find the pipe here: Double dip JISCPress search

Note that at the time of writing, there’s also a problem with the paragraph number reported in the link (again a report has been made), a workaround patch for which is included in this pipe.

What this means is that we now have a workaround for indexing into individual paragraphs using a search term. If we tag content at the paragraph level (e.g. by running a page-level paragraph feed, or the double dip search results feed, through OpenCalais), we can generate related search links into the document, or into other documents on the platform, at a paragraph level, increasing the relevance, or resolution (in terms of increased focus), of the returned results.

Just by the by, the approach shown above is based on a search, expand and filter pattern (cf. a search within results pattern), in which a search query is used to obtain an initial set of results, which are then expanded to give higher resolution detail over the content, and then filtered using the original search query to deliver the final results. If a patent doesn’t already exist for this, then if I worked for Google, Yahoo, etc etc you could imagine it being patented. B*****ds.
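(For the curious, here’s a rough sketch of that search, expand and filter pattern in Python rather than Pipes, using the feedparser library and the URL patterns described above; the page name extraction and feed field names are assumptions, and it’s untested against the live platform:)

import feedparser

query = "test"
docroot = "http://writetoreply.org/legaldeposit"

# Search: grab the page-level search results feed for the query
results = feedparser.parse("%s/feed/?s=%s" % (docroot, query))

matches = []
for page in results.entries:
    # Expand: rewrite each result link as the paragraph-level feed for that page
    pagename = page.link.rstrip("/").split("/")[-1]
    paragraphs = feedparser.parse("%s/feed/paragraphlevel/%s/" % (docroot, pagename))
    # Filter: keep just the paragraphs that contain the original search term
    for para in paragraphs.entries:
        if query.lower() in para.summary.lower():
            matches.append((para.link, para.summary))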

PS here’s a trick I picked up from Joss’ blog somewhere for reversing the order of feed items published by WordPress:
http://writetoreply.org/legaldeposit/feed/?orderby=ID&order=ASC
I assume these parameters also work?

Author Tony Hirst | Posted on February 18, 2010 / February 16, 2010 | Categories: Pipework, Radical Syndication, WriteToReply | Tags: JISCPress | 4 Comments on Paragraph Level Search Results on WordPress Using Digress.it and Yahoo Pipes

Twitter Powered Subtitles for BBC iPlayer Content c/o the MASHe Blog

I don’t often do posts where I just link to or re-present content that appears elsewhere on the web, but I’m going to make an exception in this case, with an extended preview of a link on Martin Hawksey’s MASHe blog…

Somewhen last year, I started to explore how we might use a Twitter backchannel as a way of capturing subtitle like commentary for recordings of live presentations (e.g. Twitter Powered Subtitles for Conference Audio/Videos on Youtube, Twitter Powered Youtube Subtitles, Reprise: Anytime Commenting, Easier Twitter Powered Subtitles for Youtube Movies). Further progress toward freestanding subtitles stalled for want of a SMIL like player that could replay timestamped text files.

Anyway, whilst I was watching Virtual Revolution over the weekend (and pondering the question of Broadcast Support – Thinking About Virtual Revolution) I started thinking again about replaying twitter streams alongside BBC iPlayer content, and wondering whether this could form part of a content enrichment strategy for OU/BBC co-productions.

I had a little more luck finding text replayers this time, for example here: Accessible HTML5 Video with JavaScripted captions and here: smiltext-javascript (I found that “timed text” is a handy search phrase), but no time to explore further…

…and then this:

which leads to a how-to post on Twitter powered subtitles for BBC iPlayer, in which Martin “come[s] up with a way to allow a user to replay a downloaded iPlayer episode subtitling it with the tweets made during the original broadcast.”

This builds on my Twitter powered subtitling pattern to create a captions file for downloaded iPlayer content using the W3C Timed Text Authoring Format. A video on Martin’s post shows the twitter subtitles overlaying the iPlayer content in action.

AWESOME :-)

This is exactly why it’s worth blogging half-baked ideas – because sometimes they come back better formed…

So anyway, the next step is to work out how to make full use of this… any ideas?

PS I couldn’t offhand find any iPlayer documentation about captions files, or the content packaging for stuff that gets downloaded to the iPlayer desktop – anyone got a pointer to some?

– iPlayer accessibility: turning on subtitles

PPS Twitter backchannel subtitle files for episodes 3 and 4 of Virtual Revolution available here: The Virtual Revolution: Twitter subtitles for BBC iPlayer

Author Tony Hirst | Posted on February 17, 2010 / February 22, 2010 | Categories: BBC, OBU, OU2.0, Radical Syndication | Tags: captions, iPlayer, subtitles | 9 Comments on Twitter Powered Subtitles for BBC iPlayer Content c/o the MASHe Blog

Topical, Hyperlocal DeliTV for Local People

It is said that “fortune favours the prepared mind”, or at least the mildly obsessing one, so when I saw @danbri’s post on Local Video for Local People and realised that it was trivial to get hold of geocoded Youtube videos within a certain distance of a specified location using the following Youtube API call:

http://gdata.youtube.com/feeds/api/videos?v=2&q=hovercraft&location=50.694254,-1.224976&location-radius=5mi

it was immediately obvious that this could be used to provide a local (and optionally topical) feed of Youtube videos to populate a hyperlocal DeliTV video channel for watching on Boxee.
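(Outside the Pipes environment, the same feed is easy enough to pull in a few lines of Python – a sketch, assuming the GData feed parses as a standard feed via the feedparser library:)

import feedparser

url = ("http://gdata.youtube.com/feeds/api/videos?v=2"
       "&q=hovercraft&location=50.694254,-1.224976&location-radius=5mi")

# Each entry should be a geocoded video within 5 miles of the location
feed = feedparser.parse(url)
for entry in feed.entries:
    print(entry.title, entry.link)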

So here it is, my DeliTV Local pipe.

And here’s the front end:

To view the channel in Boxee, enter a location, and optionally a topic, and then either:

– run the pipe, and subscribe to the RSS feed directly in Boxee;
– bookmark the URI of the pipe. Enter the URI in your browser location bar according to the following pattern:
http://pipes.yahoo.com/ouseful/delitv_local?l=required,location&q=optional search terms
hit return, check the location and optional search terms are correct and the pipe is giving a plausible output, and then bookmark that page to a DeliTV tag on delicious. (When you bookmark the page, any spaces in your search terms should be replaced by %20; so the above would be bookmarked containing the characters optional%20search%20terms.) If you then subscribe to that DeliTV channel via a DeliTV pipe that you have hooked up to your Boxee account, you will be able to watch the channel through Boxee.

So for example, here’s my “Hovercrafts” channel for Ryde on the Isle of Wight:

(Hmm, I wonder, should these be sorted by relevance or recency? I think the default is relevance?)

If you want to define a variety of different topic channels around a particular location, or a set of channels on the same topic from different locations, bookmark each channel to delicious and subscribe to them all through the same DeliTV pipe :-)

See also: Deli TV – Personally Programmed Social Television Channels on Boxee: Prototype

Author Tony Hirst | Posted on October 14, 2009 / October 13, 2009 | Categories: Pipework, Radical Syndication, Tinkering | Tags: delitv, hyperlocal tv, youtube | 1 Comment on Topical, Hyperlocal DeliTV for Local People

Surfacing Google Sidewiki Comments Within a Web Page

As recent readers may know, I’ve been blogging lately over on the Arcadia Project Blog, a site I have authoring permissions on but not admin rights. At the moment, comments on the site seem to be disabled except to project team members (I’m not sure how they are whitelisted?), which is a bit of a pain because I want wider comments on the site.

So what to do? The blog is hosted on Blogspot, which means I can add embed codes and javascript to a post and hence embed a Disqus comment thread on each post I write.

Alternatively (or additionally), commenters who are running the enhanced Google Toolbar can comment on the page using Google Sidewiki.

Sidewiki is all well and good (or, errr, not – maybe it’s really evil…?) but it means that unless you’re logged in to Google and running the Google toolbar, or you’ve got a Greasemonkey script or bookmarklet to check for Sidewiki comments related to a page, you’re probably not going to see the Sidewiki comments.

Fortunately, for the moment at least, Sidewiki comments for a page can be accessed without authentication via a GData/RSS feed: Retrieving Sidewiki entries written for a particular web page:

GET http://www.google.com/sidewiki/feeds/entries/webpage/webpageUri/full

where webpageUri is the URI of the page you want to see comments for, suitably encoded. In Javascript, I think encodeURIComponent(window.location) should do the trick…

How to get a JSON version of the feed, wrapped in a callback function that can be used to display the feed, is documented on the GData API site: Using JSON with Google Data APIs – just add ?alt=json-in-script&callback=myFunction.

Reusing the sample code on the GData site, it was easy enough to create a function to display a Sidewiki comment feed for a particular page:

<div id="demo"></div>

<script type="text/javascript">
function c(root) {
  // JSONP callback: receives the Sidewiki feed object from Google
  var feed=root.feed;
  var entries=feed.entry || [];
  // build an HTML list with one item per Sidewiki entry
  var html=['<ul>'];
  for (var i=0; i < entries.length; ++i) {
    var entry=entries[i];
    var title=entry.title.$t;
    var description=entry.content.$t;
    var link=entry.link[0].href;
    html.push('<li><em><a href="',link,'">', title,'</a><br/>',description,'</em></li>');
  }
  html.push('</ul>');
  // inject the list into the 'demo' placeholder div
  document.getElementById("demo").innerHTML=html.join("");
}
</script>

<script src="http://www.google.com/sidewiki/feeds/entries/webpage/http%3A%2F%2Farcadiaproject.blogspot.com%2F2009%2F10%2Fwanted-library-hardware-hacker-for.html/full?alt=json-in-script&callback=c">
</script>

Note that I have explicitly named the page for the feed I want in the above example and that WordPress has messed it up because I’m using the HTML code view. It should be:

http://www.google.com/sidewiki/feeds/entries/webpage/
http%3A%2F%2Farcadiaproject.blogspot.com%2F2009%2F10%2Fwanted-library-hardware-hacker-for.html/full?alt=json-in-script&callback=c

See it in action here: Wanted: Library Hardware Hacker for Desktop Tattle Tape Detector (bottom of the page).

A general purpose script would add the script dynamically using an encoded version of the URI for the current page. Something like this, maybe?

// dynamically add a script element that pulls in the Sidewiki JSON feed
// for the current page and hands it to the c() callback
var s=document.createElement('script');
s.setAttribute('type','text/javascript');
var uri='http://www.google.com/sidewiki/feeds/entries/webpage/' + encodeURIComponent(window.location)+'/full?alt=json-in-script&callback=c';
s.setAttribute('src',uri);
document.body.appendChild(s);

Adding this function along with the c function and ‘demo’ ID’d display div to a page template should display any Google Sidewiki comments associated with a page within the page…

… which might be dangerous, of course, given the lack of control a page owner has over the Sidewiki comments associated with it…

Author Tony Hirst | Posted on October 9, 2009 / October 11, 2009 | Categories: Radical Syndication, Tinkering, WriteToReply | 2 Comments on Surfacing Google Sidewiki Comments Within a Web Page

Watching The Economist Videographics and Video Podcasts via Boxee on DeliTV

I’m just such a glutton for punishment… the slightest external interest in things that might be OUseful, and like a whotsit chasing a doo dah, I can’t but bite… So for example: in Videographics from the Economist last week(?!), @deburca wrote:

The Economist now has an interesting section on videographics, each of which can be downloaded or embedded into blogs, teaching resources etc.
…
An RSS feed is also available which may be a useful channel addition for Tony Hirst’s Delitv project

Sigh…

Like this at the top?

These channels/programmes:

These packages:

and this sort of content…?

So here’s the pipework… after a quick glance at the Economist video RSS feeds page that @deburca linked to:

and a brief sigh that they don’t make an OPML feed available, I produced a quick pipe that scrapes the page to generate a feed containing links to each of the different video ‘programme’ feeds, rewriting the http:// part of the URL to the rss:// protocol that Boxee expects:

If you bookmark the pipe URI – http://pipes.yahoo.com/ouseful/dtv_economist – to a DeliTV tag on your delicious account, the Economist programme feeds should appear wherever you’ve tag-programmed them to….

Author Tony Hirst | Posted on October 7, 2009 / October 11, 2009 | Categories: Pipework, Radical Syndication, Tinkering | Tags: delitv, Economist | 2 Comments on Watching The Economist Videographics and Video Podcasts via Boxee on DeliTV

Collaborative Curation and the Magic of Reading Lists

Reading lists hit the news last week with Read/Write Web picking up a post from the venerable Dave Winer about Google get[ting] a patent on reading lists. The patent was filed in 2005, a year or so after Dave Winer blogged:

One of the innovations flowing out the Share Your OPML site is the idea of reading lists. An expert in a given area puts together a set of feeds that you would subscribe to if you want a balanced flow of information on his or her topic of expertise. You let the expert subscribe to feeds on your behalf. I’ve gotten the first taste of what this is like by reading the aggregator page on the Share Your OPML site. As new sites come on the Top-100, as the aggregated interests of the community shift, I automatically start reading sites I wasn’t reading before. I don’t have to do anything. I like this. So at last Thursday’s Berkman meeting I asked two of our regulars, Rick Heller and Jay McCarthy, to start doing these reading lists, and Rick is ready with what he calls a list of “political blogs that provide a balanced diet of liberal and conservative views.”

So what are dynamic reading lists? Take one or more RSS feeds, and declare their URIs as items in a reading list feed. Subscribe to that reading list feed. Now whenever there is a change made to the items contained in any of those RSS feeds, the person who subscribed to the reading list feed sees those changes. So a reading list (which could be maintained by anyone) is something I can subscribe to with a single click. And that reading list can be managed, and can contain RSS feeds or other reading lists that are curated by other people.

As a student, my degree could have a reading list that contains links to reading lists for each of my courses. Those course reading lists could be maintained by course instructors, and might contain feeds from other students taking the course. I subscribe to single reading list. My instructor on a particular course can change the contents of one of the feeds that is identified in my reading list. I see those changes via my degree reading list.
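To make the mechanics a little more concrete, here’s a toy sketch of how a subscriber’s aggregator might recursively flatten a reading list into the set of feeds it transitively includes (reading lists modelled here as OPML, with OPML 2.0 style type="include" outlines pointing at further reading lists; the attribute handling is illustrative rather than a full implementation of the spec):

from urllib.request import urlopen
import xml.etree.ElementTree as ET

def resolve(reading_list_url, seen=None):
    # Recursively expand a reading list into the RSS feed URLs it contains
    if seen is None:
        seen = set()
    if reading_list_url in seen:  # guard against circular inclusions
        return set()
    seen.add(reading_list_url)
    feeds = set()
    tree = ET.parse(urlopen(reading_list_url))
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl") or outline.get("url")
        if outline.get("type") == "include" and url:  # a nested reading list
            feeds |= resolve(url, seen)
        elif url:  # an ordinary RSS feed
            feeds.add(url)
    return feeds

Rerun the resolution from time to time and any changes the curator of an included list has made are picked up automatically – which is exactly the single-click subscription behaviour described above.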

So it may have occurred to you that reading lists are a great way of sharing a curatorial load… and you’d be right :-)

The reading list/shared curation pattern is also exemplified by Jon Udell’s elmcity project, which allows for separately maintained calendar feeds to be managed and aggregated using the Delicious social bookmarking tool (e.g. Collaborative curation as a service or elmcity project FAQ).

DeliTV also uses a similar pattern to allow users to define video playlists (that may contain other video playlists) on delicious, and then watch them in Boxee or via an appropriate mobile device (e.g. Deli TV – Personally Programmed Social Television Channels on Boxee: Prototype and An Unintended Consequence: DeliTV Goes Mobile on iPhone and Android).

It’s been some time since I properly tinkered with OPML, one of the most convenient formats for describing reading lists, so here’s a note to self about some of the services that might be worth playing with:

  • Scott Wilson’s JOPML, an OPML bundler for TicTocs RSS feeds (see e.g. Mashlib Pipes Tutorial: 2D Journal Search);
  • Scott Wilson’s Ensemble generator, that cobbles together an OPML feed of OERs based on a specified search term;
  • a couple of my own, very old, experiments: Social Bookmarking OPML Feed Roller, or Persistent News Search OPML Feed Roller; and not forgetting the OPML Dashboard Display and Disaggregating an MIT OpenCourseware Course into Separate RSS Feeds of course;-)
  • @cogdog – you got any OPML/reading lists demos/hacks?;-)

On my to do list is also a way of putting together ‘highlights’ collections of notable paragraphs contained within an atomised JISCPress/WriteToReply/Digress.it document…

As a design pattern, reading lists provide a very powerful way of leveraging the power of a community of individuals to collaboratively, yet independently, curate sets of resources. As with RSS, it may be that reading lists won’t achieve much explicit consumer success. But as wiring/plumbing – don’t underestimate them…

PS Remember, many resource centric sites allow you to create playlist feeds – e.g. Youtube Playlists, or, more recently, flickr playlists/galleries

Author Tony Hirst | Posted on September 22, 2009 | Categories: OU2.0, Radical Syndication | Tags: opml, reading list

An Unintended Consequence: DeliTV Goes Mobile on iPhone and Android…

Wouldn’t it be handy if, as well as viewing DeliTV feeds in Boxee, you could also consume them on your phone? Well it just so happens that you can… :-)

Whilst messing around with Recent BBC/OU TV Programmes on Boxee, I noticed that my “OU on the BBC 7 Day CatchUp” code used in Recent OU Programmes on the BBC, via iPlayer (also available on iPhone: iPhone 7 Day OU Programme CatchUp, via BBC iPlayer) had broken. Whilst testing the fix on the iPhone/iPod Touch version (which also works, on a wifi link at least, with my HTC Magic Android phone), it occurred to me that I should also be able to pipe DeliTV feeds to my phone, and then display them using the iUI interface libraries too…

So a little bit of tweaking of my OU 7 day catchup code, and a couple of extra handlers to wrap the DeliTV (for Boxee) pipe, and what do we get? (Images grabbed from iPhoney on a Mac.)

You can play along here: http://ouseful.open.ac.uk/i/idelitv.php (for a QR code of the URL, see here.)

A clunky homepage…

Leads to the default DeliTV multiplex (psychemedia/boxeetest5):

[You can configure the app to run with your own DeliTV multiplex:
http://ouseful.open.ac.uk/i/idelitv.php?q=YOURMULTIPLEX
So e.g. http://ouseful.open.ac.uk/i/idelitv.php?q=psychemedia/delitv_f1 ; or http://ouseful.open.ac.uk/i/idelitv.php?q=psychemedia for my default “delitv” multiplex.]


Here’s the UK Politics suite of channels:

(Note that the page may take some time to load; when I get a chance, I’ll add a loading indicator in…)

If we go into the Political Parties list, and click through on the Liberal Democrats link, we get a list of actual videos:

Clicking through on those takes you to the video page:

And clicking the Watch this video link will play the video for you using whatever your mobile device allows.

Whilst the Youtube content is working, the iPlayer content is not working – yet. I think the original 7 day catchup had to use a helper for BBC URLs (I seem to remember that iUI doesn’t like BBC mobile URLs), and I haven’t had a chance to work it in yet…

Anyway – what does this all tell us? That feeds are a Good Thing, of course…! ;-)

It also means that if you create a hierarchical playlist of Youtube content, at least, that maybe includes curated lists managed by other people, you can watch the content either in Boxee, or on your mobile device.

So that’s the proof of concept done… but as is the way of these things, it needs proper apps building to make it shiny and robust enough, and with a friendly and intuitive UI, to be used on a casual basis by anyone not me… ;-)

Author Tony Hirst | Posted on September 21, 2009 | Categories: Radical Syndication, Tinkering | Tags: Boxee, delitv, iPhone, iPod Touch, iUI

© AJ Hirst 2008-2021
Creative Commons License
Attribution: Tony Hirst.
