Feed Autodiscovery in Javascript

For what it’s worth, I’ve posted a demo showing a couple of feed autodiscovery/autodetection tricks that let you discover feeds in remote pages via a couple of online services: the Google Feed API, and YQL (Feed Autodiscovery With YQL).
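The underlying trick those services automate is simple enough: look through the page head for link elements with rel="alternate" and an RSS/Atom type. Here’s a minimal sketch in plain Javascript (a naive regex scan over the HTML, just for illustration – a real implementation would use a proper HTML parser, and the example page markup is made up):

```javascript
// Naive feed autodiscovery: scan an HTML string for <link rel="alternate">
// elements with an RSS/Atom type attribute and pull out their href values.
// (Illustrative only - a proper HTML parser would be more robust.)
function discoverFeeds(html) {
  var feeds = [];
  var linkTags = html.match(/<link\b[^>]*>/gi) || [];
  for (var i = 0; i < linkTags.length; i++) {
    var tag = linkTags[i];
    if (/rel=["']alternate["']/i.test(tag) &&
        /type=["']application\/(rss|atom)\+xml["']/i.test(tag)) {
      var href = tag.match(/href=["']([^"']+)["']/i);
      if (href) feeds.push(href[1]);
    }
  }
  return feeds;
}

// A made-up page for illustration:
var page = '<html><head>' +
  '<link rel="alternate" type="application/rss+xml" href="https://ouseful.wordpress.com/feed" />' +
  '<link rel="stylesheet" type="text/css" href="style.css" />' +
  '</head><body></body></html>';
console.log(discoverFeeds(page));
// -> ["https://ouseful.wordpress.com/feed"]
```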

Try it out: Feed autodiscovery in Javascript (code)

Single page web app: feed autodetection

I’ve also added in a routine that uses the Google Feed API to look up historical entries on an RSS feed. As soon as Google is alerted to a feed (presumably by anyone or any means), it starts caching entries. The historical entries API lets you grab up to 250 of the most recent entries from a feed, irrespective of how many items the feed itself currently contains…
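The Feed API hands back its results as a JSON object with the entries nested inside it. Here’s a sketch of digging the entry titles out of a response in that shape (the endpoint URL and parameter names in the comment are as I recall them, so treat them as assumptions and check against the Feed API docs; the mock response is made up for illustration):

```javascript
// Sketch: pull the entry titles out of a Google Feed API "load" response.
// The historical call asks for up to 250 cached entries with something like:
//   https://ajax.googleapis.com/ajax/services/feed/load?v=1.0&num=250&q=FEEDURL
// (endpoint and parameters from memory - check the Feed API docs)
function entryTitles(response) {
  var entries = (response.responseData &&
                 response.responseData.feed &&
                 response.responseData.feed.entries) || [];
  var titles = [];
  for (var i = 0; i < entries.length; i++) titles.push(entries[i].title);
  return titles;
}

// A mock response in the Feed API's JSON shape, for illustration:
var mockResponse = {
  responseData: { feed: { entries: [
    { title: "Old post 1", link: "http://example.com/1" },
    { title: "Old post 2", link: "http://example.com/2" }
  ] } }
};
console.log(entryTitles(mockResponse));
// -> ["Old post 1", "Old post 2"]
```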

Why it matters: Public Data Principles: RSS Autodiscovery on Government Department Websites?, Autodiscoverable Feeds and UK HEIs (Again…)

PS Just by the by, I added a Scraperwiki view to my UK HEI autodiscovered feeds Scraperwiki. I added a little bit of logic to try to pull out feeds on a thematic basis too…

UK HE autodiscoverable feeds

On the to do list is to create some OPML output views so you can easily subscribe to, or display, batches of the feeds in one go.
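For what one of those OPML views might look like, here’s a sketch that wraps a list of feeds in a minimal OPML document so a batch can be subscribed to in one go (the feed list and field names are made up for illustration, and a real version would need to escape XML special characters in titles):

```javascript
// Sketch: wrap a list of feeds in a minimal OPML document so a batch of
// autodiscovered feeds can be subscribed to, or displayed, in one go.
function feedsToOPML(title, feeds) {
  var out = '<?xml version="1.0" encoding="UTF-8"?>\n<opml version="1.0">\n';
  out += '<head><title>' + title + '</title></head>\n<body>\n';
  for (var i = 0; i < feeds.length; i++) {
    out += '<outline type="rss" text="' + feeds[i].title +
           '" xmlUrl="' + feeds[i].url + '" />\n';
  }
  out += '</body>\n</opml>';
  return out;
}

// Made-up example feed list:
var opml = feedsToOPML("UK HEI feeds", [
  { title: "Example University news", url: "http://example.ac.uk/news/rss" }
]);
console.log(opml);
```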

I guess I should also add a table to the scraper to start logging the number of feeds that are autodiscoverably out there over time?

Grabbing the Output of a Yahoo Pipe into a Web Page Using JQuery

One of the things I tend to take for granted about using Yahoo Pipes is how to actually grab the output of a Yahoo Pipe into a webpage. Here’s a simple recipe using the JQuery Javascript framework to do just that.

The example demonstrates how to add a bit of code to a web page that will load in the contents of an RSS feed from a specified and arbitrary URL using Yahoo pipes.

Loading RSS directly into a page from an arbitrary website as an XML file is typically not possible when the RSS feed is hosted on a different web domain to the page the feed is to be loaded into, because of the same-origin security model used by the web browser. However, Javascript files can be loaded into a web page from any domain, which is how this workaround works. The address of the RSS feed is passed to Yahoo Pipes, and the pipe generates a Javascript version of it (using JSON – Javascript Object Notation) that can be loaded into the page.

Here’s the code (gist):


<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>

<script type="text/javascript">
//Routine to display the items of an RSS feed in a web page
// The output is attached to a uniquely identified HTML item

// The URL of the RSS feed you want to display
var url='https://ouseful.wordpress.com/feed';

//The id of the HTML element you want to contain the displayed feed
var containerID="test";

// The id of a Yahoo pipe that accepts a 'feedurl' parameter and emits the
// feed as JSON (PIPE_ID is a placeholder - use the id of your own pipe)
var pipeID="PIPE_ID";

// The gubbins...

function cross_domain_JSON_call(url){
 // url points to an RSS feed
 // fetch the feed from the address specified in 'url' via the Yahoo pipe,
 // then call "myCallbackFunction" with the resulting feed items
 $.getJSON(
   "http://pipes.yahoo.com/pipes/pipe.run?_id="+pipeID+"&_render=json&feedurl="+encodeURIComponent(url)+"&_callback=?",
   function(data) { myCallbackFunction(data.value.items); }
 );
}

// A simple utility function to display the title of the feed items
function displayOutput(txt){
  $('#'+containerID).append('<div>'+txt+'</div>');
}

function myCallbackFunction(items){
  // 'items' contains the separate feed items;
  // 'title' contains the item title, 'link' the url, 'description' the main text

  // Run through each item in the feed and print out its title
  for (var i=0; i < items.length; i++){
    displayOutput(items[i].title);
  }

  // You could easily call 'myArbitraryCallbackFunction(items)' from this function
}

// Tell JQuery to call the feed loader when the page is all loaded
$(document).ready( function(){ cross_domain_JSON_call(url); } );
</script>

<div id="test"></div>


If you read through the code, you should see that most of the work relating to getting the feed into the page is done for you once you have provided the URL. Note that you will probably also need to provide a function to process the feed items once they are loaded in to the page.

Some of My Dev8D Tinkerings – Yahoo Pipes Quick Start Guide, Cross-Domain JSON with JQuery and Council Committee Treemaps from OpenlyLocal

One of the goals I set myself for this year’s Dev8D was to get round to actually using some of the things I’ve been meaning to try out for ages, particularly Google App Store and JQuery, and also to have a push on some of the many languishing “projects” I’ve started over the last year, tidying up the code, making the UIs a little more presentable, and so on…

Things never turn out that way, of course. Instead, I did a couple of presentations, only one of which I knew about beforehand!;-) A chance remark alerted me to the fact that I was down to do a lightning talk yesterday…

I did start looking at JQuery, though, and did manage to revisit the Treemapping Council Committees Using OpenlyLocal Data idea I’d done a static proof of concept for some time ago…

On the JQuery front, I quickly picked up how easy it is to grab JSON feeds into a web page if you have access to JSON-P (that is, the ability to attach a callback function to a JSON URL so you can call a function in the web page with the object as soon as it loads), but I also ran into a couple of issues. Firstly, if I want to load more than one JSON feed into a page, and then run foo(json1, json2, json3, json4, json5), how do I do it? That is, how do I do a “meta-callback” that fires when all the separate JSON calls have loaded content into the page. (Hmm – I just got a payoff from writing this para and then looking at it – it strikes me I could do a daisy chain – use the callback from the first JSON call to call the second JSON object, use the callback from that to call the third, and so on; but that’s not very elegant…?) And secondly, how do I get a JSON object into a page if there is no callback function available (i.e. no JSON-P support)?
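One possible answer to that first issue (just a sketch, not something I’ve battle-tested): rather than daisy chaining, keep a count of outstanding calls and fire the final function from a shared completion handler once everything has reported in:

```javascript
// Sketch of a "meta-callback": fire a function only once all N JSON calls
// have returned, by counting down in a shared completion handler.
// Each individual callback reports in with its slot number and payload.
function whenAllLoaded(n, finalCallback) {
  var pending = n;
  var results = [];
  return function (index, data) {
    results[index] = data;
    pending--;
    if (pending === 0) finalCallback(results);
  };
}

// Usage: each of the separate JSON callbacks calls 'done' when it fires.
var done = whenAllLoaded(3, function (results) {
  console.log("all loaded:", results);
});
done(0, "json1");
done(2, "json3");
done(1, "json2");
// -> all loaded: ["json1", "json2", "json3"]
```

In a real page, each `$.getJSON` success handler would call `done` with its own index, so the order the responses arrive in doesn’t matter.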

I’m still stuck on the first issue (other than the daisy chain/bucket brigade hack), but I found a workaround for the second – use a Yahoo pipe as a JSON-P proxy. I’ll be writing more about this in a later post, but in the meantime, I popped a code snippet up on github.

On the Openlylocal/council treemap front, I’d grabbed some sample JSON files from the Openlylocal site as I left Dev8D last night for the train home, and managed to hack the resulting objects into a state that could be used to generate the treemap from them.

A couple of hours fighting with getting the Openlylocal JSON into the page (solved as shown above with the Pipes hack) and I now have a live demo – e.g. http://ouseful.open.ac.uk/test/ccl/index-dyn.php?id=111. The id is the openlylocal identifier used to identify a particular council on the Openlylocal site.

If you’re visiting Openlylocal council pages, the following bookmarklet will (sometimes*;-) display the corresponding council committee treemap:

javascript:var s=window.location.href;s=s.replace(/.*=/,"");window.location.href="http://ouseful.open.ac.uk/test/ccl/index-dyn.php?id="+s;

(It works for pages with URLs that end =NNN;-)
Council committee treemap

The code is still a bit tatty, and I need to tidy up the UI, (and maybe also update to a newer JIT visualisation library), so whilst the URI shown above will persist, I’ll be posting an updated version to somewhere else (along with a longer post about how it all works) when I get round to making the next set of tweaks… Hopefully, this will be before Dev8D next year!;-)

PS I also had a huge win in discovering a javascript function that works at least on Firefox: .toSource(). Apply it to a javascript object (e.g. myobj.toSource()) and then, if you do things like alert(myobj.toSource()), you can get a quick preview of the contents of that object without having to resort to a debugger or developer plugin tool:-)

PPS can you tell my debugging expertise is limited to: alert(“here”); all over the place ;-) Heh heh…

Drag and Drop Ordered Links in Delicious?

One of the things I’ve been interested in for some time is how to use social bookmarking services like delicious as a database for ordered link collections (for example, Ordered Lists of Links from delicious Using Yahoo Pipes).

I don’t have time to try this right now, so here’s a quick holding post about a plan for the future:

– pinch the code used for this jquery demo: “Dynamic Drag’n Drop With jQuery And PHP” which shows how to update a database with the new list order using “an Ajax request in the backend”…

– populate the list using the delicious api from a (logged in) user’s specified tags;

– identify the position of each item in the list using a special machine tag. Rewrite the jquery demo to use a delicious API call in place of the demo database update function. I’m guessing that this would mean using the https://api.del.icio.us/v1/posts/add? call, and just copying everything from the previous version of the bookmark apart from the item number machine tag?

– cobble together a Javascript script to pull a tagged list from delicious, as JSON, that includes ordered list machine tags and display the list, appropriately ordered; also provide a ‘reverse order’ switch ;-)

– cobble together a simple web service using a minimal PHP script that will grab a tagged list from delicious, with machine tags, and then display it as an ordered RSS list (also provide a ‘reverse order’ switch in the URI).
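The ordering step in that plan might look something like this – a sketch that sorts bookmarks by a `list:pos=N` machine tag and provides the ‘reverse order’ switch (the tag convention and the bookmark shape are made up for illustration; the actual delicious JSON feed uses its own field names):

```javascript
// Sketch: order a set of bookmarks by a "list:pos=N" machine tag, with a
// reverse-order switch. The bookmark shape here is made up for illustration.
function orderByMachineTag(bookmarks, reverse) {
  function pos(bookmark) {
    for (var i = 0; i < bookmark.tags.length; i++) {
      var m = bookmark.tags[i].match(/^list:pos=(\d+)$/);
      if (m) return parseInt(m[1], 10);
    }
    return Number.MAX_VALUE; // untagged items sink to the end of the list
  }
  var sorted = bookmarks.slice().sort(function (a, b) { return pos(a) - pos(b); });
  if (reverse) sorted.reverse();
  return sorted;
}

// Made-up example bookmarks:
var links = [
  { url: "http://example.com/b", tags: ["demo", "list:pos=2"] },
  { url: "http://example.com/a", tags: ["demo", "list:pos=1"] }
];
console.log(orderByMachineTag(links, false).map(function (b) { return b.url; }));
// -> ["http://example.com/a", "http://example.com/b"]
```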

If anyone builds these before I do, please post a link to the demo :-)