Slideshare Stats – Number of Views of Your Recent Slideshows

Yesterday morning, I wanted to grab hold of a summary of the number of views my uploaded presentations on Slideshare have had. A quick scan of the Slideshare API suggests that a bit of a handshake is required, at least in generating an MD5 hash of a key with a Unix timestamp. I have a pipe that does something similar somewhere (err, or at least part of it… here maybe).
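For what it’s worth, the handshake amounts to something like this (a sketch only – md5() is a hypothetical helper, and the endpoint name and exact hash recipe need checking against the Slideshare API docs):

var ts = Math.round(new Date().getTime() / 1000); // Unix timestamp, in seconds
var hash = md5(SHARED_SECRET + ts); // assumed: hash of the secret with the timestamp
var url = "http://www.slideshare.net/api/2/get_slideshows_by_user"
        + "?username_for=USERNAME"
        + "&api_key=" + API_KEY + "&ts=" + ts + "&hash=" + hash;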

I didn’t have the 10 minutes or so such a pipework hack should take (i.e. half an hour, just in case, plus up to half an hour to blog any solution I came up with;-), so I had a quick look at the YQL community tables to see if anyone had developed a wrapper for calling at least part of the Slideshare API, and it seems someone has:

YQL Slideshare query
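The actual query is in the screenshot, but calling a YQL community table from the public endpoint generally looks something like this (the table and field names below are my guesses, not necessarily what the community wrapper uses):

// community tables need the datatables.org environment file
var q = 'select * from slideshare.slideshows where username = "USERNAME"';
var yqlUrl = "http://query.yahooapis.com/v1/public/yql?format=json"
           + "&env=" + encodeURIComponent("store://datatables.org/alltableswithkeys")
           + "&q=" + encodeURIComponent(q);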

So here’s a pipe that generates a list of a user’s 20 most recent Slideshare uploads, along with how many times they have been downloaded:

And here’s how the output looks:

Slideshare - recent downloads pipe

Note to self: make some time to see what other YQL community tables are available…

Libraries Near Me Map (Courtesy of LibraryThing)

Given your location as a postcode, where can you find a list of libraries near you? As with many of these public information style questions, it can be quite hard trying to find a single general answer. A crude way is to enter a search along the lines of libraries near POSTCODE into Google Maps, but the results that are returned aren’t necessarily that good… Alternatively, you can go to your local council website, and then do a search there for libraries; but the format in which results are provided can vary; and if you’re a developer, there’s no immediately obvious way of creating (or consuming) a service that will allow you to create a small embeddable map showing the location of libraries in the vicinity of a particular postcode area.

When I put a tweet out last week asking if anyone knew of a “library lookup by postcode”, several people suggested Worldcat, but @lynncorrigan suggested LibraryThing Local. Hmmm… here’s what it offers:

LibraryThing Local

That is, we can search by postcode, limit results to book related venues within a particular distance, and then filter down by library type, getting both a list of results back as well as a map view. On running a search, we also get a URL that contains the search term as a parameter:

http://www.librarything.com/local/place/mk7%206aa%2C%20uk

This was a good start, so I had a look around the LibraryThing APIs to see if an API was available for the local service. I couldn’t spot anything, so I sent @librarythingtim a tweet to check, and got a confirmation back that there was no API for that service… Hmmm…

At times like this, it’s often worth trying different searches, and clicking different links (such as the “All”, “Bookstore” and “Library” filters, as well as the distance setting), to see what happens to the page – whether the URL changes, for instance. It’s also worth doing a View Source on the page to see if there are any immediately obvious calls to AJAX web services that bring content into the page at those times when a setting change doesn’t cause the page to reload (with or without the same URL), but you suspect that there may have been a call to a webservice somewhere.

In the case of LibraryThing Local, clicking the links and changing the settings changed the displayed results without appearing to reload the page, but I wasn’t sure whether the change was just based on data stored within the web page, or whether the information was being pulled in from somewhere else.

Which is where developer tools can come in useful… One tool I use every now and again is a Firefox extension called Firebug. This debugging environment operates within Firefox, and among other things can be used to track calls that a web page makes back to a server, along with any arguments passed to the server, and any results returned from it. Clicking between the “All” and “Libraries” links suggested that corresponding data was being pulled in to the page from a LibraryThing server:

LibraryThing Local - ajax call

The call was being made to http://www.librarything.com/ajax_venuesNearUser.php and returned HTML data that could be displayed within the results listing:

LibraryThing Local - data return

Hmmm…

The next step was to see if I could construct a URL, without any session identifying variables, to pull similar data back, such as:
http://www.librarything.com/ajax_venuesNearUser.php?&q=mk76aa,uk&showvenue=2

This did indeed work. The HTTP call also suggested there was a parameter identified as d. Could this be for distance? I tried it, setting it to 2, then 3. It seemed to work:-)
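In other words, the pattern for the call seems to be something like this (a sketch based on what Firebug showed – q, showvenue and d are the only parameters I actually tested):

var base = "http://www.librarything.com/ajax_venuesNearUser.php";
var url = base + "?q=" + encodeURIComponent("mk7 6aa, uk") // postcode search term
        + "&showvenue=2"  // venue type filter, as seen in the original call
        + "&d=3";         // distance, at a guess - setting it to 2 or 3 seemed to work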

The next step was to try to pull the results into some sort of environment where I could play with it. The original motivation for getting this data was so that I could add libraries to a map of UK Online centres, based on my 5 Minute Hack – UK Centres Online Map, which used Yahoo pipes to geocode UK Online Centre addresses and generate a KML feed listing them in the vicinity of a particular postcode… So Yahoo pipes it was to be…

As well as RSS feeds, Yahoo Pipes can ingest XML and JSON data, as well as HTML pages. My first thought was to try to import the LibraryThing Local data as XML… but for some reason the Pipes Fetch Data block failed to parse it. Trying the Fetch Page block also failed to work… Hmmm…

At this point, I cast my mind around for other tools that might do the job. Google Spreadsheets? No; if the library data was in a table or a list, we might be able to use the importHTML formula, but it isn’t… Maybe I could try to import the HTML as XML…? Hmmm… XML… How about I try using YQL to parse the LibraryThing Local data, and then pull the result into the pipe?

YQL parsing LibraryThing Local data

Success:-)

Here’s the query I used:
select * from html where
url="http://www.librarything.com/ajax_venuesNearUser.php?showvenue=2&q=MK7+6AA&d=3" and xpath='//div[@class="venueItem"]'

Rather conveniently, there is a YQL block available in Yahoo Pipes that can be provided with a YQL query and will pull the result back into the Pipes environment. Now the pipework can begin… but what do we want it to do?

The recipe will follow similar lines to the UK Online centres map pipe: grab a postcode from the user (and the search radius for good measure); construct the LibraryThing Local URL to fetch the raw data; use YQL to get the data into the pipes environment; then identify the address of each library, geocode it, create a title and a piece of descriptive text (if required), then output the whole lot. Simple:-)

To start then, let’s construct the LibraryThing Local URL:

LibraryThing Local map - create the URL

The reason we use a Create URL block is so that the postcode is automatically escaped (e.g. if the user places space characters in the postcode when they enter it, these need handling appropriately).
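In Javascript terms, the Create URL block is doing the equivalent of:

// escape the user supplied postcode for safe inclusion in a URL
var postcode = "MK7 6AA";
var q = encodeURIComponent(postcode + ", uk"); // gives "MK7%206AA%2C%20uk"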

Having got the URL, we need to insert it into a YQL query – I use a bit of string replacing magic for that;-)

LibraryThing Local map - create the YQL query

Here’s the base part of the YQL query:
select * from html where
url="MAGIC" and xpath='//div[@class="venueItem"]'
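The string replacing magic just swaps the constructed LibraryThing URL in for the MAGIC placeholder; in Javascript the equivalent would be something like this (ltUrl stands in for the URL built by the Create URL block):

var template = 'select * from html where url="MAGIC" and xpath=\'//div[@class="venueItem"]\'';
var query = template.replace("MAGIC", ltUrl); // insert the fetched-data URL into the query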

So, now we’re in a position to grab the local library data from LibraryThing Local via YQL:

LibraryThing Local map - results from YQL

In the results, we see several separate items, one for each library, but within each item there are two “subresults”. Fortunately, these are “regular”, in that the subresults for each library are structured in the same way. Knowing how to handle the subresults required a little bit of Pipes trickery:

LibraryThing Local map - parsing results

It’s worth just looking at what’s going on in that block for a moment, and comparing it to the results that were coming from the YQL processor…

LibraryThing Local map - parsing results

All that remains to do now is geocode the address of each library, and output the result:

LibraryThing Local map - geocoding addresses

Here’s the result:

LibraryThing local map - pipe preview

You can see the pipe here: http://pipes.yahoo.com/ouseful/librarylocations

For details on how to use the output of this pipe to create your own embeddable maps, see the second half of 5 Minute Hack – UK Centres Online Map, in particular how to take the KML output of the pipe and use it to display the results in an embeddable Google map.

PS via @ostephens libraries lookup on People’s Network: “you can specify postcode and radius & get results as XML or route to xslt”

5 Minute Hack – UK Centres Online Map

So…

…earlier today…

jaggeree twitter post...

Note the time: 11.20

A few minutes later, I posted this:

in reply...

Again, note the time: 11.26

So what happened in between..?

1) I clicked through on Chris’ tweet to get to A little thing to help UK Online/Rewired State hackday / Jul 23rd 2010,

2) copied a link from there (http://ukonline-fakeapi.appspot.com/?postcode=ec1a4dd) to a JSON feed for a list of UK Online centres in the vicinity of a postcode – the little thing @jaggeree wrote to help the UK Online/Rewired State hackday on August 7th.

3) Went to Yahoo Pipes, created a new pipe, grabbed:
– a Fetch Data block that can parse a JSON feed,
– a Create URL block into which I pasted the link, which was then automatically parameterised for me,
– a Text Entry box for a postcode value
wired them together, and had a look at the output of the JSON feed:

UK Online centres - Pipe Web Address: http://pipes.yahoo.com/ouseful/ukonlinecentres

It was then easy enough to add a couple more blocks (a Loop block containing a Location Builder block) to geocode the address of each centre:

Geocoding an address in Yahoo Pipes

Note the trick. The trick is to assign the output of the Location Builder block to y:location. This allows Pipes to work a bit of magic… if a pipe detects y:location.lat and y:location.lon elements in feed items, it will generate a map view over the output of the pipe, and a link to a KML output from the pipe…
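In other words, the trick amounts to making sure each feed item carries something like this (the coordinates here are made up for illustration):

item["y:location"] = { lat: 52.02, lon: -0.71 }; // what Pipes looks for before offering map/KML views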

To finish off, let’s just create a feed description element to carry the phone number, email and address of the centre, and set the link to be its web address…

UK Online centres pipe - finishing off

Here’s the replacement term in the regular expression:
Phone: ${phone}<br />Email: ${email}<br />Address: ${address}

And if we now go to the homepage for the pipe – http://pipes.yahoo.com/ouseful/ukonlinecentres:

UK online centres pipe http://pipes.yahoo.com/ouseful/ukonlinecentres

Note that link to the KML…? Grab a copy of that:

right-click, copy link address

go over to http://maps.google.com, and paste the KML link into the search box… then hit Search Maps…

KML in google maps

(You can also load the KML feed into Google Earth…)

Once in the Google Maps environment, we can now grab a link to the Google mapped UK Online centres by postcode, or an iFrame embed code (which can be customised…)

Grabbing a link to, or iframe code for, a google map

If you grab the iframe code and go to somewhere like Netvibes, you can add an HTML widget to your dashboard:

netvibes html widget

paste the embed code into the widget:

Embed code in Netvibes universal widget

and then view the map on your Netvibes page (you may need to tweak the width and height attributes of the iframe…):

Map in Netvibes widget

So, what we have here is:
– @jaggeree’s API doing something magical and getting the data from somewhere…
– geocoding and KML publication via a Yahoo Pipe
– map rendering using Google maps
– display via Netvibes

And no coding… (at least, not by me…)

As ever, it has taken me far longer to write this post than it did to create the pipe and send the link back to @jaggeree…

PS the URL of the KML file that you can paste into the Google Maps search box has the form:
http://pipes.yahoo.com/ouseful/ukonlinecentres?_render=kml&postcode=mk7+6aa
for the postcode MK7 6AA. To use your own postcode, just edit this URL, replacing the space with a +, or omit it altogether (mk76aa). So for example, for the postcode CB3 9BB we would use the URL:
http://pipes.yahoo.com/ouseful/ukonlinecentres?_render=kml&postcode=CB3+9BB
or
http://pipes.yahoo.com/ouseful/ukonlinecentres?_render=kml&postcode=CB39BB
(upper/lower case is irrelevant) and just paste the URL into the Google maps search box.

Previewing the Contents of a JSON Feed in Yahoo Pipes

This post builds on the previous one (Grabbing the Output of a Yahoo Pipe into a Web Page) by describing a strategy that can help you explore the structure of a JSON feed that you may be pulling in to a web page so that you can identify how to address the separate elements contained within it.

This strategy is not so much for developers as for folk who don’t really get coding, and don’t want to install developer tools into their browser.

As the “Grabbing the Output of a Yahoo Pipe into a Web Page” post described, it’s easy enough to use JQuery to get a JSON feed into a web page, but what happens then? How do you work out how to “address” the various parts of the Javascript object so that you can get the information or data you want out of it?
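By way of a reminder, “getting it into the page” amounts to something like this (a sketch; it assumes the pipe’s JSON output accepts a _callback parameter, which lets JQuery treat the request as JSONP):

// fetch the JSON output of a pipe and do something with it
$.getJSON("http://pipes.yahoo.com/ouseful/ukonlinecentres?_render=json&postcode=mk7+6aa&_callback=?",
  function(data){
    // the pipe output now lives in the data object... but how is it structured?
  });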

Here’s part of a typical JSON feed out of a Yahoo pipe:

{"count":17,"value":{"title":"Proxy","description":"Pipes Output","link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=5273c18fa5e739feb13c0d93dc7f4160","pubDate":"Mon, 19 Jul 2010 05:15:55 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"","items":[{"link":"http:\/\/feedproxy.google.com\/~r\/ouseful\/~3\/9WBAQqRtH58\/","y:id":{"value":"http:\/\/blog.ouseful.info\/?p=3800","permalink":"false"},"feedburner:origLink":"http:\/\/blog.ouseful.info\/2010\/07\/19\/grabbing-the-output-of-a-yahoo-pipe-into-a-web-page\/","slash:comments":"0","wfw:commentRss":"http:\/\/blog.ouseful.info\/2010\/07\/19\/grabbing-the-output-of-a-yahoo-pipe-into-a-web-page\/feed\/","description":"One of the things I tend to take for granted about using Yahoo Pipes is how to actually grab the output of a Yahoo Pipe into a webpage. Here's a simple recipe using the JQuery Javascript framework to do just that. The example demonstrates how to add a bit of code to a web page […]","comments":"http:\/\/blog.ouseful.info\/2010\/07\/19\/grabbing-the-output-of-a-yahoo-pipe-into-a-web-page\/#comments","dc:creator":"Tony Hirst","y:title":"Grabbing the Output of a Yahoo Pipe into a Web Page","content:encoded":"One of the things I tend to take for granted about using Yahoo Pipes is how to actually grab the output of a Yahoo Pipe into a…

Yuck…

However, we can use the Yahoo Pipes environment to help us understand the structure and make up of this feed. Create a new pipe, and just add a “Fetch Data” block to it. Paste the URL of the JSON feed into the block, and now you can preview the feed – the image below shows a preview of the JSON output from a simple RSS proxy pipe, that just takes in the URL of an RSS feed and then emits it as a JSON feed:

Yahoo pipes JSON browser

(Note that if you find yourself using the Yahoo Pipes V2 engine, you may have to wire the output of the Fetch Data block to the output block before the preview works. You shouldn’t need to save the pipe though…)

When you load the feed into a webpage, if you assign the whole object to the variable data, then you will find the output of the pipe in the object data.value.

In the example shown above, the title of the feed as a whole will be in data.value.title. The separate feed items will be in the collection of data.value.items; data.value.items[0] gives the first item, data.value.items[1] the second, and so on up to data.value.items[data.value.items.length-1]. The title of the third feed item will be data.value.items[2].title and the description of the 10th feed item will be data.value.items[9].description.

This style of referencing the different components of the Javascript object loaded into the page is known as Javascript object dot notation.
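So, for example, to list the title of every item in the feed, we might write:

// walk the items collection using dot notation
for (var i = 0; i < data.value.items.length; i++) {
  console.log(data.value.items[i].title);
}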

Here’s a preview of a council feed from OpenlyLocal:

Preview an openly local council feed

In this case, we start to address the data at data.council, find the population at data.council.population, index the wards using data.council.wards[i] and so on.

Whitelisted Hashtag Retweeter Pipe

Last week, I got an email from Stuart with the following query:

I’m trying to find a way to enable people to post to the OU twitter account from their personal account by using a predefined hashtag. …
We agreed a hashtag #***** which kmi researchers are using from their account if they want to share information to the main OU account.  I pull an RSS feed of this into the OU account and retweet it.  I’m sure you can see the obvious loop that occurs!

… are you aware of anything that will let me retweet a hashtag and strip off that hashtag to avoid the loop?  It would be great to be able to add new hashtags in the future so it could be rolled out to other faculties who might wish to share their news via the OU account just by tweeting from individual faculty members’ accounts.

Here’s what I came up with…

hashtag filter pipe

The first part of the pipe takes the user defined hashtag and creates the URL that will run a search for that hashtag on Twitter, and the second part of the pipe fetches the feed. The Filter block will only pass through tweets that come from specified Twitter users (actually, that isn’t quite true… this pipe is gameable/spammable because of the way I use “contains” in the whitelist filter block… Can you see how?!;-) The Regular Expression block strips the hashtag out of the retweeted tweets.

For the pipe to work and not get into an infinite loop, the hashtag stripping isn’t actually necessary if we’re using the whitelist, because retweeters that make use of the pipe feed should not have their username in the whitelist… That is, if you’re running the whitelist, you can remove the Regular Expression block and leave the hashtag in the retweet feed. Conversely, if you don’t want to run the whitelist, you can just remove the Filter block, although in this case you will need the hashtag stripping Regular Expression block to prevent infinite retweets… Got that?!;-)
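For what it’s worth, here’s roughly what those two safeguards are doing, rendered in Javascript (the hashtag and usernames here are made up – the real ones aren’t published above):

var whitelist = ["alice", "bob"]; // hypothetical approved tweeters
// an exact match test would avoid the "contains" loophole teased above
var allowed = whitelist.indexOf(tweet.author) > -1;
// strip the trigger hashtag so a retweet can't re-trigger the pipe
var cleaned = tweet.title.replace(/#examplehash\b/gi, "");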

You can find the pipe here: Hashtag retweeter pipe

If you want a more “secure” version, i.e. one that does not reveal the identities of people in the whitelist, or the hashtag, use Private String blocks (example pipe below):

Making strings private to owner in Yahoo pipes

If you want to create your own hashtag retweeter pipe without having to clone and customise your own pipe, use this approach:

Customisable twitter retweet pipe

(NB if you leave either of the username slots blank, then tweets sent by anyone using the hashtag will be passed through the pipe and made available for retweeting.)


Searching the Backchannel – Martin Bean, OU VC, Twitter Captioned at JISC10

Other Martin’s been at it again, this time posting JISC10 Conference Keynotes with Twitter Subtitles.

The OU’s VC, Martin Bean, gave the opening keynote, and I have to admit it really did make me feel that the OU is the best place for me to be working at the moment :-)

… though maybe after embedding that, my days are numbered…? Err…

Anyway, I feel like I’ve not really been keeping up with other Martin’s efforts, so here’s a quick hack – a placemarker/waypoint in one of the directions I think the captioning could go: deep search linking into video streams (where deep linking is possible).

Rather than search the content, we’re going to filter captions for a particular video, in this case the twitter caption file from Martin (other, other Martin?!) Bean’s #JISC10 opening keynote. The pipework is simple – grab the URL of the caption file and a “search” term, parse the captions into a feed with one item per caption, then filter on the caption content. I added a little Regular Expression block just to give a hint as to how you might generate a deeplink into content based around the start time of the caption:

Filter based searching caption
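The regular expression is essentially doing something along these lines (a sketch – the #t= fragment style is the sort of thing Youtube supports for deep time links; other players will differ):

var start = "00:01:30"; // caption start time from the subtitle file
var p = start.split(":");
var secs = (+p[0]) * 3600 + (+p[1]) * 60 + (+p[2]);
var deeplink = videoUrl + "#t=" + secs; // deep time link into the video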

You can find the pipe here: Twitter caption search

One thing to note is that it may take some time for someone to tweet what someone has said. If we had a transcript caption file (i.e. a timecoded transcript of the presentation) we might be able to work out the “mean time to tweet” for a particular event/twitterer, in which case we could backdate timestamps to guess the actual point in the video that a person was tweeting about. (I looked at using auto-generated transcript files from Youtube to trial this, but at the current time, they’re rubbish. That said, voice search on my phone was rubbish a year ago, but by Christmas it was working pretty well, so the Goog’s algorithms learn quickly, especially where error signals are available. So bear in mind that if you do post videos to Youtube, and you can upload a caption file, as well as helping viewers, you’ll also be helping train Google’s auto-transcription service, because it’ll be able to compare the result of auto-transcription with your captions file… If you’re the Goog, there are machine learning/supervised learning cribs everywhere!)

(Just by the by, I also wonder if we could colour code captions to identify in a different colour tweets that refer to the content of an earlier tweet/backchannel content, rather than the foreground content of the speaker?)

Unfortunately, caption files on Youtube, which does support deep time links into videos, only appear to be available to video owners (Youtube API: Captions), so I can’t do a demo with Youtube content… and I really should be doing other things, so I don’t have the time right now to look at what would be required for deeplinking elsewhere…:-(

PS The captioner tool can be found here: https://mashe.hawksey.info/ititle (formerly http://www.rsc-ne-scotland.org.uk/mashe/ititle/)

Martin Hawksey, whose work this is, has described the evolution of the app in a series of posts here: http://www.rsc-ne-scotland.org.uk/mashe/?s=twitter+subtitles

Twitter Auto-translation Pipe

Popping into my Twitter feed yesterday was a reference from a hack day backchannel to a Twitter map pipe I use a lot in demos (see also Demonstrating Twitter in Conference Presentations). The tweet was tagged #brhackday, so of course I followed it, and then got stuck…

That’ll be br for Brazil then, I guess?

Anyway, driving home last night I remembered I’d messed around with a couple of language related pipes before (e.g. Filter Tweets by Language), so here’s one that does a bit of automagical translation:

We start off by reusing a couple of pipes – one to grab a Twitter search feed given a user supplied search term, the other to autodetect the language using the Google Language detector API (as described in the post mentioned above).

The next step is to split the tweets based on language – if they are already in the language we want them translated to, we don’t need to do any translation… For the tweets we do need to translate, we define the language pair (fromLanguage|toLanguage). The fromLanguage is provided by the language autodetector, the toLanguage is provided by the user.

The next step is to construct a URL that will call the Google language translation API again, this time with the text that needs to be translated along with the language mapping. (It may be that the API can do a language autodetect and then automagically handle the translation – but I thought it was worth unpicking the process in case you wanted to plug in a different language translation service, for example).
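The URL in question is for the Google AJAX Language API, and looks something like this (a sketch – worth checking the current API docs for quotas and the like):

var pair = fromLanguage + "|" + toLanguage; // e.g. "pt|en"
var url = "http://ajax.googleapis.com/ajax/services/language/translate?v=1.0"
        + "&q=" + encodeURIComponent(tweetText)
        + "&langpair=" + encodeURIComponent(pair);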

Finally, we merge the untranslated and translated streams, and sort the feed in reverse chronological time to make it a little bit more conventional:

So there you have it – an automagic twitter translator:


PS bah – pipe described above also needs a user input box for the twitter search term… oops!

Demonstrating Twitter in Conference Presentations

Every so often I see tweets go by along the lines of “demoing twitter – please say hi”, and I typically respond with a link to a Twittermap pipe I created some time ago that takes a URL for a set of Twitter search results and then tries to plot the location of each Twitterer based on their location setting in their Twitter profile:

Having to find the URL a) of an appropriate search, and b) of the feed of that search, is a bit of a pain though, so here’s a tweak:

Enter your username and conference hashtag (because these shout outs usually happen at hashtagged events, right?), some sort of hint as to how recent you want the tweets to be (you can also enter a date), and the pipework should do its stuff.

The URL for the pipe is of the form:

http://pipes.yahoo.com/ouseful/youtweetedme?u=USERNAME&h=HASHTAG

so for example:
http://pipes.yahoo.com/ouseful/youtweetedme?u=joedale&h=pls10

If you want a Google Maps version, use a URL of the form:

http://maps.google.com/maps
 ?q=http:%2F%2Fpipes.yahoo.com%2Fouseful%2Fyoutweetedme%3F_render%3Dkml
 %26h%3DHASHTAG
 %26t%3Dtoday
 %26u%3DUSERNAME

For practical use, it probably makes sense to bookmark the pipe and/or the Google map with the settings you require (in the case of the Google map, this might include setting the zoom level and central point of the map, and then grabbing the Google generated link for that map configuration).

So how does the pipe work? Lazily, that’s how – we just grab the required parameters and construct the URL that my original Tweetmap pipe required…

There’s an additional hack in the form of the Date Builder block, which is used to generate a by-the-second timestamp that is passed as an additional made up parameter to the Twitter search API in order to get round any caching issues in Yahoo Pipes (the normal caching means that if you’re running the pipe several times in a session, you may not see any new results… Note that the Google Maps views might become stale because of caching at the Google end…)
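The cache busting trick, in Javascript terms, is just this (the parameter name is made up – Twitter ignores parameters it doesn’t recognise):

var url = "http://search.twitter.com/search.atom?q=" + encodeURIComponent("#" + hashtag)
        + "&dummycachebuster=" + new Date().getTime(); // a fresh value each time, so every call looks new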

Feed Aggregation, Truncation and Post Labeling With Google Spreadsheets and Yahoo Pipes

Got another query via Twitter today for a Yahoo Pipe that is oft requested – something that will aggregate a number of feeds and prefix the title of each with a slug identifying the appropriate source blog.

So here’s one possible way of doing that.

Firstly, I’m going to create a helper pipe that will truncate the feed from a specified pipe to include a particular number of items from the feed, and then annotate the title with a slug of text that identifies the blog (Advisory: Truncate and Prefix).

The next step is to build a “control panel”, a place where we list the feeds we want to aggregate, the number of items we want to truncate, and the slug text. I’m going to use a Google spreadsheet.
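The spreadsheet just needs one row per blog, something like this (the column names and URLs here are mine – use whatever layout the pipe expects):

feedurl, numitems, slug
http://blog1.example.com/feed, 5, Blog One
http://blog2.example.com/feed, 3, Blog Two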

We can now create a second pipe (Advisory: Spreadsheet fed feed aggregator) that will pull in the list of feeds as a CSV file from the spreadsheet, and for each feed grab the feed contents, then truncate and badge them as required using the helper pipe:

To keep things tidy, we can sort the posts so they appear in the traditional reverse chronological order.

PS Hmmm… it might be more useful to be able to limit the feed items by some other criterion, such as all posts in the last two weeks? If so, this sort of helper pipe would do the trick (Advisory: Recent Posts and Prefix):

HTH:-)

Grabbing the JSON Description of a Yahoo Pipe from the Pipe Itself

In a series of recent posts (The Yahoo Pipes Documentation Project – Initial Thoughts, Grabbing JSON Data from One Web Page and Displaying it in Another, Starting to Think About a Yahoo Pipes Code Generator), I’ve started exploring some of the various ingredients that might be involved in documenting the structure of a Yahoo Pipe and potentially generating some programme code that will then implement a particular pipe.

One problem I’d come across was how to actually obtain the abstract description of a pipe. I’d found an appropriate Javascript object within an open Pipes editor, but getting that data out was a little laborious…

…and then came a comment on one of the posts from Paul Daniel/@hapdaniel, pointing me to a pipe that included a little trick he was aware of: a trick for grabbing the description of a pipe from a pipe’s pipe.info feed (e.g. http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=eed5e097836289dfb4e8586220b18e0e).

Paul used something akin to this YPDP pipe’s internals pipe to grab the data from the info feed of a specified pipe (the URL of which has the form http://pipes.yahoo.com/pipes/pipe.info?_id=PIPE_ID) using YQL:

http://query.yahooapis.com/v1/public/yql?url=http%3A%2F%2Fpipes.yahoo.com%2Fpipes%2Fpipe.info%3F_out%3Djson%26_id%3D44d4492a582d616bffda237d461c5ef4&q=select+PIPE.working+from+json+where+url%3D%40url&format=json

It’s just as easy to grab the JSON feed from YQL, e.g. using a query of the form:
select PIPE.working from json where url="http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=44d4492a582d616bffda237d461c5ef4". The pipe id is the id of the pipe you want the description of.

If you have a Yahoo account, you can try this for yourself in the YQL developer console:

We can then grab the JSON feed either from YQL or the YPDP pipe’s internals pipe into a web page and run whatever we want from it.
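For example, using JQuery (a sketch – the path into the results object is my guess from eyeballing the YQL console output, and PIPE_ID stands in for a real pipe id):

var q = 'select PIPE.working from json where url="http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID"';
$.getJSON("http://query.yahooapis.com/v1/public/yql?format=json&callback=?&q=" + encodeURIComponent(q),
  function(data){
    var pipedef = data.query.results.PIPE.working; // the modules and wires should live in here
  });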

So for example, the demo service I have set up at http://ouseful.open.ac.uk/ypdp/pipefed.php will take an id argument containing the id of a pipe, and display a crude textual description of it. Like this:

So what’s next on the “to do” list? Firstly, I want to tidy up – and further unpack – the “documentation” that the above routine produces. Secondly, there’s the longer term goal of producing the code generator. If anyone fancies attacking that problem, you can get hold of the JSON description of a pipe from its ID using either the YPDP internals pipe or the YQL query that are shown above.