
OUseful.Info, the blog…

Trying to find useful things to do with emerging technologies in open education and data journalism. Snarky and sweary to anyone who emails to offer me content for the site.

Category: Radical Syndication

DeliTV Now Lets You Tag ITV Programmes: Watch Corrie, Emmerdale and EastEnders on the Same DeliTV Channel

Last night, a post on Liam’s blog, “ITV on Boxee with a little help from Yahoo Pipes and Scotland”, described how the STV (ITV in Scotland) website has all manner of feed goodness on its watch-again programme pages; with a little bit of pipework, Liam was easily able to get the programmes playing in Boxee.

A couple of minutes’ tinkering with the DeliTV pipe, and you can now bookmark either a series page (with a URI like this: http://player.stv.tv/programmes/emmerdale/) or an individual programme page (with a URI like this: http://player.stv.tv/programmes/emmerdale/2009-09-08-1900) on delicious and then view either the series or the individual programme via a DeliTV programmed channel.
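The URL patterns are regular enough that the series/programme distinction can be made mechanically. Here’s a minimal Python sketch of that check, based only on the two example URIs above (the real detection logic lives inside the DeliTV Yahoo pipe, so treat this as an illustration rather than the actual implementation):

import re

# URL shapes taken from the examples above; assumed, not exhaustive.
SERIES = re.compile(r"^http://player\.stv\.tv/programmes/([^/]+)/?$")
EPISODE = re.compile(r"^http://player\.stv\.tv/programmes/([^/]+)/(\d{4}-\d{2}-\d{2}-\d{4})$")

def classify_stv(url):
    """Return ('episode', slug, slot), ('series', slug) or None."""
    m = EPISODE.match(url)
    if m:
        return ("episode", m.group(1), m.group(2))
    m = SERIES.match(url)
    if m:
        return ("series", m.group(1))
    return None

print(classify_stv("http://player.stv.tv/programmes/emmerdale/"))
print(classify_stv("http://player.stv.tv/programmes/emmerdale/2009-09-08-1900"))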

So what? So we can now use delicious to programme cross-terrestrial-channel channels (sic) of our own. So if you fancy a UK soaps channel, just bookmark this DeliTV channel definition page to your DeliTV channel – UK Soaps:

And here’s what’s on…

So what? So EastEnders is broadcast by the BBC on BBC One, and Coronation Street and Emmerdale are broadcast on the commercial STV/ITV network. Which is to say: if you fancy playing channel controller using content from the BBC, STV/ITV or Youtube, you can do so using Deli TV. Clever, eh?:-)

In series catch-up mode… so whether it’s EastEnders from BBC One:

or Coronation Street or Emmerdale from ITV (courtesy of STV):

all your programmes are belong to us:-)

So, would anyone like to pick up on Liam’s comment about DeliTV? :-)

Here’s the UK soaps DeliTV channel URL again: UK Soaps

Unfortunately, I think broadcast restrictions mean the programmes on this channel from STV/ITV and the BBC can only be viewed in the UK. If anyone from outside the UK would like to test DeliTV with your local video catch-up services, please get in touch. If any UK based educators would like to propose channels or video services that might be able to get through local authority firewalls so you can programme and watch teacher created (curated?) DeliTV channels in schools and FE colleges, please also get in touch:-)

PS a little bit of extra tinkering was required to get the BBC series catch-up working, and it’s a little brittle in that you have to bookmark the correct page for the series feed to be detected, but it’s a start.

Author: Tony Hirst | Posted on September 10, 2009 | Categories: Open Education, OU2.0, Radical Syndication | Tags: delitv, redefining television as we know it | 2 Comments

Deli TV – Personally Programmed Social Television Channels on Boxee: Prototype

[Please note, this post originally went out under the title of “Delicious TV”, which happens to be a trademarked “property”. If you’re looking for delicioustv.com (is their DTV identifier also trademarked, I wonder?), which serves up the Totally Vegetarian public television show, you need to go here. Sorry about that… ]

One of the things that I wanted to explore in the Digital Worlds online short course (T151 Digital worlds: designing games, creating alternative realities – registrations now open for October 2009 start;-) was how we might use Youtube video playlists as a way of pointing students towards an optional set of third-party video resources that could illustrate the various topics contained within the course. Here’s my first attempt at how we might deliver such a service using Boxee…

On the original Digital Worlds uncourse blog I explored various ways of using Splashcast to provide a single point of access to video content. In part based on that, I came up with an ad hoc set of requirements for handling video content in a relaxed way;-)

– a browser based or multiplatform delivery interface that allows users to watch video compilations on a TV/large screen in lean-back mode;

– a way of curating content and generating hierarchical playlists in which a course could have a set of topics, and each topic could contain one or more videos or video playlists. Ideally, playlists should be able to contain other playlists.

As a precursor to this, I had a little tinker with Boxee last week to produce a UK HEI Boxee Channel. The recipe was quite simple, and using a list of UK HEI user pages on Youtube generated a channel on Boxee that would let you browse the recent uploads from each HEI.

The list of HEI Youtube pages was originally scraped from a table on a third party web page, but in a comment to the original post I also demonstrated how the recipe could also be used to create a Boxee channel feed from a delicious bookmark list. In particular, I linked to a channel of UK Media Youtube channels, a channel of UK Government Youtube channels and a channel on differential equations built up from separate OER playlists on Youtube. To view the channels in Boxee, grab the RSS feed from the appropriate channel pipe and then subscribe to it in Boxee as a video content feed.

Can you see where we might go with that approach? That is, with this: I also demonstrated how the recipe could also be used to create a Boxee channel feed from a delicious bookmark list…

Deli TV

How about using delicious as a way of curating video playlists and viewing them in Boxee? This would offer quite a large amount of flexibility: if a playlist was based on a tag feed, users could generate many different playlists; if a playlist could contain another (delicious) playlist, one user could build their own playlists that contained nested playlists (e.g. a course playlist could contain separate topic playlists, or a separate playlist for each week of the course) or even other peoples’ playlists; ‘live’ playlists could be copied from one user to another – that is, if my playlist bookmarked one of your playlists, any changes you made to that playlist would show up whenever I watched your channel; and so on…
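To make the nesting idea concrete, here’s a rough Python sketch of how a playlist-in-a-playlist might be flattened, assuming delicious’s old per-user, per-tag RSS endpoint (http://feeds.delicious.com/v2/rss/USER/TAG) and the third-party feedparser library; the actual DeliTV logic lives in a Yahoo pipe, not Python:

import feedparser  # pip install feedparser

def expand_playlist(user, tag, depth=0, max_depth=3):
    """Recursively flatten a delicious playlist, following bookmarks
    that themselves point at other delicious playlists."""
    if depth > max_depth:  # guard against playlist cycles
        return
    feed = feedparser.parse("http://feeds.delicious.com/v2/rss/%s/%s" % (user, tag))
    for entry in feed.entries:
        if "delicious.com/" in entry.link:
            # crude: assumes the bookmark is a delicious.com/USER/TAG page
            parts = entry.link.rstrip("/").split("/")
            for item in expand_playlist(parts[-2], parts[-1], depth + 1, max_depth):
                yield item
        else:
            yield entry.title, entry.link

Because the expansion happens at read time, any change you make to a playlist that I’ve bookmarked shows up the next time my channel is viewed – which is the ‘live’ copying property described above.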

So here it is – Deli TV:

Here’s what’s on one of my channels:

You may notice that the channel contains the following separate sorts of content:

– programmes listed in a BBC iPlayer category feed (e.g. BBC Satire);
– a podcast feed (Wiley and Downes in Discussion);
– a particular Youtube video (New Model Army);
– a Youtube Playlist (MIT differential equations);
– recently uploaded videos to a particular user’s Youtube channel (the Guardian);
– another Deli TV playlist (psychemedia’s bookmarks).

(Not shown is a link to a particular programme on iPlayer, but that is also supported.)

So here’s how that channel was programmed:

Simply by bookmarking links to delicious…

To get started with your own Deli TV channel on Boxee, all you need is a Boxee account from Boxee.tv. Oh, and you’ll also need to download a Boxee client to your computer (Windows, Macs and Linux are all supported).

What next? That all depends on whether or not you have a delicious account…

If you do have an account on the delicious social bookmarking site then you will be able to programme your own Boxee channel by bookmarking programmes and playlists to your delicious account.

If you don’t have a delicious account, you can still programme a Deli TV channel by subscribing to someone else’s Deli TV playlist in Boxee.


If you DO NOT have a delicious account:

Have a look at http://delicious.com/tag/delitv to see who’s been bookmarking Deli TV content on delicious. (For example, my Deli TV empire is based here: http://delicious.com/psychemedia/delitv ;-)

Use the name of the user whose Deli TV channel you want to subscribe to in the following URL:
http://pipes.yahoo.com/ouseful/delitv?_render=rss&q=DELICIOUS_USERNAME

So for example, my feed is at:
http://pipes.yahoo.com/ouseful/delitv?_render=rss&q=psychemedia

Subscribe to the URL in Boxee:

Now fire up your Boxee client, go to the pop-out Applications menu on the left hand side of the screen and select Video, then choose My Video Feeds:

You should now be able to view the Deli TV channel you subscribed to.


If you DO have a delicious account:

The top level menu of your Boxee/Deli TV channel will contain those items you have tagged delitv in delicious.

Subscribe to the following Deli TV feed in Boxee:

http://pipes.yahoo.com/ouseful/delitv?_render=rss&q=DELICIOUS_USERNAME

where DELICIOUS_USERNAME is your delicious username.

At the current time, you can bookmark any of the following (a rough classification sketch follows the list):

  • a particular Youtube video
    (http://www.youtube.com/watch?v=YC8Kk9nEM0Y);
  • a Youtube Playlist
    (http://www.youtube.com/view_play_list?p=11DBE3516825CD0F);
  • recently uploaded videos to a particular user’s Youtube channel (http://www.youtube.com/user/bisgovuk);
  • programmes listed in a BBC iPlayer category feed
    (e.g. http://www.bbc.co.uk/programmes/genres/drama/thriller);
  • another Deli TV playlist
    (http://delicious.com/psychemedia/t151boxeetest);
  • an MP3 file
    (e.g. http://www.downes.ca/files/audio/downeswiley4.mp3);
  • a “podcast” playlist
    (http://delicious.com/psychemedia/wileydownes+opened09).
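As a rough illustration of how those content types might be told apart from the bookmarked URL alone, here’s a hedged Python sketch using the example URLs above (the real classification happens inside the DeliTV pipe, and is presumably more forgiving than this):

from urllib.parse import urlparse, parse_qs

def classify(url):
    u = urlparse(url)
    qs = parse_qs(u.query)
    if "youtube.com" in u.netloc:
        if u.path == "/watch" and "v" in qs:
            return "Youtube video"
        if u.path == "/view_play_list" and "p" in qs:
            return "Youtube playlist"
        if u.path.startswith("/user/"):
            return "Youtube user channel"
    if "bbc.co.uk" in u.netloc and "/programmes/genres/" in u.path:
        return "iPlayer category feed"
    if "delicious.com" in u.netloc:
        return "Deli TV playlist (rendered as a submenu)"
    if u.path.endswith(".mp3"):
        return "MP3 enclosure"
    return "unrecognised"

print(classify("http://www.youtube.com/watch?v=YC8Kk9nEM0Y"))        # Youtube video
print(classify("http://delicious.com/psychemedia/t151boxeetest"))    # Deli TV playlist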

If you bookmark another Deli TV feed, that will be rendered as a submenu in Boxee.

You can also bookmark other people’s Deli TV pages.


Feedback

If you run into any problems with Deli TV, please post a comment below. At the moment, Deli TV is very much in testing, so all feedback is welcome.

If you are outside the UK, then the BBC iPlayer links will not work for you. However, links to US based video streaming services may work for you (if you try them and they do, or don’t, please let me know via a comment below:-)

I haven’t tried the service with watch again content from ITV, Channel 4, or Channel 5 in the UK – anyone know if Boxee supports these yet (or is likely to in the near future?)

I don’t think Boxee has a mobile client, which is a shame; if anyone knows of a mobile video browser that can consume Boxee RSS feeds, please let me know… :-)

If anyone with a design flair would like to help me out with the design for a simple homepage for Deli TV, or a fully blown Deli TV Boxee app, please get in touch… :-)

If anyone is a patent troll who claims to have already got a monopoly over this sort of thing, f**k off – it was obvious and trivial given the current state of the tech, and I didn’t need (indeed, I haven’t even seen) your crappy patent in order to figure it out…

PS so why the name change from Delicious TV to Deli TV? – See My Boxee “Delicious TV” Gets a Trademark Infringement Warning.

Author: Tony Hirst | Posted on September 2, 2009 | Categories: Open Education, Radical Syndication, Tinkering | Tags: bbc iplayer, Boxee, delitv, RSSisDead, youtube | 32 Comments

RSS is Dead… Long Live RSS

Which makes more sense to you as a call to action? Doing: [image]

Hence: [image]

RSS subscription hasn’t worked in the browser, or on the Windows desktop… are we trying to syndicate the wrong sort of content? Or using the wrong tone? Certainly, I suspect the RSS icon means little or nothing to most people; and even for those who do know what it refers to, how much use do they make of it?

Author: Tony Hirst | Posted on August 9, 2009 | Categories: Radical Syndication, Thinkses | 18 Comments

Content Transclusion: One Step Closer

Following a brief exchange with @lesteph last night, I thought it might be worth making a quick post about the idea of content or document transclusion.

Simply put, transclusion refers to the inclusion, or embedding, of one document or resource in another. To a certain extent, embedding an image or Youtube video in a page is a form of transclusion. (Actually, I’m not sure that’s strictly true? But it gets the point across…)

Whilst doing a little digging around for references to fill out this post, I came across a nicely worked example of transclusion from Wikipedia – Transclusion in Wikipedia:

[image: content transclusion in Wikipedia]

The idea? You can embed the content of any Wikipedia page in any other Wikipedia page. And presumably the same is true within any MediaWiki installation.

That is, in a MediaWiki wiki:

you can embed the content of any one page in any other page, using the {{:Page name}} transclusion syntax.

(I’m not sure if one MediaWiki installation can transclude content from any other MediaWiki installation? I assume it can???)

It’s also possible to include (that is, transclude) MediaWiki content in a WordPress environment using the Wiki Inc plugin. A compelling demonstration of this is provided by Jim Groom, who has shown how to republish documentation authored in a wiki via a WordPress page, an approach we adopted in our WriteToReply Digital Britain tinkerings.

One of the things we’ve started exploring in the JISCPress project is the ability to publish each separate paragraph in a document (each with its own URI) in a variety of formats – txt, JSON, HTML, XML. That is, we have (or soon will have) an engine in place that supports the “publishing” side of paragraph-level transclusion of content from reports published via the JISCPress/WTR platform. Now all we need is the transclusion (re-presentation of transcluded content) part, to be able to transclude content from one document in another. (See Taking the Conversation Elsewhere – Embedded Quotes; see also Image Based Quotes from WriteToReply Using Kwout for a related mashup.)
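To give a flavour of the consuming side, here’s a speculative Python sketch; the paragraph URI and the JSON field names are made up for illustration, since the JISCPress per-paragraph URL scheme isn’t spelled out here:

import json
from urllib.request import urlopen

# Hypothetical per-paragraph JSON resource; the real JISCPress/WTR
# URL scheme and field names will almost certainly differ.
PARA_URI = "http://example.writetoreply.org/digitalbritain/para/2.3?format=json"

def transclude(uri):
    """Fetch a remote paragraph and wrap it for in-page inclusion."""
    with urlopen(uri) as resp:
        para = json.load(resp)
    # assume the resource carries at least 'content' and a canonical 'uri'
    return '<blockquote cite="%s">%s</blockquote>' % (para["uri"], para["content"])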

(Hmm, although Joss won’t like this, I do think we need a [WTR-include=REF] shortcode handler installed by default in WTR/JISCPress that will pull in paragraph level content in to one document from a document elsewhere on the local platform?)

Now this is really what hypertext is about – URIs (that is, links), that can act as a portal that can pull content in to one location from another. It may be of course that the idea of textual transclusion is just too confusing for people. But it’s something we’re going to explore with WriteToReply.

And one of the things we’re looking at for both WriteToReply and JISCPress is the use of semantic tagging to automatically annotate parts of the document (at the paragraph level, if possible?) so that content on a particular topic (i.e. tagged in a particular way) in one document can be automatically transcluded in – or alongside – a related paragraph in a separate document. (Hmm – maybe we need a ‘related paragraphs’ panel, cf. the comments panel, that can display transcluded, related paragraphs from elsewhere in the document or from other documents?)

PS If you have an hour, here’s the venerable Ted Nelson giving a Google Tech Talk on the topic of transclusion:

Enjoy…

PPS here’s an old library that provides a more general case framework for content transclusion: Purple Include. I’m not sure if it still works though?

PPPS Here’s the scary W3C take on linking and transclusion ;-) This is also interesting: auto/embed is not node transclusion

PPPPS for another take on including content by reference, see Email By Reference, Not By Value, or “how I came up with the idea for Google Wave first”;-)

PPPPPS Seems like eprints may also support transclusion… E-prints – VLit Transclusion Support.

Author: Tony Hirst | Posted on August 7, 2009 | Categories: Radical Syndication, WriteToReply | Tags: Actually, JISCPress, transclusion | 7 Comments

Feed Powered Auto-Responders

A few weeks ago, I got my first “real” mobile phone, an HTC Magic (don’t ask; suffice to say, I wish I’d got an iPhone:-( ), and as part of the follow up service from the broker (Phones4U – I said I might be tempted to recommend them, so I am) I got a ‘will you take part in a short customer satisfaction survey’ type text message.

So when I responded (by text) I immediately got the next message in the sequence back as a response.

That is, the SMS I sent back was caught and handled by an auto-responder, that parsed my response, and automatically replied with an appropriate return message.

Auto-responders are widely used in email marketing and instant messaging environments, of course, and as well as acting in a direct response mode, can also be used to schedule the delivery of outgoing messages either according to a fixed calendar schedule (a bulk email to all subscribers on the first of the month, for example) or according to a more personalised, relative time schedule.

So for example, a day or two after getting my new phone, Vodafone started sending me texts about how to use my phone on their network*, presumably according to a schedule that was initiated when I registered the phone for the first time on the network; and the Phones4U courtesy chase up was presumably also triggered according to some preset schedule.

* something sucks here, somewhere: I keep finding my phone has connected to other, rival networks, and as such seems to spend large amounts of its time roaming, even when in a Vodafone signal area. Flanders – you owe me for making such a crappy recommendation… and Kelly, you have something to answer for, too…

So, these auto-scheduled, auto-responding systems are exactly the same idea as daily feeds: whenever you subscribe, a clock starts ticking and content is delivered to you according to a predefined schedule via that same channel.

In a true autoresponder, of course, the next mailing in a predefined sequence is sent in response to some sort of receipt from the recipient, rather than a relative time schedule, and in the case of autoresponding feeds this can be supported too if the feed scheduler supports unique identifiers for each subscription.

(The simplest daily feed system has a subscription URL that contains the start date; content is then delivered according to a relative time schedule that starts on the date contained in the subscription URL. A more elaborate syndication platform would use a unique identifier in the subscription URL, and the content delivery schedule is then tied to the current state of the schedule associated with that unique identifier.)
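As a toy illustration of that first, simplest scheme, assuming the start date is lifted straight out of the subscription URL (everything else here is invented):

from datetime import date

COURSE_ITEMS = ["Welcome", "Topic 1", "Topic 2", "Topic 3"]  # dummy content

def items_for(start):
    """start is the date embedded in the subscription URL, e.g. '2009-07-01';
    one item is released per elapsed day."""
    days_elapsed = (date.today() - date.fromisoformat(start)).days
    return COURSE_ITEMS[: max(0, days_elapsed + 1)]

# e.g. a request to /dailyfeed/2009-07-01/rss would serve items_for("2009-07-01")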

So how might a feed autoresponder work? How about in the same way as a feed stats package such as Feedburner? These measure ‘reach’ by inserting a small image at the very end of each feed item that is loaded whenever the feed item is viewed. By tracking how many images are served, it’s possible to get an idea of how many times the feed item was viewed.

The same mechanism can be used as part of a feed auto-responder system: for a subscription via a URI that contains a unique identifier, serve an image with a unique, obfuscated (impossible to guess at, and robots excluded) filename for each item. When the image is polled from a browser client, assume that the subscriber has read that item and publish the next item to the feed after a short delay. The next time the user visits their feedreader, the next item should be there waiting for them.
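Here’s a minimal sketch of that trigger mechanism, using Flask purely as a convenient example server (the post doesn’t prescribe an implementation, and feed generation, persistence and the robots exclusion are all elided):

import secrets
from flask import Flask, send_file

app = Flask(__name__)

# Toy in-memory state: how far each subscription token has progressed,
# and which obfuscated image name currently belongs to which token.
progress = {}  # token -> index of the item most recently released
pixels = {}    # image name -> token; seeded when a subscription is created

@app.route("/pixel/<name>.gif")
def pixel(name):
    """The 1x1 image embedded at the end of the current feed item. A hit
    means the item was (probably) read, so release the next item."""
    token = pixels.pop(name, None)
    if token is not None:
        progress[token] = progress.get(token, 0) + 1
        # mint a fresh, unguessable image name for the next item
        pixels[secrets.token_urlsafe(16)] = token
    return send_file("1x1.gif", mimetype="image/gif")  # assumes the gif exists on disk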

PS Note that someone somewhere has probably patented this, although as a mechanism it’s been around and blogged about for years (prior art doesn’t seem to be respected much in the world of software patents…) If you have a reference, please provide a link to it in the comments to this post.

Author: Tony Hirst | Posted on July 20, 2009 | Categories: Radical Syndication, Thinkses | Tags: feed based autoresponder, RSS autoresponder

Mashlib Pipes Tutorial: 2D Journal Search

[This post is a more complete working of Mash Oop North – Pipes Mashup by Way of an Apology]

Keeping current with journal articles in a particular subject area is one of the challenges that faces many researchers, and by implication the academic and research librarians tasked with supporting the information needs of those researchers.

This relatively simple recipe shows how to create a “two dimensional” search that allows a user to provide two sets of keywords, one to identify a set of journals in a particular subject area, the other to filter the current articles down to a particular subtopic in that subject area.

What this demo shows is:
– how to pull a list of journals in a particular subject area, identified by user-provided keywords, into the Yahoo Pipes environment;
– how to pull the most recent table of contents for each of those journals into that environment;
– how to then filter those recent articles to only display articles on a particular subtopic.

The starting point for this recipe is jOPML, a service created by Scott Wilson that allows you to run a keyword search on the titles of journals whose tables of contents are made available as RSS on ticTOCs, and generate an OPML feed containing the RSS feed URLs for those journal TOCs. (OPML is an XML formatted language that, among other things, can be used to transport bundles of RSS feed URLs around the web. In much the same way that RSS is one of the most effective ways of transporting sets of links to web pages around the web (as, for example, in the case of RSS feeds from social bookmarking sites such as delicious), so OPML is one of the best ways of moving collections of RSS links around.)

Now as well as consuming RSS feeds, Yahoo Pipes can also pull in other data formats. So for example:

– the Fetch Feed block can pull in a wide variety of RSS flavoured formats (different versions of RSS, Atom etc); [Handy tip – a pipe that just wires a Fetch Feed block direct to the pipe output can be used to “normalise” different flavours of RSS/Atom in order to provide a single, standard feed format at the output of the pipe.]

– Fetch Data can be used to import XML and JSON into the pipes environment (with Fetch CSV pulling in CSV data files, from sources such as Google Spreadsheets);

– Fetch Page can be used to load HTML web pages into Yahoo Pipes, providing the means by which to develop simple screen scraping applications within the Pipes environment.

What this means is that we can pull in the OPML file generated by jOPML into the Yahoo Pipes environment and have a play with it :-)

So let’s see how. To start with, we need to find a way of getting arbitrary OPML files out of jOPML. Running a search for science history on jOPML returns:

with the OPML available here: http://jopml.org/feeds.opml?q=science+history

Looking at this URI, you’ll hopefully see that it contains the search terms used to query the journals database on jOPML. In effect, the URI is an API to the jOPML service. By rewriting the URI, we can make different calls on the jOPML service, and return different OPML files for different topic areas.
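In code, “the URI is an API” boils down to string building. A tiny Python sketch (the jOPML service itself may, of course, no longer be running by the time you try this):

from urllib.parse import urlencode

def jopml_url(keywords):
    return "http://jopml.org/feeds.opml?" + urlencode({"q": keywords})

print(jopml_url("science history"))
# -> http://jopml.org/feeds.opml?q=science+history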

AS-AN-ASIDE TAKE HOME POINT: many URIs effectively provide an API to a web service. If you ever see a search form, run some queries using it, and look at the URIs of the results pages. If you can see your search terms in the URI, you are now in a position to construct your own queries to that service simply by using the URI, rather than having to go via the search form.

Here are a couple of services you can try this out with:
– Google: http://google.com;
– Twitter: http://search.twitter.com.
Remember, the goal is to:
1) run a search;
2) look at the URI of the results page and see if you can spot the search terms;
3) try to hack the URI to run a search using a new set of search terms.
Are there any other hackable items in the URI? For example, run several Twitter searches returning different numbers of search results, and look at the URI in each case. Can you see how to hack it to return the number of results items that you want? (Note that there is a hard limit set at the Twitter end that defines the maximum number of results that can be returned.)

It’s not just search terms that appear in URIs either. For example, the ISBN Playground will generate a wide variety of URIs that are all “keyed” using an arbitrary ISBN. (Actually, that’s not quite true; many of them require ISBN 10 format ISBNs. But there are ways around that, as I’ll show in a later post…) If I’m missing any URIs you know of that contain ISBNs, please let me know in a comment to this post ;-)

Anyway, that’s more than enough of that! Let’s go back to the 2D journal search recipe, and let the pipework begin…

The main idea behind Yahoo Pipes is to “wire” together different components in order to complete some sort of task. When you create a new Yahoo pipe you are presented with an empty canvas that dominates the screen, on which to create your “pipe”, and a menu area on the left that contains the different blocks that you can use to create your pipe.

Blocks are added to the canvas either by dragging them from the menu area and dropping them on the canvas, or by clicking the + symbol on the block you want in the menu area, which adds it to the canvas automatically.

Blocks are wired together by clicking on the circle on the bottom of a block and dragging and dropping the “wire” onto a circle on the top of the next block in your pipe.

The idea is that content flows through one block into the next, entering the block along its top edge, being processed by the block as appropriate, and then passing out through the bottom edge of the block.

Blocks that do not have an input circle on the top edge are used to pull content into the pipe from elsewhere. (These can be found in the Sources part of the menu panel.)

In contrast, the Pipe Output block does not have any circles on its lower edge – the output from this block is exposed to the outside world on the pipe’s public home page. (The single Pipe Output block is added to the canvas automatically when you add an input block. Pipes can have multiple input blocks, but only one output block.)

(If this sort of interaction design appeals to you – that is, “wiring” separate components in some sort of linear workflow together – a Javascript library is available that implements the drag, drop and wire features so you can implement an interface similar to the Yahoo Pipes interface in your own web applications: WireIt – a Javascript Wiring Library. To see WireIt in action, check out Tarpipe.)

So where do we start? The first thing to do is to construct the URI to the OPML feed that we can then use to pull in the OPML feed for a set of journals on a particular topic:

If you highlight a block by clicking on it, it will glow orange. You can then inspect the output from just that block by looking in the preview pane at the bottom of the screen:

The “Journal keywords (text)” block is actually a Text input block:

The URL Builder constructs a URL from the fixed elements of the URI (the page location http://jopml.org/feeds.opml and the query variable q) and the user-inputted search terms. The user inputs are exposed as text entry boxes on the front page of the pipe, as well as by arguments in the URI for the pipe (e.g. in the same way that the query terms appear in the jOPML URIs).

In order to import the contents of the jOPML file, we can use the Fetch Data block.

To see what we’ll be working with, here’s what an original OPML file looks like:

If we load this XML file into Pipes, we need to tell the Fetch Data block which parts of the OPML file it should use as separate items within the pipe. Looking at the OPML file, we ideally want each journal to be represented as a separate item within the pipe. We do this by specifying the path to the outline element in the OPML feed, noting that each journal listing is represented using a separate outline element.

Within the pipes environment, the OPML file is represented as follows:

Each outline element contains information regarding a single journal – its title, xmlUrl, and so on. The xmlUrl element contains the URI of the RSS feed for the contents of the current issue of the particular journal. You’ll see that the xmlUrl points to the RSS feed of the journal on the publisher’s site.

So for example, the RSS version of the TOCs for the journal The British Journal for the History of Science can be found at http://journals.cambridge.org/data/rss/feed_BJH_rss_2.0.xml.
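Outside of Pipes, the equivalent of the Fetch Data step is a few lines of standard-library Python; note that in raw OPML the journal details are carried as attributes of each outline element:

import xml.etree.ElementTree as ET
from urllib.request import urlopen

def journals_from_opml(opml_url):
    """Yield (title, TOC feed URL) pairs, one per outline element."""
    with urlopen(opml_url) as resp:
        tree = ET.parse(resp)
    for outline in tree.iter("outline"):
        if outline.get("xmlUrl"):
            yield outline.get("title") or outline.get("text"), outline.get("xmlUrl")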

Now you could of course subscribe to all these journal table of contents feeds simply by importing the OPML file into an RSS reader such as Google Reader, but where would the fun be in that? After all, most of the time I’m not actually that interested in most of the articles in any particular journal. For example, it would be far more efficient (?!) if I was only alerted to articles that were in my subject area. So let’s see how to do that…

The Loop block lets us work with each item in the selected journals feed. Essentially, it says “for each item in a feed, do something to or with that item”. (For each is a really powerful idea in computational thinking. It does pretty much exactly what it says on the tin: for each item in a list, do something with it. In the Yahoo Pipes environment, the Loop block essentially implements for each):

You’ll see that the loop block has a space for adding another block – the block whose functionality will be applied to each element in the incoming feed. As well as placing ‘standard’ pipes blocks taken from the blocks menu in a Loop element, you can also use pipes you have created yourself.

If we embed a Fetch Feed block in the Loop, then for each journal item identified in the imported OPML feed, we can locate its TOCs RSS feed URI (the xmlUrl element) and use it to fetch the contents of that feed.

Now you may notice that the Loop block can output the results of the Fetch Feed call in one of two ways: it can either annotate the original feed items, for example by assigning (that is, adding) the current list of contents for a journal to a subelement of each item in the pipe:

In more abstract terms, we might represent that as follows:

Or by emitting the items, which is to say that each item that comes into the Loop block is replaced by the set of items that were created within the Loop block:

Here’s how that looks diagrammatically:

Because I want to produce a feed that just contains links to articles that may be of interest to me, we’re going to use the “emit all results” option.
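If it helps, here’s a loose Python analogue of the two output modes, using the feedparser library and the BJHS feed above as a one-journal example:

import feedparser

journals = [("BJHS", "http://journals.cambridge.org/data/rss/feed_BJH_rss_2.0.xml")]

# "annotate": keep one item per journal, attach its articles as a subelement
annotated = [{"title": title, "toc": feedparser.parse(url).entries}
             for title, url in journals]

# "emit all results": each journal item is *replaced* by its articles
emitted = [entry for _, url in journals
           for entry in feedparser.parse(url).entries]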

So let’s just recap where we are. Here’s the pipe so far:

We start by taking some user keyword terms and construct a URI that calls the jOPML service, returning an OPML file that contains the titles and TOC RSS URLs of journals related to those keywords. We then loop through that list of journals, replacing each journal item with a list of items corresponding to the current table of contents of each journal. These items are pulled from the table of contents RSS feed for each journal as obtained from the ticTOCs listings.

The next step is to filter the contents list so that we only get passed journal articles on a particular topic. We’ll do that using a crude keyword filter that only lets articles through whose contents contain a particular keyword or set of keywords.

Taking the Filter block, we wire in another user input that allows the user to specify keywords that must appear in the title element of an article for it to be emitted from the pipe, and wire the output from this filter to the output of the pipe.

So there we have it: a 2D search that takes two sets of keywords, one set that pulls out likely suspect journals on a topic, and the second set that filters articles from those journals on a more detailed subject.

The output from the pipe is then available as an RSS feed in its own right, as a Google personal (iGoogle) widget, etc etc.

The whole pipe looks like this:

It works by generating a jOPML URI based on user provided keyword terms, importing the jOPML feed into the pipe, grabbing the RSS feed of the table of contents for each journal specified in the OPML feed and then filtering those contents listings using another set of keyword terms based on the title of each article.

In doing so, you have seen how to use the URL Builder block to construct the jOPML URI using user provided search terms entered via a Text Input block; the Fetch Data block to grab the jOPML XML feed; the Loop and Fetch Feed blocks to pull in the table of contents RSS feed from the journal publisher for each journal identified in the jOPML feed; and the Filter block to pass through only those articles that contain a second set of user specified keywords in the article title.
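By way of a summary, here’s the whole pipe re-sketched in Python (feedparser plus the standard library); it assumes the jOPML service and the publisher TOC feeds are still reachable, which may well no longer be the case:

import feedparser
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

def journal_2d_search(journal_keywords, article_keyword):
    # URL Builder + Fetch Data
    opml_url = "http://jopml.org/feeds.opml?" + urlencode({"q": journal_keywords})
    with urlopen(opml_url) as resp:
        tree = ET.parse(resp)
    for outline in tree.iter("outline"):
        feed_url = outline.get("xmlUrl")
        if not feed_url:
            continue
        for entry in feedparser.parse(feed_url).entries:  # Loop + Fetch Feed
            title = entry.get("title", "")
            if article_keyword.lower() in title.lower():  # Filter
                yield title, entry.get("link", "")

for title, link in journal_2d_search("science history", "darwin"):
    print(title, link)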

Enjoy :-)

PS if I manage to blag being able to run a Library Mashup uncourse in the Autumn, this is about the level of detail of post I was planning to write. So – too much detail? Not enough? Just right? How is it for levelling? Appropriate for a ‘not necessarily techie, but keen to learn’ audience?

Author: Tony Hirst | Posted on July 9, 2009 | Categories: Pipework, Radical Syndication, Tutorial | Tags: jOPML, mashlib, mashlib09, ticTOCs | 8 Comments

Ordered Lists of Links from delicious Using Yahoo Pipes

One of the things that I often use the delicious social bookmarking service for is to push lists of links into web pages, web dashboards, or the feedshow link presenter. However, sometimes it’s important to be able to push the links in a particular order (particularly for the link presenter) rather than the order in which the links were bookmarked (i.e. order by timestamp based on when the bookmark was saved).

So a couple of days ago it occurred to me that I should be able to do this with a simple Yahoo Pipe, by using tags to encode the order of the links and sorting on those. So for anyone who remembers programming in BASIC, and numbering the lines 10, 20, 30 (or 100, 200, 300) to give yourself “room” to insert additional lines, the following convention may be familiar…

STEP 1: tag your links according to the convention: ORDERLABEL:nnn. So for example, to provide raw testing material for my pipe I tagged three links with the following variants: orderA:1000, orderB:120, orderC:103; orderA:3000, orderB:110, orderC:102; and orderA:2000, orderB:130, orderC:101. It also makes sense to tag each item with just ORDERLABEL, so you can pull out just those items from delicious.

STEP 2: build the pipe. My idea here was to grab the list of tags for each link as a single string, use a regular expression to parse out just the sequence number from the string, given the order label (e.g. orderA, orderB or orderC in my test case), and then sort the feed on those numbers…

Unfortunately, delicious doesn’t emit all the tags in a single element (at least, not as far as Yahoo! Pipes are concerned):

And even more unfortunately (for me at least), I don’t know an effective way of combining these sub-elements into a single element? (The Sub-element pipe operator will convert every item in each category subelement list to an element in its own right, but that’s not a lot of use, as I don’t know how to copy the title, link and description elements into each category subelement…)

So what to do?

Well, it turns out you can use this sort of construction in a regular expression block:
${category.0.content}
which says “use the content of the 0’th category subelement”.

Which means in turn that if I refer to each of n tags explicitly (as in: ${category.n.content}), I can construct a single string containing all n categories (i.e. all the tags in a single string).

We copy the title element as an element of convenience to create an order element within the feed. The string block constructs a single replacement string for the regular expression that will replace the original contents of the order element with the content element from the first 16 category subelements. Following the regular expression replacement, the order element now contains up to the first 16 tags associated with the element in a single string.

The next step is to filter the feed so that we only pass elements that contain tags that are based on the ORDERLABEL root (in this case, I am sorting on things like orderA:1000, orderA:2000, etc):

(Remember that we could use another tag (I used orderedfeedtest in this example) to pull in all the orderA:nnn tagged bookmarks.)

The appropriately order-number-tagged elements are then processed so that the order element is rewritten with just the “line number” for each feed item (so e.g. orderA:2000 would become 2000), and the items in the feed are sorted using this element.
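Here’s the same sort-on-tag idea sketched in Python with feedparser which, unlike Pipes, hands you all of an item’s category elements as a list, so the ${category.n.content} workaround isn’t needed (the feed URL follows the old feeds.delicious.com pattern):

import re
import feedparser

ORDER = re.compile(r"^orderA:(\d+)$", re.I)  # the ORDERLABEL:nnn convention

feed = feedparser.parse("http://feeds.delicious.com/v2/rss/psychemedia/orderedfeedtest")

def line_number(entry):
    for tag in getattr(entry, "tags", []):
        m = ORDER.match(tag.term)
        if m:
            return int(m.group(1))
    return None

numbered = [(line_number(e), e.title, e.link) for e in feed.entries]
for n, title, link in sorted(t for t in numbered if t[0] is not None):
    print(n, title, link)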

By specifying the appropriate ordering label, we can force the order in which feed items are displayed:

And then:

You can find the pipe here: delicious feed ordered by “tag line numbers”.

Author: Tony Hirst | Posted on April 22, 2009 | Categories: Pipework, Radical Syndication, Tinkering | 7 Comments

Mashing Up Government the RSS Way: Raw Materials

Three or four weeks ago, @adrianshort tipped me off about a campaign he was trying to put together to encourage local councils to start publishing autodiscoverable RSS feeds on their homepages. Various overcommitments of my own meant I couldn’t contribute anything to this initiative, but it’s great to see it up and running now at Mash the State:

So how does my local council do?

Boo – no autodiscoverable feeds on their homepage… (I wonder whether it might be an idea to have a link to the council page that is being checked for autodiscoverable links, so that people can see which page it actually is and scout around it for non-autodiscoverable feeds?)

Although the campaign is targeted at encouraging councils to publish RSS news feeds, there’s a range of other feeds that they could usefully publish too, potentially without too much effort.

For example, councils can make use of the Planning Alerts service, which scrapes planning info from local council websites (presumably it would get the data via feeds if the data were made available that way? [Update: the link is there, I just hadn’t noticed it – the name of the council in the body text is a link to the assumed council home page.]):

These feeds include geo-data too, which means you can plot the feed on a map:

(I started exploring an even richer planning map for the IW Council, who provide (albeit in a hard to find way) audio recordings of council planning meetings. You can find the proof of concept here: Barriers to Open Availability of Information? IW Planning Committee Audio Recordings.)

Roadworks feeds might be another useful service? Elgin (the electronic local government information network) is one source of this information, although their results listings aren’t available as RSS, and in constructing the URLs for the search results, you need to know the Local Authority Area number :-( (Is there a straightforward list of these available anywhere?)

As well as the launch of the Mash the State campaign, I also spotted this week that the UK Parliament website is now providing RSS feeds detailing the progress of every Bill currently going through Parliament:

Having the RSS feed means it’s trivial to create a timeline view of a Bill’s progress using a service such as Dipity. So for example, here’s a timeline depicting the progress of the Coroners and Justice Bill:

Coroners and Justice bill timeline http://www.dipity.com/psychemedia/Coroners-and-Justice

(I’m not sure if there’s an official way of tracking amendments to already enacted Acts? If not, here’s a workaround I put together some time ago – Tracking UK Parliamentary Act Amendments – although I don’t know whether it’s still working?)

PS this looks like an interesting related collection of links: Mashups in government; and this post – Sign up, sign up for Open Source – describes some innovative looking local council projects (I like the idea of a planning application tracker, cf. the government Bill tracker, maybe?)

PPS Although the percentage of councils that currently have autodiscoverable feeds on their homepage is quite low, it’s still a better uptake than for HEIs: Back from Behind Enemy Lines, Without Being Autodiscovered(?!) and Autodiscoverable RSS Feeds From HEI Library Websites. See also 404 “Page Not Found” Error pages and Autodiscoverable Feeds for UK Government Departments.

Author: Tony Hirst | Posted on April 11, 2009 | Categories: Policy, Radical Syndication | 3 Comments

Embedding Yahoo Pipes Output With a Single Click

…sort of…

Here are a quick couple of bookmarklets that I’ve been meaning to put together and only just got around to. They work on the “homepage” of any Yahoo pipe (such as this POIT Report beta recommendations pipe that reverses the order of feed items from the recommendations page of the POIT report (beta)) and do the following:

– preview the feed output of the pipe in a Grazr widget:
javascript:window.location=
"http://grazr.com/gzpanel.html?file=http://pipes.yahoo.com/pipes/pipe.run?_id="+
window.location.href.split('=')[1]+"&_render=rss";
[Line breaks added for display purposes – you’ll need to remove them for the bookmarklet to work.]
Clicking on this bookmarklet when you are on a pipe’s homepage will display the pipe’s feed output in a Grazr widget. So what? So you can preview the pipe’s feed output in a “legitimate” feed reader.

– go to the Grazr widget embedding page to customise your own embeddable widget container for the pipe’s output RSS feed:
javascript:window.location=
"http://grazr.com/config?file=http://pipes.yahoo.com/pipes/pipe.run?_id="+
window.location.href.split('=')[1]+"&_render=rss";
[Line breaks added for display purposes – you’ll need to remove them for the bookmarklet to work.]
Clicking on this bookmarklet when you are on a pipe’s homepage will display the pipe’s feed output in a Grazr widget on the Grazr widget editor page. So what? So you can grab the pipe’s feed output into a “legitimate” feed reader and then get the embed code to embed the feed in your own page.

Remember that if the feed has Slideshare slideshows, audio files or Youtube movies added as enclosures, the Grazr widget will display them within the widget…

  • delicious Feed Enhancer – Auto-Enclosures in Grazr;
  • Viewing Presentations in Grazr via Slideshare RSS Feeds.

(I’m not sure if Scribd enclosures are also automatically rendered in Grazr, but they definitely can be if you create your own OPML file… Embed Scribd iPaper in Grazr Demo.)

Author: Tony Hirst | Posted on February 7, 2009 | Categories: Radical Syndication, Tinkering | 1 Comment

Single Item RSS Feeds from WordPress Blogs

Okay, following Just Feed Me One Piece at a Time, here’s a quick fix for how to get a single item RSS feed for each separate blog post on a WordPress blog…

…but first, a hint for those of you who want to work it out yourselves: Features: RSS (WordPress.com).

Can you see what it is yet?

Here: If you go to your main site feed, and then add ?s=oogabooga to the end of the URL, it’ll show you a feed of just the posts that contain the word “oogabooga” in them. (This is called a search feed.)

So a “nearly there” solution is…

…go to the main site feed and use the title of the post you want the feed for as the search term.

Like this:
https://ouseful.wordpress.com/feed/?s="Just%20Feed%20Me%20One%20Piece%20at%20a%20Time"

Now this works fine if you never mention the exact title of the post in another post on the blog, because post titles are likely to be unique phrases (and so the search will only turn up one result)…

But if you do refer to one post using its exact title in another post, you’ll get multiple hits.

So a quick fix workaround is just to push the feed through a pipe that filters the results list by title:

(An alternative approach would be to use the heuristic that the first mention of the phrase will be in the original post (i.e. the post where the search phrase matches the post title), in which case you could just filter the search feed to return the oldest hit; in a pipe, the easiest way to do this would be to reverse the feed order (or sort by ascending date), then truncate the feed after 1 item.)
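The workaround is simple enough to sketch with feedparser (search term URL-encoding included); the exact-match test on the title plays the role of the pipe’s Filter block:

import feedparser
from urllib.parse import quote

def single_item_entries(blog_url, post_title):
    """Query the WordPress search feed, then keep only exact title matches."""
    feed = feedparser.parse("%s/feed/?s=%s" % (blog_url, quote(post_title)))
    return [e for e in feed.entries if e.get("title") == post_title]

hits = single_item_entries("https://ouseful.wordpress.com",
                           "Just Feed Me One Piece at a Time")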

Note that the workaround relies on the WordPress search doing its thing properly and the user getting the search term right…

For a useful workflow, it’d be handy to have a bookmarklet that would generate the URL for a single item RSS feed for a given WordPress blog post (a bit like the OpenLearn single item RSS feed bookmarklet does for OpenLearn unit pages). This means capturing the blog top-level URL (e.g. https://ouseful.wordpress.com) and the post title. The following pipe attempts to do just that, given the URL of the actual post you want the single feed item for (WordPress single item RSS from URL):

This pipe works for OUseful.info, but probably won’t work for a lot of other WordPress blogs because it uses a heuristic to capture the post title from the page title. More specifically, in the OUseful.info case, page titles have the form This is the Post Title « OUseful.Info, the blog…, so the pipe looks for the « then loses it and everything to the right of it in the page title to determine the post title.

What would be handy here would be for all WordPress templates to add a “title” metadata element to the page header containing the exact post title…?

Though of course, it’d be nicer still if WordPress and the other blogging platforms made the single item RSS feed available for each post natively anyway… ;-)

PS it seems like WordPress does do such a thing…

So the general case solution to single item RSS feeds for WordPress blog posts is use the following construction:

http://wp.example.com/post-URL/?feed=rss2&withoutcomments=1

Here’s a bookmarklet that I think should work on any WordPress blog…

javascript:window.location+="?feed=rss2&withoutcomments=1";

(Crappy WordPress won’t let me actually provide a link to the bookmarklet – it insists on stripping out the “javascript:” :-( )

And so the web just got a little bit more wired for me… Thanks, Shanta Rohse:-)

(Note that if the blog publisher has configured the feeds to only include summaries rather than full posts, you’ll only get the summary… Unless there’s a URI argument that will force the full item to be published (which I doubt!)?)

PPS it strikes me that you can add a link to the single item RSS feed for a WordPress post by adding something like this to the post (or sidebar, maybe, if your template has sidebar widgets alongside individual posts):
<a href="?feed=rss2&withoutcomments=1">Single item RSS feed for this post</a>

Like this: Single item RSS feed for this post

(I’m not sure whether that link will work from within a feed reader, though?)

Author: Tony Hirst | Posted on February 2, 2009 | Categories: Pipework, Radical Syndication, Tinkering | 5 Comments

© AJ Hirst 2008-2021. Creative Commons License. Attribution: Tony Hirst.
