The Future of Search Is Already Here

One of my favourite quotes (and one I probably misquote – which is a pre-requisite of the best quotes) is William Gibson’s “the future is already here, it’s just not evenly distributed yet”…

Several times tonight, I realised that the future is increasingly happening around me, and it’s appearing so quickly I’m having problems even imagining what might come next.

So here for your delectation are some of the things I saw earlier this evening:

  • SnapTell: a mobile/iPhone app that lets you photograph a book, CD or game cover and it’ll recognise it, tell you what it is and take you to the appropriate Amazon page so you can buy it… (originally via CogDogBlog);

  • Shazam, a music recognition application that will identify a piece of music that’s playing out loud, pop up some details, and then let you buy it on iTunes or view a version of the song being played on Youtube (the CogDog also mentioned this, but I arrived at it independently tonight);

    So just imagine the “workflow” here: you hear a song playing, fire up the Shazam app, it recognises the song, and then you can watch someone play a version of the song (maybe even the same version) on Youtube.

  • A picture of a thousand words?: if you upload a scanned document to the web as a PDF, Google will now have a go at running an OCR service over it, extracting the text, indexing it and making it searchable. Which means you can just scan and post, flag the content to the Googlebot via a sitemap, and then search the OCR’d content; (I’m not sure if the OCR service is built on top of the Tesseract OCR code?)
  • barely three months ago, Youtube added the ability to augment videos with captions. With a little bit of glue, the Google translate service will take those captions and translate them into another language for you (Auto Translate Now Available For Videos With Captions):

    “To get a translation for your preferred language, move the mouse over the bottom-right arrow, and then over the small triangle next to the CC (or subtitle) icon, to see the captions menu. Click on the “Translate…” button and then you will be given a choice of many different languages.” [Youtube blog]

Another (mis)quote, this time from Arthur C. Clarke: “any sufficiently advanced technology is indistinguishable from magic”. And by magic, I guess one thing we mean is that there is no “obvious” causal relationship between the casting of a spell and the effect? And a second thing is that if we believe something to be possible, then it probably is possible.

I think I’m starting to believe in magic…

PS Google finally got round to making their alerts service feed a feed: Feed me! Google Alerts not just for email anymore, so now you can subscribe to an alerts RSS feed, rather than having to receive alerts via email. If you want to receive the updates via Twitter, just paste the feed URL into a service like Twitterfeed or f33d.in.

PPS I guess I should have listed this in the list above – news that Google has (at least in the US) found a way of opening up its book search data: Google pays small change to open every book in the world. Here’s the blog announcement: New chapter for Google Book Search: “With this agreement, in-copyright, out-of-print books will now be available for readers in the U.S. to search, preview and buy online — something that was simply unavailable to date. Most of these books are difficult, if not impossible, to find.”

Time to Get Scared, People?

Last week, I posted a couple of tweets (via http://twitter.com/psychemedia) that were essentially doodles around the edge of what services like Google can work out about you from your online activity.

As ever in these matters, I picked on AJCann in the tweets, partly because he evangelises social web tool use to his students;-)

So what did I look at?

  • the Google Social Graph API – a service that tries to mine your social connections from public ‘friendships’ on the web. Check out the demo services…

    For example, here’s what the Google social API can find from Alan’s Friendfeed account using the “My Connections” demo:

    • people he links to on twitter and flickr;
    • people who link to him as a contact on twitter, delicious, friendfeed and flickr;
    • a link picked up from Science of the Invisible (which happens to be one of Alan’s blogs) also picks out his identi.ca identity; adding that URL to the Social Graph API form pulls out more contacts – via FOAF records – from Alan’s identi.ca profile;

    The “Site Connections” demo pulls out all sorts of info about an individual by looking at URLs prominently associated with them, such as a personal blog:

    The possible connections reveal Alan’s likely identities on Technorati, Twitter, identi.ca, friendfeed, swurl, seesmic and mybloglog.

  • For anyone who doesn’t know what Alan looks like, you can always do a “face” search on Google images;
  • increasingly, there are “people” search engines out there that are built solely for searching for people. One example is Spock (other examples include pipl, zoominfo and wink [and 123people, which offers an interesting federated search results page]). The Spock “deep web” search turns up links that potentially point to Alan’s friendfeed and twitter pages, his revver videos, slideshare account and so on;
  • Alan seems to be pretty consistent in the username he uses on different sites. This makes it easy to guess his account on different sites, of course – or use a service like User Name Check to do a quick search;

Now I wasn’t going to post anything about this, but today I saw the following on Google Blogoscoped: Search Google Profiles, which describes a new Google search feature. (Didn’t know you had a Google Profile? If you have a Google account, you probably do – http://www.google.com/s2/profiles/me/? And if you want to really scare yourself with what your Google account can do to you, check http://www.google.com/history/… go on, I dare you…)

I had a quick look to see if I could find a link for the new profile search on my profile page, but didn’t spot one, although it’s easy enough to find the search form here: http://www.google.com/s2/profiles. (Maybe I don’t get a link because my profile isn’t public?)

Anyway, while looking over my profile, I thought I’d add my blog URL (http://ouseful.info) to it – and as soon as I hit enter, I got this:

A set of links that I might want to add to my profile – taken in part from the Social Graph API, maybe? Over the next 6 months I could see Google providing a de facto social network aggregation site, just from re-posting to you what they know about your social connections from mining the data they’ve crawled, and linking some of it together…

And given that the Goog can learn a lot about you by virtue of crawling public pages that are already out there, how much more comprehensive will your profile on Google be (and how confident will it be in the profile it can automatically generate around you) if you actually feed it yourself? (Bear in mind things like health care records exist already…)

PS I just had a look at my own Web History page on Google, and it seems like they’ve recently added some new features, such as “popular searches related to my searches”, and also something on search trends that I don’t fully (or even partially) understand? Or maybe they were already there and I’ve not noticed before/forgotten (I rarely look at my search history…)

PPS does the web know when your birthday is??? Beware of “Happy Birthday me…”. See also My Web Birthday.

[Have you heard about Google’s ‘social circle’ technology yet? read more]

Amazon Reviews from Different Editions of the Same Book

A couple of days ago I posted a Yahoo pipe that showed how to Look Up Alternative Copies of a Book on Amazon, via ThingISBN. The main inspiration for that hack was that it could be useful to get “as new” prices for different editions of the same book if you’re not so bothered about which edition you get, but you are bothered by the price. (Or maybe you wanted an edition of a book with a different cover…)

It struck me last night that it might also be useful to aggregate the reviews from different editions of the same book, so here’s a hack that will do exactly that: produce a feed listing the reviews for the different editions of a particular book, and label each review with the book it came from via its cover:

The pipe starts exactly as before – get an ISBN, check that the ISBN is valid, then look up the ISBNs of the alternative editions of the book. The next step is to grab the Amazon comments for each book, before annotating each item (that is, each comment) with a link to the book cover that the review applies to; we also grab the ISBN (the ASIN) for each book and make a placeholder using it for the item link and image link:

Then we just create the appropriate URLs back to the Amazon site for that particular book edition:

The patterns are as follows:
– book description page: http://www.amazon.co.uk/exec/obidos/ASIN/ISBN
– book cover image: http://images.amazon.com/images/P/ISBN.01.TZZZZZZZ
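
For anyone playing along outside of Pipes, a minimal Python sketch of those two placeholders is about as simple as it sounds – it just drops the ISBN/ASIN into the patterns above:

# Minimal sketch: build the Amazon page and cover image links for a given ISBN/ASIN,
# using the two URL patterns listed above.
def amazon_links(isbn):
    page = "http://www.amazon.co.uk/exec/obidos/ASIN/" + isbn
    cover = "http://images.amazon.com/images/P/" + isbn + ".01.TZZZZZZZ"
    return page, cover

print(amazon_links("0441172717"))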

Here’s how the nested pipe that grabs the comments works (Amazon book reviews lookup by ISBN pipe): first construct the URL to call the webservice that gets details for a book with a particular ISBN – the large report format includes the reviews:

Grab the results XML and point to the reviews (which are at Items.Item.CustomerReviews.Review):

Construct a valid RSS feed containing one comment per item:

And there you have it – a pipe that looks up the different editions of a particular book using ThingISBN, and then aggregates the Amazon reviews for all those editions.
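
(For the curious, here’s roughly what the nested reviews pipe is doing, expressed as a bit of Python rather than pipe blocks; the Review child element names (Summary, Content) are my guesses at the old “Large” response format, so check them against the XML you actually get back:)

import xml.etree.ElementTree as ET

def reviews_from_response(xml_text):
    # Pull out each Review element from the Amazon response, ignoring XML namespaces
    root = ET.fromstring(xml_text)
    items = []
    for el in root.iter():
        if el.tag.split("}")[-1] == "Review":
            fields = {c.tag.split("}")[-1]: (c.text or "") for c in el}
            items.append({
                "title": fields.get("Summary", "Review"),   # assumed child element name
                "description": fields.get("Content", ""),   # assumed child element name
            })
    return items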

Time for a TinyNS?

In a comment to Printing Out Online Course Materials With Embedded Movie Links, Alan Levine suggests: “I’d say you are covered for people lacking a QR reader device since you have the video URL in print; about all you could [do] is run through some process that generates a shorter link” [the emphasis is mine].

I suspect that URL shortening services have become increasingly popular because of the rise of the blog-killing (wtf?!) microblogging services, but they’ve also been used for quite some time in magazines and newspapers. And making use of them in (printed out) course materials might also be a handy thing to do. (Assessing the risks involved in using such services is the sort of thing Brian Kelly may well have posted about somewhere; but see also towards the end of this post.)

Now anyone who knows me knows that my mobile phone is a hundred years old and won’t go anywhere near the interweb (though I can send short emails through a free SMS2email gateway I found several years ago!). So I don’t know if the browsers in smart phones can do this already… but it seems to me a really useful feature for a mobile browser would be something like the Mozilla/Firefox smart keywords.

Smart keywords are essentially bookmarks that are invoked by typing a keyword in the browser address bar and hitting return – the browser will then take you to the desired URL. Think of it like a URL “keyboard shortcut”…

One really nice feature of smart keywords is that they can handle an argument… For example, here’s a smart keyword I have defined in my browser (Flock, which is built from the Firefox codebase).

Given a TinyURL (such as http://tinyurl.com/6nf2z) all I need to type into my browser address bar is t 6nf2z to go there.
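
(For reference, the bookmark behind that shortcut looks roughly like this – the %s is the placeholder Firefox/Flock fills in with whatever you type after the keyword:

Keyword: t
Location: http://tinyurl.com/%s

So typing t 6nf2z expands to http://tinyurl.com/6nf2z.)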

Which would seem like a sensible thing to be able to do in a browser on a mobile device… (maybe you already can? But how many people know how to do it, if so?)

(NB To create a TinyURL for the page you’re currently viewing at the click of a button, it’s easiest to use something like the TinyURL bookmarklet.)

Now one of the problems with URL shortening services is that you become reliant on the short URL provider to decode the shortened URL and redirect you to the intended “full length” URL. The relationship between the actual URL and the shortened URL is arbitrary, which is where the problem lies – the shortened URL is not a “lossless compressed” version of the original URL, it’s effectively a random code that can be used to look up the full URL in a database owned by the short URL service provider. Cf. the scheme used by services like delicious, which generate an “MD5 hash” of a URL – a code derived directly from the original URL, and which can (usually!) be mapped back to it (see Pivotal Moments… (pivotwitter?!) for links to Yahoo pipes that decode both TinyURLs and delicious URL encodings).
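
(To make the contrast concrete, here’s a quick Python sketch of a delicious-style hash – the code is computed from the URL itself, so anyone can regenerate it, although you still need a lookup to get from the hash back to the URL; exactly how delicious normalises URLs before hashing is something I’m guessing at:)

import hashlib

def delicious_style_hash(url):
    # MD5 of the URL string itself - deterministic, unlike an arbitrary TinyURL code
    return hashlib.md5(url.encode("utf-8")).hexdigest()

print(delicious_style_hash("http://www.open.ac.uk/"))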

So this got me thinking – what would a “TinyNS” resolution service look like that sat one level above DNS resolution, the domain name resolution service that takes you from a human-readable domain name (e.g. http://www.open.ac.uk) to an IP (internet protocol) address (something like 194.66.152.28)?

Could (should) we set up trusted parties to mirror the mapping of shortened URL codes from the different URL shortening services (TinyURL, bit.ly, is.gd and so on) and provide distributed resolution of these short form URLs, just in case the original services go down?
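
(Just to sketch the thought – and the mirror URLs and response format below are entirely made up – a “TinyNS” style resolver might do little more than ask a list of trusted mirrors in turn and hand back the first answer it gets:)

import urllib.request

# Hypothetical mirrors, each holding a copy of the short-code -> long-URL mappings
MIRRORS = [
    "http://mirror-a.example.org/resolve?code=",
    "http://mirror-b.example.org/resolve?code=",
]

def resolve_short_code(code):
    for base in MIRRORS:
        try:
            with urllib.request.urlopen(base + code, timeout=5) as resp:
                long_url = resp.read().decode("utf-8").strip()
                if long_url:
                    return long_url
        except Exception:
            continue  # mirror down or code unknown - try the next one
    return None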

Looking Up Alternative Copies of a Book on Amazon, via ThingISBN

As Amazon improves access to the long tail of books through its marketplace sellers, and maybe even its ownership of Abebooks, it’s increasingly easy to find multiple editions of the same book. So when I followed a link to a book that Mike Ellis recommended last week (to The Victorian Internet, in fact) and found that none of the editions of the book were in stock, as new, on Amazon, I had the tangential thought that it’d be quite handy to have a service that would take an ISBN and then look up the prices for all the various editions of that book on Amazon.

Given an ISBN for a book, there are at least a couple of ways of finding the ISBNs for other editions of the book – the Worldcat xISBN service, and ThingISBN from LibraryThing (now part-owned by Amazon through its ownership of Abebooks; for a sense of who else Amazon owns, see Amazon “Edge Services” – Digital Manufacturing).

So here are a couple of Yahoo pipes for looking up the alternative editions of a book on the Amazon website, after discovering those editions from ThingISBN.

First of all a pipe that takes an ISBN and looks up alternative editions using ThingISBN:

What this pipe does is construct a URL that calls for the list of alternative ISBNs for a given ISBN – that is, a URL of the form http://www.librarything.com/api/thingISBN/ISBNHERE, which returns an XML file containing the alternative ISBNs (example) – then grabs the XML file back using the Fetch Data block, renames the internal representation of the grabbed XML so that the pipe will generate a valid RSS feed, and outputs the result.

So now we have an RSS feed that contains a list of alternative ISBNs, via ThingISBN, for a given ISBN.
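
(If pipes aren’t your thing, the same lookup is only a few lines of Python – the <isbn> element name is taken from the ThingISBN example response, so double check it against the XML you actually get back:)

import urllib.request
import xml.etree.ElementTree as ET

def thingisbn_alternatives(isbn):
    # Call the ThingISBN API and collect the text of every <isbn> element it returns
    url = "http://www.librarything.com/api/thingISBN/" + isbn
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [el.text for el in root.iter() if el.tag.split("}")[-1] == "isbn"]

print(thingisbn_alternatives("0441172717"))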

Now to find out how much these books cost on Amazon. For that, we shall find it convenient to construct a pipe that will look up the details of a book on Amazon using the Amazon Associates web service, given an ISBN. (For a brief intro to Amazon Associates web services, see Calling Amazon Associates/Ecommerce Web Services from a Google Spreadsheet.)

Here’s a pipe to do that:

(If you use the AWSzone scratchpad to construct a URL that calls the Amazon web service with a look up for book by ISBN, you can just paste it into the “Base” entry form in the Pipe’s URL Builder block and hit return, and it will explode the arguments into the appropriate slots for you.)

So now we have a pipe that will look up the details of a book on Amazon given its ISBN.
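
(Under the hood, the URL Builder block is just assembling a REST URL along these lines – the parameter names here are my recollection of the old, unsigned ItemLookup call, so treat them as assumptions and check them against what the AWSzone scratchpad gives you:)

import urllib.parse

def amazon_itemlookup_url(isbn, subscription_id):
    # Assumed parameters for the old (pre-signing) Amazon Associates/ECS ItemLookup call
    params = {
        "Service": "AWSECommerceService",
        "Operation": "ItemLookup",
        "SubscriptionId": subscription_id,  # your Amazon Associates key
        "IdType": "ISBN",
        "SearchIndex": "Books",             # needed when looking up by ISBN
        "ItemId": isbn,
        "ResponseGroup": "Large",           # the "large" report, reviews and all
    }
    return "http://webservices.amazon.com/onca/xml?" + urllib.parse.urlencode(params)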

We can now put the ThingISBN pipe and the Amazon ISBN lookup pipe together, to create a compound pipe that will lookup details for all the alternative versions of a particular book, given that particular book’s ISBN:

Okay – so now we have a pipe that takes an ISBN, looks up the alternative ISBNs using ThingISBN, then grabs details for each of those alternatives from Amazon…

Now what? Well, if you use this pipe in your own mashup and don’t handle the ISBN properly in your own code, you can end up passing a badly formed ISBN to the pipe. The most common example of this is dropping a leading 0 from the ISBN – passing 441172717 rather than 0441172717, for example.

Now it just so happens that LibraryThing offers another webservice that can correct this sort of error – ISBN check API – and it’s easy enough to create a pipe to call it:

Good – so now we can defensively program the front end of our pipe to handle badly formed ISBNs by sticking this pipe at the front of the compound pipe that calls ThingISBN and then loops through the Amazon calls.
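
(If you’d rather trap the dropped-leading-zero case locally, before the pipe is ever called, a quick ISBN-10 sanity check is easy enough to write – this sketch just pads the value back out to ten characters and tests the check digit:)

def looks_like_isbn10(candidate):
    s = str(candidate).replace("-", "").zfill(10)  # restore a dropped leading 0
    if len(s) != 10:
        return None
    total = 0
    for i, ch in enumerate(s):
        if ch in "Xx" and i == 9:
            value = 10          # X is only valid as the final check character
        elif ch.isdigit():
            value = int(ch)
        else:
            return None
        total += (10 - i) * value
    return s if total % 11 == 0 else None

print(looks_like_isbn10(441172717))  # -> '0441172717'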

But there’s something we can do at the other end of the pipe too, and that is make use of a ‘slideshow’ feature that Yahoo pipes offers as an interface to the pipe. If the elements of a feed contain image items that are packaged in an appropriate way, the Yahoo pipes interface will automatically create a slideshow of those images.

What this means is that if we package up URLs that point to the book cover image of each alternative version of a book, we can get a slideshow of the book covers of all the alternative editions of that book.

Here’s just such a pipe:

And here’s the example output:

If you click on the “Get as Badge” option, you can then embed this slideshow on your own website or start page:

For example, here I’ve added the slideshow to my iGoogle page:

Now to my mind, that’s quite a fun (and practical) way of introducing quite a few ideas about webservice orchestration that can be unpacked at a later date. But of course, it’s not very academic, so it’s unlikely to appear in a course near you anytime soon… ;-) But I’d argue that it does stand up as a demo that could be given to show people how much fun this stuff can be to play with, before we inflict SOAP and WS-* on them…

Rock the Academy

Now there’s nothing wrong with conference posters, but this is more LIKE IT

[via the CogDog himself: Rock the Academy Video]

And can you imagine showing this to your head of research, I mean, your HEAD OF RESEARCH, and saying “I wanna go to this… I REALLY wanna go to this…”. They’ll probably look at you and say:

“Kid, we don’t like your kind, and we’re gonna send your fingerprints off to Washington.”

And friends, somewhere in Washington enshrined in some little folder, is a study in black and white of my fingerprints. And the only reason I’m singing you this song now is cause you may know somebody in a similar situation, or you may be in a similar situation, and if your in a situation like that there’s only one thing you can do and that’s walk into the shrink wherever you are, just walk in say “Shrink, You can get anything you want, at Alice’s restaurant.”. And walk out. You know, if one person, just one person does it they may think he’s really sick and they won’t take him. And if two people, two people do it, in harmony, they may think they’re both faggots and they won’t take either of them.

And three people do it, three, can you imagine, three people walking in singin a bar of Alice’s Restaurant and walking out. They may think it’s an organization. And can you, can you imagine fifty people a day, I said fifty people a day walking in singin a bar of Alice’s Restaurant and walking out. And friends they may thinks it’s a movement.

And I think that’s what we’ve got here – a movement… You can get anything you want, at Alice’s restaurant..

PS so OU folks, when we gonna put together movies like this to advertise each and every course on the courses and quals site?! DMPB, would that fall under your remit? Or would Ian be fighting you for it?;-) heh heh

Printing Out Online Course Materials With Embedded Movie Links

Although an increasing number of OU courses deliver online course materials written for online delivery as linked HTML pages, rather than just as print documents viewable online, we know (anecdotally at least, from requests that printing options be made available to print off whole sections of a course with a single click) that many students want to be able to print off the materials… (I’m not sure we know why they want to print off the materials, though?)

Reading through a couple of posts that linked to my post on Video Print (Finding problems for QR tags to solve and Quite Resourceful?) I started to ponder a little bit more about a demonstrable use case that we could try out in a real OU course context over a short period of time, prompted by the following couple of comments. Firstly:

So, QR codes – what are they good for? There’s clearly some interest – I mentioned what I was doing on Twitter and got quite a bit of interest. But it’s still rare to come across QR codes in the wild. I see them occasionally on blogs/web-pages but I just don’t much see the point of that (except to allow people like me to experiment). I see QR codes as an interim technology, but a potentially useful one, which bridges the gap between paper-based and digital information. So long as paper documents are an important aspect of our lives (no sign of that paper-less office yet) then this would seem to be potentially useful.
[Paul Walk: Quite Resourceful?]

And secondly:

There’s a great idea in this blog post, Video Print:

By placing something like a QR code in the margin text at the point you want the reader to watch the video, you can provide an easy way of grabbing the video URL, and let the reader use a device that’s likely to be at hand to view the video with…

I would use this a lot myself – my laptop usually lives on my desk, but that’s not where I tend to read print media, so in the past I’ve ripped URLs out of articles or taken a photo on my phone to remind myself to look at them later, but I never get around to it. But since I always have my phone with me I’d happily snap a QR code (the Nokia barcode software is usually hidden a few menus down, but it’s worth digging out because it works incredibly well and makes a cool noise when it snaps onto a tag) and use the home wifi connection to view a video or an extended text online.

As a ‘call to action’ a QR tag may work better than a printed URL because it saves typing in a URL on a mobile keyboard.
[Mia Ridge: Finding problems for QR tags to solve]

And the hopefully practical idea I came up with was this: for online courses that embed audio and/or video, design a stylesheet for the print version of the page that adds a QR code encoding a link to each audio or video asset, either in the margin of the printout or alongside a holding image for the media asset. In the resources area of the course, provide an explanation of QR codes, maybe with a short video showing how they are used, and links (where possible) to QR reader tools for the most popular mobile devices.

So for example, here is a partial screenshot of material taken from T184 Robotics and the Meaning of Life (the printout looks similar):

And here’s what a trivial change to the stylesheet might produce:

The QR code was generated using the Kaywa QR-code generator – just add a URL as a variable to the generator service URL, and a QR code image appears :-)

Here’s what the image embed code looks like (the link is to the T184 page on the courses and qualifications website – in practice, it would be to the video itself):

<img src="http://qrcode.kaywa.com/img.php?s=6&d=http%3A%2F%2Fwww3.open.ac.uk%2Fcourses%2Fbin%2Fp12.dll%3FC01t184" alt="qrcode" />
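
(In other words, the QR image URL is just the target link, URL-encoded and passed in as the d parameter – something like this, in Python:)

import urllib.parse

def kaywa_qr_img(url, size=6):
    # s is the image size, d is the URL-encoded link to encode, as in the embed code above
    return "http://qrcode.kaywa.com/img.php?" + urllib.parse.urlencode({"s": size, "d": url})

print(kaywa_qr_img("http://www3.open.ac.uk/courses/bin/p12.dll?C01t184"))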

Now anyone familiar with OU production processes will know that many of our courses still take years – that’s right, years – to put together, which makes ‘rapid testing’ rather difficult at times ;-)

But just making a tiny tweak to the stylesheet of the print option in an online course is low risk, and not going to jeopardise the quality of the course (or a student’s experience of it). But it might add utility to the print out for some students, and it’s a trivial way of starting to explore how we might “mobilise” our materials for mixed online and offline use. And any feedback we get is surely useful for going forwards?

Bung the Common Craft folk a few hundred quid for a “QR codes in Plain English” video and we’re done?

Just to pre-empt the most obvious OU internal “can’t do that because” comment – I know that not everyone prints out the course materials, and I know that not everyone has a mobile phone, and I know that of those that do, not everyone will have a phone that can cope with reading QR codes or playing back movies – and that’s exactly the point…

I’m not trying to be equitable in the sense of giving everyone exactly the same experience of exactly the same stuff; rather, I’m trying to find ways of providing access to the course materials that are appropriate to the different ways that students might want to consume them.

As to how we’d know whether anyone was actually using the QR codes – one way might be to add a campaign tracking code onto each QR-coded URL, so that at least we’d be able to tell which of the assets we were hosting were being hit from the QR code.
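
(By way of illustration only – the parameter names assume Google Analytics-style campaign tracking, and the values are made up – the tagging could be as simple as appending a few extra arguments to the asset URL before generating the QR code:)

import urllib.parse

def tag_for_print(url, course="t184"):
    # Hypothetical campaign parameters so QR-code hits can be told apart in the stats
    extra = urllib.parse.urlencode({
        "utm_source": "print",
        "utm_medium": "qrcode",
        "utm_campaign": course,
    })
    return url + ("&" if "?" in url else "?") + extra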

So now here’s a question for OU internal readers. Which “innovation pipeline” should I use to turn the QR code for video assets idea from just an idea into an OU innovation? The CETLs? KMi? IET (maybe their CALRG)? The new Innovation office? LTS Strategic? The Mobile Learning interest group thingy? The Moodle/VLE team? Or shall I just take the normal route of an individual course team member persuading a developer to do it as a favour on a course I’m currently involved with (a non-scalable result in terms of taking the innovation OU-wide, because the unofficial route is an NIH route…)?

And as a supplementary question, how much time should I spend writing up the formal project proposal (CETLs) or business case (LTS Strategic, Innovation Office(?)) etc, and then arguing it through various committees, bearing in mind I’ve spent maybe an hour writing this blog post and the previous one (and also that there’s no more to write – the proof now is in the testing ;-), and it’d take a developer maybe 2 hours to make the stylesheet change and test it?

I just wonder what would happen if any likely candidates for the currently advertised post of e-Learning Developer in LTS (Learning and Teaching Solutions) were to mention QR codes and how they might be used in answer to a question about how they might “demonstrate a creative but pragmatic approach to delivering the ‘right’ solution within defined project parameters”?! Crash and burn, I suspect!;-)

(NB on the jobs front, the Centre for Professional Learning and Development is also advertising at the moment, in particular for an Interactive Media Developer and a Senior Learning Developer.)

Okay, ranty ramble over, back to the weekend…

PS to link to a sequence that starts so many minutes and seconds in, use the form: http://uk.youtube.com/watch?v=mfv_hOFT1S4#t=9m49s.

PPS for a good overview of QR codes and mobile phones, see Mobile codes – an easy way to get to the web on your mobile phone.

PPPS [5/2010] Still absolutely no interest in the OU for this sort of thing, but this approach does now appear to be in the wild… Books Come Alive with QR Codes & Data in the Cloud

Calling Amazon Associates/Ecommerce Web Services from a Google Spreadsheet

[UPDATE: Note that since Amazon Product Advertising API, or whatever it’s called now – the thing that was the Amazon E-commerce API – started requiring signed calls (August 2009), this trick has stopped working…]

I’ve never really been one for using spreadsheets – I’d rather write code in a text environment than macros and formulae in a Microsoft environment (because Excel is the spreadsheet you’re likely to have to hand in most cases, right?), but over the last week or so, I’ve really been switched on to how we might be able to use them as a scribble pad for playing with web services…

So for example, in Viewing Campaign Finance Data In a Google Spreadsheet via the New York Times Campaign Data API I showed how to do what it says on the tin…

… and today I had a look at the Amazon Associates Web Service (formerly known as Amazon ECS (eCommerce webservices)).

Until now, the best way of getting your head round what these services can do has been to use the tools on AWSzone, a playground (or scratchpad) for previewing calls to Amazon web services.

In the case of the REST flavoured web service, the form simply provides a quick’n’easy way of creating the RESTful URL that calls the webservice.

The SubscriptionId is required and can be obtained by registering for access with the Amazon Associates web service.

So just pick the web service/function you want to call (ItemSearch in this case), fill in some necessary details (and some optional ones, if you like…) and view the results:

(You might also notice the scratchpad contains a tab for creating a minimal SOAP request to the web service (and viewing the associated SOAP response) and a tab for creating Java (or C#) code that will call the service). Amusingly, you view the SOAP request and response via a URL ;-)

Whilst the scratchpad makes it easy to construct web service calling URLs, the XML response is still likely to be unmanageable at best (and meaningless at worst) for most people. Which is where using a Google spreadsheet as a display surface comes in.

How so? Like this: take a URL for a (working) webservice call constructed in the AWSzone REST scratchpad and paste it into cell B3 in a new Google spreadsheet (enter “URL” as a label in cell A3).

In cell D3 enter the following formula:
=importXML(B3,"//Item")

This calls the Amazon Associates web service via the RESTful URL in cell B3, and then attempts to display the XML for each “Item” in the results.

Compare this with the actual XML results file:

The spreadsheet has loaded in the ASIN (the ISBN for each book result) and the DetailPageURL, but the ItemAttributes are not loaded in (or if they are, they aren’t displayed because a single cell can’t display more than a single XML attribute, and it would have to display the Author(s), Manufacturer, ProductGroup and so on).

(Hmm, I wonder what a Treemap spreadsheet would look like? How would it handle the display of XML subtrees?!)

Tweak the formula in D3 so that it says:
=importXML(B3,B4)
In cell A4 enter the label Path, and in B4 enter //Item.

Hopefully the results table will remain the same, only now you can experiment with the path setting more easily.

Inspect the ItemAttributes by setting the path (cell B4) to //ItemAttributes.

A single result can be obtained by specifying which result to display. For example, set the path to //Item[1]/ItemAttributes to display the ItemAttributes for just the first ([1]) results Item.

By using several importXML formulas (each re-importing the results with a different path), you could list just the first, second and third results, for example; and by loading formulas with different paths into different cells, you can place particular result attributes into particular cells.

For example, set the path to //Item[2]/ItemAttributes to display the ItemAttributes for just the second ([2]) results Item.

It’s also possible to craft changes that will apply to the web service URL. In cell A2 enter the label Search and in cell B2 a single word search term, such as google.

Cut the URL from cell B3 and replace it with the formula =CONCATENATE(“”), then paste the URL back in between the double quotes of the CONCATENATE formula.

Now go into cell B3, and find the part of the URL that encodes the search keywords:

In the example above, I was searching for the keyword mashup – replace that keyword in the URL with ",B2," (including the double quotes and commas).

What this will do is add the search term in cell B2 into the URL that calls the Amazon web service. So now you can use cell B2 as a search box for a search on the Amazon Associates web service.
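
For what it’s worth, the formula in B3 ends up looking something like this (most of the URL is trimmed here for readability, and YOURKEYHERE stands in for your SubscriptionId):

=CONCATENATE("http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemSearch&SearchIndex=Books&Keywords=",B2,"&SubscriptionId=YOURKEYHERE")

– that is, exactly the URL you had before, with the original keyword swapped for ",B2,".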

Note that as it stands you can only type a single word into the search cell – if you want to search on multiple words, use + instead of a space between each word.

So – that’s how to build a search (of sorts) using Amazon Associates web services in a Google spreadsheet. :-)

PS Now I know that for webservices to count in an academic environment, you’ve got to use SOAP (this counts for teaching computing just as much as it counts in JISC funded projects!), so I don’t expect any of this to count in that environment. But for “mortals”, this way of accessing webservices and then doing something useful with the results may actually be a way forward? ;-)

Mashup Mayhem BCS (Glasgow Branch) Young Professionals Talk

On Monday I gave a presentation for the BCS Glasgow branch at the invite of Daniel Livingstone, who I met in the mashup mart session at the CETIS bash last year.

I’d prepared some slides – even rehearsed a couple of the mashups I was going to do – and then fell apart somewhat when the IE6 browser I was using on the lectern PC failed to play nicely with either Pageflakes or Yahoo Pipes. (I had intended to use my own laptop, but the end of the projector cable was locked away…)

“Why not use Firefox Portable?” came a cry from the floor (and I did, in the end, thanks to Daniel…). And indeed, why not? When I was in the swing of doing regular social bookmarking sessions, often in IT training suites, I always used the local machines, and I always used Portable Firefox.

But whilst I’ve started “playing safe” by uploading at least a basic version of the slides I intend to use to Slideshare before I leave home on the way to a presentation, I’ve stopped using Portable Firefox on a USB key even if I am taking the presentation off one… (There is always a risk that “proxy settings” are required when you use your own browser, of course, but a quick check beforehand usually sorts that…)

So note to self – get back in the habit of taking everything on a USB key, as well as doing the Slideshare backup, and ideally prepping links in a feed somewhere (I half did that on Monday) so they can be referred to via a live bookmark or feedshow.

Anyway, some of the feedback from the session suggested handouts would have been handy, so here are handouts of a sort – a set of repurposed slides in which I’ve taken some of the bits that hopefully worked on Monday, with a little bit of extra visual explanation added in. The slides probably still don’t work as a standalone resource, but that’s what the talking’s for, right?!;-)

There are also some relevant URLs collected together under the glasgowbcs tag on my delicious account: http://delicious.com/psychemedia/glasgowbcs.