Twitter Integration Means Delicious Social Bookmarking Site Gets, Err, Social…(?!)

In order to keep track of the dozens of snippets of potentially useful info I come across through my browser each week, I use the delicious online (social) bookmarking site as a place to dump most of my bookmarks. Three or four times a week (maybe more?), I use my tags to rediscover things I remember bookmarking, and maybe once a month I actually use the delicious search option.

In order to save bookmarks, I used to use the integrated service provided by the flock browser, but in one particular update they made a change I really didn’t get on with (to the dialogue box, I think) and now I tend to use the delicious bookmarklet.

(I’ve also become pretty cavalier about which browser I use – I typically have Flock, Safari and Firefox open, and @dmje was hassling me all last week to use Chrome as well… – so with a bookmarklet on each browser, I get a consistent experience.)

As someone who used to send “FYI” emails out every so often, one of the ways I use twitter is to share potentially interesting or “of the moment” links; I also use a feedthru tag to post one or two links per day, (typically), to my blog sidebar (those links also get integrated on a daily basis with my blog’s Feedburner feed). Note also that I rarely use the for: option on delicious, possibly because I don’t look at what’s been shared with me very often!

Anyway, one of the hassles with my workflow is the duplicated action required to both tweet and bookmark a link. But it seems that the delicious bookmarklet now has a sharing capability both within and without the delicious ecosystem. (I’m not sure if this is a Good Thing, or a delicious death throe?)

So for example, I can share a bookmark with my delicious network:

social delicious

Or tweet it (it’ll be interesting whether I adopt this workflow…):

So what message gets tweeted? The tweet message, of course:

Note that once the bookmark is saved, there is no evidence or history of it being tweeted. Nor are any of the tags used as hashtags. (If you add a hashtag to the tweet, I don’t think it gets added to the bookmark tags as a simple ‘dehashed’ tag (or hashtag).)

When the link is tweeted, a new delicious shortcode is used:

One Bad Thing about the Twitter integration – you have to provide your twitter credentials. So what the f**k is wrong with OAuth?!

As well as the new social/network amplification options in the bookmark dialogue, there’s also been a revamp (I think) of the search facility:

As well as suggested terms, an improved search display over your own bookmarks, your network’s, or everyone’s, there’s an ability to filter the results by tag. I have to admit I expected live AJAXy UI updates – I didn’t see the effect of filtering by tag unless I clicked the Search button again – but it’s maybe still early days and the live reflow may yet appear. (Or maybe it is there already and just broken for me at the mo?!)

I’m not sure how useful the volume display will be (memories of Google trends etc, there), especially as it only works when you only have one group selected (i.e. only one of my bookmarks, or my network’s, or all of them) and doesn’t reflow when you change the selection? I also wonder how well the ads will fare against the user generated links?

Anyway this is starting to look like it could become quite a powerful search tool, so maybe I need to start growing my delicious network and evangelising once more… (Just a quick note to self – if I do a social bookmarking workshop again, I need to update my slides [uploaded 3 years ago? Sheesh…] ;-)

And as for the twitter integration – I think I’ll give it a go…

PS on the search engine front, I was thinking over the weekend how the mythical ‘social search engine’ that people were trying to hype a year or two ago has actually appeared. But rather than arising out of ‘dead links’ posted to delicious, it’s a live ‘person inside’ application: Twitter.

PPS at last there’s an official announcement post: New and Delicious: Search, Tweet, and Discover the Freshest Bookmarks

Open Educational Resources and the University Library Website

Being a Bear of Very Little Brain, I find it convenient to think of the users of academic library websites as falling into one of three ‘deliberate’ and one ‘by chance’ categories:

– students (i.e. people taking a course);
– lecturers (i.e. people creating or supporting a course);
– researchers;
– folk off the web (i.e. people who Googled in who are none of the above).

The following Library website homepage (in this case, from Leicester) is typical:

…and the following options on the Library catalogue are also typical:

So what’s missing…?

How about a link to “Teaching materials”, or “open educational resources”?

After all, if you’re a lecturer looking to pull a new course together, or a student who’s struggling to make head or tail of the way one of your particular lecturers is approaching a particular topic, or a researcher who needs a crash course in a particular method or technique, maybe some lecture notes or course materials are exactly the sort of resource you need?

Trying to kickstart the uptake of open educational materials has not been as easy as might be imagined (e.g. On the Lack of Reuse of OER), but maybe this is because OERs aren’t as ‘legitimately discoverable’ as other academic resources.

If anyone using an academic library website can’t easily search educational resources in that context, what does that say about the status of those resources in the eyes of the Library?

Bearing in mind my crude list of user classes, and comparing them to the sorts of resources that academic libraries do try to support the discovery of, what do we find?

– the library catalogue returns information about books (though full text search is not available) and the titles of journals; it might also tap into course reading lists.
– the e-resources search provides full text search over e-book and journal content.

One of the nice features of the OU website search (not working for me at the moment: “Our servers are busy”, apparently…) is that it is possible to search OU course materials for the course you are currently on (if you’re a student) or across all courses if you are staff. A search over OpenLearn materials is also provided. However, I don’t think these course material searches are available from the Library website?

So here’s a suggestion for the #UKOER folk – see if you can persuade your library to start offering a search over OERs from their website (Scott Wilson at CETIS is building an OER aggregator that might help in this respect, and there are also initiatives like OER Commons).

And, err, as a tip: when they say they already do, a link to the OER Commons site on a page full of links to random resources, buried somewhere deep within the browsable bowels of the library website doesn’t count. It has to be at least as obvious(?!), easy to use(?!) and prominent(?!?) as the current Library catalogue and journal/database searches…

Split Screen Screenshots

Some time ago, I posted a quick hack about how to capture “split screen” screenshots, such as the one below, that shows a BBC News video embedded in a Guardian online news story:

This utility can be handy when you want to capture something in a single screenshot from the top and the bottom of a long web page, but don’t necessarily want all the stuff in between.

Anyway, the hack was included in the middle of a longer web page, so here’s a reposting of it…

On a server somewhere, place the following PHP script (as splitscreen.php, say):

<html>
<head>
<title></title>
</head>

<frameset rows="30%, 70%">
      <frame src="<?php echo htmlspecialchars($_GET['url']); ?>">
      <frame src="<?php echo htmlspecialchars($_GET['url']); ?>">
</frameset>
</html>

The bookmarklet simply uses the current page URI as an argument in a call to the above page:

javascript:window.location=
'http://localhost/splitscreen.php?url='+encodeURIComponent(window.location.href);

Here’s the bookmarklet in action:

(I was going to pop up a version of the script to http://ouseful.open.ac.uk, but for some reason I can’t get in to upload anything there just at the moment:-(

Recommendations By Magic

I’m not sure how I feel about this – maybe the magic is good magic, maybe it’s voodoo magic, or maybe it’s fake magic, the work of a charlatan, but I wonder, I wonder, might Google’s ‘Personalised Ranking’ utility in Google Reader be useful in filtering, or at least ranking, latest issue table of contents feeds from somewhere like TicTocs?

Only have a 10-minute coffee break and want to see the best items first? All feeds now have a new sort option called “magic” that re-orders items in the feed based on your personal usage, and overall activity in Reader, instead of default chronological order. Click “Sort by magic” under the Folder Settings menu of your feed to switch to personalized ranking. Unlike the old “auto” ranking, this new ranking is personalized for you, and gets better with time as we learn what you like best — the more you “like” and “share” stuff, the better your magic sort will be. Give it a try on a high-volume feed folder or All items and see for yourself!

[Google Reader Personalised Ranking]

Now I believe that there is also a JISCRI project looking at a related sort of thing – Bayesian Feed Filter…: “The Bayesian Feed Filtering project will be trying to identify those articles that are of interest to specific researchers from a set of RSS feeds of Journal Tables of Content by applying the same approach that is used to filter out junk emails.” [Project Kicks Off]

So I’m thinking: it’d be great to see how their approach might filter subscribed-to feeds bayesed (!;-) on what users read from those feeds, compared to the Google magic?
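For anyone wondering what “filtering like junk email” actually involves, here’s a toy sketch (all my own invention, not the project’s code): a naive Bayes filter that learns from which feed items a user read or skipped, then scores new item titles.

```javascript
// Toy naive Bayes filter for feed item titles, spam-filter style.
// Train on items the user did or didn't read, then score new titles.
function tokenise(title) {
  return title.toLowerCase().split(/\W+/).filter(function (w) { return w.length > 2; });
}

function makeFilter() {
  var counts = { read: {}, skipped: {} };
  var totals = { read: 0, skipped: 0 };
  return {
    // label is 'read' or 'skipped'
    train: function (title, label) {
      tokenise(title).forEach(function (w) {
        counts[label][w] = (counts[label][w] || 0) + 1;
        totals[label] += 1;
      });
    },
    // Probability-like score that the item would be read,
    // via log-odds with add-one smoothing
    scoreRead: function (title) {
      var logOdds = 0;
      tokenise(title).forEach(function (w) {
        var pRead = ((counts.read[w] || 0) + 1) / (totals.read + 2);
        var pSkip = ((counts.skipped[w] || 0) + 1) / (totals.skipped + 2);
        logOdds += Math.log(pRead / pSkip);
      });
      return 1 / (1 + Math.exp(-logOdds));
    }
  };
}
```

Sorting a table-of-contents feed by scoreRead would then float the likely-interesting papers to the top; a real filter would obviously need proper tokenisation and rather more training data than this.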

Google/Feedburner Link Pollution

Just a quick observation…

If you run a blog (or any other) RSS feed through Feedburner, the title links in the feed point to a Feedburner proxy for the link.

If you use Google Reader, and send a post to delicious:

the Feedburner proxy link is the link that you’ll bookmark:

(Hmmmm, methinks it would be handy if Delicious gave you the option to bookmark the ‘terminal’ URI rather than a proxied or short URI? Maybe by getting Google proxied links into Delicious, Google is amassing data about social bookmarking behaviour from RSS feeds on Delicious? So how about this for a scenario: you wake up tomorrow to find the Goog has bought Delicious off Yahoo, and all your bookmarked links are suddenly rewritten in the form: http://deliproxy.google.com/~r/gamesetwatch/~3/Yci8wJb49yk/fighting_fantasy_flowcharts.php)

If you click on the link to take you through to the actual linked page, and the actual page URI, you may well get something like this:

http://www.gamesetwatch.com/2009/11/fighting_fantasy_flowcharts.php?
utm_source=feedburner&utm_medium=feed
&utm_campaign=Feed%3A+gamesetwatch+%28GameSetWatch%29

That is, a URI with Google Analytics tracking info attached automagically by Feedburner (see Google Analytics, Feedburner and Google Reader for more on this).
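For what it’s worth, stripping that cruft back out is mechanical. Here’s a rough sketch (function name my own) that drops any utm_* parameters from a URI, ignoring fragments:

```javascript
// Strip Google Analytics campaign tracking (utm_*) parameters from a URI
function cleanUri(uri) {
  var parts = uri.split('?');
  if (parts.length < 2) return uri;            // no query string at all
  var kept = parts[1].split('&').filter(function (param) {
    return param.indexOf('utm_') !== 0;        // keep everything except utm_*
  });
  return kept.length ? parts[0] + '?' + kept.join('&') : parts[0];
}
```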

Here, then, are a couple of good examples of why you might not want to use (Google) Feedburner for your RSS feeds:

1) it can pollute your links, first by appending them with Google Analytics tracking codes, then by rewriting the link as a proxied link;
2) you have no idea what future ‘innovations’ the Goog will introduce to pollute your feed even further.

(Bear in mind that Google Feedburner also allows you to inject ads into a feed you have burned using AdSense for Feeds.)

“Look at me, Look at me” – Rewriting Google Analytics Tracking Codes

A couple of quick post hoc thoughts to add to Google/Feedburner Link Pollution:

1) there’s an infoskills issue here based on an understanding of what proxied links are, what is superfluous in a URI (Google tracking attributes etc);

2) there’s fun to be had… so for example, @ajcann recently posted on how students at Leicester are getting into the bookmarked resource thing and independently “doing some excellent work on delicious, creating module resources”: Where’s the social?.

Here’s the original link as polluted by Feedburner (I clicked through to the page from Google Reader):
http://scienceoftheinvisible.blogspot.com/2009/11/wheres-social.html
?utm_source=feedburner
&utm_medium=feed
&utm_campaign=Feed%3A+SOTI+%28Science+of+the+Invisible%29
&utm_content=Google+Reader

Normally, I would have stripped the tracking code from the link I made above to Alan’s post. Instead, I used this:
http://scienceoftheinvisible.blogspot.com/2009/11/wheres-social.html
?utm_source=ouseful.info
&utm_medium=blogosphere
&utm_campaign=infoskills,analytics
&utm_content=http://wp.me/p1mEF-EH

(The campaign element is the category I used for this post, the content is the shortcode for the post.)

Don’t ya just love it: tracking code spam :-)

So I’m thinking – maybe I need a WordPress plugin that will preemptively clean all external links of Google tracking codes and then add my own ‘custom’ tracking stuff instead (under the assumption that the linked-to site is running Google Analytics; if it isn’t, then the annotations are just an unsightly irrelevance, or noise in the URI…).
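As a sketch of what such a plugin might do to each outbound link (the function and parameter names are mine, not any real WordPress API):

```javascript
// Strip any existing utm_* tracking parameters from a link,
// then append our own 'custom' campaign parameters instead
function retagLink(uri, myParams) {
  var parts = uri.split('?');
  var kept = parts.length > 1
    ? parts[1].split('&').filter(function (p) { return p.indexOf('utm_') !== 0; })
    : [];
  Object.keys(myParams).forEach(function (k) {
    kept.push('utm_' + k + '=' + encodeURIComponent(myParams[k]));
  });
  return parts[0] + '?' + kept.join('&');
}

// e.g. retagLink(link, { source: 'ouseful.info', medium: 'blogosphere' })
```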

A Final Nail in the Coffin of “Google Ground Truth”?

I’ve written before about how Google’s personalisation features threaten the notion of some sort of “Google Ground Truth”, the ability for two different individuals in different locations to enter the same term into the Google search box, and get back similar results (e.g. Another Nail in the Coffin of “Google Ground Truth”?).

So what threats are there? Google Personalised Search for logged in Google users is one obvious source of differences, as are regional differences from the different national search engines (e.g. google.ca versus google.co.uk).

With more and more browsers becoming location aware, I wonder whether we will increasingly see regional, or even hyperlocal, differences in standard web search based on browser location (something that presumably already exists in the local search engines).

Social signals (links from your friends or amplified by them) and real time signals also act as potential sources of difference for personalised ranking factors.

And for users engaged in a search session, the ranking of results you see in the third search in a session may even be influenced by the terms (and results you clicked on?!) in the first or second queries of that session.

Anyway, it seems that as of the weekend, there is another threat – perhaps a final threat – to that notion: Personalized Search for everyone:

Previously, we only offered Personalized Search for signed-in users, and only when they had Web History enabled on their Google Accounts. What we’re doing today is expanding Personalized Search so that we can provide it to signed-out users as well. This addition enables us to customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser. It’s completely separate from your Google Account and Web History (which are only available to signed-in users). You’ll know when we customize results because a “View customizations” link will appear on the top right of the search results page. Clicking the link will let you see how we’ve customized your results and also let you turn off this type of customization.

Chris Lott also made a very perceptive comment:

PS It also looks like Google are looking for even more traffic data to help feed their stats collection’n’analysis engines: Introducing Google Public DNS

PPS it seems that Google just announced real time search results integration into the Google homepage. It’s still rolling out, but here’s a preview of what the integration looks like:

Read more at Relevance meets the real-time web. Exciting times…

PPPS Seems like there’s no global, or necessarily even national, ground truth in Google Suggest results either: Google localised Suggest

Keeping Your Facebook Updates Private

So it seems as if Facebook is trying to encourage everyone to open up a little, and just share… Ah, bless… I suppose it is getting near to Christmas, after all…

So if you don’t want the world and Google to know everything you’re posting about on Facebook, and you are quite happy with privacy settings as they currently are, thank you very much, here’s what I (think) you need to do… Continue to the next step and change the settings from Everyone:

to Old Settings:

When you hover over the Old Settings radio button, a tooltip should pop up telling you what your current settings are. If anything looks odd, make a note of it so that you can change the setting later.

If you think you’d like to make things available to Everyone, bear in mind these important things to remember:

Information you choose to share with Everyone is available to everyone on the internet.

And when you install an application:

When you visit a Facebook-enhanced application, it will be able to access your publicly available information, which includes Name, Profile Photo, Gender, Current City, Networks, Friend List, and Pages. This information is considered visible to Everyone.

To save the settings, click to do exactly what it says on the button:

If, whilst changing the settings, you noticed that an Old Setting tooltip suggested that your current privacy settings were different to what you thought they were, you’ll need to go in to the Privacy Settings panel, which you can find from the Settings on the toolbar at the top of each Facebook page:

Looking at the actual privacy settings page, there are several menu options that lead to yet more menu options and then screenfuls of different settings…

When I have a spare 2-3 hours, I’ll try to post a summary of them… (unless anyone already knows of a good tutorial on “managing your Facebook privacy settings”?) For now, though, I’m afraid you’re on your own trying to track down the setting you disagreed with so that you can change it to a setting you do want to have…

Search Mechanics and Search Engineers

A couple of days ago I came across the phrase search mechanic in a post on US IT Spending:

The budget request calls for launching a new tracking tool with daily updates that would provide the public with the ability to see aggregate spending by agency and also by geographic area as an effort to increase transparency. Obama also wants a new search mechanic [my emphasis] to allow the public to “mash” data by location, agency and timeframe.

By this, I take it to mean search mechanic in the sense of game mechanics, that is, something like the way the rules/architecture of the game (or ‘code’ in the sense Lessig uses it) determine the game play and the user’s interaction with the game. (If you’re interested in how games and the business of games works, why not sign up to my Digital Worlds course?;-)

So for example, one different search mechanic might be a different user experience, such as displaying results on a map or timeline rather than as a list, or another might be a different way of determining (or ranking) and presenting the results based on user profiling; topically, using social search for example (e.g. The Anatomy of a Large-Scale Social Search Engine, and Search is getting more social).

Anyway, for a long time I’ve been looking for a phrase to describe what I think is likely to be a core skill for librarians, namely, the ability to generate effective search queries over a range of systems, from popular search engines, to traditional subscription databases (in the sense of things like Lexis Nexis or EBSCO), to ‘proper’ databases and even Linked Data stores (how’s your SQL and SPARQL?)

So I wonder – is there a role for search mechanics (like car mechanics) and search engineers? The search mechanics might be there to help you get your search query working on the one hand, or fix the ranking algorithm in your search engine on the other, whereas the search engineer might be more interested in working at a different level, figuring out effective search strategies, or how to use search in a particular situation?

Getting Started with data.gov.uk… or not…

Go to any of the data.gov.uk SPARQL endpoints (that’s geeky techie scary speak for places where you can run geeky techie datastore query language queries and get back what looks to the eye like a whole jumble of confusing Radical Dance Faction lyrics [in joke;-)]) and you see a search box, of sorts… Like this one on the front of the finance datastore:

So, pop pickers:

One thing that I think would make the SPARQL page easier to use would be a list of links, displayed down the left hand side, that would launch one of the last 10 or so queries that had run in a reasonable time and returned at least some results – so n00bs like me could at least have a chance of seeing what a successful query looked like. Appreciating that some folk might want to keep their query secret (more on this another day…;-), there should probably be a ‘tick this box to keep your query out of the demo queries listing’ option when folk submit a query.

(A more adventurous solution, but one that I’m not suggesting at the moment, might allow folk who have run a query from the SPARQL page on the data.gov.uk site to “share this query” to a database of (shared) queries. Or if you’ve logged in to the site, there may be an option of saving it as a private query.)
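In the meantime, the one query guaranteed to show a n00b what a successful query looks like is the generic “show me anything” probe – it assumes nothing about how the data is modelled, so it should run against any endpoint:

```sparql
# Return ten arbitrary (subject, predicate, object) triples
# from the store, just to see what the data actually looks like
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
```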

That is all…

PS if you have some interesting SPARQL queries, please feel free to share them below or e.g. via the link on here: Bookmarking and Sharing Open Data Queries.

PPS from @iand “shouldnt that post link to the similar http://tw.rpi.edu/weblog/2009/10/23/probing-the-sparql-endpoint-of-datagovuk/“; and here’s one from @gothwin: /location /location /location – exploring Ordnance Survey Linked Data.

PPPS for anyone who took the middle way in the vote, then if there are any example queries in the comments to this post, do they help you get started at all? If you voted “what are you talking about?” please add a comment below about what you think data.gov.uk, Linked Data and SPARQL might be, and what you’d like to be able to do with them…