Are We Just Google’s Lab Rats?

There are some interesting comments relating to my previous post on Google Lock-In Lock-Out in a comment thread on OSnews: Why Google gets so much credit. Here are some of my own lazy Sunday morning notes/thoughts on that thread, and other comments…

– killing Google Reader does not kill RSS/there was no “malicious intent” in mapping out the Reader/RSS strategy:

A nice phrase in an #opentech talk yesterday was that we (technologists, engineers and data scientists, for example) have to “act responsibly”. Google Reader helped popularise feed reading when some of us were hopeful for its future (“We Ignore RSS at OUr Peril”), and as such attracted many readers away from other clients (myself included), with the result that competition got harder (“compete against Google? Hmm… maybe not…”). Google Reader’s infrastructure and unofficial APIs enabled folk to build services off the back of it, turning Google Reader into de facto infrastructure for other people’s applications and services. (Remember: the Google Maps API was unofficial at first.) There aren’t many OPML bundlers out there, for example, but for hackers into appropriating tech Google Reader is one (see the sketch after this paragraph). Since I moved away from Google Reader (to theoldreader) I haven’t used Flipboard so much, which as far as I was concerned was using Reader essentially as infrastructure. Caveat emptor, I guess, for developers building on top of other companies’ services (as many Twitter and Facebook app developers keep discovering).
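
As an aside, for anyone who hasn’t poked at one: an OPML subscription export is just XML, and pulling the feed URLs out of it takes a few lines. A minimal sketch of my own (the subscriptions.xml filename is just an assumed example of what an export might be called):

```python
# Minimal sketch: list the feed URLs in an OPML subscription export.
import xml.etree.ElementTree as ET

def feeds_from_opml(path):
    """Return the xmlUrl of every feed <outline> in an OPML file."""
    tree = ET.parse(path)
    return [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]

# Assumed filename for an exported subscription list:
print(feeds_from_opml("subscriptions.xml"))
```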

With Feedburner, Google bought up a service that acted as a proxy, taking public syndication feeds, instrumenting them with analytics, and then encouraging the people taking up the syndicated content to subscribe to the Feedburner feed. Where RSS and Atom were designed to support syndication between independent parties, Feedburner – and then Google – insinuated itself between those parties. By replacing self-controlled feeds as the subscription endpoint with Google-controlled endpoints, publishers gave up control of their syndication infrastructure. With Google losing interest in open syndication feeds as it pursues its own closed content network agenda, we are faced with a situation whereby Google can potentially trash a widespread syndication infrastructure that would have remained resilient if Google hadn’t insinuated itself into it. Or if we hadn’t been so stupid as to simplistically accept its overtures.
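
To make the proxying point concrete, here’s a toy sketch (all URLs invented; this illustrates the general pattern, not Feedburner’s actual mechanics): fetch the publisher’s feed, rewrite each item link so readers pass through a tracking redirect first, and serve the rewritten feed as the new subscription endpoint.

```python
# Toy feed proxy: instrument a feed by routing item links via a tracker.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SOURCE_FEED = "https://example.com/feed.rss"      # publisher-controlled feed (invented URL)
TRACKER = "https://proxy.example.net/track?url="  # hypothetical redirect endpoint

def proxied_feed():
    """Fetch the upstream RSS feed and rewrite item links via the tracker."""
    with urllib.request.urlopen(SOURCE_FEED) as resp:
        root = ET.fromstring(resp.read())
    for link in root.iter("link"):
        if link.text:
            # Readers now hit the proxy first, which can log the click
            # before redirecting them on to the original URL.
            link.text = TRACKER + urllib.parse.quote(link.text, safe="")
    return ET.tostring(root, encoding="unicode")
```

Once subscribers point at the proxy’s feed rather than the original, whoever runs the proxy sits between publisher and reader – which is exactly the control that gets given up.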

Hmmm… thinks… do we need a Google users’ motto? Don’t be stupid perhaps…?!

I applaud Google for developing the services it does, getting them to scale and opening up API access. But as these services become de facto infrastructure, the question of how Google acknowledges any responsibility that flows from this (even if that responsibility is incorrectly assumed by users) becomes an issue. Responsibilities arise in other areas too, of course, such as taxation and corporate transparency. But that’s another issue. (Would Google act differently if its motto was “Be responsible” or “Act responsibly” rather than “Don’t be evil”? It strikes me that “Act responsibly” could work as a motto for both companies and their users?)

It seems to me that with Google+, Google is not adopting open syndication standards in two ways: it is not using them “internally”, and it is not making feeds publicly available. There may be good technical reasons for the first, but by the second Google is *not allowing* its community members to participate in an open content syndication network/system. Google’s choice, but I’m not playing.

Google is not killing the open standards by closing off access to them through commercial licensing terms, but it may contribute to stifling their adoption by promoting alternative standards that others feel they have to adopt because of the influence Google has on web traffic.

Consider this other way of looking at it – Google is presumably trying to get other parties to adopt WebP by developing it as an open standard. Google assumes that it can drive adoption of WebP as a web standard by adopting it itself. In terms of argumentation, it doesn’t follow that Google can prevent a standard being adopted simply by not adopting it (i.e. that by not adopting a standard, or by stopping its own use of one, Google kills it generally), but people follow bad logic all the time (and if they follow Google for their technology choices, or have a technology model based on being parasitic on Google infrastructure, Google’s dropping of a standard effectively kills it for those people)…

– control of what we see

Google makes money by putting ad links in front of eyeballs and getting people to click on them. By presenting “relevant” ads, Google presumably tries to maximise the click-through rate so that it can make more money per displayed link.
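
A back-of-envelope sketch (numbers entirely invented) of why “relevance” pays: if you’re paid per click, expected revenue per impression is the predicted click-through rate times the cost per click, so raising the predicted CTR of what you show earns more from the same ad slot.

```python
# Expected revenue per 1000 impressions = CTR x cost-per-click x 1000.
# The CTR and CPC figures are made up for illustration.
ads = [
    {"ad": "generic",  "ctr": 0.005, "cpc": 0.40},
    {"ad": "relevant", "ctr": 0.030, "cpc": 0.40},
]
for a in ads:
    rpm = a["ctr"] * a["cpc"] * 1000  # revenue per mille (1000 impressions)
    print(a["ad"], rpm)  # generic 2.0 vs relevant 12.0
```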

To encourage you to spend your attention on pages that Google controls, Google has adopted the idea that by presenting you (and me; us) with “relevant” content, we are likely to remain engaged. With Google web search, the relevance of search results supposedly attracts us back to the Google search tool. With services such as Google Now, Google pre-emptively tries to present you with information it thinks you need, presumably based on predictive models of sequences of action that other people (or you yourself) have demonstrated in the past.

I’m not really up on behavioural psychology models, but I have a vague memory that intermittent reinforcement schedules were demonstrated to be one of the more effective modes of behaviourist training/operant conditioning. So I wonder: how effective are predictive intermittent positive reinforcement schedules? (You get the idea, right? We’re pigeons that peck at Android phones and Google is the experimenter trying to get us to peck the right way, reinforcing us every now and again by satisfying our intent. That is, has there been a flip away from Google using us to provide reinforcement training signals to its algorithms, to a situation in which we have become Google’s experimental lab rats, coupled in a series of ongoing experiments that train us and its algorithms, jointly, together, to maximise… something…)
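
The schedule I half-remember as the potent one is the variable-ratio schedule: reinforcement arrives after an unpredictable number of responses (the slot-machine regime). Here’s a toy simulation of my own devising – purely illustrative, not from any behavioural literature or from Google – of a “pigeon” pecking under such a schedule:

```python
# Toy variable-ratio reinforcement schedule: reward after a random
# number of responses, averaging mean_ratio pecks per reward.
import random

def variable_ratio_session(mean_ratio=5, pecks=100, seed=1):
    """Count rewards delivered over a session of pecks."""
    random.seed(seed)
    rewards = 0
    until_next = random.randint(1, 2 * mean_ratio - 1)
    for _ in range(pecks):
        until_next -= 1
        if until_next == 0:  # reinforcement arrives unpredictably
            rewards += 1
            until_next = random.randint(1, 2 * mean_ratio - 1)
    return rewards

print(variable_ratio_session())  # roughly pecks / mean_ratio rewards
```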

There is a danger, I think, in Google chasing the “relevance” thing too far, seeing the maximisation of whatever conversion metrics it decides on as being a sign that it has “got things right” for us, that it is satisfying our “intent”. And if operant conditioning does influence the way we behave, maybe we do actually need to start thinking about what the machine algorithms are training us to do. Are training us to do. Training us.

Google’s stated aim is to “organize the world’s information and make it universally accessible and useful”.

– Through web search, it started to organise the information it presented to us through search results that were more appealingly ranked (seemed “more relevant”) than those of the other search engines.

– Through personalised search, it started to organise the way it presented results to each of us individually.

– Through web tracking, it presents us with information – adverts – organised in a way it presumably thinks is more personally meaningful to us (but maximising what metric exactly? More likely to cause us to act in a particular way, as measured by whether we click the link, or linger on a page, or engage in a particular behaviour that can be captured – for model building and exploitation purposes – by web tracking algorithms?)

– Through Google Now, and the new Google image gallery tools, Google is seeking to organise our information (we’re part of the world, right?) on our behalf and present it back to us in a way that the Google algorithms decide.

The old photos in a drawer back at my family home are sorted howsoever (by whatever algorithm “use” and random access results in). Now they’ll be sorted by Google. Maybe the algorithms are similar. Or maybe they’re not. What would be evil, I think, would be if the ranking algorithms used to decide the order in which organic information is presented to us started to be influenced by the algorithms tied to advertising or marketing – that is, by algorithms used to try to maximise the extent to which we are influenced in accord with the goals, beliefs, desires and intents of others (with a hat tip there to agent logic and the theories of intelligent software agents).

At the moment I believe that Google believes it is trying to develop algorithms that benefit us personally, in a utilitarian way. But I’m not sure what function it is they are maximising, or how they think it maps onto any personal theories or preferences we may have about what is “accessible” and “useful”. I guess we might also ask whether “accessible” and “useful” are the road to a Good Life (because in the end this comes down to philosophy and ethics, doesn’t it?) or whether we should be “organising the world’s information” with some other purpose in mind?

PS Just by the by, it’s worth noting that the educational arena is seeking to use learning analytics to instrumentalise our behaviour and engagement within learning systems and contexts for our, erm, learning benefit. (Measured how?)

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...