Cognitive Science, 2.0? Google Psychlab

Whilst I was rooting around looking for things to do the year or two after graduating, I came across the emerging, inter-disciplinary field of cognitive science, which used ideas from cognitive psychology, philosophy, AI, linguistics and neuroscience to try to understand “mind and consciousness stuff”. I read Dennett and Searle, and even toyed with going to do a Masters at Indiana, where David Chalmers had been doing all manner of interesting things as a PhD student.

I was reminded of this yesterday whilst reading a post on the Google DeepMind blog – Open-sourcing Psychlab – which opened in a style that began to wind me up immediately:

Consider the simple task of going shopping for your groceries. If you fail to pick up an item that is on your list, what does it tell us about the functioning of your brain? …

What appears to be a single task actually depends on multiple cognitive abilities. We face a similar problem in AI research …

To address this problem in humans, psychologists have spent the last 150 years designing rigorously controlled experiments aimed at isolating one specific cognitive faculty at a time. For example, they might analyse the supermarket scenario using two separate tests – a “visual search” test that requires the subject to locate a specific shape in a pattern could be used to probe attention, while they might ask a person to recall items from a studied list to test their memory. …

“To address this problem in humans”… “rigorously controlled”, pah! So here we go: are Google folk gonna disrupt cognitive psychology by turning away from the science and just throwing a bunch of numbers they’ve managed to collect from wherever, howsoever, into a couple of mathematical functions that try to clump them together without any idea about what any clusters or groupings mean, or what they’re really clustering around…?

We believe it is possible to use similar experimental methods to better understand the behaviours of artificial agents. That is why we developed Psychlab [ code ], a platform built on top of DeepMind Lab, which allows us to directly apply methods from fields like cognitive psychology to study behaviours of artificial agents in a controlled environment. …

Psychlab recreates the set-up typically used in human psychology experiments inside the virtual DeepMind Lab environment. This usually consists of a participant sitting in front of a computer monitor using a mouse to respond to the onscreen task. Similarly, our environment allows a virtual subject to perform tasks on a virtual computer monitor, using the direction of its gaze to respond. This allows humans and artificial agents to both take the same tests, minimising experimental differences. It also makes it easier to connect with the existing literature in cognitive psychology and draw insights from it.

So, to speed up the way Google figures out how to manipulate folks’ attention through a screen, they’re gonna start building cognitive agents that use screens as an interface (at first), develop the models so they resemble human users (I would say “white, male, 20s-30s, on the spectrum”, though it would perhaps be more insidious to pick demographics relating to “minority” groups that power brokers (and marketers) would more readily like to “influence” or “persuade” – but that would be a category mistake, because I don’t think cognitive psychology works like that), then start to game the hell out of them to see how best to manipulate their behaviour.

Along with the open-source release of Psychlab we have built a series of classic experimental tasks to run on the virtual computer monitor, and it has a flexible and easy-to-learn API, enabling others to build their own tasks.

Isn’t that nice of Google. Tools to help cog psych undergrads replicate classic cognitive psychology experiments with their computer models.

Each of these tasks have been validated to show that our human results mirror standard results in the cognitive psychology literature.

Good. So Google has an environment that allows you to replicate experiments from the literature.
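For what it’s worth, here’s roughly what driving one of those tasks looks like through the DeepMind Lab Python bindings, just to make the “virtual monitor, gaze as mouse” idea concrete. A minimal sketch, assuming the published deepmind_lab API; the level path is my guess at one of the released Psychlab tasks, so check the repo for the actual names:

```python
# Minimal sketch: load a Psychlab task in DeepMind Lab and respond by "looking".
# The level path below is an assumption -- consult the Psychlab docs in the
# DeepMind Lab repo for the names of the released tasks.
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    'contributed/psychlab/visual_search',   # assumed level path
    ['RGB_INTERLEAVED'],                    # pixel view of the virtual monitor
    config={'width': '640', 'height': '480', 'fps': '60'})

env.reset()

# The "subject" answers by shifting its gaze, so the only interesting parts of
# the action vector are the look-left/right and look-up/down components.
action_names = [spec['name'] for spec in env.action_spec()]
action = np.zeros(len(action_names), dtype=np.intc)
action[action_names.index('LOOK_LEFT_RIGHT_PIXELS_PER_FRAME')] = 10  # small saccade right

while env.is_running():
    reward = env.step(action, num_steps=1)
    frame = env.observations()['RGB_INTERLEAVED']  # what's on the monitor this frame
    # ...feed `frame` to your agent and choose the next gaze action here...
    break  # sketch only
```

The point being: the agent only ever sees pixels and only ever responds by shifting its gaze, which is what lets the same trial structure be run on humans and agents alike.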

Just remember that Google’s business is predicated on developing ad tech, and ad revenue in turn is predicated on finding ways of persuading people to either persist in, or modify, their behaviour.

And once you’ve built the model, then you can start to manipulate the model.

When we did the same test on a state-of-the-art artificial agent, we found that, while it could perform the task, it did not show the human pattern of reaction time results. … this data has suggested a difference between parallel and serial attention. Agents appear only to have parallel mechanisms. Identifying this difference between humans and our current artificial agents shows a path toward improving future agent designs.

“Agents appear only to have parallel mechanisms”. Erm? The models that Google built appear only to have parallel mechanisms?
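For anyone who hasn’t bumped into the visual search literature, the parallel-vs-serial claim boils down to how reaction time grows with the number of items on the display: a roughly flat slope suggests parallel, “pop-out” search, while a steadily climbing slope suggests serial, item-by-item scanning. Here’s a back-of-the-envelope sketch of that comparison – the numbers are made up purely to show the shape of the analysis, not DeepMind’s data:

```python
# Illustrative only: reaction-time-vs-set-size slopes, the standard way of
# telling parallel ("pop-out") search apart from serial search.
import numpy as np

def rt_slope(set_sizes, reaction_times_ms):
    """Least-squares slope of reaction time (ms) against display set size."""
    slope, _intercept = np.polyfit(set_sizes, reaction_times_ms, deg=1)
    return slope

# Fabricated numbers for illustration only.
human_slope = rt_slope([4, 8, 16, 24], [560, 720, 1050, 1380])  # tens of ms per item: serial-ish
agent_slope = rt_slope([4, 8, 16, 24], [310, 305, 320, 315])    # near zero: parallel-ish

print(f"human: {human_slope:.1f} ms/item, agent: {agent_slope:.1f} ms/item")
```

So “agents appear only to have parallel mechanisms” is the flat-slope case: the agent’s response time barely moves as the display gets more cluttered, where a human’s would climb on the harder searches.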

This also makes me think, even more, that we need to rebrand AI. Just as “toxic” neural network research got rebranded as “deep learning” when a new algorithmic trick and bigger computers meant bigger networks with more impressive results than before, I think we should move to AI meaning “alt-intelligence”, or “alternative intelligence” in the sense of “alternative facts”.

“[A] path toward improving future agent designs.” That doesn’t make sense? Do they mean models that more closely represent human behaviour in terms of the captured metrics?

What would be interesting would be if the DeepMind folk found they hit a brick wall with deep-learning models and couldn’t find a way to replicate human behaviour. Because that might help encourage the development of “alternative intelligence” critiques.

Psychlab was designed as a tool for bridging between cognitive psychology, neuroscience, and AI. By open-sourcing it, we hope the wider research community will make use of it in their own research and help us shape it going forward.

Hmmm…. Google isn’t interested in understanding how people work from a point of view of pure inquiry. It wants to know how you work so it can control, or at least influence, your behaviour. (See also Charlie Stross, Dude, you broke the future!.)

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...

5 thoughts on “Cognitive Science, 2.0? Google Psychlab”

  1. Being at IU in the early ’90s was indeed a blast! The topic that everyone was talking about wasn’t neural networks exactly, but dynamical systems. That specific perspective hasn’t paid off, yet, but seems at the heart of many self-organizing systems. But I did do my thesis there, combining neural networks and Hofstadter’s analogy problems. See https://repository.brynmawr.edu/compsci_pubs/78/

    I agree with many of your points here. But don’t you agree that it is better to have these tools open and available to everyone so that we can explore them, for good or evil? Otherwise, we’d only be able to speculate about any issues, good or bad.

  2. @Doug I was into dynamical / complex systems too (I spent a month at a Santa Fe Institute complex systems summer school in 1998(?)); my PhD ended up on genetic algorithms in dynamic environments.

    Re: the tools – what concerns me is the way they will be used: have test suite, build model, learn how to exploit model.

    Google et al are only interested in understanding how we think/behave as a scientific problem to the extent that they can then use those models to learn how to influence such a system and manipulate its behaviour.

    That arrogance, combined with the apparently laissez-faire attitude of SiValleyTech towards what would be termed “research ethics” in academia, is concerning.

    The experiments also start to border on medical ethics / medical trials too.

    Psychoactive substances are heavily regulated in the UK, research into psychedelics internationally has had a hard time of it since Leary et al, and psychiatric drugs in the UK have to be initially prescribed by a psychiatrist, I think, rather than just a GP (general practice local doctor).

    But psychologists and psychotherapists are also licensed, albeit to practise behavioural modification therapies using words rather than chemicals, and a quick check of the UK clinical trials register suggests such therapies go through medical trials.

    Aside from not wanting ethics-free US corporates looking for every opportunity to influence my behaviour, or that of society around me: I don’t want to live for ever, I don’t want some tech company telling me how to “live better”, and I don’t want people being prevented from accessing services because they refuse to sign up to constant monitoring. That’s the sort of monitoring being normalised by “fitness companies” promoting surveillance tags which, at the moment, would be a step too far for a justice system to sentence someone to wear, even after they’d been found guilty of some sort of crime in a court of law.

    A pox on the whole lot of them. Just because tech can be used to do “kewel” things. So can animal experiments etc etc. Thinks: do any of the tech companies have licenses for doing animal experiments? How many of them have signed up to run medical trials around their software?

    1. You bring up some interesting points! But I am trying to come up with some principles that would govern uses of simulated people, and I can’t do it. Where is the line between what is acceptable and what has crossed over into an ethical violation? I can imagine the spirit of such a line, but can’t define it to the letter for, say, a court of law.

      1. I’m not sure what I think or what the answer is. Some fragments:
        – just because you can research something doesn’t mean you should;
        – if you build a tool that you can exploit to the detriment of others, opening it up so others may be able to use it for the benefit of others is not a panacea;
        – we need to distinguish between the research practice, the aims of the research, and the uses to which the research is put.
        In Google/Facebook/etc land, my impression of their behaviour is that “generating profit through demonstrable ways of capturing attention and/or influencing behaviour” is the prime mover and everything is geared to that. Possibly with the proviso that final products should comply with the letter (rather than the spirit) of the law, and that gaming the law to whatever extent benefits the company is completely acceptable.
        Films and games are all labelled with age ratings, and with guidance on what sort of “harmful content for impressionable minds” they contain.
        Food is labelled, cigarettes are labelled and, in the EU, toys are safety labelled.
        In academia, psychology research projects go through a research ethics committee before they start, and may have to go to review while they’re running.
        When selling health appliances, suppliers need to put products in development through appropriate trials and regulated processes.
        Tech makes claims about certain benefits while denying (complementary) harms (to advertisers: “we’ll help you influence people to buy your product”; to governments: “no way can any content on our platform influence voting behaviour, radicalisation, etc”), and shies away from regulated spaces: they claim they aren’t publishers, they’re platforms; they claim they aren’t infringing copyright, they’re making fair use (or something) of content to make it discoverable; they claim they aren’t selling health appliances, they’re selling fitness or wellbeing ones, etc etc.
        They’re either snake oil sellers, or they’re skirting regulation put in place for the protection of the public from potential harms caused by companies or the state. A double pox on the lot of them.

