Whilst I was rooting around looking for things to do the year or two after graduating, I came across the emerging, inter-disciplinary field of cognitive science, which used ideas from cognitive psychology, philosophy, AI, linguistics and neuroscience to try to understand “mind and consciousness stuff”. I read Dennett and Searle, and even toyed with going to do a Masters at Indiana, where David Chalmers had been doing all manner of interesting things as a PhD student.
I was reminded of this yesterday whilst reading a post on the Google DeepMind blog – Open-sourcing Psychlab – which opened in a style that began to wind me up immediately:
Consider the simple task of going shopping for your groceries. If you fail to pick up an item that is on your list, what does it tell us about the functioning of your brain? …
What appears to be a single task actually depends on multiple cognitive abilities. We face a similar problem in AI research …
To address this problem in humans, psychologists have spent the last 150 years designing rigorously controlled experiments aimed at isolating one specific cognitive faculty at a time. For example, they might analyse the supermarket scenario using two separate tests – a “visual search” test that requires the subject to locate a specific shape in a pattern could be used to probe attention, while they might ask a person to recall items from a studied list to test their memory. …
“To address this problem in humans”… “rigorously controlled”, pah! So here we go: are Google folk gonna disrupt cognitive psychology by turning away from the science and just throwing a bunch of numbers they’ve managed to collect from wherever, howsoever, into a couple of mathematical functions that try to clump them together, with no idea what any clusters or groupings mean, or what they’re really clustering around…?
We believe it is possible to use similar experimental methods to better understand the behaviours of artificial agents. That is why we developed Psychlab [ code ], a platform built on top of DeepMind Lab, which allows us to directly apply methods from fields like cognitive psychology to study behaviours of artificial agents in a controlled environment. …
Psychlab recreates the set-up typically used in human psychology experiments inside the virtual DeepMind Lab environment. This usually consists of a participant sitting in front of a computer monitor using a mouse to respond to the onscreen task. Similarly, our environment allows a virtual subject to perform tasks on a virtual computer monitor, using the direction of its gaze to respond. This allows humans and artificial agents to both take the same tests, minimising experimental differences. It also makes it easier to connect with the existing literature in cognitive psychology and draw insights from it.
So, to speed up the way Google figures out how to manipulate folks’ attention through a screen, they’re gonna start building cognitive agents that use screens as an interface (at first), then develop the models so they resemble human users, then game the hell out of them to see how best to manipulate their behaviour. (I would say the models will come to resemble “white, male, 20s-30s, on the spectrum”, though it would perhaps be more insidious to pick demographics relating to “minority” groups that power brokers, and marketers, would more readily like to “influence” or “persuade”. But that would be a category mistake, because I don’t think cognitive psychology works like that.)
Along with the open-source release of Psychlab we have built a series of classic experimental tasks to run on the virtual computer monitor, and it has a flexible and easy-to-learn API, enabling others to build their own tasks.
- Visual search – tests ability to search an array of items for a target. [I once tried to pitch an idea to a family friend who ran a vending machine leasing service that I could have a go at optimising the arrangement of products in his machines to maximise sales. How life might have been different if we’d actually tried that out…]
- Continuous recognition – tests memory for a growing list of items.
- Arbitrary visuomotor mapping – tests recall of stimulus-response pairings.
- Change detection – tests ability to detect changes in an array of objects reappearing after a delay.
- Visual acuity and contrast sensitivity – tests ability to identify small and low-contrast stimuli.
- Glass pattern detection – tests global form perception.
- Random dot motion discrimination – tests ability to perceive coherent motion.
- Multiple object tracking – tests ability to track moving objects over time.
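To make one of these paradigms concrete, here's a toy change-detection trial generator. This is my own sketch of the paradigm's logic (study array, delay, probe array that may differ in one item), not Psychlab's actual implementation; the item set, array size and change probability are all made up for illustration:

```python
import random

def change_detection_trial(n_items=6, p_change=0.5, seed=None):
    """One change-detection trial: a study array of coloured items is
    shown, then (after a blank delay) a probe array that may differ
    in exactly one item. Returns (study, probe, changed)."""
    rng = random.Random(seed)
    colours = ["red", "green", "blue", "yellow", "purple", "cyan"]
    study = [rng.choice(colours) for _ in range(n_items)]
    probe = list(study)
    changed = rng.random() < p_change
    if changed:
        i = rng.randrange(n_items)
        # replace the item at position i with a different colour
        probe[i] = rng.choice([c for c in colours if c != study[i]])
    return study, probe, changed

study, probe, changed = change_detection_trial(seed=1)
print(study, probe, changed)
```

The subject's job on each trial is to report whether `probe` differs from `study`; plotting accuracy against `n_items` gives the classic capacity-limited performance curve.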
Isn’t that nice of Google. Tools to help cog psych undergrads replicate classic cognitive psychology experiments with their computer models.
Each of these tasks has been validated to show that our human results mirror standard results in the cognitive psychology literature.
Good. So Google has an environment that allows you to replicate experiments from the literature.
Just remember that Google’s business is predicated on developing ad tech, and ad revenue in turn is predicated on finding ways of persuading people to either persist in, or modify, their behaviour.
And once you’ve built the model, then you can start to manipulate the model.
When we did the same test on a state-of-the-art artificial agent, we found that, while it could perform the task, it did not show the human pattern of reaction time results. … this data has suggested a difference between parallel and serial attention. Agents appear only to have parallel mechanisms. Identifying this difference between humans and our current artificial agents shows a path toward improving future agent designs.
“Agents appear only to have parallel mechanisms”. Erm? The models that Google built appear only to have parallel mechanisms?
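For context, the "human pattern of reaction time results" in visual search is the classic serial-vs-parallel signature: when a target "pops out" (differs by a single feature), reaction time stays roughly flat as the display grows (parallel search); when the target is defined by a conjunction of features, reaction time climbs roughly linearly with set size (serial search). A toy simulation of those two idealised patterns, with illustrative slope and noise values of my own, not DeepMind's data:

```python
import random

def simulate_search_rt(set_size, mode, rng):
    """Idealised reaction time (ms) for one visual-search trial.
    Parallel ('pop-out') search: RT is flat across set sizes.
    Serial search: RT grows roughly linearly with set size."""
    base = 450.0                                  # baseline RT, ms
    slope = 0.0 if mode == "parallel" else 25.0   # ms per extra item
    return base + slope * set_size + rng.gauss(0, 20)

def mean_rt(set_size, mode, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(simulate_search_rt(set_size, mode, rng)
               for _ in range(trials)) / trials

# Parallel search: mean RT barely changes with display size.
# Serial search: mean RT climbs as more items must be inspected.
for n in (4, 8, 16):
    print(n, round(mean_rt(n, "parallel")), round(mean_rt(n, "serial")))
```

An agent showing "only parallel mechanisms" would produce a flat RT-vs-set-size line on both task types, which is exactly the mismatch the quote describes.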
This also makes me think even more that we need to rebrand AI. Just as “toxic” neural network research was rebranded “deep learning” when a new algorithmic trick and bigger computers meant bigger networks with more impressive results than before, I think we should move to reading AI as “alt-intelligence”, or “alternative intelligence”, in the sense of “alternative facts”.
“[A] path toward improving future agent designs.” That doesn’t make sense? Do they mean models that more closely represent human behaviour in terms of the captured metrics?
What would be interesting would be if the DeepMind folk found they hit a brick wall with deep-learning models and couldn’t find a way to replicate human behaviour. Because that might help encourage the development of “alternative intelligence” critiques.
Psychlab was designed as a tool for bridging between cognitive psychology, neuroscience, and AI. By open-sourcing it, we hope the wider research community will make use of it in their own research and help us shape it going forward.
Hmmm…. Google isn’t interested in understanding how people work from a point of view of pure inquiry. It wants to know how you work so it can control, or at least influence, your behaviour. (See also Charlie Stross, Dude, you broke the future!.)