Synthetic psychology is the idea that we can understand ourselves by building physical models of ourselves in the form of robots. It’s synthetic because synthesis means to build, in contrast with analysis, which means understanding something by breaking it down. Most approaches in the natural sciences are analytic: you take a complex thing and try to understand how it works by breaking it down into its parts. We can try to do that with people, but we’re extremely complex, and it’s a real challenge; psychology and neuroscience are still struggling to understand how we work.
So synthetic psychology is the complementary approach, taking our theories and trying to build working examples of those to see if we can recreate human-like behaviour in a robot. By building a physical model, we get insights that we might not get just by looking at the biological system, or just by writing down our ideas on paper, or even programming those ideas into a computer. Particularly with animals and with humans, our bodies are important, and it’s hard to simulate those. So a physical machine, like a robot, can actually be a better way of modelling the person, and it can operate in the same world as we do.
And we can do experiments with a robot in much the same way as we might experiment with ourselves. To generate life-like behaviour, like that of an animal or a human, you might want to take your model and put it inside a robot. Then you can test whether you get behaviour that looks like what you’ve seen in the living organism. In my lab, we’ve focused on the behaviour of mammals, particularly rodents such as rats, because we know an awful lot about rodents from lab studies of their brains and their behaviour. And we’ve started to build robots that are rat-like; in particular, robots that have whiskers.
One of our robots, Shrewbot, has been designed so that it can explore the world, and perceive the world, using a sense of touch that’s provided by its whiskers. In order to build that robot, we had to look at how real animals operate and behave. And we spent a lot of time filming and recording rats exploring their world, and then we built a robot that was similar in its body and its morphology to the rat. We focused on the head and on the whiskers, but we made sure that the rest of the robot could move in a similar way to the animal.
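The core of whisker-based perception described above can be sketched very simply: each whisker reports a deflection signal, and a contact is registered when that signal crosses a threshold. The following is a hypothetical illustration only, not the actual Shrewbot software; the function name and threshold value are assumptions made for the example.

```python
# Hypothetical sketch: detecting touch contacts from whisker deflections.
# Real whiskered robots use far richer signal processing; this only
# illustrates the basic idea of thresholding deflection signals.

def detect_contacts(deflections, threshold=0.5):
    """Return indices of whiskers whose absolute deflection exceeds
    the contact threshold, i.e. whiskers touching something."""
    return [i for i, d in enumerate(deflections) if abs(d) > threshold]

# Example: only whisker 2 is bent past the threshold.
readings = [0.1, -0.2, 0.9, 0.05]
print(detect_contacts(readings))  # [2]
```

In a real system, detected contacts would then drive behaviour, for example orienting the robot's head toward the touched location, mimicking how rats orient to whisker contacts.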
Then to control the robot, we took models of the brain, simplified models of parts of the brain, and we showed that you could put those models together, and you could get really life-like behaviour in our robot. And from that, we can go back to the biology and say, which aspects of animal behaviour have we captured and which have we missed? And then we can start again to build our next robot. Another approach that we have in my group is to try and model human minds and human behaviour. So we’re taking our iCub robot, and with a team of people that’s across Europe, we’re trying to create social intelligence for the iCub.
In other words, we’re trying to make the iCub understand that it’s not alone in the world, that there are other intelligent agents around it, particularly people, and to let it relate to those people. Now this is something that’s never been done before. Robots, right now, don’t have any idea that there are other intelligent things around them. So to make iCub intelligent, we’re looking at what we call the architecture of the brain. And we look at that in quite an abstract way, because the brain has billions of neurons, and we can’t possibly hope to copy every one of those in the control system of the robot.
So we think about what areas of the brain are doing and how the different areas of the brain contribute to make ourselves socially intelligent, and how we could build simplified models of those in a robot to create social intelligence. An important part of human social intelligence is our ability to remember things that have happened in the past, particularly perhaps conversations and interactions we’ve had with other people in the past. In our lab, we’re focusing on this challenge of creating a robot that has memory, autobiographical memory of its own past history of actions and interactions. So that when it meets somebody, it can remember if it’s seen them before, and perhaps also what they did or talked about the last time.
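The autobiographical memory described above can be thought of as a log of episodes that the robot can query when it recognises a person. This is a minimal sketch under that assumption; the class and method names are invented for illustration and are not the iCub project's actual memory system.

```python
# Hypothetical sketch of an autobiographical memory for a social robot:
# a chronological log of interaction episodes, queryable by person.

from dataclasses import dataclass, field

@dataclass
class Episode:
    person: str   # who the robot interacted with
    summary: str  # what happened or was talked about

@dataclass
class AutobiographicalMemory:
    episodes: list = field(default_factory=list)

    def record(self, person, summary):
        """Store a new episode at the end of the robot's history."""
        self.episodes.append(Episode(person, summary))

    def recall(self, person):
        """Return past episodes involving this person, most recent first."""
        return [e for e in reversed(self.episodes) if e.person == person]

memory = AutobiographicalMemory()
memory.record("Alice", "talked about the weather")
memory.record("Bob", "played a sorting game")
memory.record("Alice", "asked for directions")

# On meeting Alice again, the robot can retrieve their shared history.
for e in memory.recall("Alice"):
    print(e.summary)
```

The key design point is that memory is indexed by the other agent, so recognising a face is enough to bring back the relevant shared history.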
When we’re building our robot models of minds and brains, sometimes we want to take something which is quite a faithful reproduction of part of the brain, a neural network, and test that. It might even have elements that behave like neurons, in that they generate spikes of electricity and pass them between each other. Other times, we go for a very abstract, high-level way of thinking about what the brain might be doing; in other words, something more like a computer algorithm. And it’s useful to be able to build and test models at these different levels.
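The "spiking" level of modelling mentioned above is often captured by simplified neuron models such as the leaky integrate-and-fire unit: the neuron accumulates input, its potential leaks away over time, and it emits a spike and resets when it crosses a threshold. This sketch uses illustrative parameter values, not figures from any particular brain model.

```python
# Hypothetical sketch of a leaky integrate-and-fire neuron, the kind of
# simplified spiking unit a faithful neural model might be built from.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate a sequence of inputs with a leaky membrane potential;
    emit a spike (1) and reset to zero whenever the potential crosses
    the threshold, otherwise emit 0."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x     # leak a little, then add the new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Steady drive makes the potential climb until it fires, then it starts over.
print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 1]
```

Networks of units like this trade computational cost for biological fidelity, which is exactly the trade-off against the high-level algorithmic approach described above.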
Quite often, if we have a good idea about how the brain is working, we’ll start off with a high-level algorithmic approach to see if that might work. And other times, if we really aren’t very sure how the brain is operating, we might copy part of the brain quite faithfully and see what happens when you put that part of the brain in a robot. What kind of behaviour might emerge? And that might be best if you’re looking at a simpler animal which has a less complex nervous system than ourselves.