I work with a robot called Miro that was developed here at Sheffield Robotics in conjunction with some external partners. Inside Miro there is a model of emotion that’s based on the continuum model of emotion, and this is the idea that instead of having discrete emotions, all emotions work on a continuum. We use Posner’s circumplex model of affect within the robot. It’s very, very simple to programme such a thing. We use coloured lights in the shell of Miro in order to express the affect. You can slow down or speed up the coloured lights in order to display, alongside its basic behavioural movements, an emotion.
And when I say emotion, I of course mean “emotion,” because it’s whatever the human user is perceiving. But if you imagine that the Miro robot is flashing, quite quickly, a green colour, which is a calm colour, he’s sort of in happy playful mode, whereas if you switch that to a red colour, he would be a bit angry. We can use really nice, simple models from psychology in order to build models of affect in robots, and that’s one of the great uses that psychology has within robotics.
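The mapping just described can be sketched as a tiny function. This is a toy illustration, not Miro's actual control code: the `set_shell_lights` call and the exact colour and blink-rate choices are hypothetical, but the idea follows the circumplex model, with valence (unpleasant to pleasant) choosing the hue and arousal (calm to excited) choosing how fast the lights pulse.

```python
def affect_to_display(valence, arousal):
    """Map a (valence, arousal) point, each in [-1, 1], to a colour and blink rate.

    Valence picks the hue (red for negative, green for positive, as in the
    angry/playful example above); arousal scales the blink speed.
    """
    colour = "green" if valence >= 0 else "red"
    blink_hz = 0.5 + 2.5 * (arousal + 1) / 2  # 0.5 Hz when calm, up to 3 Hz when excited
    return colour, blink_hz


def set_shell_lights(colour, blink_hz):
    # Hypothetical robot API call; here we just report what would be displayed.
    print(f"Shell lights: {colour}, blinking at {blink_hz:.2f} Hz")


# Happy and playful: positive valence, high arousal -> fast green flashing.
set_shell_lights(*affect_to_display(0.8, 0.9))
# Angry: negative valence, high arousal -> fast red flashing.
set_shell_lights(*affect_to_display(-0.8, 0.9))
```

Because the model is continuous, intermediate states fall out for free: a mildly content, sleepy robot is just a slowly pulsing green, with no extra discrete "emotion states" to program.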
What we’re introducing into our environments is a new type of social agent with which a human being can have a relationship: something you can’t control to the extent you can your mobile phone, but which has the potential to influence you, because it sits somewhere along the line between being alive and not being alive. As researchers start to explore that dynamic more, I’m hoping there’ll be enough understanding that we’ll be prepared for whatever outcomes follow from the introduction of such advanced technology. Robots are going to be useful in educational settings, helping teachers, particularly giving one-on-one attention to children, scaffolding their learning of things like reading, writing, arithmetic, perhaps even more advanced subjects.
Robots right now are not very intelligent. We can’t expect a robot to act like a teacher, to understand the world like a teacher, and to be able to communicate like one. So what a robot can do is help a child who is trying to learn, by being a kind of co-learner and by giving encouragement. It can also help with tasks that involve repetition and going over things like times tables. In Sheffield we’re looking at how robots that have human-like features might be useful as teaching assistants.
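To make the repetition-and-encouragement role concrete, here is a minimal sketch of the kind of drill loop a robot co-learner could run. Everything here is illustrative: the question range, the encouragement phrases, and the way answers are supplied are assumptions, not a description of any real classroom system.

```python
def times_table_drill(table, answers):
    """Run through one times table and return the robot's spoken feedback.

    `answers` maps (table, i) to the child's reply for "table x i";
    a missing or wrong answer gets gentle correction rather than criticism.
    """
    feedback = []
    for i in range(1, 13):
        correct = table * i
        given = answers.get((table, i))
        if given == correct:
            feedback.append(f"{table} times {i} is {given}. Well done!")
        else:
            feedback.append(f"{table} times {i} is {correct}. Let's try that one again soon.")
    return feedback


# A child who gets everything right on the 3 times table:
for line in times_table_drill(3, {(3, i): 3 * i for i in range(1, 13)}):
    print(line)
```

The point of the sketch is that the robot's job here is patience, not intelligence: it repeats, checks, and encourages, which is exactly the part of one-on-one practice a busy teacher struggles to provide.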
So we’re taking the Zeno robot, which has a human-like face, and can make facial expressions like smiling, and frowning, and laughing, and seeing if that’s useful as a way of encouraging learning in a situation where the robot is a teaching assistant. And actually, we’re looking at how the robot may be able to encourage children to learn about healthy living and exercise. Robots don’t have to be humanoid. So, another kind of robot that could encourage children to learn would be our Miro robot. It looks like an animal, but it can interact with children. All of these robots are beginning to be able to interact in a natural way through language.
Understanding what children say is still a challenge in our research, especially in noisy environments, so we look at that, and then making sense of what they say is even harder. That’s a big part of the research we have to do going forward. In the future, we might see bioinspiration appearing in all kinds of areas of robotics. For instance, underwater robots that have the same kind of manoeuvrability as fish because they use fins and flippers like fish do, and flying robots that can stabilise themselves, inspired by bird stabilisation. Robots could be sent out to inspect places where humans cannot go because the places are just too small or inaccessible, like a pipe network carrying water, oil or gas.
These otherwise inaccessible spaces are ideal for miniature mobile robots, because they can sneak in, inspect the environment and potentially repair it. In the very long term, you could even think about the robots going much smaller, so that they may enter your very own body. Think about all the blood vessels: the vascular network is about 100,000 kilometres long, and most of it is currently inaccessible to any technology. So if robots become very small and simple and scale down in size, they could potentially provide the next generation of diagnosis and treatment.