OK. I’m Andrew Glennerster. I’m head of the VR lab here. The purpose of the lab is to make stimuli more and more realistic, more and more like the real world. We’ve done this for vision, so that you’re free to move around and move your head and eyes, as you would normally. And now we’re moving into haptics, where you can not only see the world as you move around, but you can reach out and touch it, and it feels the way it really should in the real world.

I’m Peter Scarfe. I’m a lecturer here in psychology, and my background is in human visual perception.
So I was initially studying 3D vision, and then after I completed my PhD, I worked in Germany. And that’s when I first got introduced to equipment such as this, so the robotics and the haptics. Then, having moved on to Reading, I was able to combine both of those research interests, so the stuff I’d already done and the stuff I wanted to do with haptics. And we’re doing that in a collaboration between the virtual reality lab here in psychology and the haptic robotics lab over in engineering. So what we have here is essentially a demo of the technology which we’re using. So we have a virtual maze which we can actually rotate and move.
But the nice thing here is with our robot arm, we can also navigate through the maze and actually get haptic feedback. So if we just went into the door of the maze, and if I try and push through the surface, I’m not going to be able to get through, because we’re simulating the maze both visually and by touch. Also, we have essentially a glass surface on top of the maze as well. So again, you can’t get through that. And the robot is much stronger than you, so no matter how hard you push, you won’t be able to get through.
And as you navigate through the maze, we have various different objects which you can actually feel the surface properties of as well. So I’m just navigating through some hoops now, and you can feel the smooth, shiny surface those hoops have. Similarly, we can have things like– kind of like molasses or pushing through jelly. So now I’m going down past the corridor, which is much harder to move through, but I can still push through it. And then I can go up here, and I’m going over a bumpy surface. So you can get very fine feedback about what you’re feeling. And around the corner again. And now I’ve got– like a magnet. So if I just tap.
See? It’s kind of like a magnetic force field, basically. So we can combine all of these different types of sensory feedback to look at how we perceive the world. The main thing we’re using haptics for is research on multisensory integration. So essentially, how you combine information from vision and touch. The really nice thing about this equipment is you are not limited by the physical world. Say, in the physical world, when you reach out to touch something, like the surface of a table, your vision and your touch are always telling you the same thing. But what we can do with the virtual reality and the haptics is we can actually put those two types of information into conflict.
So you could be reaching out to touch a surface which visually looks flat, but actually feels as if it’s slanted. And the reason we do that is it allows us to pull apart the contributions of different sources of sensory information to estimates of, say, shape or distance or depth. And, at a bigger-picture level, how you combine those sorts of information when you’re, say, navigating around the world. The really nice thing is that engineers and psychologists have completely different skill sets, but we’re actually studying the same thing. We’re both interested in sensory systems and how they combine information, whether that’s human sensory systems or robotic sensory systems.
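The kind of cue-conflict experiment described above is typically analysed against the standard maximum-likelihood cue-combination model, in which each sense contributes in proportion to its reliability. The sketch below is not the lab’s own analysis code; it is a minimal illustration of that textbook model, assuming each cue delivers an unbiased estimate corrupted by Gaussian noise, with hypothetical numbers for a flat-versus-slanted conflict trial.

```python
import math

def combine_cues(mu_vision, sigma_vision, mu_haptic, sigma_haptic):
    """Optimally combine a visual and a haptic estimate of the same property
    (e.g. surface slant in degrees), weighting each by its inverse variance."""
    r_v = 1.0 / sigma_vision ** 2   # reliability of the visual cue
    r_h = 1.0 / sigma_haptic ** 2   # reliability of the haptic cue
    w_v = r_v / (r_v + r_h)         # visual weight; haptic weight is 1 - w_v
    mu = w_v * mu_vision + (1.0 - w_v) * mu_haptic
    # The combined estimate is more reliable than either cue on its own.
    sigma = math.sqrt(1.0 / (r_v + r_h))
    return mu, sigma

# Hypothetical conflict trial: vision says the surface is flat (0 degrees),
# touch says it is slanted by 10 degrees, and vision is twice as precise.
mu, sigma = combine_cues(mu_vision=0.0, sigma_vision=2.0,
                         mu_haptic=10.0, sigma_haptic=4.0)
# The perceived slant (mu) is pulled strongly toward the more reliable
# visual cue, which is the signature such experiments measure.
```

By measuring where the combined percept lands between the two conflicting cues, experimenters can read off the weight each sense is given, which is what "pulling apart the contributions" of vision and touch amounts to in practice.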
So the collaboration’s great from our perspective, because we have these two different skill sets which provide very different things. The engineers can build us all this custom equipment, and then we can use that equipment to probe human brain function, essentially.