Could you just raise your eyebrows for me? Excellent, so we can see where the muscle movement is. Very good, and relax. And then let’s just go back into our relaxed mode. Very good. I’m Graeme Cox. I’m the CEO and co-founder of Emteq. Our core IP is in being able to read emotional state and changes in mood from biofeedback data, sensor data taken from the head and upper body. So we read electrical muscle activity of the face that we can translate back into detailed information about how the face is moving, including emotional facial expressions.
We read heart rate, which allows us to understand how heart rate varies over time, and also a reading called heart rate variability, which is a very good indicator of stress. We have our own built-in IMUs, gyroscope, accelerometer, et cetera, for reading physical motions. So it’s a range of integrated sensors with AI on top of it: real-time interpretation running on the system, but also cloud-based analytics, run through a neural network, for understanding trends over time in an individual. That allows us to provide information back to the individual and to their clinicians on how well they are doing in the work they’ve been asked to do.
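One common way heart rate variability is quantified, and a minimal sketch of why it differs from plain heart rate, is RMSSD: the root mean square of successive differences between beats. Emteq’s actual pipeline is not described here, so this is purely an illustration of the idea that beat-to-beat variation, not the average rate, is the stress-relevant signal.

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD over a list of RR intervals (milliseconds between heartbeats)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Identical intervals -> zero variability (often associated with stress or load).
print(rmssd([800, 800, 800, 800]))   # 0.0
# Varied intervals -> higher RMSSD (generally a more relaxed, adaptable state).
print(rmssd([780, 820, 790, 830]))
```

Two people can have the same average heart rate but very different RMSSD, which is why the transcript singles out variability as the stress indicator.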
We’re working primarily with mobile headsets, because we want a simple-to-use experience, but also a cost-effective one: we’re dealing with use cases for virtual reality where the individuals we’re delivering to would probably not be buying virtual reality for other purposes. We’re currently delivering a service which is helping people deal with the long-term effects of facial paralysis, where an event has caused one half of their face to stop working effectively. The rehabilitation process is quite long and arduous and very difficult. By gamifying that process of rehabilitation, we get people to do their therapy more consistently and with greater effect.
Simultaneously, we’re able to objectively measure progress so that we can deliver the right therapy, the right exercises for the right point in your progress through treatment. And we can provide a linkage back to your therapist, so that the therapist knows how you’re doing with your home treatment. There aren’t enough facial palsy therapists in the UK. And that means, for a lot of people, they have to drive a long way or pay a lot of money to get hold of them. So being able to do home therapy with the right feedback is enormously valuable for people. And that’s the basic premise of what we’re doing.
One of the problems that people with facial palsy have is that they tend to over-activate muscles. But they can’t see it in the mirror and they can’t feel it. So the muscle over-activates, it gets tense, and the face becomes tight. What we show them is muscle activation, and then we can help them focus on which part of the face they need to relax. The first time Monica tried the headset, she was really surprised to see muscle activation on her brow. When she was relaxing her face, she couldn’t feel it at all. And she spent quite a few minutes just looking at this activation and trying to lower the activation level.
If you give people a real-time readout on the screen of what their body is doing, that is actionable: they realise they can control it and bring it down. We’ve seen the same thing in building objective measures into virtual reality relaxation apps. There’s a lot of VR relaxation and mindfulness out there, but normally there’s no way in these applications of measuring whether they’re actually working or not. As soon as you give people visible measures of their heart rate and their heart rate variability, they realise they can control them. They realise that if they actually relax, they can see those numbers coming down.
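The biofeedback principle described above can be sketched as a tiny loop: sample a signal, show the user a number they can act on, and report when they have brought it below a relaxation target. The function name, normalized scale, and threshold below are invented for illustration; they are not Emteq’s actual API.

```python
def feedback_message(activation, target=0.2):
    """Map a normalized muscle-activation level (0..1) to actionable feedback.

    `target` is a hypothetical relaxation threshold for this sketch.
    """
    if activation <= target:
        return "relaxed - hold it there"
    return f"activation {activation:.2f} - try to soften the brow"

# A falling series of readings, as a user learns to relax the muscle:
for reading in [0.55, 0.40, 0.25, 0.15]:
    print(feedback_message(reading))
```

The point is simply that a live, legible readout turns an invisible signal into something the user can steer.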
So if we take fear of flying, which is one of the ones we’re working on, the therapist essentially has an exposure slider. If you put the slider right down at the minimal end, then really all you’re asking the individual to do is to sit on an airplane and hear the engine start to rev up. If you push the slider all the way to the other side, then the plane will take off, and there will be turbulence, and the air masks will drop down. Basically, you can make things happen in the environment that stimulate the response you’re looking for.
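Mechanically, an exposure slider like this can be thought of as a single intensity value gating which scripted events are allowed to fire. The event names and thresholds below are hypothetical, chosen to match the flight example in the transcript, not taken from Emteq’s software.

```python
# Each scenario event fires only once the exposure level reaches its threshold.
EVENTS = [
    (0.1, "engines rev on the stand"),
    (0.4, "taxi and take-off"),
    (0.7, "turbulence"),
    (0.9, "oxygen masks drop"),
]

def active_events(exposure):
    """Return the events enabled at a given exposure level (0..1)."""
    return [name for threshold, name in EVENTS if exposure >= threshold]

print(active_events(0.1))  # minimal end: just the engine sound
print(active_events(1.0))  # full exposure: every event is enabled
```

One knob, graded exposure: the therapist never has to script each session by hand.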
Where we’ve done work with the Firefighters Association, we have environments that put people in a simple situation to start off with: a car crash that they’re dealing with. Depending on the level of emotional response you’re getting from the person attending the scene in virtual reality, you can change what happens thereafter in order to draw more stress from the individual. Or if you feel you’re already getting enough of a stress response and they’re already working to control themselves, then you just let them work through that process. So you can literally control the flow of events. Obviously, it needs to be carefully storyboarded. But in VR, you have a huge amount of flexibility about what you can do.
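The adaptive branching described here can be sketched as a simple control rule: escalate the storyboard only while the trainee’s measured stress stays below the level the session is targeting, otherwise hold and let them work through it. All names and numbers are illustrative assumptions, not the actual training system.

```python
def next_step(current_step, stress, target_stress=0.6):
    """Pick the next scripted beat from a normalized stress reading (0..1)."""
    if stress < target_stress:
        return current_step + 1   # escalate: introduce another complication
    return current_step           # hold: let them work through the current one

# Simulated session: stress readings arriving as the scenario unfolds.
step = 0
for stress_reading in [0.3, 0.5, 0.7, 0.65]:
    step = next_step(step, stress_reading)
print(step)  # escalated twice, then held at step 2
```

The storyboard supplies the possible beats; the biometric signal decides how fast the trainee moves through them.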
We see other people creating virtual reality for surgical skills acquisition, where you can follow along with a leading surgeon in real time and be there virtually with that surgeon, or for practicing surgical techniques on digital bodies instead of real bodies: simulated environments that are vastly cheaper and easier and safer to work in than the real thing. In every aspect of our work, we’re seeking to help people. We’re seeking to help them learn techniques, skills, ways of dealing with their situation that they didn’t have before. The level of positive engagement that that drives in our user community is fantastic.
The level of engagement and motivation that that drives in my team, in the development team and my delivery team, is incredible. And I would heartily recommend it. That sense of giving something back, that sense of delivering something that is genuinely worthwhile, gets me out of bed every morning. And I hope it will continue to do so for a long time to come.