[0:05] The system that we use is called a brain-computer interface. So we have the brain on one side and a computer, or electronic device, on the other, and the system connects the two together. We record brain signals while our users perform different types of mental tasks - for example, imagining movement of the right hand versus imagining movement of the left hand. We apply a number of signal processing algorithms to improve the signal-to-noise ratio. The relevant brain patterns are then extracted, and we use a model - a classifier - to distinguish between the different types of mental tasks. So the model can identify which type of mental task has been performed.
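
As a rough illustration of this pipeline, here is a minimal Python sketch: band-pass filtering to improve the signal-to-noise ratio, simple log-variance features as the extracted patterns, and a linear classifier to separate the two imagined movements. The sampling rate, filter band, and feature choice are assumptions for illustration, not the exact methods used in the video.

```python
# A minimal sketch of the BCI pipeline described above, assuming EEG epochs
# shaped (trials, channels, samples) and two imagined-movement classes.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def bandpass(epochs, low=8.0, high=30.0, fs=FS):
    """Band-pass filter each trial to the mu/beta band to improve SNR."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def log_variance_features(epochs):
    """Log-variance per channel: a simple feature for motor imagery."""
    return np.log(np.var(epochs, axis=-1))

# Synthetic stand-in for recorded data: 40 trials, 8 channels, 2 s each.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8, 2 * FS))
y = rng.integers(0, 2, size=40)  # 0 = left hand, 1 = right hand

features = log_variance_features(bandpass(X))
clf = LinearDiscriminantAnalysis().fit(features, y)
print("Predicted class of first trial:", clf.predict(features[:1]))
```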

[0:57] The output of this model can be used to control a device. And the good thing is that the user can see the output of the device: the user can tell whether the mental task they performed has been recognised correctly or incorrectly. In other words, the user can start changing their strategy for performing the mental tasks to improve the performance of the system. The system that we used today is called a P300-based BCI, and it is essentially software that we can use for communication - for spelling words.
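
The closed loop described here - classify, act, let the user see the result - could be sketched as follows. The `classify` callable and the command names are hypothetical placeholders, not part of the actual system.

```python
# A hypothetical sketch of the feedback loop: the classifier output drives a
# device command, and the printed feedback lets the user adapt their strategy.
COMMANDS = {0: "move left", 1: "move right"}  # illustrative command set

def control_step(classify, features):
    """Run one classify-and-act cycle and show the user the outcome."""
    label = int(classify(features))       # e.g. clf.predict(features)[0]
    command = COMMANDS[label]
    print(f"Device command: {command}")   # on-screen feedback to the user
    return command
```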

[1:37] It will be very helpful, especially for people who are locked in - those who are severely disabled - so they can use this device to communicate with other people, with the external world. We call it P300 because it works based on a wave in our brain called the P300. When we see a target stimulus, our brain reacts and generates a peak, which happens usually around 300 milliseconds after the onset of the stimulus. So in this system, as you saw, different letters flash. When the desired letter flashes, the peak occurs in the brain of the user. The algorithm can capture this peak and translate it into the letter the user intended to spell.
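
A toy version of this detection step might look like the sketch below: take one averaged epoch per flashed letter, time-locked to stimulus onset, score the window around 300 ms, and pick the letter with the strongest response. The window bounds and the simple mean-amplitude score are simplifying assumptions; a real speller trains a classifier on these epochs.

```python
# A minimal sketch of P300 letter detection over averaged, stimulus-locked
# epochs (one 1-second single-channel epoch per candidate letter).
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def p300_score(epoch, fs=FS, window=(0.25, 0.4)):
    """Mean amplitude in a window around 300 ms after stimulus onset."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return epoch[start:stop].mean()

def pick_letter(averaged_epochs):
    """Choose the letter whose flashes evoked the strongest P300 response."""
    return max(averaged_epochs, key=lambda letter: p300_score(averaged_epochs[letter]))

# Synthetic example: the epoch for "E" carries an injected peak near 300 ms.
rng = np.random.default_rng(1)
epochs = {c: rng.standard_normal(FS) * 0.1 for c in "ABCDE"}
epochs["E"][int(0.3 * FS)] += 5.0
print(pick_letter(epochs))  # -> "E"
```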

[2:31] Generally in brain-computer interfaces, one of the most challenging issues is noise. Brain signals are very sensitive to body movements, blinking, and any facial movements, because our muscles also generate electrical activity, and this activity can interfere with our brain signals. When our participant blinked, we had a huge artefact in the signal, so we need to develop signal processing algorithms that can separate these movement-based artefacts and focus on the brain signals. Our brains are different: my brain is different from your brain, so the patterns my brain generates will be different from the patterns another person's brain generates. It means that the model we train will differ from person to person.
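
One simple way to handle the blink artefacts mentioned here is amplitude-based trial rejection, sketched below. Real pipelines often use methods such as independent component analysis instead; the threshold value is an illustrative assumption.

```python
# A simple sketch of artefact handling, assuming blink artefacts are much
# larger in amplitude than genuine brain activity.
import numpy as np

def reject_blink_trials(epochs, threshold_uv=100.0):
    """Drop trials whose peak amplitude suggests a blink or muscle artefact.

    epochs: array shaped (trials, channels, samples), in microvolts.
    """
    peak = np.abs(epochs).max(axis=(1, 2))  # largest deflection per trial
    keep = peak < threshold_uv
    return epochs[keep], keep

# Usage: clean_epochs, kept_mask = reject_blink_trials(raw_epochs)
```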

[3:37] We usually need to collect a few minutes of data. Based on that data, we can train our model - we can adjust the parameters - and thereafter the user is able to use the model and control a device. Unfortunately, irrelevant but concurrent neural activities can also affect our brain signals. For example, when we start getting tired, our brain patterns change, so the model we trained previously might not be optimal anymore. Mood can affect our brain signals, and so can tiredness. It means that, come tomorrow, the model may no longer be optimal, so we need to adjust the parameters again.
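
The calibration-and-recalibration cycle could be sketched like this, reusing the log-variance features from the earlier example. The accuracy floor and function names are illustrative assumptions, not the course's actual procedure.

```python
# A sketch of the calibration step: a few minutes of labelled trials train a
# fresh model, and the same routine is re-run when accuracy drifts (fatigue,
# a new day). The 70% floor is an arbitrary illustrative threshold.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def calibrate(features, labels):
    """Fit a fresh classifier on newly collected calibration data."""
    return LinearDiscriminantAnalysis().fit(features, labels)

def needs_recalibration(clf, recent_features, recent_labels, floor=0.7):
    """Check accuracy on recent labelled trials; retrain if it has drifted."""
    return clf.score(recent_features, recent_labels) < floor

# Usage: clf = calibrate(X_cal, y_cal)
#        if needs_recalibration(clf, X_today, y_today):
#            clf = calibrate(X_today, y_today)
```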

[4:24] In the future, we need to develop robust algorithms that can deal with all these variations we see from time to time, from day to day. Using BCI, we can help severely paralysed people gain some independence. They can use this technology to control their environments; they can have a type of assistive robot that is controlled by their brain signals. Just imagine those who are paralysed from the neck down, so they have no muscle movement, but they can use their brain signals to control a robotic hand, for example - that is one of the areas I'm working in.

[5:04] It's neuroprosthetics: how we can generate different hand actions - grasping, pinching, picking up different objects - based on the activity that our brain generates.

Using a brain-computer interface

Dr Mahnaz Arvaneh is developing a brain-computer interface (BCI) which uses electroencephalography (EEG) technology to read brain signals and convert them into actions on a computer screen.

In this video, Mahnaz demonstrates her system, using the Emotiv EPOC+, an EEG headset which measures brain signals.

What challenges do you think we could face when working with mind-controlled systems like this? Why might it be difficult?

This video is from the free online course Building a Future with Robots, by The University of Sheffield.
