
Using a brain-computer interface

Demonstrating a brain-computer interface at work.
0:05
The system that we used is called a brain-computer interface. So we have the brain on one side and a computer, or electronic device, on the other, and the system connects the two together. We record brain signals while our users are doing different types of mental tasks - for example, imagining movement of the right hand versus imagining movement of the left hand. We apply a number of signal processing algorithms to improve the signal-to-noise ratio. Thereafter, the relevant brain patterns are extracted, and we use a model, a classifier, to distinguish between the different types of mental tasks that have been performed. So the model can identify which type of mental task has been performed.
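As a rough illustration of the pipeline described above, the sketch below band-pass filters EEG epochs, extracts simple log-variance (band-power) features, and trains a linear classifier to separate left- versus right-hand motor imagery. It uses synthetic data and assumed settings (sampling rate, channel count); it is not the actual system shown in the video.

```python
# Minimal sketch of the pipeline described above: band-pass filter the EEG,
# extract simple band-power features, and train a classifier to separate
# left- vs right-hand motor imagery. Data here is synthetic; a real system
# would use recorded EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256          # sampling rate (Hz), assumed
N_CHANNELS = 8    # number of EEG electrodes, assumed

def bandpass(epoch, low=8.0, high=30.0, fs=FS):
    """Filter one epoch (channels x samples) to the mu/beta band."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epoch, axis=1)

def features(epoch):
    """Log-variance of each channel: a simple band-power feature."""
    return np.log(np.var(bandpass(epoch), axis=1))

# Synthetic stand-in for recorded training epochs and their labels
# (0 = left-hand imagery, 1 = right-hand imagery).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, N_CHANNELS, 2 * FS))
labels = rng.integers(0, 2, size=40)

X = np.array([features(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# For a new epoch, the classifier output is what drives the device.
new_epoch = rng.standard_normal((N_CHANNELS, 2 * FS))
print("predicted task:", clf.predict([features(new_epoch)])[0])
```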
0:56
The output of this model can be used for controlling a device. And the good thing is that the user can see what the output of the device is. The user can tell whether the mental task they performed has been recognised correctly or wrongly. In other words, the user can also start changing their strategy for performing the mental tasks, to improve the performance of the system. The system that we used today is called a P300-based BCI, and generally it's software that we can use for communication, for spelling words.
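The closed loop described here - the classifier's decision drives the device, and the user sees the result and can adjust their strategy - can be sketched very simply. The labels, actions, and trials below are illustrative placeholders, not the system's actual interface.

```python
# A toy sketch of the feedback loop: each classifier decision is mapped to a
# device action and shown to the user, who can adjust their mental strategy
# when a trial is recognised wrongly.
ACTIONS = {0: "move cursor left", 1: "move cursor right"}

def feedback_trial(true_task, predicted_task):
    """Show the user what the system decoded and whether it matched."""
    action = ACTIONS[predicted_task]
    outcome = "correct" if predicted_task == true_task else "wrong - adjust strategy"
    print(f"intended {ACTIONS[true_task]!r}, decoded {action!r} ({outcome})")

# Simulated run of a few trials.
for true_task, predicted_task in [(0, 0), (1, 1), (0, 1)]:
    feedback_trial(true_task, predicted_task)
```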
1:37
It will be very helpful, especially for people who are locked-in - those who are severely disabled - so they can use this device to communicate with other people, with the external world. We call it P300 because it works based on a wave in our brain which is called the P300. When we see a target stimulus, our brain reacts: it generates a peak, which usually happens around 300 milliseconds after the onset of the stimulus. So in this system, as you saw, different letters are flashing. When the desired letter flashes, the peak occurs in the brain of the user. The algorithm, the system, can capture this peak and translate it into the letter that was expected to be spelled.
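A toy sketch of the letter-selection step in a P300 speller is shown below: epochs time-locked to each row and column flash are averaged, scored by their amplitude around 300 ms, and the strongest row and column pick the letter. Real spellers use a larger matrix (typically 6 x 6), many channels, and a trained classifier; here a tiny 2 x 3 matrix, a single channel, and synthetic data keep the idea visible.

```python
# Simplified P300 speller decoding: average the single-channel epochs
# time-locked to each row/column flash, score the window around 300 ms,
# and pick the letter at the intersection of the best row and column.
import numpy as np

FS = 256                                             # sampling rate (Hz), assumed
LETTERS = np.array(list("ABCDEF")).reshape(2, 3)     # tiny 2x3 matrix for illustration

def p300_score(avg_epoch, fs=FS):
    """Mean amplitude in a window around 300 ms after the flash."""
    start, stop = int(0.25 * fs), int(0.45 * fs)
    return avg_epoch[start:stop].mean()

def pick_letter(epochs_by_flash):
    """epochs_by_flash: ('row', i) or ('col', j) -> array (n_flashes, n_samples)."""
    scores = {flash: p300_score(ep.mean(axis=0)) for flash, ep in epochs_by_flash.items()}
    best_row = max((f for f in scores if f[0] == "row"), key=lambda f: scores[f])[1]
    best_col = max((f for f in scores if f[0] == "col"), key=lambda f: scores[f])[1]
    return LETTERS[best_row, best_col]

# Synthetic example: add a P300-like bump to the flashes covering the target 'E' (row 1, col 1).
rng = np.random.default_rng(1)
t = np.arange(int(0.8 * FS)) / FS
bump = np.exp(-((t - 0.3) ** 2) / 0.002)             # peak near 300 ms
epochs_by_flash = {}
for kind, n in (("row", 2), ("col", 3)):
    for i in range(n):
        ep = rng.standard_normal((10, t.size)) * 0.5
        if (kind, i) in {("row", 1), ("col", 1)}:
            ep += bump
        epochs_by_flash[(kind, i)] = ep

print("decoded letter:", pick_letter(epochs_by_flash))   # expected: 'E'
```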
2:30
Generally in brain-computer interfaces, one of the most challenging issues is noise. Our brain signals are very sensitive to body movements, blinking, any facial movements, because our muscles also generate electrical activity, and this electrical activity can interfere with our brain signals. When our participant blinked, we had a huge artefact in the signal, so we need to develop signal processing algorithms that can separate these movement-based artefacts, to focus on the brain signals. Our brains are different. My brain is different from your brain. So the patterns that my brain generates will be different from the patterns that another person's brain generates. It means that the model we train will be different from person to person.
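One common, simple way to handle the blink and muscle artefacts mentioned above is to discard any epoch whose peak-to-peak amplitude exceeds a threshold, since blinks are far larger than genuine EEG. The threshold and data below are assumed for illustration; practical pipelines often also use methods such as ICA to remove, rather than discard, artefacts.

```python
# Threshold-based artefact rejection: flag any epoch whose peak-to-peak
# amplitude on any channel exceeds a limit typical of blinks and muscle bursts.
import numpy as np

THRESHOLD_UV = 100.0   # peak-to-peak limit in microvolts, assumed

def is_clean(epoch, threshold=THRESHOLD_UV):
    """epoch: array (channels x samples) in microvolts."""
    peak_to_peak = epoch.max(axis=1) - epoch.min(axis=1)
    return bool(np.all(peak_to_peak < threshold))

# Synthetic example: ordinary EEG-scale noise vs. an epoch with a blink-like spike.
rng = np.random.default_rng(2)
clean_epoch = rng.standard_normal((8, 512)) * 10      # ~10 uV background activity
blink_epoch = clean_epoch.copy()
blink_epoch[0, 200:260] += 300                        # large frontal deflection

print(is_clean(clean_epoch))   # True
print(is_clean(blink_epoch))   # False
```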
3:37
We need to collect, usually, a few minutes of data. Based on that data, we can train our model - we can adjust the parameters - and thereafter the user is able to use the model and control a device. Unfortunately, irrelevant but concurrent neural activities can affect our brain signals. For example, when we start getting tired our brain patterns also change, so the model that we trained previously might not be optimal anymore. Mood can affect our brain signals, tiredness can affect them… It means that, for example, come tomorrow, the model is not optimal anymore, so we need to adjust the parameters again.
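The calibration-and-recalibration idea can be sketched as follows: a short calibration session trains an initial classifier, and when the signal statistics drift (a new day, fatigue, a change of mood) a small batch of freshly labelled trials updates it rather than rebuilding it from scratch. The feature vectors and drift below are synthetic placeholders.

```python
# Calibrate once, then adapt incrementally when the data distribution drifts.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)

def fake_session(n_trials, shift):
    """Synthetic 4-D feature vectors whose class means drift by `shift`."""
    y = rng.integers(0, 2, size=n_trials)
    X = rng.standard_normal((n_trials, 4)) + np.outer(y * 2 - 1, np.ones(4)) + shift
    return X, y

# Day 1: a few minutes of calibration data train the initial model.
X_cal, y_cal = fake_session(120, shift=0.0)
clf = SGDClassifier(random_state=0)
clf.partial_fit(X_cal, y_cal, classes=[0, 1])

# Day 2: the signal statistics have drifted, so accuracy drops...
X_new, y_new = fake_session(60, shift=1.5)
print("before adaptation:", clf.score(X_new, y_new))

# ...and a small batch of freshly labelled trials nudges the model back.
X_adapt, y_adapt = fake_session(30, shift=1.5)
for _ in range(20):                      # a few passes over the adaptation batch
    clf.partial_fit(X_adapt, y_adapt)
print("after adaptation:", clf.score(X_new, y_new))
```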
4:24
In the future, we need to develop robust algorithms that can deal with all these variations that we have from time to time, from day to day. Using BCI, we can help severely paralysed people to gain some sort of independence. They can use this technology to control their environment. They can have a type of assistive robot that is controlled by their brain signals. Just imagine those who are paralysed from the neck down, so they don't have any muscle movement, but they can use their brain signals to control a robotic hand, for example - that is one of the areas I'm working in.
5:03
It’s neuroprosthetics - how we can generate different hand movements - grasping, pinching, picking up different objects - based on the activity that our brain generates.

Dr Mahnaz Arvaneh is developing a brain-computer interface (BCI) which uses electroencephalography (EEG) technology to read brain signals and convert them into actions on a computer screen.

In this video, Mahnaz demonstrates her system, using the Emotiv EPOC+, an EEG headset which measures brain signals.

Discussion

What challenges do you think we could face when working with mind-controlled systems like this? Why might it be difficult?
This article is from the free online course Building a Future with Robots.
