Brain-controlled robots
An exciting, emerging area where biology meets robotics is brain-computer interfaces (BCIs). BCIs have the potential to enable direct mind control of robots and other machines. This step explores what a BCI is and explains its key components.
Brain-computer interfaces
In recent years, BCIs have attracted considerable attention from robotics groups, neuroscientists, computer scientists and neurologists, triggered by scientific progress in understanding brain function and by impressive applications.
Professor Wolpaw, in ‘Brain-computer interfaces: principles and practice’ (2012), defines a BCI as:
“A system that measures central nervous system (CNS) activity and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output and thereby changes the ongoing interactions between the CNS and its external or internal environment.”
Quoted with kind permission from Oxford University Press.
Based on this definition, a BCI system can control a robot or other assistive device using our thoughts. Such a system could greatly help people with generalised paralysis to gain some level of independence.
Brain-computer interfaces for robot control
The BCI input is brain signals that carry informative neural features, recorded by electrodes either inside the brain or on the scalp. The BCI output is used to control a device, such as an assistive robot, a wheelchair or a prosthetic hand.
Example: A person paralysed from the neck down would normally find independent mobility impossible. With a BCI system and a robotic wheelchair, the person can use mental imagery: imagining movement of their right hand turns the wheelchair to the right, and imagining movement of their left hand turns it to the left.
The BCI uses algorithms to translate the measured brain-wave activity into command signals for the output device: in this case, the motors driving the wheelchair to the left or right.
One of the key challenges for BCIs is decoding brain-wave activity into intended actions. This problem is usually addressed with machine learning algorithms for ‘classification’ (also known as ‘pattern recognition’), in which a particular set of brain-wave patterns is ‘classed’ as a specific action, e.g. move left or move right.
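As a toy illustration of this classification step, a minimal nearest-centroid classifier is sketched below. The feature values (two band-power numbers per imagined movement) are invented for illustration; real BCIs use far richer features and more sophisticated classifiers.

```python
import math

# Hypothetical training data: each trial is a pair of band-power features
# recorded during imagined left- or right-hand movement. The numbers are
# invented for illustration, not real EEG measurements.
training = {
    "move_left":  [(4.1, 1.9), (3.8, 2.2), (4.4, 2.0)],
    "move_right": [(1.7, 4.3), (2.1, 3.9), (1.9, 4.5)],
}

def centroid(points):
    """Mean feature vector of a list of trials."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {label: centroid(trials) for label, trials in training.items()}

def classify(features):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

print(classify((4.0, 2.1)))   # near the 'move_left' centroid
print(classify((1.8, 4.2)))   # near the 'move_right' centroid
```

A new brain-wave pattern is ‘classed’ simply by which set of training patterns it most resembles; this is the essence of pattern recognition, even though practical systems replace the nearest-centroid rule with stronger classifiers.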
Key components of a BCI
The overall architecture of an online BCI system is summarised in the diagram below:
The core components of a BCI system are as follows:
1. Measurement of brain activity
This part is responsible for recording brain activity using various types of sensors. After amplification and digitisation, the recorded brain signals serve as the BCI inputs.
2. Preprocessing
This unit reduces the noise and artefacts present in the brain signals in order to enhance the relevant information hidden in the input signals.
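As a small sketch of what such preprocessing can involve, the snippet below smooths a noisy sampled signal with a moving average. This is only one crude noise-reduction technique, chosen for simplicity; practical BCIs typically use proper digital band-pass and notch filters.

```python
def moving_average(signal, window=5):
    """Smooth a sampled signal by averaging each point with its neighbours.
    A crude noise-reduction step; real BCIs use proper digital filters."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

# A toy 'signal': a slow ramp with alternating high-frequency jitter.
noisy = [i + (0.5 if i % 2 else -0.5) for i in range(10)]
clean = moving_average(noisy)
print(clean)  # the jitter is largely averaged out; the slow ramp survives
```

The point is the same as in a real filter: suppress the components of the signal that carry no information while preserving the slower structure that does.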
3. Feature extraction
The feature extractor transforms the preprocessed signals into feature values that correspond to the underlying neurological mechanism. These features are then used by the BCI to control the output device.
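One feature widely used in motor-imagery BCIs is the log-variance of a band-filtered channel, since imagined movement changes the signal's amplitude in certain frequency bands. A minimal sketch, using made-up sinusoidal 'channels' in place of real EEG:

```python
import math

def log_variance(samples):
    """Log of the signal variance: a common amplitude/power feature for
    motor-imagery BCIs (normally computed after band-pass filtering)."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return math.log(var)

# Toy 'channels': a quiet channel and a strongly oscillating one.
quiet  = [0.1 * math.sin(0.3 * t) for t in range(200)]
active = [2.0 * math.sin(0.3 * t) for t in range(200)]

print(log_variance(quiet), log_variance(active))
```

The strongly oscillating channel yields a much larger feature value, which is exactly the kind of separation the classifier in the next stage relies on.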
4. Classification
This part is responsible for identifying the intention of the user from the extracted features.
5. Control of a device
The output device can be a computer, a wheelchair, a robotic arm, etc. The output of the classifier is used as a command to control the output device.
6. Feedback
The BCI should feed back the consequences of the action to the user, in a closed loop, so that the user can make adjustments. Feedback can be in visual, auditory or tactile form.
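Putting the components above together, one pass around the closed loop can be sketched as a chain of stages. Every function name and signal value below is an illustrative stand-in, not a real driver stack or decoder:

```python
def acquire_signal():
    """Stand-in for the measurement stage (electrodes, amplifier, ADC)."""
    return [0.2, -0.1, 0.3, 0.0, 0.25]    # invented digitised samples

def preprocess(samples):
    """Stand-in for noise/artefact reduction: remove the DC offset."""
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def extract_features(samples):
    """Stand-in feature extractor: the signal variance."""
    return sum(s * s for s in samples) / len(samples)

def classify(feature, threshold=0.02):
    """Stand-in classifier: map the feature to an intended action."""
    return "turn_right" if feature > threshold else "turn_left"

def control_device(command):
    """Stand-in for sending the command to the wheelchair motors."""
    return f"wheelchair executes: {command}"

def give_feedback(result):
    """Stand-in for visual/auditory/tactile feedback to the user."""
    print(result)

# One pass around the closed loop: measure -> preprocess -> extract ->
# classify -> control -> feed back, after which the user adjusts and
# the cycle repeats.
samples = acquire_signal()
command = classify(extract_features(preprocess(samples)))
give_feedback(control_device(command))
```

In a real online BCI this loop runs continuously, with the feedback stage letting the user correct the system's mistakes on the next cycle.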
All these units are critical to the development of an efficient BCI, and each affects the BCI's performance in terms of accuracy, speed and information transfer rate. A BCI must be designed so the user can carry out this process comfortably, without any harm to their health.
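The information transfer rate mentioned above is commonly estimated with Wolpaw's formula, which combines the number of possible commands, the classification accuracy, and the time taken per selection. A short sketch (assuming equiprobable classes and uniformly distributed errors, as the formula does):

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Wolpaw's information transfer rate estimate, in bits per minute.
    Assumes equiprobable classes and uniformly distributed errors."""
    n, p = n_classes, accuracy
    bits_per_selection = math.log2(n)
    if 0 < p < 1:
        bits_per_selection += p * math.log2(p) \
            + (1 - p) * math.log2((1 - p) / (n - 1))
    selections_per_minute = 60 / seconds_per_selection
    return bits_per_selection * selections_per_minute

# e.g. a two-class left/right BCI at 90% accuracy, one decision every 4 s:
print(wolpaw_itr(2, 0.90, 4.0))   # roughly 8 bits per minute
```

The formula makes the trade-offs concrete: more classes and higher accuracy raise the rate, while slower selections lower it, which is why accuracy and speed are reported together when comparing BCIs.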
© The University of Sheffield