Mind-controlled robots now one step closer

Researchers have been working for years to develop systems that can help tetraplegic patients carry out some tasks on their own.

A feedback system developed at MIT enables human operators to correct a robot's choice in real time using only brain signals. (CREDIT: MIT)

Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own. “People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.



Avoiding obstacles

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard.

The engineers began by improving the robot’s mechanism for avoiding obstacles so that it would be more precise. “At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” says Carolina Gaspar Pinto Ramos Correia, a PhD student at Prof. Billard’s lab. “Since the goal of our robot was to help paralyzed patients, we had to find a way for users to be able to communicate with it that didn’t require speaking or moving.”


An algorithm that can learn from thoughts

This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for recording electroencephalogram (EEG) signals of the patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as though the patient were saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object?
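The study itself doesn’t publish code, but the error-detection step can be pictured as a standard single-trial classification problem: decide, for each short EEG epoch following a robot move, whether it contains an error-related potential (ErrP). The sketch below is a minimal, hypothetical illustration using synthetic data and a linear classifier, a common ErrP baseline; it is not the EPFL team’s pipeline, and every name and number in it is an assumption.

```python
# Minimal, hypothetical sketch of single-trial ErrP detection.
# The synthetic data, channel count, window length, and classifier choice
# are all assumptions for illustration -- not the EPFL system's pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 16, 128   # assumed montage and window
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)            # 1 = robot move looked wrong

# Inject a crude negative deflection into the "error" epochs so the toy
# classifier has a real pattern to find.
X[y == 1, :, 40:60] -= 0.5

# Flatten each epoch into a feature vector and cross-validate a linear
# discriminant, a common baseline for single-trial ErrP detection.
features = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, y, cv=5)
print(f"cross-validated ErrP detection accuracy: {scores.mean():.2f}")
```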

To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
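To make that loop concrete, one can think of the robot as holding a set of candidate moves, discarding any candidate that triggers the decoded error signal, and retrying until no error appears. The toy loop below is a deliberately simplified stand-in for that process: the real system uses inverse reinforcement learning on the EEG error signal, and the clearance values and simulated user here are invented for illustration.

```python
# Toy stand-in for the trial-and-error loop described above. The actual
# system uses inverse reinforcement learning; here an error signal simply
# rules out the tried clearance and the robot proposes another.
import random

candidate_clearances_cm = [5, 10, 15, 20, 25]   # hypothetical obstacle margins
desired_cm = 15                                  # what the (simulated) user wants

def user_error_signal(tried_cm: int) -> bool:
    """Simulate the decoded ErrP: fires whenever the move looks wrong."""
    return tried_cm != desired_cm

attempts = 0
remaining = candidate_clearances_cm.copy()
while remaining:
    attempts += 1
    tried = random.choice(remaining)
    print(f"attempt {attempts}: robot passes obstacle at {tried} cm")
    if not user_error_signal(tried):
        print(f"no error signal -> clearance of {tried} cm accepted")
        break
    remaining.remove(tried)   # error observed: rule this clearance out
```

With five candidates, this kind of elimination typically settles in about three attempts, in line with the three to five reported in the study.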



The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
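Batzianoulis’s description amounts to a two-stage mapping: a trained classifier turns a brain signal into a task label, and a fixed association turns each task label into a robot control action. The bare-bones sketch below illustrates that chain only; the labels, function names, and controller strings are hypothetical placeholders, since the study’s actual decoder and controller interface aren’t published as code.

```python
# Illustrative two-stage mapping: decoded EEG label -> task -> robot command.
# decode_task() and the command strings are hypothetical placeholders, not
# the study's actual decoder or controller interface.
from typing import Callable, Dict

def widen_clearance() -> str:
    return "controller: increase obstacle margin"

def narrow_clearance() -> str:
    return "controller: decrease obstacle margin"

# Stage 2: each decoded task label is bound to one robot control action.
TASK_TO_CONTROL: Dict[str, Callable[[], str]] = {
    "too_close": widen_clearance,
    "too_far": narrow_clearance,
}

def decode_task(eeg_label: str) -> str:
    """Stage 1 stand-in: a trained classifier would map raw EEG to a label."""
    return eeg_label  # pretend the classifier already produced the label

for signal in ["too_close", "too_far"]:
    task = decode_task(signal)
    print(TASK_TO_CONTROL[task]())
```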

Next step: a mind-controlled wheelchair

The researchers hope to eventually use their algorithm to control wheelchairs. “For now there are still a lot of engineering hurdles to overcome,” says Prof. Billard. “And wheelchairs pose an entirely new set of challenges, since both the patient and the robot are in motion.” The team also plans to use their algorithm with a robot that can read several different kinds of signals and coordinate data received from the brain with those from visual motor functions.

For more science news stories, check out our New Innovations section at The Brighter Side of News.


Note: Materials provided above by École Polytechnique Fédérale de Lausanne. Content may be edited for style and length.




Tags: #New_Innovation, #Mind_Control, #Medical_News, #Robotics, #Technology, #Research, #Science, #The_Brighter_Side_of_News


Joseph Shavit
Space, Technology and Medical News Writer
Joseph Shavit is the head science news writer with a passion for communicating complex scientific discoveries to a broad audience. With a strong background in science, business, product management, media leadership, and entrepreneurship, Joseph possesses the unique ability to bridge the gap between business and technology, making intricate scientific concepts accessible and engaging to readers of all backgrounds.