Paralyzed man feeds himself with robotic arms connected directly to his brain


[July 4, 2022: Colm Gorey, Frontiers]

Participant initiates task by moving robot right hand forward. (CREDIT: Frontiers in Neurorobotics)

Two robotic arms – a fork in one hand, a knife in the other – flank a seated man at a table with a piece of cake on a plate. A computerized voice announces each action: “moving fork to food” and “retracting knife.” Partially paralyzed, the man makes subtle motions with his right and left fists at certain prompts, such as “select cut location,” so that the machine slices off a bite-sized piece. Then comes “moving food to mouth” and another subtle gesture to align the fork with his mouth.

In less than 90 seconds, a person with very limited upper-body mobility, who hasn’t been able to use his fingers in about 30 years, just fed himself dessert using his mind and some smart robotic hands.

A team led by researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, and the Department of Physical Medicine and Rehabilitation (PMR) at the Johns Hopkins School of Medicine published a paper in the journal Frontiers in Neurorobotics describing this latest feat, achieved using a brain-machine interface (BMI) and a pair of modular prosthetic limbs.

Also sometimes referred to as a brain-computer interface, BMI systems provide a direct communication link between the brain and a computer, which decodes neural signals and ‘translates’ them to perform various external functions, from moving a cursor on a screen to now enjoying a bite of cake. In this particular experiment, muscle movement signals from the brain helped control the robotic prosthetics.
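As a rough illustration only – not the study’s actual decoding pipeline – a BMI decoder can be thought of as a function that maps a window of recorded neural activity to a discrete gesture command. The Python sketch below uses made-up gesture templates and a trivial nearest-template rule purely to show that flow; all names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical gesture templates: mean firing-rate patterns recorded during
# a calibration session, one per attempted gesture (values are illustrative).
GESTURE_TEMPLATES = {
    "right_fist": np.array([0.8, 0.1, 0.3, 0.7]),
    "left_fist":  np.array([0.2, 0.9, 0.6, 0.1]),
    "rest":       np.array([0.1, 0.1, 0.1, 0.1]),
}

def decode_gesture(firing_rates: np.ndarray) -> str:
    """Map a window of neural firing rates to the closest gesture template."""
    distances = {
        gesture: np.linalg.norm(firing_rates - template)
        for gesture, template in GESTURE_TEMPLATES.items()
    }
    return min(distances, key=distances.get)

# Example: a window of activity that most resembles an attempted right-fist motion.
print(decode_gesture(np.array([0.75, 0.15, 0.25, 0.65])))  # -> "right_fist"
```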




A new approach

The study built on more than 15 years of research in neural science, robotics, and software, led by APL in collaboration with the Department of PMR as part of the Revolutionizing Prosthetics program, which was originally sponsored by the US Defense Advanced Research Projects Agency (DARPA). The new paper outlines an innovative model for shared control that enables a human to maneuver a pair of robotic prostheses with minimal mental input.

“This shared control approach is intended to leverage the intrinsic capabilities of the brain machine interface and the robotic system, creating a ‘best of both worlds’ environment where the user can personalize the behavior of a smart prosthesis,” said Dr Francesco Tenore, a senior project manager in APL’s Research and Exploratory Development Department. The paper’s senior author, Tenore focuses on neural interface and applied neuroscience research.

System diagram for BMI-based shared control of bimanual robotic limbs. (A) Movements are decoded from neural signals through the brain-machine interface and mapped to two external robotic limbs, using a collaborative human-machine teaming control strategy to complete a self-feeding task that requires simultaneous bimanual manipulation. (B) NeuroPort electrode arrays (Blackrock Neurotech) implanted in the motor and somatosensory regions of the left and right hemispheres record neural activity. (C) Neural data are streamed from the cortical implants and processed before being decoded. Decoded gestures are passed to the shared control strategy, which maps them onto robot degrees of freedom (DOF) depending on the current state of the task. Autonomous portions of the task are performed by the robot, while semi-autonomous steps are controlled in part by the participant, whose attempted gestures modulate a subset of the robotic limbs’ end-effector degrees of freedom according to the current DOF mapping. The degrees of freedom controlled via BMI are drawn from a task library accessed by the robot.
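To make the shared-control idea concrete, here is a minimal, hypothetical sketch in Python of how decoded gestures might be routed to different robot degrees of freedom depending on the current task step. The step names loosely follow the figure and the article’s narration, but the data structures, mapping, and control logic are illustrative assumptions, not the authors’ implementation.

```python
from dataclasses import dataclass

@dataclass
class TaskStep:
    name: str
    autonomous: bool              # robot performs the step entirely on its own
    controlled_dofs: tuple = ()   # DOFs the participant's gesture modulates

# Hypothetical task library: each step is either fully autonomous or
# semi-autonomous, with the participant steering only a small subset of DOFs.
TASK_LIBRARY = [
    TaskStep("move fork to food", autonomous=False, controlled_dofs=("right_hand_forward",)),
    TaskStep("select cut location", autonomous=False, controlled_dofs=("knife_horizontal",)),
    TaskStep("cut food", autonomous=False, controlled_dofs=("knife_down", "fork_back")),
    TaskStep("move food to mouth", autonomous=True),
    TaskStep("retract knife", autonomous=True),
]

def apply_shared_control(step: TaskStep, decoded_gesture: str) -> dict:
    """Return the command for this step under a toy shared-control policy."""
    if step.autonomous:
        # Robot executes a pre-planned motion; the user's gesture is not needed.
        return {"mode": "autonomous", "step": step.name}
    # Semi-autonomous: the decoded gesture drives only the mapped DOFs,
    # while the robot handles everything else for this step.
    return {
        "mode": "shared",
        "step": step.name,
        "dof_commands": {dof: decoded_gesture for dof in step.controlled_dofs},
    }

print(apply_shared_control(TASK_LIBRARY[1], "right_fist"))
```

The design point this toy structure tries to capture is the one the paper emphasizes: the robot handles most of each motion autonomously, while the participant’s attempted gestures steer only the few degrees of freedom that matter for the current step.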

“Although our results are preliminary, we are excited about giving users with limited capability a true sense of control over increasingly intelligent assistive machines,” he added.

Helping people with disabilities

One of the most important advances in robotics demonstrated in the paper is the combination of robot autonomy with limited human input: the machine does most of the work while still letting the user customize its behavior to their liking, according to Dr David Handelman, the paper’s first author and a senior roboticist in the Intelligent Systems Branch of APL’s Research and Exploratory Development Department.

“In order for robots to perform human-like tasks for people with reduced functionality, they will require human-like dexterity. Human-like dexterity requires complex control of a complex robot skeleton,” he explained. “Our goal is to make it easy for the user to control the few things that matter most for specific tasks.”

Dr Pablo Celnik, project principal investigator in the Department of PMR, said: “The human-machine interaction demonstrated in this project denotes the potential capabilities that can be developed to help people with disabilities.”

Closing the loop

While the DARPA program officially ended in August 2020, the team at APL and at the Johns Hopkins School of Medicine continues to collaborate with colleagues at other institutions to demonstrate and explore the potential of the technology.

Select screenshots of self-feeding task performance. The robot holds a fork in its right hand and a knife in its left hand. (A) Step 1: Participant initiates the task by moving the robot’s right hand forward. (B) Step 3: Participant repositions the fork horizontally to align with the desired piece of food. (C) Step 6: Participant repositions the knife horizontally to select the cut point. (D) Step 7: Participant moves the knife down, and the fork back and to the right, to cut the food. (E) Step 10: Robot moves the food to a default position in front of the participant’s mouth. (F) Step 12: Participant places the food in his mouth.

The next iteration of the system may integrate previous research which found that providing sensory stimulation to amputees enabled them not only to perceive their phantom limb, but also to use muscle-movement signals from the brain to control a prosthetic. The theory is that adding sensory feedback, delivered straight to a person’s brain, may help them perform some tasks without the constant visual feedback required in the current experiment.

“This research is a great example of this philosophy where we knew we had all the tools to demonstrate this complex bimanual activity of daily living that non-disabled people take for granted,” Tenore said. “Many challenges still lie ahead, including improved task execution, in terms of both accuracy and timing, and closed-loop control without the constant need for visual feedback.”

Celnik added: “Future research will explore the boundaries of these interactions, even beyond basic activities of daily living.”


Note: Materials provided above by Frontiers. Content may be edited for style and length.


