
Paralyzed people can talk using their thoughts thanks to novel brain-reading devices

[Aug. 26, 2023: Staff Writer, The Brighter Side of News]


A brain-computer interface translates the study participant’s brain signals into the speech and facial movements of an animated avatar. (CREDIT: Noah Berger)


In a groundbreaking achievement, advances in artificial intelligence (AI) combined with brain-reading implants have enabled two paralyzed individuals to communicate faster and more accurately than ever before.


Recent studies shed light on brain-computer interfaces (BCIs) that could revolutionize treatments for speech impairment.



Decoding Thought Into Speech


Two research teams, in separate studies published in Nature, describe BCIs that convert neural signals into text or synthetic speech.


Although natural conversation flows at an average pace of 160 words per minute, the newly developed BCIs decode speech at 62 and 78 words per minute, faster than any previous technology in the field.



Francis Willett, a neuroscientist at Stanford University, commented at a press conference on 22 August, “It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably.”


Considering the rapid progress, Christian Herff, a computational neuroscientist at Maastricht University, posited that these devices “could be products in the very near future.”



Venturing Deeper: Electrodes and Algorithms


Willett’s team focused on interpreting neural activity at the level of individual cells and converting it to text. The team worked with Pat Bennett, a 67-year-old patient diagnosed with amyotrophic lateral sclerosis (ALS), a disorder that gradually destroys muscle control.



Bennett underwent a surgical procedure in which silicon electrodes were implanted in brain regions responsible for speech. Deep-learning algorithms were then trained to recognize the unique patterns of brain activity produced when Bennett attempted to speak.
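How might the decoding step work under the hood? As a rough, hypothetical sketch only, a recurrent neural network can be trained to map windows of multichannel neural activity to phoneme scores. Every detail below, from the channel count to the phoneme inventory to the architecture, is an assumption for illustration, not the study’s actual model.

```python
# Illustrative sketch only: a toy recurrent decoder that maps multichannel
# neural features to per-time-bin phoneme scores. All shapes and sizes are
# assumptions for illustration, not the published system.
import torch
import torch.nn as nn

N_CHANNELS = 128   # assumed number of electrode features per time bin
N_PHONEMES = 40    # assumed phoneme inventory size (including silence)

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=N_CHANNELS, hidden=256, n_phonemes=N_PHONEMES):
        super().__init__()
        # Recurrent layer summarizes the neural time series.
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        # Linear readout scores each phoneme at every time bin.
        self.readout = nn.Linear(hidden, n_phonemes)

    def forward(self, x):          # x: (batch, time, channels)
        h, _ = self.gru(x)
        return self.readout(h)     # (batch, time, n_phonemes)

# One toy training step on synthetic data standing in for neural recordings.
model = SpeechDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 100, N_CHANNELS)        # fake neural activity
labels = torch.randint(0, N_PHONEMES, (8, 100))   # fake phoneme labels

optimizer.zero_grad()
logits = model(features)
loss = loss_fn(logits.reshape(-1, N_PHONEMES), labels.reshape(-1))
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

Decoders of this kind typically combine per-time-bin phoneme probabilities with a language model to assemble whole words and sentences; the toy above stops at the phoneme-scoring step.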



Two vocabulary sets were used: a comprehensive 125,000-word vocabulary and a concise 50-word set. The BCI’s performance was exceptional, decoding the smaller vocabulary 2.7 times faster than prior BCIs and achieving a 9.1% word-error rate. For the extensive vocabulary, the error rate rose to 23.8%. “About three in every four words are deciphered correctly,” said Willett during the press conference.
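To make those percentages concrete: a word-error rate is conventionally computed as the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the intended one, divided by the number of intended words. The short Python sketch below, using invented sentences, shows the standard calculation.

```python
# Word-error rate (WER): word-level edit distance between the decoded and
# intended sentences, divided by the number of intended words.
# The example sentences are invented for illustration.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("i would like some water please",
                      "i would like sun water")
print(f"WER: {wer:.1%}")  # 2 errors over 6 words -> 33.3%
```

By this measure, a 23.8% error rate means roughly one word in four comes out wrong, which is where “about three in every four words” comes from.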


Two brain-computer interfaces have been developed that bring unprecedented capabilities for translating brain signals into sentences. (CREDIT: Nature)


Recognizing the profound impact on her life, Bennett said, “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”



Alternative Techniques: Reading Brain Activity


Concurrently, Edward Chang, a neurosurgeon at the University of California, San Francisco, spearheaded another study. Collaborating with a 47-year-old woman named Ann, who had lost her speech abilities after a brainstem stroke, the team employed a different technique known as electrocorticography (ECoG). This method, which uses a paper-thin rectangle embedded with 253 electrodes, records neural activity from the brain’s surface.


Training AI algorithms on patterns from Ann’s brain, the team enabled her to “speak” 249 sentences drawn from a 1,024-word vocabulary. Impressively, the device achieved a rate of 78 words per minute, with a median error rate of 25.5%.


Blaise Yvert, a researcher at the Grenoble Institute of Neuroscience, said, “Nice to see that with ECoG, it’s possible to achieve low word-error rate.”


In a touching personalization, Chang's team recreated Ann’s voice from her wedding video recordings, producing a synthetic voice and an avatar with expressive capabilities. “The simple fact of hearing a voice similar to your own is emotional,” Ann shared in a feedback session, reflecting on the emotional importance of voice, an aspect Chang emphasized, saying, “It’s not just about communication, it’s also about who we are.”



Clinical Aspirations and Hurdles


While the developments are promising, much refinement is required before commercial application. For the technology to become mainstream, it must be fully implantable, devoid of visible connectors or cables. The BCIs must undergo extensive testing on larger populations to ensure reliability.


Willett and colleagues recorded neural activity from four intracranial microelectrode arrays (MEAs) while a study participant with ALS attempted to make orofacial movements or speak. (CREDIT: Nature)


Furthermore, both study participants still retain some speech-related muscle function and intact speech-related brain regions. “This will not be the case for every patient,” noted Herff. However, as Willett optimistically stated, the studies act as proof of concept, motivating the industry to translate findings into usable products.



Judy Illes, a neuroethics researcher at the University of British Columbia, cautioned, “No matter how elegant and technically sophisticated these data are, we have to understand them in context.” She stressed the need to avoid overpromising the technology’s applicability to broad populations, adding, “I’m not sure we’re there yet.”


As science continues to make leaps and bounds, the recent findings illuminate a future where paralysis might not equate to a loss of voice, offering a glimmer of hope for many.






For more science stories, check out our New Discoveries section at The Brighter Side of News.



Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.



Like these kinds of feel-good stories? Get The Brighter Side of News’ newsletter.




