
Revolutionary brain implant translates a person's brain signals into spoken words

Jonathan Viventi holds an ultrathin film implant that brings more electrodes into the brain to better gather neural signals. (CREDIT: Chris Hildreth)


A revolutionary partnership among Duke University's neuroscientists, neurosurgeons, and engineers has yielded an extraordinary speech implant. This innovative device translates the brain signals of individuals into audible speech with remarkable precision.


Published in the prestigious journal Nature Communications, this new technology holds the promise of enabling individuals suffering from neurological disorders, such as ALS (amyotrophic lateral sclerosis) and locked-in syndrome, to regain the ability to communicate through a cutting-edge brain-computer interface.


 
 

The current landscape for communication aids available to those with debilitating motor disorders is marked by significant limitations. Gregory Cogan, Ph.D., a professor of neurology at Duke University's School of Medicine and one of the leading researchers behind this project, underscores the challenges faced by these individuals.


He states, "There are many patients who suffer from debilitating motor disorders...that can impair their ability to speak. But the current tools available to allow them to communicate are generally very slow and cumbersome."


 


 

Imagine trying to follow an audiobook played at half its normal speed. That comparison captures the current state of speech decoding technology, which lags far behind natural human speech. The fastest speech decoding currently available reaches only about 78 words per minute, whereas typical conversational speech flows at approximately 150 words per minute.


The disparity between these rates can be attributed, in part, to the limited number of brain activity sensors that can be integrated into a thin, flexible substrate placed atop the brain's surface. With fewer sensors, the information available for decoding is correspondingly diminished.


 
 

To address these critical limitations, Gregory Cogan partnered with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D. Viventi's biomedical engineering lab specializes in developing high-density, ultra-thin, and flexible brain sensors.


A device no bigger than a postage stamp (dotted portion within white band) packs 256 microscopic sensors that can translate brain cell activity into what someone intends to say. (CREDIT: Dan Vahaba/Duke University)


Their collaborative efforts yielded a remarkable achievement: a tiny, postage stamp-sized piece of medical-grade plastic embedded with an astounding 256 microscopic brain sensors. This dense array of sensors is essential for capturing the nuanced activity patterns of neurons located mere millimeters apart, especially when coordinating the intricate process of speech production.


 
 

Having successfully fabricated this innovative implant, Cogan and Viventi joined forces with several neurosurgeons from Duke University Hospital, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who played a crucial role in recruiting four patients willing to participate in the implant testing. These individuals, already undergoing brain surgery for other conditions, graciously offered their participation, affording the researchers a limited window of opportunity to assess the performance of their device.


Compared to current speech prosthetics with 128 electrodes (left), Duke engineers have developed a new device that accommodates twice as many sensors in a significantly smaller footprint. (CREDIT: Dan Vahaba/Duke University)


Gregory Cogan aptly likens their experience to that of a high-speed NASCAR pit crew. Time was of the essence; the team had to install and evaluate the device within a brisk 15-minute window, ensuring that the surgical procedure remained unaltered. As soon as the signal to begin was given, the researchers sprang into action, and the patients commenced the task at hand.


 
 

The task was a relatively straightforward listen-and-repeat exercise. Participants were presented with a series of nonsensical words such as "ava," "kug," or "vip," and were instructed to vocalize each one. The implant diligently recorded the activity emanating from each patient's speech motor cortex, responsible for orchestrating the complex interplay of nearly 100 muscles involved in articulating sounds and words.


In the lab, Duke University Ph.D. candidate Kumar Duraivel analyzes a colorful array of brain-wave data. Each unique hue and line represents the activity from one of 256 sensors, all recorded in real-time from a patient's brain in the operating room. (CREDIT: Dan Vahaba/Duke University)


Once the data was collected, Suseendrakumar Duraivel, a biomedical engineering graduate student at Duke University and the first author of the published report, embarked on a crucial phase of the experiment. He processed the neural and speech data obtained during the surgery and fed it into a machine learning algorithm. The goal was to evaluate the algorithm's ability to predict the sounds being made solely based on the recorded brain activity.
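The article does not detail the study's actual decoding pipeline, so as a loose, hypothetical illustration of the setup Duraivel describes, predicting which sound a patient spoke from multichannel brain recordings, here is a minimal sketch using scikit-learn with synthetic stand-in data. The sensor count matches the device described above; everything else (features, labels, model choice) is an illustrative assumption, not the study's method.

```python
# Illustrative sketch only: the study's actual model and features are
# not reproduced here. This shows the general shape of the decoding
# problem: predict which sound was spoken from activity recorded
# across the implant's 256 sensors. Data here are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_sensors = 300, 256  # 256 sensors matches the device above
phonemes = np.array(["a", "g", "k", "p", "b", "v"])  # example sound classes

# One feature vector per spoken sound: e.g., time-averaged
# high-frequency activity on each sensor (synthetic here).
X = rng.normal(size=(n_trials, n_sensors))
y = rng.choice(phonemes, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A simple linear classifier stands in for the study's machine
# learning model, mapping neural features to intended sounds.
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"decoded {len(y_test)} held-out sounds, accuracy: {accuracy:.2f}")
```

On random data like this, decoding should hover near chance; the point of the sketch is the structure of the problem, and it hints at why the study's 90 seconds of real speech data made the reported accuracy notable.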


 
 

Impressively, the decoding algorithm performed well in certain scenarios. When the target sound was the first of the three sounds composing a given nonsense word, such as the /g/ sound in "gak," the decoder reached an accuracy of 84%. Accuracy fell, however, for the middle and final sounds of the nonsense words, and when distinguishing between similar sounds like /p/ and /b/.


(LEFT) Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D. and (RIGHT) Gregory Cogan, Ph.D., a professor of neurology at Duke University's School of Medicine. (CREDIT: Duke University)


In the broader context, the decoder achieved an overall accuracy rate of 40%. While this may seem modest at first glance, it represents a significant achievement, given that comparable brain-to-speech technologies often require extensive hours or days of data to achieve similar results. Remarkably, the speech decoding algorithm employed by Duraivel was operating with a mere 90 seconds of spoken data gathered during the brief 15-minute test.


 
 

The potential of this technology has sparked considerable enthusiasm among the researchers involved, leading to further advancements. A recent grant of $2.4 million from the National Institutes of Health has enabled the team to pursue the development of a cordless version of the device. Gregory Cogan envisions the possibilities, stating, "We're now developing the same kind of recording devices, but without any wires. You'd be able to move around, and you wouldn't have to be tied to an electrical outlet, which is really exciting."


Highly resolved neural features as demonstrated by high-density µECoG during speech production. (CREDIT: Nature Communications)


Despite these promising strides forward, there remains a considerable journey ahead before Jonathan Viventi and Gregory Cogan's speech prosthetic can become a readily accessible solution. As Jonathan Viventi aptly notes in a recent Duke Magazine feature on the technology, "We're at the point where it's still much slower than natural speech, but you can see the trajectory where you might be able to get there."


 
 

The pursuit of natural, real-time speech through brain-computer interfaces continues to captivate the scientific community, offering hope to those who have long lacked an effective means of communication because of neurological disorders. With each breakthrough, researchers like Cogan, Viventi, and their dedicated teams move closer to a future where brain activity can bridge the gap between thought and speech, enriching the lives of individuals who face profound communication barriers.







For more science stories, check out our New Discoveries section at The Brighter Side of News.


 

Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.


 
 



 



