Brain-inspired algorithm helps hearing aids cut through the noise

Boston researchers created a brain-inspired hearing algorithm that helps users tune into a single voice in noisy environments.

Written By: Shy Cohen
Edited By: Joseph Shavit

New brain-inspired algorithm improves speech recognition in noise for hearing aid users by up to 40%. (CREDIT: Unsplash)

In a busy room full of talking people, most of us can still pick out one voice to focus on. This common yet complex task—known as the “cocktail party effect”—relies on the brain’s incredible ability to sort through sound. But for people with hearing loss, filtering out background noise can feel impossible. Even the most advanced hearing aids often struggle in these noisy environments.

Now, researchers at Boston University may have found a new way to help. They’ve developed a brain-inspired algorithm that allows hearing aids to better isolate individual voices in a crowd. When tested, this method boosted speech recognition by an impressive 40 percentage points, far outperforming current technologies.

A New Approach to an Old Problem

In crowded social settings like dinner parties or workplace meetings, conversations often overlap. For those with hearing loss, these situations can be frustrating. Even with hearing aids, voices blur together in a mess of sound. This makes it hard to follow conversations, stay engaged, or even participate at all.

In a BU lab, researchers (from left) Kamal Sen, Alexander D. Boyd (ENG’23,’26), and Virginia Best tested a brain-inspired algorithm’s ability to help hearing aid users separate sounds in noisy places. (CREDIT: Jackie Ricciardi for Boston University)

Virginia Best, a speech and hearing researcher at BU, says this is the number one complaint among those with hearing loss. “These environments are very common in daily life,” Best explains, “and they tend to be really important to people.”

Traditional hearing aids often include tools like directional microphones—also called beamformers—that try to focus on sounds coming from one direction. But these tools have limitations. In complex environments with many voices, beamforming often fails. In fact, in tests conducted by the BU team, the standard industry algorithm didn’t help much—and sometimes made things worse.

That’s where the new technology, known as BOSSA, comes in. BOSSA stands for Biologically Oriented Sound Segregation Algorithm. It was developed by Kamal Sen, a biomedical engineering professor at BU’s College of Engineering. “We were extremely surprised and excited by the magnitude of the improvement in performance,” says Sen. “It’s pretty rare to find such big improvements.”

Built on Brain Science

Sen has spent two decades exploring how the brain decodes sound. His work focuses on how sound signals travel from the ears to the brain and how certain neurons help identify or suppress sounds. One key finding? The brain uses “inhibitory neurons” to cancel out background noise and enhance the sounds we want to hear.

Average word recognition scores across all subjects. (CREDIT: Kamal Sen, et al.)

“You can think of it as a form of internal noise cancellation,” Sen says. Different neurons are tuned to respond to different directions and pitches. This lets your brain focus attention on one sound source while ignoring others.

BOSSA was built to mimic this process. The algorithm uses spatial cues—the small differences in a sound’s loudness and arrival time between the two ears—to pinpoint where it is coming from. It then filters sounds based on these cues, separating them the way your brain would. “It’s basically a computational model that mimics what the brain does,” Sen says.
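The article doesn’t include BOSSA’s actual implementation, but the core idea—using the time difference between the two ears to locate a sound and then favoring the target direction—can be sketched in a few lines. The function names (`itd_frames`, `spatial_mask`), the frame sizes, and the crude keep-or-attenuate gain rule below are all illustrative assumptions, not the published algorithm:

```python
import numpy as np

def itd_frames(left, right, frame=256, max_lag=16):
    """Estimate the interaural time difference (ITD), in samples,
    for each frame via cross-correlation of the two ear signals."""
    lags = []
    for start in range(0, len(left) - frame, frame):
        l = left[start:start + frame]
        r = right[start:start + frame]
        xc = np.correlate(l, r, mode="full")
        center = len(xc) // 2
        # Search only physically plausible lags around zero.
        window = xc[center - max_lag:center + max_lag + 1]
        lags.append(int(np.argmax(window)) - max_lag)
    return np.array(lags)

def spatial_mask(left, right, target_lag, frame=256, tol=2):
    """Keep frames whose ITD matches the target direction and
    attenuate the rest -- a toy stand-in for BOSSA's filtering."""
    lags = itd_frames(left, right, frame)
    out = np.zeros_like(left)
    for i, lag in enumerate(lags):
        start = i * frame
        gain = 1.0 if abs(lag - target_lag) <= tol else 0.1
        out[start:start + frame] = gain * left[start:start + frame]
    return out
```

A real system would work per frequency band with soft gains and would also use level differences between the ears, but the sketch shows the principle: direction is decoded from binaural timing cues, then used to pass one source and suppress the others.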

Testing BOSSA in Real-Life Situations

To find out if BOSSA really works, the BU team tested it in the lab. They recruited young adults with sensorineural hearing loss, the most common form, often caused by genetics or childhood illness. Participants wore headphones and listened to simulated conversations, with voices coming from different directions. They were asked to focus on one speaker while the algorithm worked in the background.

Each person completed the task under three different conditions: no algorithm, the standard beamforming algorithm used in current hearing aids, and BOSSA. The results were striking. BOSSA delivered a major improvement in speech recognition. The standard algorithm showed little or no improvement—and in some cases, performance dropped.

Speech reception thresholds (SRT) are shown as boxplots for each processing condition. (CREDIT: Kamal Sen, et al.)

Alexander Boyd, a BU PhD candidate in biomedical engineering, helped collect and analyze the data. He was also the lead author of the study, which was published in Communications Engineering, part of the Nature Portfolio.

Best, who formerly worked at Australia’s National Acoustic Laboratories, helped design the study. She says testing new technologies like BOSSA with real people is essential. “Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” Best says. “That requires scientists and clinicians who understand the target population.”

Big Potential for Hearing Technology

An estimated 50 million Americans live with hearing loss, and the World Health Organization predicts that by 2050, nearly 2.5 billion people worldwide will be affected. That makes the need for better hearing solutions urgent.

Sen has patented BOSSA and hopes to partner with companies that want to bring it to market. He believes that major tech players entering the hearing aid space—like Apple with its AirPods Pro 2, which include hearing aid features—will drive innovation forward. “If hearing aid companies don’t start innovating fast, they’re going to get wiped out,” says Sen. “Apple and other start-ups are entering the market.”

Individual participant audiograms. The different curves show pure-tone thresholds for each of the eight participants (averaged over left and right ears). Unique symbols distinguish individual subjects. (CREDIT: Kamal Sen, et al.)

And the timing couldn’t be better. As hearing technology becomes more widely available and advanced, tools like BOSSA could help millions of people reconnect with the world around them. From social events to everyday conversations, better sound separation can mean a better life.

Beyond Hearing Loss: A Wider Application

BOSSA was built to help those with hearing difficulties, but its potential doesn’t end there. The way the brain focuses on sound—what researchers call “selective attention”—matters in many conditions. “The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen says. “It ultimately has to do with attention, where you want to focus.”

That’s why the team is now exploring how the same science could help people with ADHD or autism. These groups also struggle with multiple competing inputs—whether sounds, visuals, or tasks—and may benefit from tools that help guide attention.

They’re also testing a new version of BOSSA that adds eye-tracking. By following where someone looks, the device could better figure out who they’re trying to listen to. This could make the technology even more effective in fast-paced, real-world settings.

Sharpening Sound, Changing Lives

The success of BOSSA offers real hope. It’s not just another upgrade in hearing tech—it’s a shift in how we approach sound processing. Instead of trying to boost all sound or block background noise blindly, it takes cues from biology, using the brain’s blueprint to help listeners find meaning in the noise.

For many with hearing loss, this could change everything. Being able to join conversations, pick out voices, and stay connected socially are vital parts of daily life. With tools like BOSSA, those goals move a little closer. And as this technology continues to grow, its reach may extend beyond hearing loss, offering help with focus and attention challenges too.

What started as a solution for a noisy dinner party could one day reshape how we interact with the world.

Research findings are available online in the journal Communications Engineering.






Shy Cohen
Science and Technology Writer

Shy Cohen is a Washington-based science and technology writer covering advances in AI, biotech, and beyond. He reports news and writes plain-language explainers that analyze how technological breakthroughs affect readers and society. His work focuses on turning complex research and fast-moving developments into clear, engaging stories. Shy draws on decades of experience, including long tenures at Microsoft and his independent consulting practice, to bridge engineering, product, and business perspectives. He has crafted technical narratives, multi-dimensional due-diligence reports, and executive-level briefs—experience that informs his source-driven journalism and rigorous fact-checking. He studied at the Technion – Israel Institute of Technology and brings a methodical, reader-first approach to research, interviews, and verification. Comfortable with data and documentation, he distills jargon into crisp prose without sacrificing nuance.
Joseph Shavit
Science News Writer, Editor and Publisher

Joseph Shavit, based in Los Angeles, is a seasoned science journalist, editor and co-founder of The Brighter Side of News, where he transforms complex discoveries into clear, engaging stories for general readers. With experience at major media groups like Times Mirror and Tribune, he writes with both authority and curiosity. His work spans astronomy, physics, quantum mechanics, climate change, artificial intelligence, health, and medicine. Known for linking breakthroughs to real-world markets, he highlights how research transitions into products and industries that shape daily life.