
Brain implants could help stroke sufferers ‘speak’ through screens faster and more accurately than ever before, new research shows



CNN

Dr. Jaime Henderson had one wish throughout his childhood: for his father to talk to him. Now a scientist and neurosurgeon at Stanford Medicine, Henderson and his colleagues are developing brain implants that can fulfill similar wishes in others with strokes or speech disabilities.

Two studies published Wednesday in the journal Nature show how brain implants, described as neuroprostheses, can record brain activity as a person attempts to speak naturally and decode it into text, or even into communication through an animated avatar.

“When I was 5, my dad was in a devastating car accident that left him unable to move or speak. I remember laughing at the jokes he tried to tell, but his speech was so weak that we couldn’t get the punchline,” Henderson, an author of the study and a professor at Stanford University, said at a news conference.

“So I wanted to get to know him and connect with him,” he said. “I think that early experience sparked my personal interest in understanding how the brain produces movement and speech.”

Henderson and his colleagues at Stanford and other US institutions studied the use of brain sensors implanted in 68-year-old Pat Bennett, who was diagnosed with amyotrophic lateral sclerosis in 2012, a disease that has affected her speech.

Bennett could make some limited facial movements and vocalizations, but was unable to produce clear speech due to ALS, a rare neurological disease that affects nerve cells in the brain and spinal cord, the researchers wrote in their study.

In March 2022, Henderson performed surgery to implant arrays of electrodes in two areas of Bennett’s brain. As Bennett attempted to make facial movements, produce sounds or speak single words, the implants recorded her neural activity.


The arrays were attached to wires that exited the skull and connected to a computer. Software decoded the neural activity and converted it into words displayed on a computer screen in real time. When Bennett finished speaking, she pressed a button to complete the decoding.

The researchers evaluated this brain-computer interface by having Bennett attempt to speak aloud with vocalizations and by having her silently mouth words without vocalizing.

With a 50-word vocabulary, the researchers found a decoding error rate of 9.1% on days when Bennett vocalized and 11.2% on days when she mouthed words silently. With a 125,000-word vocabulary, the word error rate was 23.8% on vocalized days and 24.7% on silent days.

“In our work, we show that speech attempts can be understood with a word error rate of 23% when using a 125,000-word vocabulary. This means that three out of every four words are understood correctly,” study author Frank Willett, a staff scientist with the Howard Hughes Medical Institute who works in the Neural Prosthetics Translational Laboratory, said at the news conference.

“With these new studies, we can now imagine a future where someone with a stroke can regain conversational fluency and say anything they want to say with enough precision to be reliably understood,” he said.

The researchers reported that the decoding happened at high speed: Bennett spoke at an average of 62 words per minute, roughly “three times the speed” of previous brain-computer interfaces, which topped out at 18 words per minute with a handwriting-based model.

“These early results prove the concept, and eventually people who can’t speak will want the technology to be more accessible,” Bennett wrote in a press release. “Nonverbal people could stay connected to the larger world, perhaps continue to work, and maintain friend and family relationships.”


For now, the researchers wrote, their findings are a “proof of concept” that decoding attempted speech with a large vocabulary is possible, but the approach needs to be tested in more people before it can be considered for clinical use.

“These are very early studies,” Willett said. “We don’t have a huge database of data from other people.”

Another study published Wednesday involved Ann Johnson, who suffered a stroke in 2005, when she was 30 years old, that left her unable to speak clearly. In September 2022, an electrode device was implanted in her brain at UCSF Medical Center in San Francisco without surgical complications.

The implant recorded neural activity, which was decoded as text on a screen. The researchers wrote in the study that they found “accurate and rapid large vocabulary decoding” with an average rate of 78 words per minute and an average word error rate of 25%.

Separately, when Johnson attempted to speak silently, her neural activity was translated into synthesized speech sounds. The researchers also created an animated facial avatar, driven by her attempted facial movements and combined with the synthesized speech.

“Fast, accurate and natural communication is one of the most desired needs of people who have lost the ability to speak after a severe stroke,” researchers from the University of California, San Francisco and other institutions in the United States and the United Kingdom wrote in their study. “To address all these needs, we have demonstrated here a speech neuroprosthesis that can decode articulatory cortical activity into multiple output modes in real time.”


The two new brain implant studies are “convergent” in their findings and share the “long-term goal” of restoring communication for stroke victims, said Dr. Edward Chang, a neurosurgeon at UC San Francisco and an author of the Johnson study.

“The results of the two studies, between 60 and 70 words per minute in both, are a real milestone for our field in general, and we’re very excited because it comes from two different patients, two different centers, two different approaches,” Chang said. “The most important message is that there is hope that this will continue to be developed and provide a solution for years to come.”

Although the devices described in the two new papers are being studied as proofs of concept and are not yet commercially available, they could pave the way for future research and commercial devices.

“I’m very excited about the commercial activity in the brain-computer interface area,” Henderson said. “I’ve come full circle from wanting to connect with my dad as a kid to seeing it actually work. It’s indescribable.”
