
From Brain Waves to Real-Time Text Messaging

Posted by Lawrence Tabak, D.D.S., Ph.D.

People who have lost the ability to speak due to a severe disability still want to get the words out. They just can’t physically do it. But in our digital age, there is now a fascinating way to overcome such profound physical limitations. Computers are being taught to decode brain waves as a person tries to speak and then translate them into words on a computer screen in real time.

The latest progress, demonstrated in the video above, establishes that it’s quite possible for computers trained with current artificial intelligence (AI) methods to restore a vocabulary of more than 1,000 words for people with the mental but not the physical ability to speak. That covers more than 85 percent of day-to-day communication in English. With further refinements, the researchers say a 9,000-word vocabulary is well within reach.

The findings, published in the journal Nature Communications, come from a team led by Edward Chang, University of California, San Francisco [1]. Earlier, Chang and colleagues established that this AI-enabled system could directly decode 50 full words in real time from brain waves alone in a person with paralysis who was trying to speak [2]. The study is known as BRAVO, short for Brain-Computer Interface Restoration of Arm and Voice.

In the latest BRAVO study, the team wanted to figure out how to condense the English language into compact units for easier decoding and expand that 50-word vocabulary. They did it in the same way we all do: by focusing not on complete words, but on the 26-letter alphabet.

The study involved a 36-year-old male with severe limb and vocal paralysis. The team designed a sentence-spelling pipeline for this individual, which enabled him to silently spell out messages in his head using code words corresponding to each of the 26 letters. As he did so, a high-density array of electrodes implanted over the brain’s sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.

A sophisticated system including signal processing, speech detection, word classification, and language modeling then translated those thoughts into coherent words and complete sentences on a computer screen. This so-called speech neuroprosthesis system allows those who have lost their speech to perform roughly the equivalent of text messaging.
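For readers curious about the nuts and bolts, here’s a rough sketch in Python of the kind of letter-by-letter decoding that final step performs. It is purely illustrative, not the BRAVO team’s actual software: it assumes an upstream classifier has already turned each silently spelled code word into a probability distribution over the 26 letters, and it uses a tiny made-up vocabulary to stand in for the study’s much larger word list.

```python
# Illustrative sketch only -- not the published BRAVO code.
# Assumes a classifier has produced, for each silently spelled letter,
# a probability distribution over the 26 letters of the alphabet.
import numpy as np

LETTERS = "abcdefghijklmnopqrstuvwxyz"

# Hypothetical mini-vocabulary standing in for the study's 1,152-word set.
VOCAB = {"good", "morning", "i", "am", "very", "summertime"}

def decode_word(letter_probs: np.ndarray) -> str:
    """Pick the most probable in-vocabulary word.

    letter_probs has shape (num_letters, 26): one probability row per
    silently spelled code word.
    """
    best_word, best_score = None, float("-inf")
    for word in VOCAB:
        if len(word) != letter_probs.shape[0]:
            continue
        # Sum of log-probabilities of each of the word's letters.
        score = sum(np.log(letter_probs[i, LETTERS.index(ch)] + 1e-9)
                    for i, ch in enumerate(word))
        if score > best_score:
            best_word, best_score = word, score
    # Fall back to the letter-by-letter best guess if nothing in the
    # vocabulary has the right length.
    if best_word is None:
        best_word = "".join(LETTERS[i] for i in letter_probs.argmax(axis=1))
    return best_word

if __name__ == "__main__":
    # Fake classifier output for a four-letter word, biased toward "good".
    probs = np.full((4, 26), 0.01)
    for i, ch in enumerate("good"):
        probs[i, LETTERS.index(ch)] = 0.7
    probs /= probs.sum(axis=1, keepdims=True)
    print(decode_word(probs))  # prints "good"
```

In the actual system, the language-modeling step weighs whole sentences rather than single words, but the basic idea is the same: combine what the classifier reads from the brain waves with what makes sense linguistically.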

Chang’s team put their spelling system to the test first by asking the participant to silently reproduce a sentence displayed on a screen. They then moved on to conversations, in which the participant was asked a question and could answer freely. For instance, as in the video above, when the computer asked, “How are you today?” he responded, “I am very good.” When asked about his favorite time of year, he answered, “summertime.” An attempted hand movement signaled the computer when he was done speaking.

The computer didn’t get it exactly right every time. For instance, in the initial trials with the target sentence, “good morning,” the computer got it exactly right in one case and in another came up with “good for legs.” But, overall, the tests showed that the AI device could decode silently spelled letters with a high degree of accuracy, producing sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.

On average, the spelling system got it wrong 6 percent of the time. That’s really good when you consider how common it is for errors to arise with dictation software or in any text message conversation.
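For context, error rates like this are typically reported as a character error rate: the number of single-letter edits needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. Here’s a small, self-contained sketch of how that standard metric is computed; it isn’t the study’s specific analysis code.

```python
# Standard character-error-rate calculation (illustrative, not the study's code).

def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete a character
                            curr[j - 1] + 1,      # insert a character
                            prev[j - 1] + cost))  # substitute a character
        prev = curr
    return prev[-1]

def character_error_rate(decoded: str, intended: str) -> float:
    return edit_distance(decoded, intended) / len(intended)

if __name__ == "__main__":
    # Example from the trials above: "good for legs" decoded instead of "good morning".
    print(round(character_error_rate("good for legs", "good morning"), 2))
```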

Of course, much more work is needed to test this approach in many more people. The researchers don’t yet know how individual differences or specific medical conditions might affect the outcomes. They suspect that this general approach will work for anyone who remains mentally capable of thinking through and attempting to speak.

They also envision future improvements as part of their BRAVO study. For instance, it may be possible to develop a system capable of more rapid decoding of many commonly used words or phrases. Such a system could then reserve the slower spelling method for other, less common words.

But, as these results clearly demonstrate, this combination of artificial intelligence and silently controlled speech neuroprostheses holds fantastic potential to restore not just speech but meaningful communication and authentic connection between individuals who’ve lost the ability to speak and their loved ones. For that, I say BRAVO.

References:

[1] Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Nat Commun. 2022;13:6510.

[2] Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses DA, Metzger SL, Liu JR, Tu-Chan A, Ganguly K, Chang EF, et al. N Engl J Med. 2021 Jul 15;385(3):217-227.

Links:

Voice, Speech, and Language (National Institute on Deafness and Other Communication Disorders/NIH)

ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)

Chang Lab (University of California, San Francisco)

NIH Support: National Institute on Deafness and Other Communication Disorders