For all our leaps in brain interface technology, the devices that help severely paralyzed patients communicate are still extremely sluggish.
That may be starting to change. In dual new studies published in the journal Nature, two teams of researchers in California say they’ve engineered a device that could revolutionize the field.
In essence, the device intercepts a patient’s brain signals, translates them into speech and facial expressions, and channels both through a digital avatar of the patient.
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others,” said Edward Chang, chair of neurological surgery at the University of California, San Francisco (UCSF) and co-author of the university’s study, in a statement.
“These advancements bring us much closer to making this a real solution for patients.”
The results already sound impressive. Chang’s team has shown that its brain implant can enable a patient to “talk” at up to 80 words per minute, and between 60 and 70 on average, simply by thinking.
Though that’s not as fast as natural human speech — which can tumble forth at 160 words per minute — this is still over three times the previous record.
The foundation of the implant is a tool that converts brain signals into text. To build it, the researchers trained an AI algorithm on the electrical signals from their patient’s brain as they repeated a selection of phrases to themselves.
The algorithm was designed to look not for whole words, but for their distinct units of sound, called phonemes.
Alexander Silva, a co-author of the UCSF study, explained it this way: “If you make a P sound or a B sound, it involves bringing the lips together,” he told MIT Technology Review. “So that would activate a certain proportion of the electrodes that are involved in controlling the lips.”
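To make that idea concrete, here’s a deliberately simplified sketch of phoneme decoding. This is a toy nearest-centroid classifier, not the deep-learning model the researchers actually used, and the phoneme set, electrode count, and data are all invented for illustration:

```python
# Toy sketch (not the authors' actual model): decode phonemes from
# electrode activity. Phonemes, electrode count, and data are invented.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["p", "b", "m", "a", "s"]  # tiny hypothetical phoneme inventory
N_ELECTRODES = 16                     # hypothetical electrode count

# Fake training data: each phoneme gets a characteristic activation pattern
# across electrodes (e.g., lip-related electrodes fire for "p" and "b").
prototypes = rng.normal(size=(len(PHONEMES), N_ELECTRODES))
X_train = np.concatenate(
    [proto + 0.3 * rng.normal(size=(50, N_ELECTRODES)) for proto in prototypes]
)
y_train = np.repeat(np.arange(len(PHONEMES)), 50)

# Nearest-centroid decoding: average each phoneme's training activity,
# then label a new frame of brain activity by its closest centroid.
centroids = np.stack(
    [X_train[y_train == k].mean(axis=0) for k in range(len(PHONEMES))]
)

def decode_frame(frame: np.ndarray) -> str:
    """Map one frame of electrode activity to the most likely phoneme."""
    distances = np.linalg.norm(centroids - frame, axis=1)
    return PHONEMES[int(np.argmin(distances))]

# Decode a noisy sample of the "p" activation pattern.
print(decode_frame(prototypes[0] + 0.3 * rng.normal(size=N_ELECTRODES)))
```

Decoded phonemes can then be stitched into words and sentences by a language model, which is where the speed advantage over letter-by-letter spelling systems comes from.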
This stage of the device is fairly accurate, too, with a 9 percent error rate, roughly a third of the previous record low.
But text on its own only goes so far.
To achieve something closer to real-life speech, the researchers took the implant a step further, outfitting a piece of animation software with a custom AI that uses the patient’s brain signals to simulate facial expressions. Paired with a reconstruction of the patient’s voice, the decoded speech could be embodied in a digital lookalike on a nearby screen.
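The overall flow, from brain signals to text to facial animation on a screen, can be sketched as a three-stage pipeline. Every stage below is a hypothetical stub with invented names and parameters; the real system drives commercial animation software with a custom model:

```python
# Hypothetical sketch of the decode-to-avatar pipeline described above.
# All stage names, parameters, and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    text: str          # decoded words for this time slice
    jaw_open: float    # hypothetical facial-rig parameters in [0, 1]
    lip_closure: float

def signals_to_text(brain_signals: list[float]) -> str:
    """Stage 1 (stubbed): phoneme/word decoding, as sketched earlier."""
    return "hello"

def text_to_expression(text: str) -> AvatarFrame:
    """Stage 2 (stubbed): map decoded speech to facial-rig parameters,
    e.g. lip closure for bilabial sounds, jaw opening for vowels."""
    return AvatarFrame(text=text, jaw_open=0.6, lip_closure=0.1)

def render(frame: AvatarFrame) -> None:
    """Stage 3 (stubbed): hand parameters to the animation software and
    play the reconstructed voice alongside the avatar."""
    print(f"avatar says {frame.text!r} "
          f"(jaw={frame.jaw_open}, lips={frame.lip_closure})")

render(text_to_expression(signals_to_text([0.1, 0.9, 0.3])))
```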
A caveat: that 9 percent error rate was achieved with a vocabulary of just 50 words. With a 125,000-word vocabulary, the error rate climbed to nearly 24 percent, which is still impressive, but would undoubtedly be frustrating in everyday use.
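Those percentages are word error rates: roughly, the share of decoded words that differ from what the patient intended, counting substitutions, insertions, and deletions. Here’s a minimal sketch of the standard calculation, with invented example sentences:

```python
# Word error rate (WER), the metric behind the percentages above:
# (substitutions + insertions + deletions) / words in the reference.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four gives 25 percent, in the ballpark of
# the roughly 24 percent figure cited above.
print(wer("bring me some water", "bring me some walker"))  # 0.25
```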
The important thing is that the researchers have proven this kind of device is possible, at least for the specific patient they designed it for. Future versions will have to prove their mettle in patients with all varieties of paralysis, and hopefully bring the error rate down, but so far the findings are promising.
More on brain implants: Paralyzed People Successfully Test Brain-Controlled Electric Wheelchairs