
Researchers at the University of California, San Francisco have successfully demonstrated, and replicated, the ability to translate brain signals into complete sentences. The decoded sentences had word error rates as low as three percent, which is below the error rate typical of professional speech-to-text transcription software.
The idea of a machine decoding your thoughts into words is an Orwellian nightmare if misused, but for the millions of people who have lost the ability to speak, or never had it in the first place, it could be life-changing. Even ordinary citizens typing out a text message just by thinking, or sending commands to a digital assistant like Google Assistant or Siri "telepathically," is edging out of science fiction and toward scientific reality.
For over a decade we have been able to decode elements of speech from brain signals. That said, most of these approaches have been a long way from consistently producing intelligible sentences, let alone doing so with precision and accuracy. Last year, researchers achieved notable results with a novel approach that used brain signals to animate a simulated vocal tract, producing synthetic speech in which about 70 percent of the words were intelligible.
According to the researchers behind a new paper in Nature Neuroscience, the key to improving this AI's performance was the realization that there are strong parallels between translating brain signals into text and machine translation between languages using neural networks. While most efforts to decode brain signals have focused on identifying the neural activity that corresponds to phonemes, the chunks of sound that make up spoken language, these researchers instead mimicked machine translation, where the entire sentence is translated at once. This has proven to be a successful strategy because certain words are more likely to appear close together, allowing the system to rely on context to fill in any gaps.
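To make the idea of relying on context concrete, here is a toy illustration (ours, not from the paper): even crude word co-occurrence counts let a decoder that is unsure about one word prefer the candidate that best fits its neighbors. The words and counts below are entirely made up.

```python
# Toy co-occurrence counts from an imagined corpus (made-up numbers).
bigram_counts = {
    ("played", "their"): 40,
    ("played", "there"): 2,
    ("their", "instruments"): 30,
    ("there", "instruments"): 1,
}

def context_score(prev_word, candidate, next_word):
    """Score a candidate by how often it sits between its two neighbors."""
    return (bigram_counts.get((prev_word, candidate), 0)
            * bigram_counts.get((candidate, next_word), 0))

# Suppose the brain signal alone cannot separate "their" from "there";
# the surrounding words make one reading far more plausible.
for candidate in ("their", "there"):
    print(candidate, context_score("played", candidate, "instruments"))
# their 1200
# there 2
```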
The team implemented the encoder-decoder approach commonly used for machine translation, with a twist. Instead of a sentence in a source language, one neural network analyzed the recorded brain activity and compressed it into an intermediate representation; a second neural network then translated that representation into the target language, in this case English text.
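As a rough sketch of what such a pipeline can look like, and emphatically not the authors' code, the snippet below assumes the recordings arrive as fixed-length sequences of electrode feature vectors and the output is a sequence of word IDs drawn from a 250-word vocabulary; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class BrainToTextModel(nn.Module):
    """Toy encoder-decoder: electrode features in, word IDs out."""

    def __init__(self, n_electrodes=128, hidden=256, vocab_size=250):
        super().__init__()
        # Encoder: reads the time series of electrode features and
        # squeezes it into a single summary state.
        self.encoder = nn.GRU(n_electrodes, hidden, batch_first=True)
        # Decoder: unrolls that summary into a sequence of words.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)

    def forward(self, signals, target_words):
        # signals: (batch, time_steps, n_electrodes) neural recordings
        # target_words: (batch, sentence_len) word IDs (teacher forcing)
        _, state = self.encoder(signals)           # summarize the recording
        dec_in = self.embed(target_words)          # embed the previous words
        dec_out, _ = self.decoder(dec_in, state)   # condition on the summary
        return self.to_vocab(dec_out)              # logits over the vocabulary

# Smoke test with random numbers standing in for real recordings.
model = BrainToTextModel()
signals = torch.randn(2, 400, 128)            # 2 trials, 400 time steps
words = torch.randint(0, 250, (2, 8))         # 2 sentences of 8 word IDs
print(model(signals, words).shape)            # torch.Size([2, 8, 250])
```

The point of the sketch is only the shape of the pipeline: one network compresses the signal, another expands that compressed summary into words, exactly the division of labor used in machine translation.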
The AI was trained on brain activity recorded from four women, who had electrodes implanted in their brains to monitor seizures, as they read aloud a set of 50 sentences containing 250 unique words. (For comparison, a typical adult's working vocabulary runs to tens of thousands of words, so this is a very restricted slice of language.) This allowed the first network to work out which patterns of neural activity corresponded to which elements of speech.
At the moment, the system can only decode 30 to 50 specific sentences using a limited vocabulary of 250 words. It also requires individuals to have electrodes implanted in their brains, which is legally permitted only for a small number of highly specific medical and scientific reasons.
There are a number of signs that this AI holds considerable promise, as well as clear reasons to consider it dangerous. One technical concern is that the system was tested on sentences that were included in its training data, which means it might simply be learning to match specific sentences to specific neural signatures, much as Google search anticipates what you are typing and steers you with suggestions. To address this, the researchers added another set of recordings to the training data that were not included in testing, which reduced error rates significantly, suggesting the system is learning sub-sentence information such as individual words.
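To see why testing only on training sentences is a problem, here is a hypothetical sketch, not the authors' evaluation code, of how one would hold sentences out of training entirely and score the decoder by word error rate, the kind of metric behind the error figures quoted above. The sentences here are placeholders.

```python
import random

def word_error_rate(reference, hypothesis):
    """Edit distance between word sequences, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # word deleted
                          d[i][j - 1] + 1,         # word inserted
                          d[i - 1][j - 1] + cost)  # word substituted
    return d[len(ref)][len(hyp)] / len(ref)

# Hold some sentences out of training entirely: a decoder that merely
# memorizes whole sentences should do badly on these, while one that has
# learned word-level structure should still manage.
sentences = [f"placeholder sentence {i}" for i in range(50)]
random.shuffle(sentences)
train_set, held_out = sentences[:40], sentences[40:]

print(word_error_rate("the musicians played their instruments",
                      "the musicians played the instruments"))  # 0.2
```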
Some believe the vocabulary of such a system is likely to grow considerably as others build on this approach, while others worry the 250-word vocabulary will be distilled into a kind of Newspeak, further simplifying and reducing language. Regardless, advocates of this AI argue that even a limited palette of 250 words could be life-changing for someone paralyzed or recovering from a stroke, and could be tailored into a specific set of commands for telepathic control of other devices.
Lastly, the leap to reading these signals without any implanted electrodes at all, from a willing or an unaware participant, is not too far of a jump. Reading brain signals in that way could enable a new form of "thought crime," which could become highly problematic.
Business Anthropology is going to keep a close eye on the scrum of companies racing to develop the first practical neural interfaces. We will also monitor any new legislation that could affect how this type of AI is used and deployed.