A new brain-computer interface (BCI) predicted unspoken words from brain activity.
A BCI has been used successfully in two separate studies to decode words from activity in a part of the brain recently implicated in internal speech. The studies could help patients with brain injuries, or with diseases such as amyotrophic lateral sclerosis (ALS), that cause total-body paralysis, including paralysis of the muscles involved in speech.
“This work represents the first proof-of-concept for a high-performance internal speech BMI,” wrote the Caltech researchers in their study, which has yet to be peer-reviewed.
In a separate study, published in November last year, researchers from HSE and Moscow Universities added that speech decoded directly from the brain offers the possibility of “seamless communication” for silent patients.
The device behind these breakthroughs, the BCI, pairs recordings of brain activity with artificial intelligence to restore movement, sensory abilities, and communication in patients with varying degrees of paralysis.
Specifically, BCIs can reconstruct spoken words; ‘silent speech’ interfaces, for example, produce speech by tracking the movement of the facial muscles as a person mouths words. These devices do not work, however, for patients with total-body paralysis (known as tetraplegia) that extends to the facial muscles.
The new studies bypass the need for facial muscles by recording signals directly from neurons in a part of the brain called the supramarginal gyrus (SMG), an area recently implicated in the ‘intent to speak.’
In the first study, the Caltech team used invasive electrode arrays to train its machine to predict eight words. The HSE-led study, in contrast, trialed minimally invasive electrodes, which can be implanted under local anesthesia, to train its BCI to predict 26 words before they were spoken.
Sarah Wandelt, a Caltech graduate student, says, “Neurological disorders can lead to complete paralysis of voluntary muscles, resulting in patients being unable to speak or move, but they are still able to think and reason. For that population, an internal speech BMI would be incredibly helpful.”
Mapping words in the brain
The work was made possible by a previous Caltech study showing that this brain area represents spoken words as well as the intention and action of grasping. The team therefore reasoned that internal speech, the intention to speak words, must also reside there.
“We have previously shown that we can decode imagined hand shapes for grasping from the human supramarginal gyrus,” says Richard Andersen, the Caltech neuroscientist whose lab led the study. “Being able to also decode speech from this area suggests that one implant can recover two important human abilities: grasping and speech.”
The team theorized that the brain areas they had previously investigated for motor skills would activate differently during spoken and internal speech, depending on whether the muscles were used.
For instance, while vocalizing speech and using the muscles to form spoken words, both the SMG and the primary somatosensory cortex (S1), which registers sensations from the face and body, would activate. During an internal monologue, only the SMG would activate, since no facial muscles are being used.
The study participant, a person with tetraplegia who already had invasive electrode arrays implanted following a spinal injury, was asked to speak six words and two pseudowords out loud. The team used this initial data to teach the BCI which patterns of brain activity corresponded to speech. Pseudowords, words that are pronounceable but have no meaning, were included as a check on the BCI’s accuracy.
The volunteer then had to ‘speak’ the same set of words internally, and then visualize each word without using any muscles, to see whether the BCI recognized the brain patterns it had recorded earlier. Results showed the BCI predicted the eight words with an accuracy of up to 91 percent.
“In this work, we demonstrated a robust decoder for internal and vocalized speech, capturing single-neuron activity from the supramarginal gyrus,” wrote the Caltech researchers. “A chronically implanted, speech-abled participant with tetraplegia was able to use an online, closed-loop internal speech BMI to achieve up to 91 percent classification accuracy with an eight-word vocabulary.”
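To make that setup concrete, here is a minimal sketch in Python, using entirely synthetic data, of the kind of word classification such a decoder performs: each trial is reduced to a vector of per-neuron firing rates, and a simple classifier learns to pick one of eight vocabulary items. This is not the Caltech pipeline; the neuron count, trial counts, labels, and choice of classifier are illustrative assumptions.

# Illustrative sketch only, on synthetic data: NOT the Caltech decoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)

N_WORDS = 8           # six words + two pseudowords (labels are placeholders)
N_NEURONS = 100       # hypothetical number of recorded SMG units
TRIALS_PER_WORD = 40  # hypothetical repetitions per word

# Simulate firing-rate features: each word gets its own mean activity pattern,
# and individual trials are noisy samples around that pattern.
word_templates = rng.normal(0.0, 1.0, size=(N_WORDS, N_NEURONS))
X = np.vstack([
    template + rng.normal(0.0, 2.0, size=(TRIALS_PER_WORD, N_NEURONS))
    for template in word_templates
])
y = np.repeat(np.arange(N_WORDS), TRIALS_PER_WORD)

# A simple linear classifier stands in for the study's decoder; cross-validated
# accuracy over held-out trials is the kind of figure quoted as "up to 91 percent".
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1 / N_WORDS:.2f})")

On synthetic data like this the classifier scores well by construction; the point is only the shape of the task, a many-trials-per-word classification problem, not the number it prints.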
Pushing the singularity

The HSE and Moscow University teams worked from this premise to test whether they could predict even more words using a smaller set of minimally invasive electrodes.
In their study, they collected data from two patients with epilepsy who already had invasive brain implants.
To test the efficacy of their artificial intelligence, they used minimally invasive stereo-electroencephalography (sEEG) shafts in the first patient and more invasive electrocorticographic (ECoG) strips in the second. The sEEG approach could offer a safer option for patients, as clinicians may be able to implant its electrodes without general anesthesia.
After preparing the participants, the team showed them six sentences containing 26 words in total. Each sentence was presented 30 to 60 times in randomized order, and the volunteers read it aloud while the electrodes registered their brain activity.
The researchers used the audio data and the accompanying brain activity recorded by the electrodes to create a ‘silent class’, a set of brain-activity patterns that a machine learning program could use to predict these words and convert them into computerized audio.
The team reported 55 percent accuracy using the sEEG electrodes in the first patient and 70 percent accuracy using the ECoG strip in the second, figures they note are comparable to those from other studies using devices with electrodes implanted over the entire cortical surface.
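As a rough illustration of that idea, the following Python sketch, again on synthetic data only, shows how a ‘silent class’ can sit alongside the 26 word classes so that a decoder learns to tell intended words apart from stretches with no speech. The channel counts, band-power features, and classifier are assumptions made for illustration, not the HSE team’s actual method.

# Illustrative sketch only, on synthetic data: NOT the HSE/Moscow pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=1)

N_CHANNELS = 8         # hypothetical number of sEEG/ECoG contacts
N_BANDS = 5            # hypothetical band-power features per contact
TRIALS_PER_CLASS = 30  # each sentence was shown 30 to 60 times in the study

# 26 word classes plus one 'silence' class for stretches with no speech.
labels = [f"word_{i:02d}" for i in range(26)] + ["silence"]
n_features = N_CHANNELS * N_BANDS

# Simulate band-power feature vectors: one noisy template per class.
templates = rng.normal(0.0, 1.0, size=(len(labels), n_features))
X = np.vstack([
    t + rng.normal(0.0, 2.0, size=(TRIALS_PER_CLASS, n_features))
    for t in templates
])
y = np.repeat(labels, TRIALS_PER_CLASS)

# Any off-the-shelf classifier can stand in for the study's decoder here.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance = {1 / len(labels):.2f})")

Cross-validated accuracy over held-out trials, the number this sketch prints, is the same kind of figure as the 55 and 70 percent reported above.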
“The use of such interfaces involves minimal risks for the patient. If everything works out, it could be possible to decode imaginary speech from neural activity recorded by a small number of minimally invasive electrodes implanted in an outpatient setting with local anaesthesia,” says Alexey Ossadtchi, lead study author and director of the Centre for Bioelectric Interfaces at the HSE Institute.
“By building models on internal speech directly, our results may translate to people who cannot vocalize speech or are completely locked in,” the Caltech researchers concluded in an interview with Psychology Today.