Brain-computer interface turns brainwaves into text.

Life-changing injuries or lifelong disabilities can be extremely challenging and overwhelming, and researchers are working around the clock to improve life for newly or long-term disabled people. A brain-computer interface (BCI) connects the brain to artificial intelligence (AI), recording brainwaves to enable communication or to control a neuroprosthesis.

This technology is now widely used; however, there is still considerable room for improvement, with key biological and engineering problems remaining to be solved. For communication by individuals with impaired function, these hurdles include low-quality recordings by home users, low translation speed, limited translation accuracy, and adapting applications to the needs of the user.

Turning brainwaves into text

Now, a study from researchers at the University of California, San Francisco presents an algorithm that can turn brain activity into text in real time. The team used electrocorticography (ECoG), an electrical monitoring technique that records activity in the cerebral cortex via electrodes placed directly on the exposed brain, to detect and decode neural patterns into text while the person was speaking out loud. The study, whose code is open source, is published in the journal Nature Neuroscience.

Previous studies show that even though BCIs have developed rapidly in the past decade, they still cannot translate brainwaves to text at scale in real time. For instance, when a BCI is paired with a virtual keyboard to produce text, the word rate can still be limited to that of typing with a single finger.

Most recently, direct decoding of limited spoken speech from disabled patients using BCIs has produced either isolated syllables or 4–8 words, and for continuous speech, only 40% accuracy has been achieved. The current study investigates whether ECoG can be used to decode brainwaves into text for people speaking out loud from vocabularies of up to 250 words.
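For context, accuracy in speech-decoding work of this kind is conventionally reported as word error rate (WER): the edit distance between the decoded sentence and the spoken one, divided by the length of the spoken sentence. A minimal sketch of that computation (the function name and example sentences are illustrative, not from the study):

```python
def word_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """Levenshtein (edit) distance between two word sequences,
    normalised by reference length -- the standard decoding metric."""
    n, m = len(reference), len(hypothesis)
    # d[i][j] = edits needed to turn reference[:i] into hypothesis[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / n

ref = "the brain computer interface decoded the sentence".split()
hyp = "the brain computer interface decoded a sentence".split()
print(word_error_rate(ref, hyp))  # 1 substitution in 7 words ≈ 0.14
```

A 3% word error rate, the figure reported below, corresponds to roughly one wrong word in every 33.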

Mind-reading machines

The current study involved four participants who already had electrodes implanted in their brains to monitor epilepsy. The participants were asked to repeat a set of 30–50 sentences multiple times whilst their neural activity was recorded with ECoG. After the data were fed through a machine-learning algorithm, the average error rate was as low as 3%. The findings also show that decoding improves with transfer learning, a technique in which a model applies what it has learned on one task to a different task, thereby improving its own performance.
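The published model follows an encoder-decoder design: one network summarises the ECoG recording of a sentence, and a second generates the word sequence from that summary. The following is a minimal PyTorch sketch of that general architecture; the layer types, sizes, and names here are illustrative assumptions, not the authors' exact model (which also includes temporal convolutions and an auxiliary speech-reconstruction loss):

```python
import torch
import torch.nn as nn

class ECoGToText(nn.Module):
    """Sequence-to-sequence sketch: ECoG feature frames in, word tokens out.
    Channel count, hidden size, and vocabulary size are hypothetical."""
    def __init__(self, n_channels=100, hidden=256, vocab_size=250):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ecog, tokens):
        # ecog: (batch, frames, channels); tokens: (batch, words)
        _, state = self.encoder(ecog)        # summarise the recording
        dec_in = self.embed(tokens[:, :-1])  # teacher forcing: shift right
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)             # logits predicting tokens[:, 1:]

model = ECoGToText()
ecog = torch.randn(8, 500, 100)              # 8 sentences of fake ECoG frames
tokens = torch.randint(0, 250, (8, 10))      # 10-word target sentences
logits = model(ecog, tokens)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 250), tokens[:, 1:].reshape(-1))
```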

Towards non-vocal patients

The lab explains that their system needed only 40 minutes of data per participant for transfer learning, as opposed to the millions of hours normally required, to reach a level of accuracy not seen before. They stress, however, that the system is not yet ready for use with severely disabled patients, as it still relies on interpreting spoken sound, albeit a limited amount, into text.
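In practice, transfer learning here amounts to pretraining the network on one participant or task and fine-tuning it on another participant's small amount of data. A hedged sketch of that loop, reusing the hypothetical ECoGToText model above (the data loader is a stand-in, not the study's pipeline):

```python
import copy
import torch

# Hypothetical stand-in for participant B's ~40 minutes of recordings;
# in reality this would iterate over real ECoG sentence data.
participant_b_loader = [(torch.randn(8, 500, 100),
                         torch.randint(0, 250, (8, 10)))]

pretrained = ECoGToText()              # assume trained on participant A
finetuned = copy.deepcopy(pretrained)  # start from A's learned weights
opt = torch.optim.Adam(finetuned.parameters(), lr=1e-4)  # gentle updates

for ecog, tokens in participant_b_loader:
    opt.zero_grad()
    logits = finetuned(ecog, tokens)
    loss = torch.nn.functional.cross_entropy(
        logits.reshape(-1, 250), tokens[:, 1:].reshape(-1))
    loss.backward()
    opt.step()
```

Starting from another participant's weights rather than from scratch is what lets a few dozen minutes of new recordings suffice.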

The team concludes that they have developed an algorithm that can translate brain activity into text using ECoG data. Looking ahead, the researchers hope their system could one day translate brainwaves without the use of sound, aiding patients who are unable to speak or type.

Source: The Guardian
