Artificial intelligence uses EEG data to reconstruct images based on what participants perceive.

Uncovering the dynamics of facial identity processing in the brain, along with its representational basis, is a major endeavor in the study of visual processing, with obvious applications for people who cannot communicate without computational assistance.  Now, a study from researchers at the University of Toronto Scarborough has, for the first time, reconstructed images of what people perceive based on brain activity recorded with electroencephalography (EEG).  The team state that their deep-learning technique could provide a means of communication for people who are unable to communicate verbally, and could also have forensic uses, allowing law enforcement to gather eyewitness information on potential suspects rather than relying on verbal descriptions given to a sketch artist.  The study is published in the journal eNeuro.

Previous studies show that when a person sees something, their brain creates a mental percept, essentially a mental impression of that thing.  While techniques such as fMRI, which measures brain activity by detecting changes in blood flow, can capture finer spatial detail of what is happening in specific brain areas, EEG has greater practical potential because it is more common, portable, and inexpensive by comparison.  Moreover, fMRI captures activity on a time scale of seconds, whereas EEG captures activity at the millisecond scale.  The current study digitally reconstructed the images participants saw from their EEG data.

In the current study, participants were shown images of faces while hooked up to EEG equipment.  Results show that multiple temporal intervals support facial identity classification, face-space estimation, visual feature extraction, and image reconstruction.  The data indicate that it takes the human brain about 170 milliseconds (0.17 seconds) to form a good representation of a face it has just seen.
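The idea of finding when face information becomes decodable can be illustrated with a time-resolved classification sketch: train a simple classifier on the EEG signal at each time point after stimulus onset and see where accuracy peaks. Everything below is synthetic and illustrative, not the study's actual pipeline; the channel count, sampling rate, and the injected response latency of 170 ms are assumptions chosen to mirror the reported N170-range timing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 100   # synthetic epochs (assumed sizes)
sfreq = 200.0                                   # sampling rate in Hz (assumed)
times = np.arange(n_times) / sfreq              # seconds from stimulus onset

labels = rng.integers(0, 2, n_trials)           # two hypothetical face identities
eeg = rng.standard_normal((n_trials, n_channels, n_times))

# Inject an identity-dependent response peaking around 170 ms (illustrative)
peak = np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
eeg += labels[:, None, None] * peak[None, None, :] * 1.5

# Split trials into train and test halves
train = np.arange(n_trials) % 2 == 0
test = ~train

# Nearest-class-mean classification at each time point
acc = np.empty(n_times)
for t in range(n_times):
    X_tr, X_te = eeg[train, :, t], eeg[test, :, t]
    m0 = X_tr[labels[train] == 0].mean(axis=0)
    m1 = X_tr[labels[train] == 1].mean(axis=0)
    d0 = ((X_te - m0) ** 2).sum(axis=1)
    d1 = ((X_te - m1) ** 2).sum(axis=1)
    pred = (d1 < d0).astype(int)
    acc[t] = (pred == labels[test]).mean()

print(f"accuracy at 170 ms: {acc[34]:.2f}, at 0 ms: {acc[0]:.2f}")
```

Because the identity-dependent signal is injected only around 170 ms, decoding accuracy is near chance at stimulus onset and rises sharply in that window, mimicking the kind of time course the study reports.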

The group state that participants' brain activity was recorded and then used to digitally recreate the image in each subject's mind, using a technique based on machine-learning algorithms.  They add that their study validates EEG's potential for this type of image reconstruction, something many researchers doubted was possible given the technique's apparent limitations.  They conclude that EEG-based image reconstruction has great theoretical and practical potential from a neurotechnological standpoint, especially since EEG is relatively inexpensive and portable.
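The general logic of such reconstruction can be sketched as a two-step mapping: learn a regression from EEG features to coordinates in a low-dimensional "face space" (here a PCA basis over images), then invert the basis to render an image. This is a minimal sketch on fully synthetic data under assumed linear relationships; it is not the study's method, and all array sizes and the ridge penalty are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_imgs, n_pix, n_feat, k = 120, 16 * 16, 64, 8

# Synthetic stand-in for a face image set: images generated from k latent dims
basis = rng.standard_normal((k, n_pix))
images = rng.standard_normal((n_imgs, k)) @ basis

# PCA "face space": project each image onto the top-k principal components
imgs_c = images - images.mean(axis=0)
_, _, Vt = np.linalg.svd(imgs_c, full_matrices=False)
face_space = imgs_c @ Vt[:k].T           # each image as k coordinates

# Synthetic "EEG features" linearly related to face-space coordinates + noise
mixing = rng.standard_normal((k, n_feat))
eeg_feats = face_space @ mixing + 0.1 * rng.standard_normal((n_imgs, n_feat))

# Ridge regression: EEG features -> face-space coordinates
lam = 1.0
A = eeg_feats.T @ eeg_feats + lam * np.eye(n_feat)
W = np.linalg.solve(A, eeg_feats.T @ face_space)

# Map predicted coordinates back through the PCA basis to pixel space
coords_hat = eeg_feats @ W
recon = coords_hat @ Vt[:k] + images.mean(axis=0)

corr = np.corrcoef(recon.ravel(), images.ravel())[0, 1]
print(f"pixel-wise correlation between reconstructions and originals: {corr:.2f}")
```

The key design point this illustrates is that reconstruction does not go directly from brain signals to pixels: predicting a handful of face-space coordinates and decoding them through a learned image basis is far better conditioned than regressing thousands of pixel values independently.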

The team surmise they have demonstrated, for the first time, that the appearance of stimulus images can be reconstructed from EEG data.  As next steps, the researchers state that work is underway to test how EEG-based image reconstruction could draw on memory rather than perception and be applied to a wider range of objects beyond faces; the approach could eventually have wide-ranging clinical applications as well.



Source: University of Toronto Scarborough




