Artificial intelligence uses EEG data to reconstruct images based on what participants perceive.

Uncovering the dynamics of facial identity processing in the brain, along with its representational basis, has an obvious application for people who cannot communicate without the aid of computational technology.  Now, a study from researchers at the University of Toronto Scarborough demonstrates, for the first time, how to reconstruct images from brain activity gathered by electroencephalography (EEG).  The team states their deep-learning technique could provide a means of communication for people who are unable to communicate verbally, and could also have forensic uses, helping law enforcement gather more accurate eyewitness information.  The study is published in the journal eNeuro.

Previous studies have shown that when a person sees something, their brain creates a mental percept, essentially a mental impression of a real image or event.  And while techniques like fMRI can detect the finer details of activity in specific areas of the brain, EEG has greater practical potential given that it is more accessible, portable, and inexpensive by comparison.  In addition, fMRI captures activity on the timescale of seconds, whereas EEG captures activity on the millisecond scale.  The current study was able to digitally reconstruct images seen by participants based on EEG data alone.

The current study showed participants images of faces while they were hooked up to EEG equipment.  Results show that multiple temporal intervals support facial identity classification, face space estimation, extraction of visual features, and image reconstruction.  The findings indicate it takes the human brain about 170 milliseconds (0.17 seconds) to form a good representation of a face it has just seen.
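The idea that classification succeeds only in certain time windows can be illustrated with a toy sliding-window analysis. The sketch below is not the study's pipeline: it builds synthetic EEG epochs in which an identity-specific pattern appears around 170 ms, then shows that a simple nearest-centroid classifier separates two identities only when the analysis window covers that interval. All names, dimensions, and the implied sampling rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG epochs: (trials, channels, timepoints), two face identities.
# 100 samples at an assumed 250 Hz covers roughly 0-400 ms post-stimulus.
n_trials, n_channels, n_times = 40, 8, 100
labels = np.repeat([0, 1], n_trials // 2)
epochs = rng.normal(0.0, 1.0, (n_trials, n_channels, n_times))

# Inject an identity-specific spatial pattern around sample 40 (~160-220 ms),
# mimicking a face-sensitive response emerging near 170 ms.
pattern = rng.normal(0.0, 1.0, n_channels)
epochs[labels == 1, :, 40:55] += pattern[:, None]

def window_accuracy(epochs, labels, start, width=10):
    """Nearest-centroid identity classification from one time window.

    Features are the mean activity per channel within the window; even-numbered
    trials train the centroids, odd-numbered trials are held out for testing.
    """
    feats = epochs[:, :, start:start + width].mean(axis=2)  # (trials, channels)
    train = np.arange(len(labels)) % 2 == 0
    cents = np.stack([feats[train & (labels == k)].mean(axis=0) for k in (0, 1)])
    dists = np.linalg.norm(feats[~train, None, :] - cents[None, :, :], axis=2)
    pred = dists.argmin(axis=1)
    return (pred == labels[~train]).mean()

acc_early = window_accuracy(epochs, labels, start=0)    # before the signal
acc_signal = window_accuracy(epochs, labels, start=40)  # over the ~170 ms window
```

With this setup, accuracy hovers near chance in the early window and rises sharply once the window overlaps the injected response, which is the logic behind mapping which temporal intervals support identity classification.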

The group states the participants' brain activity was recorded and used to digitally recreate the image in the subject's mind using a technique based on machine learning algorithms.  They go on to add that their study shows EEG has the potential for this type of image reconstruction, something many researchers doubted was possible given its apparent limitations.  They conclude that using EEG data for image reconstruction has great theoretical and practical potential from a neurotechnological standpoint, especially since it is relatively inexpensive and portable.
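As a minimal sketch of the reconstruction idea, suppose (as the study's approach broadly does) that each face can be summarised by coordinates in a low-dimensional "face space", and assume for illustration that EEG features relate to those coordinates roughly linearly. One can then learn a least-squares map from EEG features back to face-space coordinates and evaluate it on held-out trials. Everything below, the data, dimensions, and noise level, is synthetic and hypothetical rather than the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each viewed face occupies a point in a 4-D "face space";
# EEG features are modelled as a noisy linear encoding of those coordinates.
n_trials, n_feats, n_dims = 60, 20, 4
coords = rng.normal(0.0, 1.0, (n_trials, n_dims))   # true face-space positions
mixing = rng.normal(0.0, 1.0, (n_dims, n_feats))    # unknown neural "encoding"
eeg = coords @ mixing + 0.3 * rng.normal(0.0, 1.0, (n_trials, n_feats))

train, test = slice(0, 40), slice(40, 60)

# Decoding: least-squares map from EEG features back to face-space coordinates.
W, *_ = np.linalg.lstsq(eeg[train], coords[train], rcond=None)
recon = eeg[test] @ W

# Reconstruction quality: correlation between true and decoded coordinates
# on held-out trials.
r = np.corrcoef(coords[test].ravel(), recon.ravel())[0, 1]
```

Once face-space coordinates are decoded, an image can in principle be synthesised from them, which is the step that turns EEG activity into a reconstructed picture of the face.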

The team surmises they have illustrated, for the first time, the ability to reconstruct the appearance of stimulus images from EEG data.  For the future, the researchers state work is currently underway to test how image reconstruction based on EEG data can be performed using memory and applied to a wider range of objects beyond faces.

Source: University of Toronto Scarborough
