New neuroscience research may provide us with a way to see the world through other people’s eyes.
The study, conducted as the senior thesis of Alan Cowen ’13, now a graduate student at the University of California, Berkeley, used functional magnetic resonance imaging (fMRI) to reconstruct facial images. Subjects were shown a series of faces, and the researchers recreated the images based on the subjects’ brain activity. The study was the first ever to reconstruct images of faces from brain activity alone and allowed researchers to peek into the black box of subjective experience, said Marvin Chun, Yale professor of psychology and study co-author.
“Scientists benefit from improved ways to study neural representation,” Chun said. “Many futuristic movies have elements of mind reading, so this kind of work brings science fiction closer to reality.”
The study builds upon previous research in the fields of visual perception and image reconstruction. Cowen, who was an undergraduate working in Chun’s lab at the time and is a co-author of the study, was inspired by similar studies carried out at Berkeley, Chun said.
The researchers showed subjects pictures of eigenfaces, component images statistically derived from normal faces, each highlighting a basic building block of facial appearance. During the training phase, the six subjects were presented with eigenfaces while their brain activity was recorded with fMRI, allowing the researchers to determine the locations and levels of activity triggered by each facial feature.
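The eigenface idea predates this study and comes from standard image statistics: decompose a set of face images into a mean face plus a small number of component images, so that any face can be summarized by a handful of weights. A minimal sketch of that decomposition, using synthetic data in place of real face photographs (all sizes and names here are illustrative, not from the study):

```python
import numpy as np

# Illustrative only: eigenfaces are the principal components of a set of
# face images. Random data stands in for real faces here.
rng = np.random.default_rng(0)
n_faces, h, w = 100, 16, 16              # 100 tiny 16x16 synthetic "faces"
faces = rng.normal(size=(n_faces, h * w))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD of the centered data: the rows of vt are the eigenfaces, ordered by
# how much of the variation across faces each one explains.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt                          # each row reshapes to a 16x16 image

# Any face is approximately the mean face plus a weighted sum of the first
# k eigenfaces; those k weights are a compact description of the face.
k = 10
weights = centered @ eigenfaces[:k].T    # shape (n_faces, k)
approx = mean_face + weights @ eigenfaces[:k]
```

The payoff of this representation is that recovering a face reduces to recovering a short vector of weights, which is a far easier target for a brain-decoding model than a full image.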
Activity was measured both in areas of the brain that process visual stimuli generally and in regions known to specialize in faces. The researchers wanted to know whether faces could be reconstructed from the face-specific regions alone, with activity from the general visual areas excluded, which would reveal not only how the brain sees images but how it understands faces, Cowen said.
After the initial fMRI measurements, subjects’ brains were imaged once more as they viewed 30 test faces. Without ever seeing the test faces themselves, the researchers reconstructed them by matching the measured brain activity to the activity patterns the eigenfaces had triggered during training.
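The matching step described above can be thought of as learning, during training, how eigenface weights map to voxel activity, and then inverting that map for a new face. A hedged sketch of that idea with a simple least-squares decoder and synthetic data (the study's actual pipeline is more involved; every quantity below is an assumption for illustration):

```python
import numpy as np

# Illustrative decoder: learn a linear map from brain activity to eigenface
# weights on training data, then decode a new face's weights from its
# activity alone. All data is synthetic.
rng = np.random.default_rng(1)
n_train, n_voxels, n_components = 300, 50, 10

# Training phase: known eigenface weights and the (noisy) fMRI activity
# they evoke under a hypothetical linear response model.
train_weights = rng.normal(size=(n_train, n_components))
true_map = rng.normal(size=(n_components, n_voxels))
train_activity = (train_weights @ true_map
                  + 0.1 * rng.normal(size=(n_train, n_voxels)))

# Fit the activity -> weights mapping by least squares.
decoder, *_ = np.linalg.lstsq(train_activity, train_weights, rcond=None)

# Test phase: decode a new face's eigenface weights from activity alone.
test_weights = rng.normal(size=(1, n_components))
test_activity = test_weights @ true_map
decoded = test_activity @ decoder        # estimated eigenface weights

# Combining the decoded weights with the eigenfaces would then yield the
# reconstructed face image.
```

The key point the article describes survives in this toy version: the test faces are never consulted directly; only the brain activity and the training-phase mapping are used.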
All the original eigenfaces were generated by a computer program capable of identifying basic facial features, and the objective accuracy of the final reconstructions was also tested by computational means. However, in order to ensure that the reconstructions could be useful to humans, the subjects were also asked to determine if the reconstructions accurately represented the original faces.
Both humans and computers reported that the fMRI reconstructions accurately recreated the original faces, even when brain activity from the visual processing centers was excluded.
“The method offers an exciting way to ‘see’ what is being remembered or misremembered when we recall a face,” said Brice Kuhl, professor of psychology at New York University and study co-author.
Understanding how the brain responds to faces could lead to insights in diverse fields, according to the study’s co-authors. Implicit prejudices against certain races could influence how an individual’s brain processes facial images, such as making faces of other races appear darker or angrier, Cowen said.
Disordered face processing is also a symptom of autism, Kuhl said, and this new way of reconstructing images could enable scientists to visualize how autistic individuals see faces.
This research has the potential to provide new access to what people see, experience and think, Chun said.
“Face recognition is one of the human brain’s most remarkable and important functions, so being able to decode that is significant,” Chun said.
The study appeared in the journal NeuroImage on March 17.