Face neurons encode nonsemantic features [video]
Date Posted:
June 23, 2022
Date Recorded:
June 23, 2022
CBMM Speaker(s):
Will Xiao,
Gabriel Kreiman
Description:
CBMM researchers Will Xiao and Gabriel Kreiman discuss their latest publication in PNAS and how their results suggest that so-called "face neurons" are better described as tuned to visual features rather than semantic categories.
[MUSIC PLAYING] WILL XIAO: Hello. I'm Will. I'm getting my PhD in the MCO program at Harvard, working in the Livingstone and Kreiman labs. So the Kreiman lab mostly works on computational models, particularly of vision. And the Livingstone lab mostly works on the visual cortex of macaque monkeys. And I basically work at the interface of the two, building computational models of neuronal activity in the visual cortex.
So there's this long-standing question out there about a famous type of neuron known as face neurons. These neurons have been known for decades, but people still disagree about whether they truly represent the semantic category of faces, or whether they happen to respond to visual features that correlate with faces.
GABRIEL KREIMAN: The idea of semantics is fundamental to most of how we interact with objects in the world. And the idea is that there could be objects that look very different from each other, but they belong to the same semantic category. So for example, a watermelon and a lemon are quite different from the point of view of the visual features, but they belong to the category fruits.
And in contrast, a tennis ball and a tennis court belong to the same category, yet they look very different, whereas a tennis ball may be visually more similar to a lemon. So that's the distinction between visual features and semantics.
WILL XIAO: The reason these two views have not been distinguished is that face neurons do seem to respond most strongly to faces. But on the other hand, people haven't really tried to look for other things that face neurons might respond to, to see whether all of those things count as faces. So we had the opportunity to ask this question based on previous work from the lab that developed an unbiased and automatic way to find synthetic stimuli that strongly drive visual neurons.
And we thought of applying this technique to ask this question about face neurons. Namely, can we make synthetic stimuli that drive face neurons as strongly as real faces can? And if so, do these synthetic stimuli look like faces?
GABRIEL KREIMAN: Will Xiao in our lab, together with Carlos Ponce and Marge Livingstone, developed a methodology to interrogate what neurons prefer in an unbiased and systematic fashion. What's particularly interesting and fascinating about what they did is that they were able to record the activity of neurons in a closed-loop fashion while changing the stimuli, using an artificial neural network image generator to probe the preferences of neurons in cortex.
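For intuition, here is a minimal, hypothetical sketch of such a closed-loop evolutionary search: a population of latent codes is rendered into images by a generative network, the recorded responses are used to rank the images, and the best codes are recombined and mutated to form the next generation. The generator, the simulated "neuron," and all hyperparameters below are toy stand-ins, not the actual implementation used in the study.

```python
# Toy sketch of closed-loop stimulus evolution (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
CODE_DIM, POP_SIZE, N_GENERATIONS = 64, 20, 100

def generate_image(code):
    # Stand-in for a deep generative network mapping a latent code to an image.
    return np.tanh(code).reshape(8, 8)

def neuron_response(image):
    # Stand-in for the recorded firing rate of a neuron to the presented image.
    template = np.ones((8, 8)) / 64.0
    return float((image * template).sum())

codes = rng.normal(size=(POP_SIZE, CODE_DIM))  # initial random population
for generation in range(N_GENERATIONS):
    images = [generate_image(c) for c in codes]                    # synthesize stimuli
    responses = np.array([neuron_response(im) for im in images])   # "record" responses
    survivors = codes[np.argsort(responses)[-POP_SIZE // 2:]]      # keep the best half
    parents = survivors[rng.integers(len(survivors), size=(POP_SIZE, 2))]
    codes = parents.mean(axis=1) + 0.1 * rng.normal(size=(POP_SIZE, CODE_DIM))  # recombine + mutate

best = max(codes, key=lambda c: neuron_response(generate_image(c)))  # preferred synthetic stimulus
```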
WILL XIAO: So we got these images by recording face neurons in macaque monkeys. And the next challenge was how to objectively evaluate how face-like these images are. The experiment we designed was to ask human participants on Mechanical Turk to rate how face-like the images are. We designed a series of six different experiments, six different ways of asking this question, in the hope of getting a robust answer.
So what results did we get? Well, imagine you are a participant on Mechanical Turk looking at these pictures. You might agree that they look kind of like faces, but are also clearly not real faces. And in a nutshell, that is the result we found. These synthetic images made for face neurons activated face neurons as strongly as real faces did, but they were rated by humans as much less face-like than real face photos were.
On the other hand, these images were not completely unrelated to faces either. They were rated more face-like than actual non-face objects like chairs and cars, and they were also rated slightly more face-like than evolved images made for non-face-preferring neurons. Our results very nicely address this long-standing question about face neurons: do they represent a semantic category, or do they represent visual features? I think the answer is the latter, and that makes face neurons fit much more nicely into the big picture. Previous results have shown that other neurons from the same part of the brain, namely the visual recognition part of the brain, are also better explained by visual features than by object categories.
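To make that comparison concrete, here is a small, hypothetical sketch of how such ratings might be compared across image categories. The numbers are placeholder simulated data, not the actual ratings from the study; they only mimic the qualitative ordering described above.

```python
# Toy comparison of face-likeness ratings across image categories (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ratings = {                                    # placeholder ratings, e.g. on a 1-10 scale
    "real_faces":        rng.normal(9.0, 0.5, 200),
    "synthetic_face":    rng.normal(5.5, 1.0, 200),   # evolved for face neurons
    "synthetic_nonface": rng.normal(4.5, 1.0, 200),   # evolved for non-face-preferring neurons
    "nonface_objects":   rng.normal(2.0, 0.8, 200),   # e.g. chairs and cars
}

for name, r in ratings.items():
    print(f"{name:>18}: mean face-likeness {r.mean():.2f}")

# Are images evolved for face neurons rated below real faces,
# but above images evolved for non-face-preferring neurons?
lower = stats.mannwhitneyu(ratings["synthetic_face"], ratings["real_faces"], alternative="less")
higher = stats.mannwhitneyu(ratings["synthetic_face"], ratings["synthetic_nonface"], alternative="greater")
print(f"synthetic (face neurons) < real faces:           p = {lower.pvalue:.3g}")
print(f"synthetic (face neurons) > synthetic (non-face): p = {higher.pvalue:.3g}")
```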
GABRIEL KREIMAN: So at some point, we will have to deal with the intersection between vision and language, and between vision and semantic information. We use vision to navigate the world, to understand the world, to act, to make plans, and so on. So at some point, we need to understand how semantics is encoded. We don't think that this is encoded directly in the activity of neurons in visual cortex, in areas like primary visual cortex, inferior temporal cortex, and so on. So now we need to go beyond visual cortex and begin to elucidate that fundamental transformation of the alphabet of visual features into semantics. And that will bring us to the fascinating domain of connecting vision to language, and connecting vision to cognition in general.
I want to especially acknowledge Alex's work. She started as an undergrad in the lab for just one summer, but she really got the project going and collected all of the human psychophysics data that eventually turned into this paper.
[MUSIC PLAYING]