1
Noad KN, Watson DM, Andrews TJ. Familiarity enhances functional connectivity between visual and nonvisual regions of the brain during natural viewing. Cereb Cortex 2024; 34:bhae285. PMID: 39038830. DOI: 10.1093/cercor/bhae285.
Abstract
We explored the neural correlates of familiarity with people and places using a naturalistic viewing paradigm. Neural responses were measured using functional magnetic resonance imaging while participants viewed a movie taken from Game of Thrones. We compared inter-subject correlations and functional connectivity in participants who were either familiar or unfamiliar with the TV series. Higher inter-subject correlations were found between familiar participants in regions beyond the visual brain that are typically associated with the processing of semantic, episodic, and affective information. Familiarity also increased functional connectivity between face and scene regions in the visual brain and the nonvisual regions of the familiarity network. To determine whether these regions play an important role in face recognition, we measured responses in participants with developmental prosopagnosia (DP). Consistent with a deficit in face recognition, the effect of familiarity was significantly attenuated across the familiarity network in DP, as was the effect of familiarity on functional connectivity between face regions and the familiarity network. These results show that the neural response to familiarity involves an extended network of brain regions and that functional connectivity between visual and nonvisual regions of the brain plays an important role in the recognition of people and places during natural viewing.
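The leave-one-out inter-subject correlation at the heart of this comparison can be sketched in a few lines. This is a generic illustration of the approach, not the authors' analysis code; the function name, sampling values, and toy data are invented for the example.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation (ISC).

    data: array (n_subjects, n_timepoints) holding one region's
    response time course per subject. Returns one ISC value per
    subject: the Pearson correlation between that subject's time
    course and the mean time course of all remaining subjects.
    """
    n = data.shape[0]
    iscs = []
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

# Toy example: 4 "subjects" sharing a common stimulus-driven signal
# plus idiosyncratic noise, so ISC should be well above zero.
rng = np.random.default_rng(0)
signal = rng.standard_normal(200)
subjects = signal + 0.5 * rng.standard_normal((4, 200))
print(intersubject_correlation(subjects))  # values well above 0
```

In the study's logic, ISC computed within the familiar group versus the unfamiliar group would then be compared region by region.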
Affiliation(s)
- Kira N Noad
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
- David M Watson
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
- Timothy J Andrews
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
2
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708. PMCID: PMC10899073. DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex, and what is the contribution of early experience? We review insights into these questions from studies of visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What this 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher-cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting that visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
3
Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. PMID: 37926217. DOI: 10.1016/j.neuroimage.2023.120442.
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotion (happiness, sadness, anger, disgust, fear, surprise, and neutral). Time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than the FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) and performed similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of the information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
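Time-resolved decoding of the kind described above can be illustrated with a minimal sketch: train and test a classifier separately at each time point and track accuracy over time. This is a generic leave-one-out nearest-centroid version, not the authors' MEG pipeline; the toy data, dimensions, and onset time are invented for the example.

```python
import numpy as np

def decode_timecourse(data, labels):
    """Illustrative time-resolved decoding: at each time point,
    classify trials with a leave-one-out nearest-centroid rule
    and return decoding accuracy as a function of time.

    data: (n_trials, n_sensors, n_times); labels: (n_trials,)
    """
    n_trials, _, n_times = data.shape
    classes = np.unique(labels)
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for i in range(n_trials):
            train = np.delete(np.arange(n_trials), i)
            centroids = np.stack([
                data[train][labels[train] == c, :, t].mean(axis=0)
                for c in classes])
            d = np.linalg.norm(centroids - data[i, :, t], axis=1)
            correct += classes[np.argmin(d)] == labels[i]
        acc[t] = correct / n_trials
    return acc

# Toy MEG-like data: a class-specific signal appears only after
# "stimulus onset" (second half of the epoch), so decoding accuracy
# should sit at chance early and rise late.
rng = np.random.default_rng(4)
labels = np.repeat([0, 1], 20)
data = rng.standard_normal((40, 10, 20))
data[labels == 1, :, 10:] += 1.5   # class-specific response post-onset
acc = decode_timecourse(data, labels)
print(acc[:10].mean(), acc[10:].mean())  # near chance, then high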
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
4
Coutanche MN, Sauter J, Akpan E, Buckser R, Vincent A, Caulfield MK. Novel approaches to functional lateralization: Assessing information in activity patterns across hemispheres and more accurately identifying structural homologues. Neuropsychologia 2023; 190:108684. PMID: 37741550. DOI: 10.1016/j.neuropsychologia.2023.108684.
Abstract
Functional lateralization is typically measured by comparing activation levels across the right and left hemispheres of the brain. Significant additional information, however, exists within distributed multi-voxel patterns of activity, a format not detectable by traditional activation-based analysis of functional magnetic resonance imaging (fMRI) data. We introduce and test two methods, one anatomical and one functional, that allow hemispheric information asymmetries to be detected. We first introduce and apply a novel tool that draws on brain 'surface fingerprints' to pair every location in one hemisphere with its hemispheric homologue. We use anatomical data to show that this approach is more accurate than the common distance-from-midline method for comparing bilateral regions. Next, we introduce a complementary analysis method that quantifies multivariate laterality in functional data. This new 'multivariate laterality index' (mLI) reflects both quantitative and qualitative information differences across homologous activity patterns. We apply the technique here to functional data collected as participants viewed faces and non-faces. Using the previously generated surface fingerprints to pair up homologous searchlights in each hemisphere, we use the novel multivariate laterality technique to identify face-information asymmetries across right and left counterparts of the fusiform gyrus, inferior temporal gyrus, superior parietal lobule, and early visual areas. The typical location of the fusiform face area has greater information asymmetry for faces than for shapes. More generally, we argue that the field should consider an information-based approach to lateralization.
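As an illustration of the idea behind the mLI, the sketch below scores the asymmetry between homologous left- and right-hemisphere activity patterns. The abstract does not give the index's exact formula, so a simple stand-in is used here (1 minus the Pearson correlation of the two patterns); the function name and toy data are invented.

```python
import numpy as np

def multivariate_laterality(left, right):
    """Illustrative multivariate laterality score (not the authors'
    exact formula): 1 - Pearson correlation between homologous
    left- and right-hemisphere multi-voxel patterns. Values near 0
    mean near-identical patterns; larger values mean greater
    information asymmetry across hemispheres."""
    r = np.corrcoef(left, right)[0, 1]
    return 1.0 - r

rng = np.random.default_rng(1)
pattern = rng.standard_normal(100)
print(multivariate_laterality(pattern, pattern))   # ~0: no asymmetry
print(multivariate_laterality(pattern,
                              rng.standard_normal(100)))  # near 1
```

In the paper's workflow, such a score would be computed per searchlight pair identified via the surface fingerprints, then compared across conditions (e.g., faces versus shapes).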
Affiliation(s)
- Marc N Coutanche
- Department of Psychology, University of Pittsburgh, PA, 15260, USA; Learning Research & Development Center, University of Pittsburgh, PA, 15260, USA; Brain Institute, University of Pittsburgh, PA, 15260, USA
- Jake Sauter
- State University of New York at Oswego, Oswego, NY, USA
- Essang Akpan
- Department of Psychology, University of Pittsburgh, PA, 15260, USA; Learning Research & Development Center, University of Pittsburgh, PA, 15260, USA
- Rae Buckser
- Department of Psychology, University of Pittsburgh, PA, 15260, USA; Learning Research & Development Center, University of Pittsburgh, PA, 15260, USA
- Augusta Vincent
- Department of Psychology, University of Pittsburgh, PA, 15260, USA; Learning Research & Development Center, University of Pittsburgh, PA, 15260, USA
5
Wang A, Sliwinska MW, Watson DM, Smith S, Andrews TJ. Distinct patterns of neural response to faces from different races in humans and deep networks. Soc Cogn Affect Neurosci 2023; 18:nsad059. PMID: 37837305. PMCID: PMC10634630. DOI: 10.1093/scan/nsad059.
Abstract
Social categories such as the race or ethnicity of an individual are typically conveyed by the visual appearance of the face. The aim of this study was to explore how these differences in facial appearance are represented in human and artificial neural networks. First, we compared the similarity of faces from different races using a neural network trained to discriminate identity. We found that the differences between races were most evident in the fully connected layers of the network. Although these layers were also able to predict behavioural judgements of face identity from human participants, performance was biased toward White faces. Next, we measured the neural response in face-selective regions of the human brain to faces from different races in Asian and White participants. We found distinct patterns of response to faces from different races in face-selective regions. We also found that the spatial pattern of response was more consistent across participants for own-race compared to other-race faces. Together, these findings show that faces from different races elicit different patterns of response in human and artificial neural networks. These differences may underlie the ability to make categorical judgements and explain the behavioural advantage for the recognition of own-race faces.
Affiliation(s)
- Ao Wang
- Department of Psychology, University of York, York YO10 5DD, UK
- Department of Psychology, University of Southampton, Southampton SO17 1BJ, UK
- Magdalena W Sliwinska
- Department of Psychology, University of York, York YO10 5DD, UK
- School of Psychology, Liverpool John Moores University, Liverpool L2 2QP, UK
- David M Watson
- Department of Psychology, University of York, York YO10 5DD, UK
- Sam Smith
- Department of Psychology, University of York, York YO10 5DD, UK
6
Laurent MA, Audurier P, De Castro V, Gao X, Durand JB, Jonas J, Rossion B, Cottereau BR. Towards an optimization of functional localizers in non-human primate neuroimaging with (fMRI) frequency-tagging. Neuroimage 2023; 270:119959. PMID: 36822249. DOI: 10.1016/j.neuroimage.2023.119959.
Abstract
Non-human primate (NHP) neuroimaging can provide essential insights into the neural basis of human cognitive functions. While functional magnetic resonance imaging (fMRI) localizers can play an essential role in reaching this objective (Russ et al., 2021), they often differ substantially across species in terms of paradigms, measured signals, and data analysis, biasing the comparisons. Here we introduce a functional frequency-tagging face localizer for NHP imaging, successfully developed in humans and outperforming standard face localizers (Gao et al., 2018). fMRI recordings were performed in two awake macaques. Within a rapid 6 Hz stream of natural non-face object images, human or monkey face stimuli were presented in bursts every 9 s. We also included control conditions with phase-scrambled versions of all images. As in humans, face-selective activity was objectively identified and quantified at the peak of the face-stimulation frequency (0.111 Hz) and its second harmonic (0.222 Hz) in the Fourier domain. Focal activations with a high signal-to-noise ratio were observed in regions previously described as face-selective, mainly in the STS (clusters PL, ML, MF; also AL, AF), both for human and monkey faces. Robust face-selective activations were also found in the prefrontal cortex of one monkey (PVL and PO clusters). Face-selective neural activity was highly reliable and excluded all contributions from low-level visual cues contained in the amplitude spectrum of the stimuli. These observations indicate that fMRI frequency-tagging provides a highly valuable approach to objectively compare human and monkey visual recognition systems within the same framework.
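The Fourier-domain quantification described above can be sketched numerically: with face bursts every 9 s, a face-selective response appears as a spectral peak at 1/9 ≈ 0.111 Hz. The sampling rate, run length, and simulated time course below are invented for the illustration and are not the study's acquisition parameters.

```python
import numpy as np

fs = 2.0          # assumed sampling rate (Hz) of the time course
duration = 450.0  # assumed run length (s): 50 face cycles of 9 s
t = np.arange(0, duration, 1 / fs)
f_face = 1 / 9    # face bursts every 9 s -> 0.111 Hz tagging frequency

# Simulated time course: a response locked to the face frequency
# plus broadband noise.
rng = np.random.default_rng(2)
ts = np.sin(2 * np.pi * f_face * t) + 0.3 * rng.standard_normal(t.size)

# Amplitude spectrum; a face-selective response shows up as a peak
# at 0.111 Hz (and its second harmonic, 0.222 Hz).
amp = np.abs(np.fft.rfft(ts)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(amp[1:]) + 1]   # skip the DC bin
print(round(peak, 3))  # 0.111
```

Because the response is confined to known frequency bins, face selectivity can be quantified objectively as the amplitude at those bins relative to the surrounding noise floor.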
Affiliation(s)
- Pauline Audurier
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3 Paul Sabatier, CNRS, 31052 Toulouse, France
- Vanessa De Castro
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3 Paul Sabatier, CNRS, 31052 Toulouse, France
- Xiaoqing Gao
- Center for Psychological Sciences, Zhejiang University, Hangzhou City, China
- Jean-Baptiste Durand
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3 Paul Sabatier, CNRS, 31052 Toulouse, France
- Jacques Jonas
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de neurologie, F-54000, France
- Bruno Rossion
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France
- Benoit R Cottereau
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3 Paul Sabatier, CNRS, 31052 Toulouse, France
7
Revsine C, Gonzalez-Castillo J, Merriam EP, Bandettini PA, Ramírez FM. A unifying model for discordant and concordant results in human neuroimaging studies of facial viewpoint selectivity. bioRxiv [Preprint] 2023:2023.02.08.527219. PMID: 36945636. PMCID: PMC10028835. DOI: 10.1101/2023.02.08.527219.
Abstract
Our ability to recognize faces regardless of viewpoint is a key property of the primate visual system. Traditional theories hold that facial viewpoint is represented by view-selective mechanisms at early visual processing stages and that representations become increasingly tolerant to viewpoint changes in higher-level visual areas. Newer theories, based on single-neuron monkey electrophysiological recordings, suggest an additional intermediate processing stage invariant to mirror-symmetric face views. Consistent with traditional theories, human studies combining neuroimaging and multivariate pattern analysis (MVPA) methods have provided evidence of view-selectivity in early visual cortex. However, contradictory results have been reported in higher-level visual areas concerning the existence in humans of mirror-symmetrically tuned representations. We believe these results reflect low-level stimulus confounds and data analysis choices. To probe for low-level confounds, we analyzed images from two popular face databases. Analyses of mean image luminance and contrast revealed biases across face views described by even polynomials, i.e., mirror-symmetric biases. To explain major trends across human neuroimaging studies of viewpoint selectivity, we constructed a network model that incorporates three biological constraints: cortical magnification, convergent feedforward projections, and interhemispheric connections. Given the identified low-level biases, we show that a gradual increase of interhemispheric connections across network layers is sufficient to replicate findings of mirror-symmetry in high-level processing stages, as well as view-tuning in early processing stages. Data analysis decisions (pattern dissimilarity measure and data recentering) accounted for the variable observation of mirror-symmetry in late processing stages. The model provides a unifying explanation of MVPA studies of viewpoint selectivity. We also show how common analysis choices can lead to erroneous conclusions.
Affiliation(s)
- Cambria Revsine
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
- Department of Psychology, University of Chicago, Chicago, IL
- Javier Gonzalez-Castillo
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
- Peter A Bandettini
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
- Functional MRI Core, National Institutes of Health, Bethesda, MD
- Fernando M Ramírez
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD
8
Elson R, Schluppeck D, Johnston A. fMRI evidence that hyper-caricatured faces activate object-selective cortex. Front Psychol 2023; 13:1035524. PMID: 36710782. PMCID: PMC9878608. DOI: 10.3389/fpsyg.2022.1035524.
Abstract
Many brain imaging studies have looked at the cortical responses to object categories and faces. A popular way to manipulate face stimuli is by using a "face space," a high-dimensional representation of individual face images, with the average face located at the origin. However, how the brain responds to faces that deviate substantially from average has not been much explored. Increasing the distance from the average (leading to increased caricaturing) could increase neural responses in face-selective regions, an idea supported by results from non-human primates. Here, we used a face space based on principal component analysis (PCA) to generate faces ranging from average to heavily caricatured. Using functional magnetic resonance imaging (fMRI), we first independently defined face-, object- and scene-selective areas with a localiser scan and then measured responses to parametrically caricatured faces. We also included conditions in which the images of faces were inverted. Interestingly, in the right fusiform face area (FFA), we found that the patterns of fMRI response were more consistent as caricaturing increased. However, we found no consistent effect of either caricature level or facial inversion on the average fMRI response in the FFA or face-selective regions more broadly. In contrast, object-selective regions showed an increase in both the consistency of response pattern and the average fMRI response with increasing caricature level. This shows that caricatured faces recruit processing from regions typically defined as object-selective, possibly through enhancing low-level properties that are characteristic of objects.
9
Coggan DD, Watson DM, Wang A, Brownbridge R, Ellis C, Jones K, Kilroy C, Andrews TJ. The representation of shape and texture in category-selective regions of ventral-temporal cortex. Eur J Neurosci 2022; 56:4107-4120. PMID: 35703007. PMCID: PMC9545892. DOI: 10.1111/ejn.15737.
Abstract
Neuroimaging studies using univariate and multivariate approaches have shown that the fusiform face area (FFA) and parahippocampal place area (PPA) respond selectively to images of faces and places. The aim of this study was to determine the extent to which this selectivity to faces or places is based on the shape or texture properties of the images. Faces and houses were filtered to manipulate their texture properties, while preserving the shape properties (spatial envelope) of the images. In Experiment 1, multivariate pattern analysis (MVPA) showed that patterns of fMRI response to faces and houses in FFA and PPA were predicted by the shape properties, but not by the texture properties of the image. In Experiment 2, a univariate analysis (fMR-adaptation) showed that responses in the FFA and PPA were sensitive to changes in both the shape and texture properties of the image. These findings can be explained by the spatial scale of the representation of images in the FFA and PPA. At a coarser scale (revealed by MVPA), the neural selectivity to faces and houses is sensitive to variation in the shape properties of the image. However, at a finer scale (revealed by fMR-adaptation), the neural selectivity is sensitive to the texture properties of the image. By combining these neuroimaging paradigms, our results provide insights into the spatial scale of the neural representation of faces and places in the ventral-temporal cortex.
Affiliation(s)
- David D Coggan
- Department of Psychology, University of York, York, UK; Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
- Ao Wang
- Department of Psychology, University of York, York, UK
- Kathryn Jones
- Department of Psychology, University of York, York, UK
10
Rogers D, Andrews TJ. The emergence of view-symmetric neural responses to familiar and unfamiliar faces. Neuropsychologia 2022; 172:108275. PMID: 35660513. DOI: 10.1016/j.neuropsychologia.2022.108275.
Abstract
Successful recognition of familiar faces is thought to depend on the ability to integrate view-dependent representations of a face into a view-invariant representation. It has been proposed that a key intermediate step in achieving view invariance is the representation of symmetrical views. However, key unresolved questions remain, such as whether these representations are specific for naturally occurring changes in viewpoint and whether view-symmetric representations exist for familiar faces. To address these issues, we compared behavioural and neural responses to natural (canonical) and unnatural (non-canonical) rotations of the face. Similarity judgements revealed that symmetrical viewpoints were perceived to be more similar than non-symmetrical viewpoints for both canonical and non-canonical rotations. Next, we measured patterns of neural response from early to higher-level regions of visual cortex. Early visual areas showed a view-dependent representation for natural or canonical rotations of the face, such that the similarity between patterns of response was related to the difference in rotation. View-symmetric patterns of neural response to canonically rotated faces emerged in higher visual areas, particularly in face-selective regions. The emergence of a view-symmetric representation from a view-dependent representation for canonical rotations of the face was also evident for familiar faces, suggesting that view-symmetry is an important intermediate step in generating view-invariant representations. Finally, we measured neural responses to unnatural or non-canonical rotations of the face. View-symmetric patterns of response were also found in face-selective regions. However, in contrast to natural or canonical rotations of the face, these view-symmetric responses did not arise from an initial view-dependent representation in early visual areas. This suggests differences in the way that view-symmetrical representations emerge with canonical or non-canonical rotations. The similarity in the neural response to canonical views of familiar and unfamiliar faces in the core face network suggests that the neural correlates of familiarity emerge at later stages of processing.
Affiliation(s)
- Daniel Rogers
- Department of Psychology, University of York, York, YO10 5DD, United Kingdom
- Timothy J Andrews
- Department of Psychology, University of York, York, YO10 5DD, United Kingdom
11
Cattaneo Z, Bona S, Ciricugno A, Silvanto J. The chronometry of symmetry detection in the lateral occipital (LO) cortex. Neuropsychologia 2022; 167:108160. PMID: 35038443. DOI: 10.1016/j.neuropsychologia.2022.108160.
Abstract
The lateral occipital cortex (LO) has been shown to code the presence of both vertical and horizontal visual symmetry in dot patterns. However, the specific time window at which LO is causally involved in symmetry encoding has not been investigated. This was assessed using a chronometric transcranial magnetic stimulation (TMS) approach. Participants were presented with a series of dot configurations and instructed to judge whether they were symmetric along the vertical axis or not while receiving a double pulse of TMS over either the right LO (rLO) or the vertex (baseline) at different time windows (ranging from 50 ms to 290 ms from stimulus onset). We found that TMS delivered over the rLO significantly decreased participants' accuracy in discriminating symmetric from non-symmetric patterns when TMS was applied between 130 ms and 250 ms from stimulus onset, suggesting that LO is causally involved in symmetry perception within this time window. These findings confirm and extend prior neuroimaging and ERP evidence by demonstrating not only that LO is causally involved in symmetry encoding but also that its contribution occurs in a relatively large temporal window, at least in tasks requiring fast discrimination of mirror symmetry in briefly (75 ms) presented patterns as in our study.
Affiliation(s)
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Silvia Bona
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Juha Silvanto
- School of Psychology, University of Surrey, Surrey, UK
12
Murray T, O'Brien J, Sagiv N, Garrido L. The role of stimulus-based cues and conceptual information in processing facial expressions of emotion. Cortex 2021; 144:109-132. PMID: 34666297. DOI: 10.1016/j.cortex.2021.08.007.
Abstract
Face shape and surface texture are two important cues that aid the perception of facial expressions of emotion. This perception is also influenced by high-level emotion concepts. Across two studies, we used representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We constructed three models of the similarities between emotions, each based on distinct information: two based on stimulus-based cues (face shapes and surface textures) and one based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The face-shape model was more strongly related to behaviour in the perceptual task than in the categorical task, whereas the surface-texture model was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing measurement of the representational geometries of facial expressions of emotion in three core face-responsive regions (the fusiform face area, occipital face area, and superior temporal sulcus) and a region involved in theory of mind (medial prefrontal cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
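The multiple linear regression described in this abstract, relating behavioural dissimilarities to candidate model dissimilarities, can be sketched as below. This is a minimal illustration with synthetic data, not the study's actual pipeline; the array names, sizes, and simulated weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_emotions = 6                       # e.g., six expression categories
n_pairs = n_emotions * (n_emotions - 1) // 2

# Hypothetical dissimilarity vectors (upper triangle of each model RDM),
# standing in for the shape, texture, and concept models.
shape_rdm = rng.random(n_pairs)
texture_rdm = rng.random(n_pairs)
concept_rdm = rng.random(n_pairs)

# Hypothetical behavioural dissimilarities, simulated so that emotion
# concepts carry most of the signal.
behaviour = 0.2 * shape_rdm + 0.7 * concept_rdm + 0.05 * rng.random(n_pairs)

# Multiple linear regression: behaviour ~ shape + texture + concept.
X = np.column_stack([shape_rdm, texture_rdm, concept_rdm])
X = (X - X.mean(0)) / X.std(0)       # z-score predictors for comparable betas
y = (behaviour - behaviour.mean()) / behaviour.std()
design = np.column_stack([np.ones(n_pairs), X])
betas, *_ = np.linalg.lstsq(design, y, rcond=None)
print(dict(zip(["intercept", "shape", "texture", "concept"], betas.round(2))))
```

With data simulated this way, the concept predictor receives the largest standardized coefficient, mirroring the qualitative pattern the abstract reports.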
Affiliation(s)
- Thomas Murray: Psychology Department, School of Biological and Behavioural Sciences, Queen Mary University London, United Kingdom
- Justin O'Brien: Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Noam Sagiv: Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Lúcia Garrido: Department of Psychology, City, University of London, United Kingdom
13. FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. [PMID: 33452225] [DOI: 10.1523/jneurosci.1449-20.2020]
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: the fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (the OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in the FFA were accounted for by differences in perceived similarity, social traits, and gender, and by the OpenFace network. In contrast, representational distances in the OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although the FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.

SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is, however, unclear whether these regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in the fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas the occipital face area primarily represented lower-level image features.
14. Ramírez FM, Revsine C, Merriam EP. What do across-subject analyses really tell us about neural coding? Neuropsychologia 2020; 143:107489. [PMID: 32437761] [PMCID: PMC8596303] [DOI: 10.1016/j.neuropsychologia.2020.107489]
Abstract
A key challenge in human neuroscience is to gain information about patterns of neural activity using indirect measures. Multivariate pattern analysis methods that test for generalization of information across subjects have been used to support inferences regarding neural coding. One critical assumption of an important class of such methods is that anatomical normalization is suited to aligning spatially structured neural patterns across individual brains. We asked whether anatomical normalization is suited for this purpose and, if not, what sources of information such across-subject cross-validated analyses are likely to reveal. To investigate these questions, we implemented two-layered, feedforward, randomly connected networks. A key feature of these simulations was a gain field with a spatial structure shared across networks. To investigate whether total-signal imbalances across conditions (e.g., differences in overall activity) affect the observed pattern of results, we manipulated the energy profile of images conforming to a pre-specified correlation structure. To investigate whether the granularity of the data also influences results, we manipulated the density of connections between network layers. Simulations showed that anatomical normalization is unsuited to aligning neural representations. Pattern-similarity relationships were explained by the observed total-signal imbalances across conditions. Further, we observed that deceptively complex representational structures emerge from arbitrary analysis choices, such as whether the data are mean-subtracted during preprocessing. These simulations also led to testable predictions regarding the distribution of low-level features in images used in recent fMRI studies that relied on leave-one-subject-out pattern analyses; image analyses broadly confirmed these predictions. Finally, hyperalignment emerged as a principled alternative for testing across-subject generalization of spatially structured information. We illustrate cases in which hyperalignment proved successful, as well as cases in which it only partially recovered the latent correlation structure in the pattern of responses. Our results highlight the need for robust, high-resolution measurements from individual subjects, and we offer a way forward for across-subject analyses: informing hyperalignment results with estimates of the strength of the signal associated with each condition. Such information can usefully constrain ensuing inferences regarding latent representational structures and population tuning dimensions.
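The contrast this abstract draws between anatomical alignment and hyperalignment can be illustrated with an orthogonal Procrustes fit, which is the core step of hyperalignment-style methods. This is a toy sketch on synthetic data (all names and sizes are mine, not the paper's): one subject's condition-by-voxel response matrix is a rotated version of another's, so a voxel-to-voxel "anatomical" correspondence fails while a learned rotation succeeds.

```python
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_voxels = 20, 50

# Subject A's response patterns (conditions x voxels). Subject B expresses
# the same latent geometry through a different "voxel basis" plus noise.
A = rng.standard_normal((n_conditions, n_voxels))
Q_true, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))
B = A @ Q_true + 0.01 * rng.standard_normal((n_conditions, n_voxels))

# Orthogonal Procrustes: find the rotation R minimising ||A @ R - B||_F.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

# Voxel-to-voxel ("anatomical") correspondence vs. the fitted rotation.
err_anatomical = np.linalg.norm(A - B)
err_hyper = np.linalg.norm(A @ R - B)
print(err_anatomical, err_hyper)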
Affiliation(s)
- Fernando M Ramírez: Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Building 10, Rm 4C118, Bethesda, MD, 20892-1366, USA
- Cambria Revsine: Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Building 10, Rm 4C118, Bethesda, MD, 20892-1366, USA
- Elisha P Merriam: Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Building 10, Rm 4C118, Bethesda, MD, 20892-1366, USA
15. Fan X, Wang F, Shao H, Zhang P, He S. The bottom-up and top-down processing of faces in the human occipitotemporal cortex. eLife 2020; 9:e48764. [PMID: 31934855] [PMCID: PMC7000216] [DOI: 10.7554/elife.48764]
Abstract
Although face processing has been studied extensively, the dynamics of how face-selective cortical areas are engaged remain unclear. Here, we uncovered the timing of activation in core face-selective regions using functional magnetic resonance imaging and magnetoencephalography in humans. Processing of normal faces started in the posterior occipital areas and then proceeded to anterior regions. This bottom-up processing sequence was observed even when internal facial features were misarranged. However, processing of two-tone Mooney faces lacking explicit prototypical facial features engaged top-down projection from the right posterior fusiform face area to the right occipital face area. Further, face-specific responses elicited by contextual cues alone emerged simultaneously in the right ventral face-selective regions, suggesting parallel contextual facilitation. Together, our findings chronicle the precise timing of bottom-up, top-down, and context-facilitated processing sequences in the occipitotemporal face network, highlighting the importance of top-down operations, especially when faced with incomplete or ambiguous input.
Affiliation(s)
- Xiaoxu Fan: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Fan Wang: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Hanyu Shao: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Peng Zhang: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Sheng He: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Minnesota, Minneapolis, United States
16. Quantifying the effect of viewpoint changes on sensitivity to face identity. Vision Res 2019; 165:1-12. [DOI: 10.1016/j.visres.2019.09.006]
17. Inamizu S, Yamada E, Ogata K, Uehara T, Kira JI, Tobimatsu S. Neuromagnetic correlates of hemispheric specialization for face and word recognition. Neurosci Res 2019; 156:108-116. [PMID: 31730780] [DOI: 10.1016/j.neures.2019.11.006]
Abstract
The adult human brain appears to have specialized and independent neural systems for the visual processing of faces and words, with greater selectivity for faces in the right hemisphere (RH) and greater selectivity for words in the left hemisphere (LH). Nevertheless, the nature of the functional differences between the hemispheres is still largely unknown. To elucidate the hemispheric specialization for face and word recognition, event-related magnetic fields (ERFs) were recorded in young adults while they passively viewed faces and words presented either in the right visual field or in the left visual field. If the neural correlates of face recognition and word recognition reflect the same lateralization profile, then the lateralization of the magnetic source of the M170 component should follow a similar profile: a greater M170 response for faces in the RH and a greater M170 response for words in the LH. We observed the expected larger M170 in the LH for words. Unexpectedly, a larger M170 response in the RH for faces was not found. Thus, the hemispheric organization of face recognition differs from that of word recognition in terms of specificity.
Affiliation(s)
- Saeko Inamizu: Department of Clinical Neurophysiology, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan; Department of Neurology, Neurological Institute, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Emi Yamada: Department of Clinical Neurophysiology, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Katsuya Ogata: Department of Clinical Neurophysiology, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Taira Uehara: Department of Clinical Neurophysiology, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Jun-Ichi Kira: Department of Neurology, Neurological Institute, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Shozo Tobimatsu: Department of Clinical Neurophysiology, Faculty of Medicine, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
18. Roberge A, Duncan J, Fiset D, Brisson B. Dual-Task Interference on Early and Late Stages of Facial Emotion Detection Is Revealed by Human Electrophysiology. Front Hum Neurosci 2019; 13:391. [PMID: 31780912] [PMCID: PMC6856761] [DOI: 10.3389/fnhum.2019.00391]
Abstract
Rapid and accurate processing of potential social threats is paramount to social thriving, and provides a clear evolutionary advantage. Though automatic processing of facial expressions has been assumed for some time, some researchers now question the extent to which this is the case. Here, we provide electrophysiological data from a psychological refractory period (PRP) dual-task paradigm in which participants had to decide whether a target face exhibited a neutral or fearful expression, as overlap with a concurrent auditory tone categorization task was experimentally manipulated. Specifically, we focused on four event-related potentials (ERP) linked to emotional face processing, covering distinct processing stages and topography: the early posterior negativity (EPN), early frontal positivity (EFP), late positive potential (LPP), and also the face-sensitive N170. As expected, there was an emotion modulation of each ERP. Most importantly, there was a significant attenuation of this emotional response proportional to the degree of task overlap for each component, except the N170. In fact, when the central overlap was greatest, this emotion-specific amplitude was statistically null for the EFP and LPP, and only marginally different from zero for the EPN. N170 emotion modulation was, on the other hand, unaffected by central overlap. Thus, our results show that emotion-specific ERPs for three out of four processing stages—i.e., perceptual encoding (EPN), emotion detection (EFP), or content evaluation (LPP)—are attenuated and even eliminated by central resource scarcity. Models assuming automatic processing should be revised to account for these results.
Affiliation(s)
- Amélie Roberge: Département de Psychologie, Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada; Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Justin Duncan: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada; Département de Psychologie, Université du Québec à Montréal, Montreal, QC, Canada
- Daniel Fiset: Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Benoit Brisson (correspondence): Département de Psychologie, Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada
19. Coggan DD, Giannakopoulou A, Ali S, Goz B, Watson DM, Hartley T, Baker DH, Andrews TJ. A data-driven approach to stimulus selection reveals an image-based representation of objects in high-level visual areas. Hum Brain Mapp 2019; 40:4716-4731. [PMID: 31338936] [DOI: 10.1002/hbm.24732]
Abstract
The ventral visual pathway is directly involved in the perception and recognition of objects. However, the extent to which the neural representation of objects in this region reflects low-level or high-level properties remains unresolved. A problem in resolving this issue is that only a small proportion of the objects experienced during natural viewing can be shown during a typical experiment. This can lead to an uneven sampling of objects that biases our understanding of how they are represented. To address this issue, we developed a data-driven approach to stimulus selection that involved describing a large number of objects in terms of their image properties. In the first experiment, clusters of objects were evenly selected from this multi-dimensional image space. Although the clusters did not have any consistent semantic features, each elicited a distinct pattern of neural response. In the second experiment, we asked whether high-level, category-selective patterns of response could be elicited by objects from other categories, but with similar image properties. Object clusters were selected based on the similarity of their image properties to objects from five different categories (bottle, chair, face, house, and shoe). The pattern of response to each metameric object cluster was similar to the pattern elicited by objects from the corresponding category. For example, the pattern for bottles was similar to the pattern for objects with similar image properties to bottles. In both experiments, the patterns of response were consistent across participants, providing evidence for common organising principles. This study provides a more ecological approach to understanding the perceptual representations of objects and reveals the importance of image properties.
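The selection of object clusters from a multi-dimensional image space can be sketched with a simple k-means partition of image-property descriptors. The feature matrix and cluster count below are hypothetical stand-ins, not the study's actual descriptors or parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_objects, n_features = 500, 10      # hypothetical image-property descriptors

# Each object is described by a vector of image properties (e.g., GIST-like
# summary statistics); here they are simulated at random.
features = rng.standard_normal((n_objects, n_features))

def kmeans(X, k=5, iters=50):
    """Minimal k-means: partition the image space into k clusters."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each object to its nearest centre.
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        # Recompute centres, keeping the old centre if a cluster empties.
        centres = np.stack([
            X[labels == j].mean(0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
    return labels, centres

labels, centres = kmeans(features, k=5)
print(np.bincount(labels, minlength=5))   # objects per cluster
```

Each cluster then serves as a stimulus set defined purely by image properties, with no semantic category imposed, which is the logic of the first experiment described above.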
Affiliation(s)
- Sanah Ali: Department of Psychology, University of York, York, UK
- Burcu Goz: Department of Psychology, University of York, York, UK
- David M Watson: School of Psychology, The University of Nottingham, Nottingham, UK
- Tom Hartley: Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK
- Daniel H Baker: Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK
20. Symmetrical Viewpoint Representations in Face-Selective Regions Convey an Advantage in the Perception and Recognition of Faces. J Neurosci 2019; 39:3741-3751. [PMID: 30842248] [DOI: 10.1523/jneurosci.1977-18.2019]
Abstract
Learning new identities is crucial for effective social interaction. A critical aspect of this process is the integration of different images of the same face into a view-invariant representation that can be used for recognition. The representation of symmetrical viewpoints has been proposed to be a key computational step in achieving view invariance. The aim of this study was to determine whether the representation of symmetrical viewpoints in face-selective regions is directly linked to the perception and recognition of face identity. In Experiment 1, we measured fMRI responses while male and female human participants viewed images of real faces from different viewpoints (-90, -45, 0, 45, and 90° from the full-face view). Within the face regions, patterns of neural response to symmetrical views (-45 and 45°, or -90 and 90°) were more similar than responses to nonsymmetrical views in the fusiform face area and superior temporal sulcus, but not in the occipital face area. In Experiment 2, participants made perceptual similarity judgements on pairs of face images. Images with symmetrical viewpoints were reported as being more similar than nonsymmetrical views. In Experiment 3, we asked whether symmetrical views also convey an advantage when learning new faces. We found that recognition was best when participants were tested with novel face images that were symmetrical to the learning viewpoint. Critically, the pattern of perceptual similarity and recognition across different viewpoints predicted the pattern of neural response in face-selective regions. Together, our results provide support for the functional value of symmetry as an intermediate step in generating view-invariant representations.

SIGNIFICANCE STATEMENT: The recognition of identity from faces is crucial for successful social interactions. A critical step in this process is the integration of different views into a unified, view-invariant representation. The representation of symmetrical views (e.g., left profile and right profile) has been proposed as an important intermediate step in computing view-invariant representations. We found that view-symmetric representations were specific to some face-selective regions but not others. We also show that these neural representations influence the perception of faces: symmetric views were perceived to be more similar and were recognized more accurately than nonsymmetric views. Moreover, the perception and recognition of faces at different viewpoints predicted patterns of response in those face regions with view-symmetric representations.
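The view-symmetry analysis this abstract describes, comparing pattern similarity for symmetric versus nonsymmetric viewpoint pairs, can be sketched with synthetic voxel patterns in which mirror viewpoints share a common component. All numbers here are illustrative assumptions, not measured data.

```python
import numpy as np

rng = np.random.default_rng(2)
views = [-90, -45, 0, 45, 90]        # degrees from full-face view
n_voxels = 100

# Hypothetical voxel patterns: mirror viewpoints share a component,
# mimicking a view-symmetric code (as reported for FFA and STS).
base = {abs(v): rng.standard_normal(n_voxels) for v in views}
patterns = {v: base[abs(v)] + 0.5 * rng.standard_normal(n_voxels) for v in views}

def pattern_r(v1, v2):
    """Pearson correlation between the response patterns for two views."""
    return np.corrcoef(patterns[v1], patterns[v2])[0, 1]

sym = np.mean([pattern_r(-45, 45), pattern_r(-90, 90)])
nonsym = np.mean([pattern_r(-45, 90), pattern_r(-90, 45),
                  pattern_r(-90, -45), pattern_r(45, 90)])
print(sym, nonsym)
```

Because only mirror pairs share a latent component, the symmetric-pair correlation is reliably higher than the nonsymmetric-pair correlation, which is the signature the study tested for in each face-selective region.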