1. Representational structure of fMRI/EEG responses to dynamic facial expressions. Neuroimage 2022; 263:119631. [PMID: 36113736] [DOI: 10.1016/j.neuroimage.2022.119631]
Abstract
Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos that varied dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
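The model-comparison approach described in this abstract — correlating neural response dissimilarities with the dissimilarities predicted by candidate models — can be sketched as follows. This is a minimal illustration on synthetic data; the array sizes and random feature spaces are invented for the sketch and are not the study's actual stimuli, voxels, or models:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 48 stimuli x 100 voxels of response patterns, plus
# two model feature spaces (stand-ins for image features and ratings).
neural = rng.normal(size=(48, 100))
model_a = rng.normal(size=(48, 10))   # stand-in for image-based features
model_b = rng.normal(size=(48, 5))    # stand-in for behavioural ratings

# Representational dissimilarity matrices (RDMs): pairwise distances
# between stimulus patterns, in condensed (upper-triangle) form.
neural_rdm = pdist(neural, metric="correlation")
rdm_a = pdist(model_a, metric="euclidean")
rdm_b = pdist(model_b, metric="euclidean")

# Compare each model RDM to the neural RDM with a rank correlation,
# the usual test statistic in representational similarity analysis.
rho_a, _ = spearmanr(neural_rdm, rdm_a)
rho_b, _ = spearmanr(neural_rdm, rdm_b)
print(rho_a, rho_b)
```

In a real analysis the model RDMs would be built from measured image features or behavioural ratings rather than random data, and the correlations would be assessed against permutation or noise-ceiling baselines.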
2. Izumika R, Cabeza R, Tsukiura T. Neural Mechanisms of Perceiving and Subsequently Recollecting Emotional Facial Expressions in Young and Older Adults. J Cogn Neurosci 2022; 34:1183-1204. [PMID: 35468212] [DOI: 10.1162/jocn_a_01851]
Abstract
It is known that emotional facial expressions modulate the perception and subsequent recollection of faces and that aging alters these modulatory effects. Yet, the underlying neural mechanisms are not well understood, and they were the focus of the current fMRI study. We scanned healthy young and older adults while perceiving happy, neutral, or angry faces paired with names. Participants were then provided with the names of the faces and asked to recall the facial expression of each face. fMRI analyses focused on the fusiform face area (FFA), the posterior superior temporal sulcus (pSTS), the OFC, the amygdala, and the hippocampus (HC). Univariate activity, multivariate pattern (MVPA), and functional connectivity analyses were performed. The study yielded two main sets of findings. First, in pSTS and the amygdala, univariate activity and MVPA discrimination during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, MVPA discriminated facial expressions less accurately in older than young adults. These findings suggest that facial expression representations in FFA and OFC reflect age-related dedifferentiation and positivity effect. Second, HC-OFC connectivity showed subsequent memory effects (SMEs) for happy expressions in both age groups, HC-FFA connectivity exhibited SMEs for happy and neutral expressions in young adults, and HC-pSTS interactions displayed SMEs for happy expressions in older adults. These results could be related to compensatory mechanisms and positivity effects in older adults. Taken together, the results clarify the effects of aging on the neural mechanisms in perceiving and encoding facial expressions.
3. Recognition of Pareidolic Objects in Developmental Prosopagnosic and Neurotypical Individuals. Cortex 2022; 153:21-31. [DOI: 10.1016/j.cortex.2022.04.011]
4. Murray T, O'Brien J, Sagiv N, Garrido L. The role of stimulus-based cues and conceptual information in processing facial expressions of emotion. Cortex 2021; 144:109-132. [PMID: 34666297] [DOI: 10.1016/j.cortex.2021.08.007]
Abstract
Face shape and surface textures are two important cues that aid in the perception of facial expressions of emotion. This perception is also influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions using distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing for the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus) and a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
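The Study 1 analysis style — regressing behavioural dissimilarities onto several model RDMs at once with multiple linear regression — can be illustrated on synthetic data. The feature spaces, sizes, and noise level below are invented for the sketch and do not reflect the study's actual models:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_expr = 6  # e.g., six facial expressions

# Hypothetical model feature spaces for the same expression set:
# face-shape cues, surface-texture cues, and emotion-concept ratings.
shape = rng.normal(size=(n_expr, 8))
texture = rng.normal(size=(n_expr, 8))
concepts = rng.normal(size=(n_expr, 4))

# Simulated behavioural dissimilarities between expression pairs,
# built here to track the concept model plus noise.
behaviour = pdist(concepts) + 0.1 * rng.normal(size=n_expr * (n_expr - 1) // 2)

def z(v):
    # z-score a condensed RDM so regression weights are comparable
    return (v - v.mean()) / v.std()

# Multiple linear regression: predict behavioural dissimilarities from
# the three model RDMs jointly (plus an intercept column).
X = np.column_stack([z(pdist(shape)), z(pdist(texture)), z(pdist(concepts)),
                     np.ones(behaviour.size)])
betas, *_ = np.linalg.lstsq(X, behaviour, rcond=None)
print(betas[:3])  # one weight per model RDM
```

Because the synthetic behaviour was generated from the concept RDM, the third weight dominates; with real data the relative weights are what distinguish the shape, texture, and concept contributions.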
Affiliation(s)
- Thomas Murray
- Psychology Department, School of Biological and Behavioural Sciences, Queen Mary University London, United Kingdom.
- Justin O'Brien
- Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Noam Sagiv
- Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Lúcia Garrido
- Department of Psychology, City, University of London, United Kingdom
5. Consistent behavioral and electrophysiological evidence for rapid perceptual discrimination among the six human basic facial expressions. Cogn Affect Behav Neurosci 2020; 20:928-948. [PMID: 32918269] [DOI: 10.3758/s13415-020-00811-7]
Abstract
The extent to which the six basic human facial expressions perceptually differ from one another remains controversial. For instance, despite the importance of rapidly decoding fearful faces, this expression is often confused with other expressions, such as surprise, in explicit behavioral categorization tasks. We quantified implicit visual discrimination among rapidly presented facial expressions with an oddball periodic visual stimulation approach combined with electroencephalography (EEG), testing for the relationship with explicit behavioral measures of facial emotion discrimination. We report robust facial expression discrimination responses bilaterally over the occipito-temporal cortex for each pairwise expression change. While fearful faces presented as repeated stimuli led to the smallest deviant responses from all other basic expressions, deviant fearful faces were well discriminated overall, and to a larger extent than expressions of sadness and anger. Expressions of happiness did not differ quantitatively as much in EEG as for behavioral subjective judgments, suggesting that the clear dissociation between happy and other expressions typically observed in behavioral studies reflects higher-order processes. However, this expression differed from all others in terms of scalp topography, pointing to a qualitative rather than quantitative difference. Despite this difference, overall, we report for the first time a tight relationship between the similarity matrices across facial expressions obtained for implicit EEG responses and explicit behavioral measures collected under the same temporal constraints, paving the way for new approaches to understanding facial expression discrimination in developmental, intercultural, and clinical populations.
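The frequency-tagging logic behind this oddball periodic stimulation approach can be sketched with synthetic data: responses to the periodic expression change concentrate at a known oddball frequency, so discrimination can be read directly off the EEG amplitude spectrum. All sampling rates, frequencies, and amplitudes below are illustrative, not the study's parameters:

```python
import numpy as np

fs = 512                   # sampling rate in Hz (illustrative)
dur = 20.0                 # seconds of stimulation
base_f, odd_f = 6.0, 1.2   # base and oddball stimulation frequencies

t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
# Simulated EEG: a response at the base rate, a smaller response at the
# oddball rate (the expression-change response), plus sensor noise.
eeg = (1.0 * np.sin(2 * np.pi * base_f * t)
       + 0.4 * np.sin(2 * np.pi * odd_f * t)
       + 0.2 * rng.normal(size=t.size))

# Amplitude spectrum; with an integer number of stimulation cycles,
# the tagged frequencies fall exactly on FFT bins.
amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

odd_bin = int(np.argmin(np.abs(freqs - odd_f)))
# Signal-to-noise ratio: amplitude at the oddball bin relative to the
# mean of surrounding bins (skipping the immediate neighbours).
neighbours = np.r_[amp[odd_bin - 12:odd_bin - 2], amp[odd_bin + 3:odd_bin + 13]]
snr = amp[odd_bin] / neighbours.mean()
print(snr)
```

A clear peak at the oddball frequency (high SNR) indicates that the brain discriminated the deviant expression from the base expression, without requiring any explicit behavioural response.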
6. Cao R, Li X, Todorov A, Wang S. A Flexible Neural Representation of Faces in the Human Brain. Cereb Cortex Commun 2020; 1:tgaa055. [PMID: 34296119] [PMCID: PMC8152845] [DOI: 10.1093/texcom/tgaa055]
Abstract
An important question in human face perception research is whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated which low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded the low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results show a flexible neural representation of faces, for both low-level features and high-level social traits, in the human brain.
Affiliation(s)
- Runnan Cao
- Department of Chemical and Biomedical Engineering, Rockefeller Neurosciences Institute, West Virginia University, Morgantown, WV 26506, USA
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA
- Alexander Todorov
- Booth School of Business, University of Chicago, Chicago, IL 60637, USA
- Shuo Wang
- Department of Chemical and Biomedical Engineering, Rockefeller Neurosciences Institute, West Virginia University, Morgantown, WV 26506, USA
7. Zhao K, Liu M, Gu J, Mo F, Fu X, Liu CH. The Preponderant Role of Fusiform Face Area for the Facial Expression Confusion Effect: An MEG Study. Neuroscience 2020; 433:42-52. [PMID: 32169552] [DOI: 10.1016/j.neuroscience.2020.03.001]
Abstract
Although the recognition of facial expressions seems automatic and effortless, discrimination of expressions can still be error prone. Common errors are often due to visual similarities between some expressions (e.g., fear and surprise). However, little is known about the neural mechanisms underlying such a confusion effect. To address this question, we recorded magnetoencephalography (MEG) while participants judged facial expressions that were either easily confused with or easily distinguished from other expressions. The results showed that the fusiform face area (FFA), rather than the posterior superior temporal sulcus (pSTS), played a preponderant role in discriminating confusable facial expressions. No difference between the high confusion and low confusion conditions was observed on the M170 component in either the FFA or the pSTS, whilst a difference between the two conditions started to emerge in the late positive potential (LPP), with the low confusion condition eliciting a larger LPP amplitude in the FFA. In addition, delta-band power was stronger in the time window of the LPP component. This confusion effect was reflected in the FFA, which might be associated with a perceptual-to-conceptual shift.
Affiliation(s)
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Mingtong Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Jingjin Gu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Fan Mo
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
8. Mileva M, Young AW, Kramer RS, Burton AM. Understanding facial impressions between and within identities. Cognition 2019; 190:184-198. [DOI: 10.1016/j.cognition.2019.04.027]
9. Coggan DD, Giannakopoulou A, Ali S, Goz B, Watson DM, Hartley T, Baker DH, Andrews TJ. A data-driven approach to stimulus selection reveals an image-based representation of objects in high-level visual areas. Hum Brain Mapp 2019; 40:4716-4731. [PMID: 31338936] [DOI: 10.1002/hbm.24732]
Abstract
The ventral visual pathway is directly involved in the perception and recognition of objects. However, the extent to which the neural representation of objects in this region reflects low-level or high-level properties remains unresolved. A problem in resolving this issue is that only a small proportion of the objects experienced during natural viewing can be shown during a typical experiment. This can lead to an uneven sampling of objects that biases our understanding of how they are represented. To address this issue, we developed a data-driven approach to stimulus selection that involved describing a large number of objects in terms of their image properties. In the first experiment, clusters of objects were evenly selected from this multi-dimensional image space. Although the clusters did not have any consistent semantic features, each elicited a distinct pattern of neural response. In the second experiment, we asked whether high-level, category-selective patterns of response could be elicited by objects from other categories, but with similar image properties. Object clusters were selected based on the similarity of their image properties to objects from five different categories (bottle, chair, face, house, and shoe). The pattern of response to each metameric object cluster was similar to the pattern elicited by objects from the corresponding category. For example, the pattern for bottles was similar to the pattern for objects with similar image properties to bottles. In both experiments, the patterns of response were consistent across participants, providing evidence for common organising principles. This study provides a more ecological approach to understanding the perceptual representations of objects and reveals the importance of image properties.
Affiliation(s)
- Sanah Ali
- Department of Psychology, University of York, York, UK
- Burcu Goz
- Department of Psychology, University of York, York, UK
- David M Watson
- School of Psychology, The University of Nottingham, Nottingham, UK
- Tom Hartley
- Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK
- Daniel H Baker
- Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK
10. Symmetrical Viewpoint Representations in Face-Selective Regions Convey an Advantage in the Perception and Recognition of Faces. J Neurosci 2019; 39:3741-3751. [PMID: 30842248] [DOI: 10.1523/jneurosci.1977-18.2019]
Abstract
Learning new identities is crucial for effective social interaction. A critical aspect of this process is the integration of different images from the same face into a view-invariant representation that can be used for recognition. The representation of symmetrical viewpoints has been proposed to be a key computational step in achieving view-invariance. The aim of this study was to determine whether the representation of symmetrical viewpoints in face-selective regions is directly linked to the perception and recognition of face identity. In Experiment 1, we measured fMRI responses while male and female human participants viewed images of real faces from different viewpoints (-90, -45, 0, 45, and 90° from full-face view). Within the face regions, patterns of neural response to symmetrical views (-45 and 45°, or -90 and 90°) were more similar than responses to nonsymmetrical views in the fusiform face area and superior temporal sulcus, but not in the occipital face area. In Experiment 2, participants made perceptual similarity judgements on pairs of face images. Images with symmetrical viewpoints were reported as being more similar than nonsymmetric views. In Experiment 3, we asked whether symmetrical views also convey an advantage when learning new faces. We found that recognition was best when participants were tested with novel face images that were symmetrical to the learning viewpoint. Critically, the pattern of perceptual similarity and recognition across different viewpoints predicted the pattern of neural response in face-selective regions. Together, our results provide support for the functional value of symmetry as an intermediate step in generating view-invariant representations.
SIGNIFICANCE STATEMENT: The recognition of identity from faces is crucial for successful social interactions. A critical step in this process is the integration of different views into a unified, view-invariant representation. The representation of symmetrical views (e.g., left profile and right profile) has been proposed as an important intermediate step in computing view-invariant representations. We found that view-symmetric representations were specific to some face-selective regions, but not others. We also show that these neural representations influence the perception of faces. Symmetric views were perceived to be more similar and were recognized more accurately than nonsymmetric views. Moreover, the perception and recognition of faces at different viewpoints predicted patterns of response in those face regions with view-symmetric representations.
11. Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018; 103:199-210. [PMID: 29655043] [DOI: 10.1016/j.cortex.2018.03.009]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
Affiliation(s)
- Katja Weibert
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Tessa R Flack
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Timothy J Andrews
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
12. Yang X, Xu J, Cao L, Li X, Wang P, Wang B, Liu B. Linear Representation of Emotions in Whole Persons by Combining Facial and Bodily Expressions in the Extrastriate Body Area. Front Hum Neurosci 2018; 11:653. [PMID: 29375348] [PMCID: PMC5767685] [DOI: 10.3389/fnhum.2017.00653]
Abstract
The human brain can rapidly and effortlessly perceive a person’s emotional state by integrating isolated emotional faces and bodies into a whole. Behavioral studies have suggested that the human brain encodes whole persons in a holistic rather than part-based manner. Neuroimaging studies have also shown that body-selective areas prefer whole persons to the sum of their parts, and that these areas play a crucial role in representing the relationships between the emotions expressed by different parts. However, it remains unclear in which regions the perception of whole persons is represented by a combination of faces and bodies, and to what extent the combination can be influenced by the whole person’s emotions. In the present study, functional magnetic resonance imaging data were collected while participants performed an emotion distinction task. Multi-voxel pattern analysis was conducted to examine how the whole-person-evoked responses were associated with the face- and body-evoked responses in several specific brain areas. We found that in the extrastriate body area (EBA), the whole-person patterns were most closely correlated with weighted sums of face and body patterns, with different weights for happy expressions but equal weights for angry and fearful ones. These results were unique to the EBA. Our findings tentatively support the idea that whole-person patterns are represented in a part-based manner in the EBA and modulated by emotions. These data will further our understanding of the neural mechanisms underlying the perception of emotional persons.
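The weighted-sum analysis described in this abstract — asking how closely a whole-person pattern matches a weighted combination of the face and body patterns — can be sketched on synthetic voxel patterns. The voxel count, weights, and noise level below are invented for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 200

# Hypothetical ROI patterns: responses to face-only and body-only stimuli
# for one emotion condition.
face = rng.normal(size=n_vox)
body = rng.normal(size=n_vox)
# Simulate a whole-person pattern that is a weighted sum plus noise.
whole = 0.7 * face + 0.3 * body + 0.1 * rng.normal(size=n_vox)

# Least-squares estimate of the part weights: solve whole ≈ wf*face + wb*body.
X = np.column_stack([face, body])
(wf, wb), *_ = np.linalg.lstsq(X, whole, rcond=None)

# Goodness of fit: correlation between the fitted sum and the whole pattern.
fit = X @ np.array([wf, wb])
r = np.corrcoef(fit, whole)[0, 1]
print(wf, wb, r)
```

Comparing the estimated weights across emotion conditions (e.g., unequal face/body weights for happy versus equal weights for angry and fearful) is the kind of contrast the abstract reports for the EBA.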
Affiliation(s)
- Xiaoli Yang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Linjing Cao
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China; Research State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
13.
Abstract
The fact that the face is a source of diverse social signals allows us to use face and person perception as a model system for asking important psychological questions about how our brains are organised. A key issue concerns whether we rely primarily on some form of generic representation of the common physical source of these social signals (the face) to interpret them, or instead create multiple representations by assigning different aspects of the task to different specialist components. Variants of the specialist components hypothesis have formed the dominant theoretical perspective on face perception for more than three decades, but despite this dominance of formally and informally expressed theories, the underlying principles and extent of any division of labour remain uncertain. Here, I discuss three important sources of constraint: first, the evolved structure of the brain; second, the need to optimise responses to different everyday tasks; and third, the statistical structure of faces in the perceiver’s environment. I show how these constraints interact to determine the underlying functional organisation of face and person perception.
14. Furl N, Lohse M, Pizzorni-Ferrarese F. Low-frequency oscillations employ a general coding of the spatio-temporal similarity of dynamic faces. Neuroimage 2017; 157:486-499. [PMID: 28619657] [PMCID: PMC6390175] [DOI: 10.1016/j.neuroimage.2017.06.023]
Abstract
Brain networks use neural oscillations as information transfer mechanisms. Although the face perception network in occipitotemporal cortex is well-studied, contributions of oscillations to face representation remain an open question. We tested for links between oscillatory responses that encode facial dimensions and the theoretical proposal that faces are encoded in similarity-based "face spaces". We quantified similarity-based encoding of dynamic faces in magnetoencephalographic sensor-level oscillatory power for identity, expression, physical and perceptual similarity of facial form and motion. Our data show that evoked responses manifest physical and perceptual form similarity that distinguishes facial identities. Low-frequency induced oscillations (<20 Hz) manifested more general similarity structure, which was not limited to identity, and spanned physical and perceived form and motion. A supplementary fMRI-constrained source reconstruction implicated fusiform gyrus and V5 in this similarity-based representation. These findings introduce a potential link between "face space" encoding and oscillatory network communication, which generates new hypotheses about the potential oscillation-mediated mechanisms that might encode facial dimensions.
Affiliation(s)
- Nicholas Furl
- Department of Psychology, Royal Holloway, University of London, Surrey TW20 0EX, United Kingdom; Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom.
- Michael Lohse
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom; Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3QX, United Kingdom
15. Daudelin-Peltier C, Forget H, Blais C, Deschênes A, Fiset D. The effect of acute social stress on the recognition of facial expression of emotions. Sci Rep 2017; 7:1036. [PMID: 28432314] [PMCID: PMC5430718] [DOI: 10.1038/s41598-017-01053-3]
Abstract
This study investigates the effect of acute social stress on the recognition of facial expression of emotions in healthy young men. Participants underwent both a standardized psychosocial laboratory stressor (TSST-G) and a control condition. Then, they performed a homemade version of the facial expressions megamix. All six basic emotions were included in the task. First, our results show a systematic increase in the intensity threshold for disgust following stress, meaning that the participants' performance with this emotion was impaired. We suggest that this may reflect an adaptive coping mechanism where participants attempt to decrease their anxiety and protect themselves from a socio-evaluative threat. Second, our results show a systematic decrease in the intensity threshold for surprise, therefore positively affecting the participants' performance with that emotion. We suggest that the enhanced perception of surprise following the induction of social stress may be interpreted as an evolutionary adaptation, wherein being in a stressful environment increases the benefits of monitoring signals indicating the presence of a novel or threatening event. An alternative explanation may derive from the opposite nature of the facial expressions of disgust and surprise; the decreased recognition of disgust could therefore have fostered the propensity to perceive surprise.
Affiliation(s)
- Camille Daudelin-Peltier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Hélène Forget
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
- Andréa Deschênes
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada