1
Schaller P, Richoz AR, Duncan J, de Lissa P, Caldara R. Prosopagnosia and the role of face-sensitive areas in race perception. Sci Rep 2025; 15:5751. PMID: 39962188; PMCID: PMC11832744; DOI: 10.1038/s41598-025-88769-9.
Abstract
Race is rapidly and effortlessly extracted from faces. Previous fMRI studies have reported race-related modulations in the bilateral Fusiform Face Areas (FFAs) and Occipital Face Areas (OFAs) during the categorization of faces by race. However, our recent findings revealed a comparable Other-Race Categorization Advantage between a well-studied case of pure acquired prosopagnosia (patient PS) and healthy controls. Notably, PS demonstrated faster categorization by race of other- compared to same-race faces, similar to healthy participants, despite sustaining lesions in the right OFA (rOFA) and left FFA (lFFA). This observation suggests that race processing can occur effectively even with damage to core face-sensitive regions, challenging the functional significance of race-related activations in the rOFA and lFFA observed in healthy individuals with fMRI. To address this apparent contradiction, we tested PS and age-matched controls during the categorization by race of same- to other-race morphed faces. Our data showed that PS required more visual information to accurately categorize racially ambiguous faces, indicating that intact rOFA and/or lFFA are crucial for extracting fine-grained racial information. These results refine our understanding of the functional roles of these key cortical regions and offer novel insights into the neural mechanisms underlying the perception of face race and prosopagnosia.
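To make the notion of "amount of visual information needed" concrete, here is a minimal sketch, not the authors' analysis, of how a categorization threshold could be estimated from responses along a same- to other-race morph continuum by fitting a logistic psychometric function. The morph levels and response proportions below are invented for illustration.

```python
# Minimal sketch: estimate how much "other-race" signal along a same-to-other-race
# morph continuum an observer needs before categorization flips, by fitting a
# logistic psychometric function. Morph levels and responses are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Proportion of 'other-race' responses as a function of morph level (0-100%)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

morph_level = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
# Hypothetical proportions of "other-race" responses for one observer.
p_other = np.array([0.02, 0.03, 0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.96, 0.98, 0.99])

params, _ = curve_fit(logistic, morph_level, p_other, p0=[50.0, 0.1])
threshold, slope = params
print(f"Estimated categorization threshold: {threshold:.1f}% morph, slope: {slope:.3f}")
```

An observer who needs more visual information would show a threshold shifted toward higher morph levels and a shallower slope.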
Affiliation(s)
- Pauline Schaller
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, Fribourg, 1700, Switzerland
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, Fribourg, 1700, Switzerland
- Justin Duncan
- Département de psychoéducation et psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Peter de Lissa
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, Fribourg, 1700, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, Fribourg, 1700, Switzerland
2
Alonso-Recio L, Mendoza L, Serrano JM. Recognition of static and dynamic emotional facial expressions in mild cognitive impairment, healthy elderly and young people. Appl Neuropsychol Adult 2024:1-9. PMID: 39679911; DOI: 10.1080/23279095.2024.2443174.
Abstract
INTRODUCTION The ability to recognize emotions is essential for social cognition, and its impairment can affect social interactions, contributing to loneliness and the worsening of issues in individuals with mild cognitive impairment (MCI). This study aims to investigate the ability to recognize emotional facial expressions in MCI individuals compared to healthy elderly and young individuals. METHOD We evaluated 27 MCI individuals, 31 healthy elderly, and 29 healthy young participants using two tasks: one with static facial expressions (photographs) and another with dynamic ones (video clips). RESULTS The younger group recognized all negative emotional expressions better than the other two groups and also performed better on neutral expressions compared to MCI patients. The healthy elderly group outperformed MCI patients in recognizing most expressions, except for happiness and neutral. Additionally, the ability to recognize dynamic expressions was superior to static ones across all groups for several emotions. DISCUSSION These results emphasize the importance of assessing the ability to recognize emotional facial expressions within neuropsychological protocols, to help detect this condition early on. Given the pivotal role that emotional facial expressions play in social interactions, these difficulties can contribute to a decline in such interactions and an increase in social isolation.
Affiliation(s)
- Laura Alonso-Recio
- Departamento de Psicología Biológica y de la Salud, Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Liz Mendoza
- Departamento de Psicología Biológica y de la Salud, Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Juan Manuel Serrano
- Departamento de Psicología Biológica y de la Salud, Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
3
Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, Charest I. Neural computations in prosopagnosia. Cereb Cortex 2024; 34:bhae211. PMID: 38795358; PMCID: PMC11127037; DOI: 10.1093/cercor/bhae211.
Abstract
We report an investigation of the neural processes involved in the processing of faces and objects in the brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). We found that the computations underlying PS's brain activity bore a closer resemblance to the early layers of a visual DNN than did those of controls. By contrast, the brain representations of neurotypicals were more akin to those of the later layers of the model than were PS's. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
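As an illustration of the general logic described above, and not of the authors' actual pipeline, the following minimal sketch builds time-resolved representational dissimilarity matrices (RDMs) from synthetic condition-by-channel-by-time data and then correlates the representational geometry at every pair of time points, the idea behind a temporal-generalization analysis of brain representations. All array sizes and data are invented.

```python
# Minimal sketch of time-resolved RSA and a temporal-generalization-style comparison
# of brain representations across time. Synthetic data stand in for EEG epochs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_channels, n_times = 12, 64, 50
eeg = rng.normal(size=(n_conditions, n_channels, n_times))  # condition-average epochs

# One representational dissimilarity matrix (RDM) per time point,
# stored as condensed vectors (upper triangle) for easy comparison.
rdms = np.stack([pdist(eeg[:, :, t], metric="correlation") for t in range(n_times)])

# Temporal generalization of representations: how similar is the representational
# geometry at time t1 to the geometry at time t2?
gen = np.zeros((n_times, n_times))
for t1 in range(n_times):
    for t2 in range(n_times):
        gen[t1, t2] = spearmanr(rdms[t1], rdms[t2]).correlation

print(gen.shape)  # (50, 50); unusually high off-diagonal values would indicate
                  # early representations persisting into later time windows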
Affiliation(s)
- Simon Faghel-Soubeyrand
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Woodstock Rd, Oxford OX2 6GG, UK
- Anne-Raphaelle Richoz
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Delphine Waeber
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Jessica Woodhams
- School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Frédéric Gosselin
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Ian Charest
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
4
Chang CH, Drobotenko N, Ruocco AC, Lee ACH, Nestor A. Perception and memory-based representations of facial emotions: Associations with personality functioning, affective states and recognition abilities. Cognition 2024; 245:105724. PMID: 38266352; DOI: 10.1016/j.cognition.2024.105724.
Abstract
Personality traits and affective states are associated with biases in facial emotion perception. However, the precise personality impairments and affective states that underlie these biases remain largely unknown. To investigate how relevant factors influence facial emotion perception and recollection, Experiment 1 employed an image reconstruction approach in which community-dwelling adults (N = 89) rated the similarity of pairs of facial expressions, including those recalled from memory. Subsequently, perception- and memory-based expression representations derived from such ratings were assessed across participants and related to measures of personality impairment, state affect, and visual recognition abilities. Impairment in self-direction and level of positive affect accounted for the largest components of individual variability in perception and memory representations, respectively. Additionally, individual differences in these representations were impacted by face recognition ability. In Experiment 2, adult participants (N = 81) rated facial image reconstructions derived in Experiment 1, revealing that individual variability was associated with specific visual face properties, such as expressiveness, representation accuracy, and positivity/negativity. These findings highlight and clarify the influence of personality, affective state, and recognition abilities on individual differences in the perception and recollection of facial expressions.
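For readers unfamiliar with behavior-based image reconstruction, the sketch below illustrates only its first step, under simplifying assumptions: deriving a low-dimensional expression space from pairwise similarity ratings with multidimensional scaling. The ratings are synthetic, and the subsequent step of weighting facial shape and surface features by these coordinates to synthesize images is not implemented here.

```python
# Minimal sketch of the first step behind behavioral image reconstruction: derive a
# low-dimensional "expression space" from pairwise similarity ratings via MDS.
# Data here are synthetic; the published approach then reconstructs images by
# weighting visual features along these dimensions.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_faces = 20
# Hypothetical similarity ratings (1 = very dissimilar ... 7 = very similar).
sim = rng.uniform(1, 7, size=(n_faces, n_faces))
sim = (sim + sim.T) / 2          # symmetrize the rating matrix
np.fill_diagonal(sim, 7)         # a face is maximally similar to itself

dissim = sim.max() - sim         # convert similarity to dissimilarity
np.fill_diagonal(dissim, 0)

mds = MDS(n_components=5, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # each face gets coordinates in a 5-D space
print(coords.shape)                 # (20, 5); reconstruction would weight face images
                                    # (or their shape/surface features) by these coordinates
```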
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Natalia Drobotenko
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Anthony C Ruocco
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Department of Psychological Clinical Science at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, 3560 Bathurst St, North York, Ontario M6A 2E1, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
5
Richoz AR, Stacchi L, Schaller P, Lao J, Papinutto M, Ticcinelli V, Caldara R. Recognizing facial expressions of emotion amid noise: A dynamic advantage. J Vis 2024; 24:7. PMID: 38197738; PMCID: PMC10790674; DOI: 10.1167/jov.24.1.7.
Abstract
Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults minimally benefit from the richer dynamic over static information, whereas children, the elderly, and clinical populations very strongly do (Richoz, Jack, Garrod, Schyns, & Caldara, 2015; Richoz, Jack, Garrod, Schyns, & Caldara, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this aim, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signals were presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, dynamic expressions were all better recognized than their static counterparts (peaking at ∼20%). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding all facial expressions of emotion for all human observers.
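The following minimal sketch illustrates one way a face image's phase signal can be varied parametrically from 0% to 100% while its amplitude spectrum is preserved. It is a simplified stand-in for the study's normalized phase-and-contrast manipulation, and the "face" is just a random array.

```python
# Minimal sketch of parametrically degrading an image's phase signal while keeping its
# amplitude spectrum, in the spirit of noise-titration designs. A real study would also
# equate contrast and other low-level properties across stimuli.
import numpy as np

def phase_blend(image, signal, rng):
    """Mix the image's Fourier phase with a random phase spectrum taken from real-valued
    noise (so the result stays approximately real-valued). signal = 1.0 keeps the original
    phase; 0.0 yields a fully phase-scrambled image. Linear interpolation of wrapped phase
    angles is a simplification."""
    f = np.fft.fft2(image)
    amplitude, phase = np.abs(f), np.angle(f)
    random_phase = np.angle(np.fft.fft2(rng.normal(size=image.shape)))
    blended = signal * phase + (1.0 - signal) * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * blended)))

rng = np.random.default_rng(0)
face = rng.normal(size=(128, 128))             # stand-in for a normalized face image
levels = [0.0, 0.2, 0.5, 1.0]                  # 0%-100% phase signal
stimuli = {lvl: phase_blend(face, lvl, rng) for lvl in levels}
print({lvl: stimuli[lvl].shape for lvl in levels})
```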
Affiliation(s)
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Lisa Stacchi
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Pauline Schaller
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Michael Papinutto
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Valentina Ticcinelli
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
6
Jiahui G, Feilong M, Visconti di Oleggio Castello M, Nastase SA, Haxby JV, Gobbini MI. Modeling naturalistic face processing in humans with deep convolutional neural networks. Proc Natl Acad Sci U S A 2023; 120:e2304085120. PMID: 37847731; PMCID: PMC10614847; DOI: 10.1073/pnas.2304085120.
Abstract
Deep convolutional neural networks (DCNNs) trained for face identification can rival and even exceed human-level performance. The ways in which the internal face representations in DCNNs relate to human cognitive representations and brain activity are not well understood. Nearly all previous studies focused on static face image processing with rapid display times and ignored the processing of naturalistic, dynamic information. To address this gap, we developed the largest naturalistic dynamic face stimulus set in human neuroimaging research (700+ naturalistic video clips of unfamiliar faces). We used this naturalistic dataset to compare representational geometries estimated from DCNNs, behavioral responses, and brain responses. We found that DCNN representational geometries were consistent across architectures, cognitive representational geometries were consistent across raters in a behavioral arrangement task, and neural representational geometries in face areas were consistent across brains. Representational geometries in late, fully connected DCNN layers, which are optimized for individuation, were much more weakly correlated with cognitive and neural geometries than were geometries in late-intermediate layers. The late-intermediate face-DCNN layers successfully matched cognitive representational geometries, as measured with a behavioral arrangement task that primarily reflected categorical attributes, and correlated with neural representational geometries in known face-selective topographies. Our study suggests that current DCNNs successfully capture neural cognitive processes for categorical attributes of faces but less accurately capture individuation and dynamic features.
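The comparison of representational geometries can be summarized in a few lines. The sketch below uses synthetic data, not the published analysis: it correlates the condensed representational dissimilarity matrix (RDM) of hypothetical DCNN layers with that of hypothetical neural response patterns.

```python
# Minimal sketch of comparing representational geometries: correlate the condensed
# (upper-triangle) RDM of a DCNN layer with a neural RDM. All data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_videos = 30

layer_features = {                      # stand-ins for DCNN activations per video clip
    "late_intermediate": rng.normal(size=(n_videos, 512)),
    "fully_connected": rng.normal(size=(n_videos, 128)),
}
neural_patterns = rng.normal(size=(n_videos, 200))   # stand-in for face-area responses

neural_rdm = pdist(neural_patterns, metric="correlation")
for name, feats in layer_features.items():
    layer_rdm = pdist(feats, metric="correlation")
    rho = spearmanr(layer_rdm, neural_rdm).correlation
    print(f"{name}: Spearman rho with neural RDM = {rho:.3f}")
```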
Affiliation(s)
- Guo Jiahui
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
- Ma Feilong
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
- Samuel A. Nastase
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
- James V. Haxby
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
- M. Ida Gobbini
- Department of Medical and Surgical Sciences, University of Bologna, Bologna 40138, Italy
- Istituti di Ricovero e Cura a Carattere Scientifico, Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
7
Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023; 229:105622. PMID: 36641829; DOI: 10.1016/j.jecp.2022.105622.
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Yingdi Liu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
8
Chang CH, Zehra S, Nestor A, Lee ACH. Using image reconstruction to investigate face perception in amnesia. Neuropsychologia 2023; 185:108573. PMID: 37119985; DOI: 10.1016/j.neuropsychologia.2023.108573.
Abstract
Damage to the medial temporal lobe (MTL), which is traditionally considered to subserve memory exclusively, has been reported to contribute to impaired face perception. However, it remains unknown how exactly such brain lesions may impact face representations and in particular facial shape and surface information, both of which are crucial for face perception. The present study employed a behavioral-based image reconstruction approach to reveal the pictorial representations of face perception in two amnesic patients: DA, who has an extensive bilateral MTL lesion that extends beyond the MTL in the right hemisphere, and BL, who has damage to the hippocampal dentate gyrus (DG). Both patients and their respective matched controls completed similarity judgments for pairs of faces, from which facial shape and surface features were subsequently derived and synthesized to create images of reconstructed facial appearance. Participants also completed a face oddity judgment task (FOJT) that has previously been shown to be sensitive to MTL cortical damage. While BL exhibited an impaired pattern of performance on the FOJT, DA demonstrated intact performance accuracy. Notably, the recovered pictorial content of faces was comparable between both patients and controls, although there was evidence for atypical face representations in BL particularly with regards to color. Our work provides novel insight into the face representations underlying face perception in two well-studied amnesic patients in the literature and demonstrates the applicability of the image reconstruction approach to individuals with brain damage.
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Sukhan Zehra
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
9
Schaller P, Caldara R, Richoz AR. Prosopagnosia does not abolish other-race effects. Neuropsychologia 2023; 180:108479. PMID: 36623806; DOI: 10.1016/j.neuropsychologia.2023.108479.
Abstract
Healthy observers recognize more accurately same- than other-race faces (i.e., the Same-Race Recognition Advantage - SRRA) but categorize them by race more slowly than other-race faces (i.e., the Other-Race Categorization Advantage - ORCA). Several fMRI studies reported discrepant bilateral activations in the Fusiform Face Area (FFA) and Occipital Face Area (OFA) correlating with both effects. However, due to the very nature and limits of fMRI results, whether these face-sensitive regions play an unequivocal causal role in those other-race effects remains to be clarified. To this aim, we tested PS, a well-studied pure case of acquired prosopagnosia with lesions encompassing the left FFA and the right OFA. PS, healthy age-matched and young adults performed two recognition and three categorization by race tasks, respectively using Western Caucasian and East Asian faces normalized for their low-level properties with and without external features, as well as in naturalistic settings. As expected, PS was slower and less accurate than the controls. Crucially, however, the magnitudes of her SRRA and ORCA were comparable to the controls in all the tasks. Our data show that prosopagnosia does not abolish other-race effects, as an intact face system, the left FFA and/or right OFA are not critical for eliciting the SRRA and ORCA. Race is a strong visual and social signal that is encoded in a large neural face-sensitive network, robustly tuned for processing same-race faces.
Affiliation(s)
- Pauline Schaller
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
10
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part II: Neural basis. Neuropsychologia 2022; 173:108279. PMID: 35667496; DOI: 10.1016/j.neuropsychologia.2022.108279.
Abstract
Patient PS sustained her dramatic brain injury in 1992, the same year as the first report of a neuroimaging study of human face recognition. The present paper complements the review on the functional nature of PS's prosopagnosia (part I), illustrating how her case study directly, i.e., through neuroimaging investigations of her brain structure and activity, but also indirectly, through neural studies performed on other clinical cases and neurotypical individuals, inspired and constrained neural models of human face recognition. In the dominant right hemisphere for face recognition in humans, PS's main lesion concerns (inputs to) the inferior occipital gyrus (IOG), in a region where face-selective activity is typically found in normal individuals ('Occipital Face Area', OFA). Her case study initially supported the criticality of this region for face identity recognition (FIR) and provided the impetus for transcranial magnetic stimulation (TMS), intracerebral electrical stimulation, and cortical surgery studies that have generally supported this view. Despite PS's right IOG lesion, typical face-selectivity is found anteriorly in the middle portion of the fusiform gyrus, a hominoid structure (termed the right 'Fusiform Face Area', FFA) that is widely considered to be the most important region for human face recognition. This finding led to the original proposal of direct anatomico-functional connections from early visual cortices to the FFA, bypassing the IOG/OFA, a hypothesis supported by further neuroimaging studies of PS, other neurological cases and neurotypical individuals with original visual stimulation paradigms, data recordings and analyses. The proposal of a lack of sensitivity to face identity in PS's right FFA due to defective reentrant inputs from the IOG/OFA has also been supported by other cases, functional connectivity and cortical surgery studies. Overall, neural studies of, and based on, the case of prosopagnosia PS strongly question the hierarchical organization of the human neural face recognition system, supporting a more flexible and dynamic view of this key social brain function.
Affiliation(s)
- Bruno Rossion
- Université de Lorraine, CNRS, CRAN, F-54000, Nancy, France; CHRU-Nancy, Service de Neurologie, F-54000, France; Psychological Sciences Research Institute, Institute of Neuroscience, University of Louvain, Belgium
11
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022; 173:108278. DOI: 10.1016/j.neuropsychologia.2022.108278.
12
Research on Voice-Driven Facial Expression Film and Television Animation Based on Compromised Node Detection in Wireless Sensor Networks. Comput Intell Neurosci 2022; 2022:8563818. PMID: 35111214; PMCID: PMC8803464; DOI: 10.1155/2022/8563818.
Abstract
With continued social and economic development, film and television animation has become increasingly popular. Emerging technologies make it possible to drive AI facial expressions from voice input, but ensuring synchronization between the speech signal and the facial expression remains one of the main difficulties in animation transformation. Relying on compromised-node detection in wireless sensor networks, this paper analyzes the synchronous flow between speech signals and facial expressions, identifies the distribution of facial motion patterns through unsupervised classification, trains the mapping with neural networks, and uses the prosodic distribution of speech features to realize a one-to-one mapping to facial expressions. This approach avoids robustness defects in speech recognition, improves the learning ability of speech recognition, and enables driving analysis of facial expression film and television animation. The simulation results show that compromised-node detection in wireless sensor networks is effective and can support the analysis and study of speech-driven facial expression film and television animation.
13
Pauzé A, Plouffe-Demers MP, Fiset D, Saint-Amour D, Cyr C, Blais C. The relationship between orthorexia nervosa symptomatology and body image attitudes and distortion. Sci Rep 2021; 11:13311. PMID: 34172763; PMCID: PMC8233361; DOI: 10.1038/s41598-021-92569-2.
Abstract
Orthorexia Nervosa (ON), a condition characterized by a fixation on healthy eating, still lacks consensus diagnostic criteria, notably with regard to a possible body image component. This study investigated the relationship between ON symptomatology, measured with the Eating Habit Questionnaire, and body image attitudes and body image distortion in a non-clinical sample. Explicit body image attitudes and distortion were measured using the Multidimensional Body-Self Relations Questionnaire. Implicit body image attitudes and distortion were assessed using the reverse correlation technique. Correlational analyses showed that ON is associated with both explicit and implicit attitudes and distortion toward body image. More precisely, multivariate analyses combining various body image components showed that ON is mostly associated with explicit overweight preoccupation, explicit investment in physical health and leading a healthy lifestyle, and implicit muscularity distortion. These findings suggest that ON symptomatology is positively associated with body image attitudes and distortion in a non-clinical sample. However, further studies should be conducted to better understand how ON symptomatology relates to body image, especially among clinical samples.
Affiliation(s)
- Adrianne Pauzé
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Succursale Hull, C.P. 1250, Gatineau, QC, J8X 3X7, Canada
- Marie-Pier Plouffe-Demers
- Département de Psychologie, Université du Québec à Montréal, Succursale Centre-Ville, C.P. 8888, Montreal, QC, H3C 3P8, Canada
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Succursale Hull, C.P. 1250, Gatineau, QC, J8X 3X7, Canada
- Dave Saint-Amour
- Département de Psychologie, Université du Québec à Montréal, Succursale Centre-Ville, C.P. 8888, Montreal, QC, H3C 3P8, Canada
- Caroline Cyr
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Succursale Hull, C.P. 1250, Gatineau, QC, J8X 3X7, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Succursale Hull, C.P. 1250, Gatineau, QC, J8X 3X7, Canada
14
Barton JJS, Davies-Thompson J, Corrow SL. Prosopagnosia and disorders of face processing. Handb Clin Neurol 2021; 178:175-193. PMID: 33832676; DOI: 10.1016/b978-0-12-821377-3.00006-4.
Abstract
Face recognition is a form of expert visual processing. Acquired prosopagnosia is the loss of familiarity for facial identity and has several functional variants, namely apperceptive, amnestic, and associative forms. Acquired forms are usually caused by either occipitotemporal or anterior temporal lesions, right or bilateral in most cases. In addition, there is a developmental form, whose functional and structural origins are still being elucidated. Despite their difficulties with recognizing faces, some of these subjects still show signs of covert recognition, which may have a number of explanations. Other aspects of face perception can be spared in prosopagnosic subjects. Patients with other types of face processing difficulties have been described, including impaired expression processing, impaired lip-reading, false familiarity for faces, and a people-specific amnesia. Recent rehabilitative studies have shown some modest ability to improve face perception in prosopagnosic subjects through perceptual training protocols.
Affiliation(s)
- Jason J S Barton
- Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, and Psychology, University of British Columbia, Vancouver, BC, Canada
- Jodie Davies-Thompson
- Face Research Swansea, Department of Psychology, Swansea University, Sketty, United Kingdom
- Sherryse L Corrow
- Visual Cognition Lab, Department of Psychology, Bethel University, St. Paul, MN, United States
15
Nestor A, Lee ACH, Plaut DC, Behrmann M. The Face of Image Reconstruction: Progress, Pitfalls, Prospects. Trends Cogn Sci 2020; 24:747-759. PMID: 32674958; PMCID: PMC7429291; DOI: 10.1016/j.tics.2020.06.006.
Abstract
Recent research has demonstrated that neural and behavioral data acquired in response to viewing face images can be used to reconstruct the images themselves. However, the theoretical implications, promises, and challenges of this direction of research remain unclear. We evaluate the potential of this research for elucidating the visual representations underlying face recognition. Specifically, we outline complementary and converging accounts of the visual content, the representational structure, and the neural dynamics of face processing. We illustrate how this research addresses fundamental questions in the study of normal and impaired face recognition, and how image reconstruction provides a powerful framework for uncovering face representations, for unifying multiple types of empirical data, and for facilitating both theoretical and methodological progress.
Affiliation(s)
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- David C Plaut
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
16
Skiba RM, Vuilleumier P. Brain Networks Processing Temporal Information in Dynamic Facial Expressions. Cereb Cortex 2020; 30:6021-6038. DOI: 10.1093/cercor/bhaa176.
Abstract
This fMRI study examines the role of local and global motion information in facial movements during exposure to novel dynamic face stimuli. We found that synchronous expressions distinctively engaged medial prefrontal areas in the rostral and caudal sectors of anterior cingulate cortex (r/cACC) extending to inferior supplementary motor areas, as well as motor cortex and bilateral superior frontal gyrus (global temporal-spatial processing). Asynchronous expressions in which one part of the face unfolded before the other activated more the right superior temporal sulcus (STS) and inferior frontal gyrus (local temporal-spatial processing). These differences in temporal dynamics had no effect on visual face-responsive areas. Dynamic causal modeling analysis further showed that processing of asynchronous expression features was associated with a differential information flow, centered on STS, which received direct input from occipital cortex and projected to the amygdala. Moreover, STS and amygdala displayed selective interactions with cACC where the integration of both local and global motion cues could take place. These results provide new evidence for a role of local and global temporal dynamics in emotional expressions, extracted in partly separate brain pathways. Importantly, we show that dynamic expressions with synchronous movement cues may distinctively engage brain areas responsible for motor execution of expressions.
Affiliation(s)
- Rafal M Skiba
- Laboratory for Behavioural Neurology and Imaging of Cognition, Department of Basic Neuroscience, University of Geneva, 1211 Geneva, Switzerland
- Swiss Center for Affective Science, University of Geneva, Campus Biotech, 1202 Geneva, Switzerland
- Patrik Vuilleumier
- Laboratory for Behavioural Neurology and Imaging of Cognition, Department of Basic Neuroscience, University of Geneva, 1211 Geneva, Switzerland
- Swiss Center for Affective Science, University of Geneva, Campus Biotech, 1202 Geneva, Switzerland
17
Stoll C, Rodger H, Lao J, Richoz AR, Pascalis O, Dye M, Caldara R. Quantifying Facial Expression Intensity and Signal Use in Deaf Signers. J Deaf Stud Deaf Educ 2019; 24:346-355. PMID: 31271428; DOI: 10.1093/deafed/enz023.
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
Affiliation(s)
- Chloé Stoll
- Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Laboratory for Investigative Neurophysiology, Centre Hospitalier Universitaire Vaudois and University of Lausanne
- Helen Rodger
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Olivier Pascalis
- Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Matthew Dye
- National Technical Institute for the Deaf, Rochester Institute of Technology
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
18
Plouffe-Demers MP, Fiset D, Saumure C, Duncan J, Blais C. Strategy Shift Toward Lower Spatial Frequencies in the Recognition of Dynamic Facial Expressions of Basic Emotions: When It Moves It Is Different. Front Psychol 2019; 10:1563. PMID: 31379648; PMCID: PMC6650765; DOI: 10.3389/fpsyg.2019.01563.
Abstract
Facial expressions of emotion play a key role in social interactions. While in everyday life their dynamic and transient nature calls for fast processing of the visual information they contain, a majority of studies investigating the visual processes underlying their recognition have focused on their static display. The present study aimed to gain a better understanding of these processes while using more ecological dynamic facial expressions. In two experiments, we directly compared the spatial frequency (SF) tuning during the recognition of static and dynamic facial expressions. Experiment 1 revealed a shift toward lower SFs for dynamic expressions in comparison to static ones. Experiment 2 was designed to verify if changes in SF tuning curves were specific to the presence of emotional information in motion by comparing the SF tuning profiles for static, dynamic, and shuffled dynamic expressions. Results showed a similar shift toward lower SFs for shuffled expressions, suggesting that the difference found between dynamic and static expressions might not be linked to informative motion per se but to the presence of motion regardless of its nature.
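As a concrete illustration of spatial frequency manipulation, and not of the authors' specific paradigm, the sketch below band-pass filters an image in the Fourier domain to isolate coarse or fine SF content. The cutoffs and the synthetic image are illustrative; estimating an SF tuning curve would additionally require relating recognition performance to the bands available on each trial.

```python
# Minimal sketch of isolating a spatial frequency (SF) band from an image with a
# Fourier-domain band-pass filter. The image here is synthetic.
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """Keep only spatial frequencies between low_cpi and high_cpi (cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                 # vertical frequencies in cycles per image
    fx = np.fft.fftfreq(w) * w                 # horizontal frequencies in cycles per image
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

rng = np.random.default_rng(3)
face = rng.normal(size=(256, 256))             # stand-in for a face image
low_sf = bandpass(face, 2, 8)                  # coarse information (e.g., global shape)
high_sf = bandpass(face, 32, 128)              # fine information (e.g., feature detail)
print(low_sf.shape, high_sf.shape)
```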
Affiliation(s)
- Marie-Pier Plouffe-Demers
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Département de Psychologie, Université du Québec à Montréal, Montreal, QC, Canada
- Daniel Fiset
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Camille Saumure
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Justin Duncan
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Département de Psychologie, Université du Québec à Montréal, Montreal, QC, Canada
- Caroline Blais
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
19
Abstract
Reverse correlation is an influential psychophysical paradigm that uses a participant’s responses to randomly varying images to build a classification image (CI), which is commonly interpreted as a visualization of the participant’s mental representation. It is unclear, however, how to statistically quantify the amount of signal present in CIs, which limits the interpretability of these images. In this article, we propose a novel metric, infoVal, which assesses informational value relative to a resampled random distribution and can be interpreted like a z score. In the first part, we define the infoVal metric and show, through simulations, that it adheres to typical Type I error rates under various task conditions (internal validity). In the second part, we show that the metric correlates with markers of data quality in empirical reverse-correlation data, such as the subjective recognizability, objective discriminability, and test–retest reliability of the CIs (convergent validity). In the final part, we demonstrate how the infoVal metric can be used to compare the informational value of reverse-correlation datasets, by comparing data acquired online with data acquired in a controlled lab environment. We recommend a new standard of good practice in which researchers assess the infoVal scores of reverse-correlation data in order to ensure that they do not read signal in CIs where no signal is present. The infoVal metric is implemented in the open-source rcicr R package, to facilitate its adoption.
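The sketch below is a simplified analogue, not the published infoVal formula or the rcicr implementation: it computes a classification image from synthetic reverse-correlation data, summarizes it with its vector norm, and expresses that statistic as a z-like score against a null distribution obtained by shuffling responses, mirroring the idea of informational value relative to a resampled random distribution.

```python
# Minimal sketch of a z-like "informational value" for a reverse-correlation
# classification image (CI). Noise stimuli, template, and responses are synthetic;
# this is not the published infoVal metric.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_pixels = 500, 1024
noise = rng.normal(size=(n_trials, n_pixels))            # random noise shown on each trial
template = np.zeros(n_pixels)
template[:50] = 0.4                                       # hypothetical internal template
responses = np.where(noise @ template + rng.normal(size=n_trials) > 0, 1, -1)

def classification_image(noise, responses):
    # Average noise on "yes" trials minus average noise on "no" trials.
    return noise[responses == 1].mean(axis=0) - noise[responses == -1].mean(axis=0)

observed = np.linalg.norm(classification_image(noise, responses))

# Null distribution: recompute the CI statistic after shuffling the responses.
null = np.array([
    np.linalg.norm(classification_image(noise, rng.permutation(responses)))
    for _ in range(1000)
])
info_val = (observed - null.mean()) / null.std()
print(f"z-like informational value: {info_val:.2f}")
```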
20
Bomfim AJDL, Ribeiro RADS, Chagas MHN. Recognition of dynamic and static facial expressions of emotion among older adults with major depression. Trends Psychiatry Psychother 2019; 41:159-166. PMID: 30942267; DOI: 10.1590/2237-6089-2018-0054.
Abstract
INTRODUCTION The recognition of facial expressions of emotion is essential to living in society. However, individuals with major depression tend to interpret information considered imprecise in a negative light, which can exert a direct effect on their capacity to decode social stimuli. OBJECTIVE To compare basic facial expression recognition skills during tasks with static and dynamic stimuli in older adults with and without major depression. METHODS Older adults were selected through a screening process for psychiatric disorders at a primary care service. Psychiatric evaluations were performed using criteria from the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Twenty-three adults with a diagnosis of depression and 23 older adults without a psychiatric diagnosis were asked to perform two facial emotion recognition tasks using static and dynamic stimuli. RESULTS Individuals with major depression demonstrated greater accuracy in recognizing sadness (p=0.023) and anger (p=0.024) during the task with static stimuli and less accuracy in recognizing happiness during the task with dynamic stimuli (p=0.020). The impairment was mainly related to the recognition of emotions of lower intensity. CONCLUSIONS The performance of older adults with depression in facial expression recognition tasks with static and dynamic stimuli differs from that of older adults without depression, with greater accuracy regarding negative emotions (sadness and anger) and lower accuracy regarding the recognition of happiness.
Affiliation(s)
- Marcos Hortes Nisihara Chagas
- Departamento de Psicologia, Universidade Federal de São Carlos, São Carlos, SP, Brazil; Departamento de Gerontologia, Universidade Federal de São Carlos, São Carlos, SP, Brazil
21
Ramon M. The power of how - lessons learned from neuropsychology and face processing. Cogn Neuropsychol 2019; 35:83-86. PMID: 29658421; DOI: 10.1080/02643294.2017.1414777.
Affiliation(s)
- Meike Ramon
- Visual and Social Neuroscience, Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
22
Richoz AR, Lao J, Pascalis O, Caldara R. Tracking the recognition of static and dynamic facial expressions of emotion across the life span. J Vis 2018; 18:5. PMID: 30208425; DOI: 10.1167/18.9.5.
Abstract
The effective transmission and decoding of dynamic facial expressions of emotion is omnipresent and critical for adapted social interactions in everyday life. Thus, common intuition would suggest an advantage for dynamic facial expression recognition (FER) over the static snapshots routinely used in most experiments. However, although many studies reported an advantage in the recognition of dynamic over static expressions in clinical populations, results obtained from healthy participants are contrasted. To clarify this issue, we conducted a large cross-sectional study to investigate FER across the life span in order to determine if age is a critical factor to account for such discrepancies. More than 400 observers (age range 5-96) performed recognition tasks of the six basic expressions in static, dynamic, and shuffled (temporally randomized frames) conditions, normalized for the amount of energy sampled over time. We applied a Bayesian hierarchical step-linear model to capture the nonlinear relationship between age and FER for the different viewing conditions. Although replicating the typical accuracy profiles of FER, we determined the age at which peak efficiency was reached for each expression and found greater accuracy for most dynamic expressions across the life span. This advantage in the elderly population was driven by a significant decrease in performance for static images, which was twice as large as for the young adults. Our data posit the use of dynamic stimuli as being critical in the assessment of FER in the elderly population, inviting caution when drawing conclusions from the sole use of static face images to this aim.
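To illustrate what a "step-linear" age trajectory means in practice, the sketch below fits a simple two-breakpoint piecewise-linear curve to synthetic accuracy-by-age data with ordinary least squares. It is a deliberately non-Bayesian, non-hierarchical stand-in for the authors' model, and every number in it is invented.

```python
# Minimal sketch: fit a smooth two-breakpoint ("step-linear") curve to recognition
# accuracy as a function of age. Softplus hinges keep the fit well-behaved.
import numpy as np
from scipy.optimize import curve_fit

def step_linear(age, b1, b2, rise, fall, plateau, sharpness=0.5):
    """Accuracy rises until about b1, plateaus, then declines after about b2."""
    ramp_up = np.logaddexp(0.0, sharpness * (b1 - age)) / sharpness
    ramp_down = np.logaddexp(0.0, sharpness * (age - b2)) / sharpness
    return plateau - rise * ramp_up - fall * ramp_down

rng = np.random.default_rng(5)
age = rng.uniform(5, 96, size=400)                        # hypothetical observer ages
true_acc = step_linear(age, 20, 60, 0.015, 0.004, 0.85)   # invented ground-truth curve
accuracy = np.clip(true_acc + rng.normal(0, 0.05, size=age.size), 0, 1)

params, _ = curve_fit(step_linear, age, accuracy, p0=[25, 55, 0.01, 0.005, 0.8])
b1, b2, plateau = params[0], params[1], params[4]
print(f"estimated breakpoints: {b1:.1f} and {b2:.1f} years; plateau accuracy: {plateau:.2f}")
```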
Affiliation(s)
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland; LPNC, University of Grenoble Alpes, Grenoble, France
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
23
Dobs K, Bülthoff I, Schultz J. Use and Usefulness of Dynamic Face Stimuli for Face Perception Studies - a Review of Behavioral Findings and Methodology. Front Psychol 2018; 9:1355. PMID: 30123162; PMCID: PMC6085596; DOI: 10.3389/fpsyg.2018.01355.
Abstract
Faces that move contain rich information about facial form, such as facial features and their configuration, alongside the motion of those features. During social interactions, humans constantly decode and integrate these cues. To fully understand human face perception, it is important to investigate what information dynamic faces convey and how the human visual system extracts and processes information from this visual input. However, partly due to the difficulty of designing well-controlled dynamic face stimuli, many face perception studies still rely on static faces as stimuli. Here, we focus on evidence demonstrating the usefulness of dynamic faces as stimuli, and evaluate different types of dynamic face stimuli to study face perception. Studies based on dynamic face stimuli revealed a high sensitivity of the human visual system to natural facial motion and consistently reported dynamic advantages when static face information is insufficient for the task. These findings support the hypothesis that the human perceptual system integrates sensory cues for robust perception. In the present paper, we review the different types of dynamic face stimuli used in these studies, and assess their usefulness for several research questions. Natural videos of faces are ecological stimuli but provide limited control of facial form and motion. Point-light faces allow for good control of facial motion but are highly unnatural. Image-based morphing is a way to achieve control over facial motion while preserving the natural facial form. Synthetic facial animations allow separation of facial form and motion to study aspects such as identity-from-motion. While synthetic faces are less natural than videos of faces, recent advances in photo-realistic rendering may close this gap and provide naturalistic stimuli with full control over facial motion. We believe that many open questions, such as what dynamic advantages exist beyond emotion and identity recognition and which dynamic aspects drive these advantages, can be addressed adequately with different types of stimuli and will improve our understanding of face perception in more ecological settings.
Affiliation(s)
- Katharina Dobs
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States; Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Isabelle Bülthoff
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Johannes Schultz
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
24
Ramon M, Sokhn N, Lao J, Caldara R. Decisional space determines saccadic reaction times in healthy observers and acquired prosopagnosia. Cogn Neuropsychol 2018; 35:304-313. PMID: 29749293; DOI: 10.1080/02643294.2018.1469482.
Abstract
Determining a face's familiarity and determining its identity have been considered independent processes. Covert face recognition in cases of acquired prosopagnosia, as well as rapid detection of familiarity, have been taken to support this view. We tested P.S., a well-described case of acquired prosopagnosia, and two healthy controls (her sister and daughter) in two saccadic reaction time (SRT) experiments. Stimuli depicted their family members and well-matched unfamiliar distractors in the context of binary gender or familiarity decisions. Observers' minimum SRTs were estimated with Bayesian approaches. For gender decisions, P.S. and her daughter achieved sufficient performance, but displayed different SRT distributions. For familiarity decisions, her daughter exhibited above chance level performance and minimum SRTs corresponding to those reported previously in healthy observers, while P.S. performed at chance. These findings extend previous observations, indicating that decisional space determines performance in both the intact and impaired face processing system.
Collapse
Affiliation(s)
- Meike Ramon
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| |
Collapse
|
25
|
Fiset D, Blais C, Royer J, Richoz AR, Dugas G, Caldara R. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia. Soc Cogn Affect Neurosci 2018; 12:1334-1341. [PMID: 28459990 PMCID: PMC5597863 DOI: 10.1093/scan/nsx068] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 04/23/2017] [Indexed: 12/01/2022] Open
Abstract
Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated to facial expression recognition. PS used mostly the mouth to recognize facial expressions, even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed performance comparable to that of PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images.
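Mapping which facial information an observer uses, as described above, is commonly done by sampling the face through randomly placed Gaussian apertures (a "Bubbles"-style procedure). The sketch below illustrates that general technique, assuming a grayscale face image held in a NumPy array; it is not the study's actual stimulus code, and all names are illustrative.

```python
import numpy as np

def bubble_mask(shape, n_bubbles=10, sigma=12.0, seed=None):
    """Random mask of Gaussian apertures used to sample facial information."""
    rng = np.random.default_rng(seed)
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, float)
    for cy, cx in zip(rng.integers(0, h, n_bubbles), rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def bubble_stimulus(face, background=0.5, **mask_kwargs):
    """Reveal the face only through the apertures; show the background elsewhere."""
    mask = bubble_mask(face.shape, **mask_kwargs)
    return mask * face + (1.0 - mask) * background

# Hypothetical usage: stim = bubble_stimulus(face_image / 255.0, n_bubbles=15)
```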
Collapse
Affiliation(s)
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
| | - Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
| | - Jessica Royer
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
| | - Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Gabrielle Dugas
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Canada
- Centre de Recherche en Neuropsychologie et Cognition, Montréal, Canada
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| |
Collapse
|
26
|
Turano MT, Lao J, Richoz AR, de Lissa P, Degosciu SBA, Viggiano MP, Caldara R. Fear boosts the early neural coding of faces. Soc Cogn Affect Neurosci 2017; 12:1959-1971. [PMID: 29040780 PMCID: PMC5716185 DOI: 10.1093/scan/nsx110] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2016] [Revised: 09/18/2017] [Accepted: 10/02/2017] [Indexed: 11/14/2022] Open
Abstract
The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials to which participants had to react, by varying identity (identity-task), expression (expression-task) or both (dual-task) on the target face. We extracted single-trial Repetition Suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
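The study's analysis relied on a data-driven spatiotemporal approach with a robust hierarchical linear model, which is beyond the scope of a short example. As a rough sketch of the core idea only, the code below assumes per-trial N170 amplitudes have already been extracted for adaptor and target faces, computes single-trial repetition suppression as their difference, and fits a simple robust regression across expression conditions; names and model choices are illustrative, not the authors' pipeline.

```python
import numpy as np
import statsmodels.api as sm

def single_trial_rs(adaptor_amp, target_amp, expression_dummies):
    """Toy single-trial repetition-suppression analysis.

    adaptor_amp, target_amp : per-trial N170 amplitudes (in microvolts) at a
                              chosen electrode and time window
    expression_dummies      : (n_trials, n_expressions) design matrix coding the
                              facial expression shown on each trial
    """
    rs = np.asarray(target_amp, float) - np.asarray(adaptor_amp, float)
    X = sm.add_constant(np.asarray(expression_dummies, float))
    # Robust (Huber) regression of per-trial suppression on expression.
    return sm.RLM(rs, X, M=sm.robust.norms.HuberT()).fit()
```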
Collapse
Affiliation(s)
- Maria Teresa Turano
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
| | - Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Peter de Lissa
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Sarah B A Degosciu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Maria Pia Viggiano
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| |
Collapse
|
27
|
Brinkman L, Todorov A, Dotsch R. Visualising mental representations: A primer on noise-based reverse correlation in social psychology. EUROPEAN REVIEW OF SOCIAL PSYCHOLOGY 2017. [DOI: 10.1080/10463283.2017.1381469] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
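Noise-based reverse correlation, the technique this primer covers, can be summarized in a few lines: random noise is superimposed on a base face on every trial, observers pick the version that better matches a category (e.g., "trustworthy"), and averaging the noise by response visualizes the observer's mental representation. The sketch below is a generic illustration under those assumptions, not the primer's own code.

```python
import numpy as np

def classification_image(base_face, noise_patterns, chose_noisy_version):
    """Noise-based reverse correlation in its simplest form.

    base_face           : 2-D array, the base image shown on every trial
    noise_patterns      : (n_trials, H, W) array of random noise added to the base
    chose_noisy_version : boolean array, True when the observer judged the
                          base-plus-noise image as matching the target category

    The classification image is the mean noise on chosen trials minus the mean
    noise on rejected trials; added to the base face, it visualizes the
    observer's mental representation of the category.
    """
    noise = np.asarray(noise_patterns, float)
    chosen = np.asarray(chose_noisy_version, bool)
    ci = noise[chosen].mean(axis=0) - noise[~chosen].mean(axis=0)
    return base_face + ci
```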
Affiliation(s)
- L. Brinkman
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
| | - A. Todorov
- Department of Psychology, Princeton University, Princeton, New Jersey, USA
| | - R. Dotsch
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
| |
Collapse
|
28
|
Abstract
Faces are one of the most important means of communication in humans. For example, a short glance at a person's face provides information on identity and emotional state. What are the computations the brain uses to solve these problems so accurately and seemingly effortlessly? This article summarizes current research on computational modeling, a technique used to answer this question. Specifically, my research studies the hypothesis that this algorithm is tasked with solving the inverse problem of production, that is, of image formation. For example, to recognize identity, our brain needs to identify shape and shading image features that are invariant to facial expression, pose and illumination. Similarly, to recognize emotion, the brain needs to identify shape and shading features that are invariant to identity, pose and illumination. If one defines the physics equations that render an image under different identities, expressions, poses and illuminations, then gaining invariance to these factors is readily resolved by computing the inverse of this rendering function. I describe our current understanding of the algorithms used by our brains to resolve this inverse problem. I also discuss how these results are driving research in computer vision to design computer systems that are as accurate, robust and efficient as humans.
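The inverse-problem logic described in this abstract can be illustrated with a deliberately simplified linear "rendering" model: if images were generated linearly from an identity code and an expression code, an identity representation invariant to expression could be recovered by inverting the rendering operator. Real image formation is nonlinear, so the toy example below only demonstrates the logic, not the algorithms the article actually discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "rendering" model: an image is produced from an identity code and
# an expression code. Real face image formation (shape, shading, pose,
# illumination) is nonlinear; this only illustrates the inverse-problem logic.
n_pixels, n_id, n_expr = 200, 5, 3
A = rng.normal(size=(n_pixels, n_id))    # identity basis
B = rng.normal(size=(n_pixels, n_expr))  # expression basis

identity = rng.normal(size=n_id)
expression = rng.normal(size=n_expr)
image = A @ identity + B @ expression    # forward ("production") model

# Inverse problem: recover the identity code, invariant to expression,
# by inverting the combined rendering operator with least squares.
M = np.hstack([A, B])
latent, *_ = np.linalg.lstsq(M, image, rcond=None)
recovered_identity = latent[:n_id]

print(np.allclose(recovered_identity, identity))  # True in this noiseless toy case
```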
Collapse
|
29
|
Ramon M, Busigny T, Gosselin F, Rossion B. All new kids on the block? Impaired holistic processing of personally familiar faces in a kindergarten teacher with acquired prosopagnosia. VISUAL COGNITION 2017. [DOI: 10.1080/13506285.2016.1273985] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Affiliation(s)
- Meike Ramon
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Thomas Busigny
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
| | - Frederic Gosselin
- Département de Psychologie, Université de Montréal, Montreal, Canada
| | - Bruno Rossion
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-La-Neuve, Belgium
| |
Collapse
|
30
|
Ruffieux N, Ramon M, Lao J, Colombo F, Stacchi L, Borruat FX, Accolla E, Annoni JM, Caldara R. Residual perception of biological motion in cortical blindness. Neuropsychologia 2016; 93:301-311. [DOI: 10.1016/j.neuropsychologia.2016.11.009] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Revised: 10/19/2016] [Accepted: 11/09/2016] [Indexed: 11/25/2022]
|
31
|
Ramon M, Miellet S, Dzieciol AM, Konrad BN, Dresler M, Caldara R. Super-Memorizers Are Not Super-Recognizers. PLoS One 2016; 11:e0150972. [PMID: 27008627 PMCID: PMC4805230 DOI: 10.1371/journal.pone.0150972] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Accepted: 02/22/2016] [Indexed: 11/18/2022] Open
Abstract
Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory remains unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super-memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory.
Collapse
Affiliation(s)
- Meike Ramon
- University of Fribourg, Department of Psychology, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
| | - Sebastien Miellet
- Bournemouth University, Department of Psychology, Talbot Campus, BH12 5BB, Poole, United Kingdom
| | - Anna M. Dzieciol
- Cognitive Neuroscience and Neuropsychiatry Section, UCL Institute of Child Health, 30 Guilford Street, WC1N 1EH, London, United Kingdom
| | - Boris Nikolai Konrad
- Radboud University Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
| | - Martin Dresler
- Radboud University Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
- Max Planck Institute of Psychiatry, Kraepelinstr. 2–10, 80804 Munich, Germany
| | - Roberto Caldara
- University of Fribourg, Department of Psychology, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
| |
Collapse
|
32
|
Reinl M, Bartels A. Perception of temporal asymmetries in dynamic facial expressions. Front Psychol 2015; 6:1107. [PMID: 26300807 PMCID: PMC4523710 DOI: 10.3389/fpsyg.2015.01107] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 07/20/2015] [Indexed: 11/13/2022] Open
Abstract
In the current study we examined whether timeline reversals and the emotional direction of dynamic facial expressions affect the subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, each following either the natural or the reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for low-level visual properties, static visual content, and motion energy across the different factors. It allowed us to examine perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion is not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of emotion portrayal, were affected by timeline reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are relevant for both behavioral and neuroimaging studies, as processing and perception are influenced by temporal asymmetries.
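The 2-by-2 design described above can be made concrete with a small sketch: given recorded clips of an increasing and a decreasing expression (as arrays of frames), reversing the frame order of each clip yields the two remaining cells while keeping static content and motion energy matched across conditions. This is a generic illustration of the design, not the authors' stimulus code.

```python
import numpy as np

def build_conditions(recorded_increase, recorded_decrease):
    """2-by-2 design: emotional direction (increase vs decrease of fear) by
    timeline (natural vs reversed frame order).

    Reversing the frame order of a recorded increase yields a decreasing
    expression with a reversed timeline, and vice versa, so each cell uses
    exactly the same frames as one other cell.
    """
    inc = np.asarray(recorded_increase)  # (n_frames, H, W)
    dec = np.asarray(recorded_decrease)
    return {
        ("increase", "natural"):  inc,
        ("decrease", "reversed"): inc[::-1],
        ("decrease", "natural"):  dec,
        ("increase", "reversed"): dec[::-1],
    }
```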
Collapse
Affiliation(s)
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
| |
Collapse
|
33
|
Bate S, Bennetts R. The independence of expression and identity in face-processing: evidence from neuropsychological case studies. Front Psychol 2015; 6:770. [PMID: 26106348 PMCID: PMC4460300 DOI: 10.3389/fpsyg.2015.00770] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Accepted: 05/22/2015] [Indexed: 11/13/2022] Open
Abstract
The processing of facial identity and of facial expression has traditionally been seen as independent, a hypothesis that has largely been informed by a key double dissociation between neurological patients with a deficit in facial identity recognition but not facial expression recognition, and those with the reverse pattern of impairment. The independence hypothesis is also reflected in more recent anatomical models of face processing, although these theories permit some interaction between the two processes. Given that much of the traditional patient-based evidence has been criticized, a review of more recent case reports that are accompanied by neuroimaging data is timely. Further, the performance of individuals with developmental face-processing deficits has recently been considered with regard to the independence debate. This paper reviews evidence from both acquired and developmental disorders, identifying methodological and theoretical strengths and caveats in these reports, and highlighting pertinent avenues for future research.
Collapse
Affiliation(s)
- Sarah Bate
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
| | - Rachel Bennetts
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
| |
Collapse
|
34
|
Maguinness C, Newell FN. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia. Neuropsychologia 2015; 70:281-95. [PMID: 25737056 DOI: 10.1016/j.neuropsychologia.2015.02.038] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2014] [Revised: 02/11/2015] [Accepted: 02/27/2015] [Indexed: 11/30/2022]
Abstract
There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.
Collapse
Affiliation(s)
- Corrina Maguinness
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
| | - Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
| |
Collapse
|