1
Martin ER, Hays JS, Soto FA. Face shape and motion are perceptually separable: Support for a revised model of face processing. Psychon Bull Rev 2024; 31:2160-2169. [PMID: 38381300] [DOI: 10.3758/s13423-024-02470-y]
Abstract
A recent model of face processing proposes that face shape and motion are processed in parallel brain pathways. Although this model has been examined with neuroimaging, its assumptions had remained largely untested in controlled psychophysical studies. Recruiting undergraduate students over the age of 18, we tested this hypothesis with tight control of stimulus factors, through computerized three-dimensional face models and calibration of dimensional discriminability, and of decisional factors, through a model-based analysis using general recognition theory (GRT). Theoretical links between neural and perceptual forms of independence within GRT allowed us to derive the a priori hypotheses that perceptual separability of shape and motion should hold, whereas other forms of independence defined within GRT might fail. We found evidence supporting both predictions.
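A minimal sketch, not the authors' analysis code, of what perceptual separability means in a Gaussian GRT model: each stimulus in a 2x2 shape-by-motion design is modeled as a bivariate normal perceptual distribution, and shape is perceptually separable from motion when the marginal distributions along the shape dimension are identical across motion levels. Stimulus labels and parameter values below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 2x2 GRT model: perceptual means (shape axis, motion axis) for each
# shape x motion stimulus, with unit-variance, uncorrelated perceptual noise.
means = {
    ("shapeA", "motionA"): np.array([0.0, 0.0]),
    ("shapeA", "motionB"): np.array([0.0, 1.2]),
    ("shapeB", "motionA"): np.array([1.5, 0.0]),
    ("shapeB", "motionB"): np.array([1.5, 1.2]),
}
cov = np.eye(2)  # perceptual independence: zero noise correlation within each stimulus

def marginal_along_shape(shape, motion):
    """Marginal (mean, sd) of the perceptual distribution along the shape axis."""
    mu = means[(shape, motion)]
    return mu[0], np.sqrt(cov[0, 0])

# Perceptual separability of shape from motion: the shape marginals do not
# change when the motion level changes.
for shape in ("shapeA", "shapeB"):
    m_a = marginal_along_shape(shape, "motionA")
    m_b = marginal_along_shape(shape, "motionB")
    print(shape, "separable from motion:", np.allclose(m_a, m_b))
```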
Affiliation(s)
- Emily Renae Martin
- Department of Psychology, Florida International University, Miami, FL, USA.
- Jason S Hays
- Department of Psychology, Florida International University, Miami, FL, USA.
- Fabian A Soto
- Department of Psychology, Florida International University, Miami, FL, USA.
2
Soto FA, Beevers CG. Perceptual Observer Modeling Reveals Likely Mechanisms of Face Expression Recognition Deficits in Depression. Biol Psychiatry Cogn Neurosci Neuroimaging 2024; 9:597-605. [PMID: 38336169] [DOI: 10.1016/j.bpsc.2024.01.011]
Abstract
BACKGROUND: Deficits in face emotion recognition are well documented in depression, but the underlying mechanisms are poorly understood. Psychophysical observer models provide a way to precisely characterize such mechanisms. Using model-based analyses, we tested 2 hypotheses about how depression may reduce sensitivity to detect face emotion: 1) via a change in selectivity for visual information diagnostic of emotion or 2) via a change in signal-to-noise ratio in the system performing emotion detection. METHODS: Sixty adults, one half meeting criteria for major depressive disorder and the other half healthy control participants, identified sadness and happiness in noisy face stimuli, and their responses were used to estimate templates encoding the visual information used for emotion identification. We analyzed these templates using traditional and model-based analyses; in the latter, the match between templates and stimuli, representing sensory evidence for the information encoded in the template, was compared against behavioral data. RESULTS: Estimated happiness templates produced sensory evidence that was less strongly correlated with response times in participants with depression than in control participants, suggesting that depression was associated with a reduced signal-to-noise ratio in the detection of happiness. The opposite result was found for the detection of sadness. We found little evidence that depression was accompanied by changes in selectivity (i.e., the information used to detect emotion), but depression was associated with a stronger influence of face identity on selectivity. CONCLUSIONS: Depression is more strongly associated with changes in signal-to-noise ratio during emotion recognition, suggesting that deficits in emotion detection are driven primarily by degraded signal quality rather than by suboptimal sampling of the information used to detect emotion.
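A hedged sketch of the kind of model-based analysis described above: a linear template is estimated from noisy-stimulus trials (reverse correlation), the template-stimulus match on each trial serves as sensory evidence, and the strength of its correlation with response times indexes signal-to-noise ratio. All names and simulated data below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pixels = 500, 64 * 64

# Simulated experiment: noise fields shown as stimuli, with binary "happy"
# responses generated by an unknown observer template plus internal noise.
true_template = rng.normal(size=n_pixels)
stimuli = rng.normal(size=(n_trials, n_pixels))
evidence_true = stimuli @ true_template
responses = (evidence_true + rng.normal(scale=20, size=n_trials) > 0).astype(int)

# Reverse correlation: estimated template = mean stimulus on "yes" trials
# minus mean stimulus on "no" trials.
template_hat = stimuli[responses == 1].mean(axis=0) - stimuli[responses == 0].mean(axis=0)

# Trial-by-trial sensory evidence = match between estimated template and stimulus.
evidence_hat = stimuli @ template_hat

# Simulated response times that shorten as evidence grows; the evidence-RT
# correlation is the signal-to-noise index used in this style of analysis.
rts = 1.0 - 0.002 * evidence_true + rng.normal(scale=0.1, size=n_trials)
r = np.corrcoef(evidence_hat, rts)[0, 1]
print(f"correlation between template evidence and RT: {r:.2f}")
```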
Affiliation(s)
- Fabian A Soto
- Department of Psychology, Florida International University, Miami, Florida
3
Krumpholz C, Quigley C, Fusani L, Leder H. Vienna Talking Faces (ViTaFa): A multimodal person database with synchronized videos, images, and voices. Behav Res Methods 2024; 56:2923-2940. [PMID: 37950115] [PMCID: PMC11133183] [DOI: 10.3758/s13428-023-02264-5]
Abstract
Social perception relies on different sensory channels, including vision and audition, which are especially important for judgements of appearance. Therefore, to understand multimodal integration in person perception, it is important to study face and voice together in synchronized form. We introduce the Vienna Talking Faces (ViTaFa) database, a high-quality audiovisual database for multimodal research on social perception. ViTaFa includes different stimulus modalities: audiovisual dynamic, visual dynamic, visual static, and auditory dynamic. Stimuli were recorded and edited under highly standardized conditions and were collected from 40 real individuals; the sample matches typical student samples in psychological research (young individuals aged 18 to 45). Stimuli include sequences of various types of spoken content from each person, including German sentences, words, reading passages, vowels, and language-unrelated pseudo-words. Recordings were made with different emotional expressions (neutral, happy, angry, sad, and flirtatious). ViTaFa is freely accessible for academic non-profit research after signing a confidentiality agreement form via https://osf.io/9jtzx/, and it stands out from other databases due to its multimodal format, high quality, and comprehensive quantification of stimulus features and human judgements related to attractiveness. Additionally, over 200 human raters validated the emotional expression of the stimuli. In summary, ViTaFa provides a valuable resource for investigating audiovisual signals in social perception.
Affiliation(s)
- Christina Krumpholz
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Austria
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Cliodhna Quigley
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Leonida Fusani
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Austria
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Helmut Leder
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
4
Yang S, Enkhzaya G, Zhu BH, Chen J, Wang ZJ, Kim ES, Kim NY. High-Definition Transcranial Direct Current Stimulation in the Right Ventrolateral Prefrontal Cortex Lengthens Sustained Attention in Virtual Reality. Bioengineering (Basel) 2023; 10:721. [PMID: 37370652] [DOI: 10.3390/bioengineering10060721]
Abstract
Due to current limitations of three-dimensional (3D) simulation graphics technology, mind wandering commonly occurs during virtual reality tasks, which has impeded their wider application. The right ventrolateral prefrontal cortex (rVLPFC) plays a vital role in executing continuous two-dimensional (2D) mental paradigms, and transcranial direct current stimulation (tDCS) over this cortical region has been shown to successfully modulate sustained 2D attention. Accordingly, we further explored the effects of electrical activation of the rVLPFC on 3D attentional tasks using anodal high-definition (HD)-tDCS. A 3D Go/No-go (GNG) task was developed to compare the aftereffects of real and sham brain stimulation. Specifically, GNG tasks were periodically interrupted to assess the subjective perception of attentional level, behavioral reactions were tracked and decomposed into an underlying decision cognition process, and electroencephalography data were recorded to calculate event-related potentials (ERPs) over the rVLPFC. Statistical analysis indicated that HD-tDCS improved subjective mentality, led to more cautious decisions, and enhanced neuronal discharging over the rVLPFC. Additionally, the P300 ERP component and whether stimulation was active or sham effectively predicted several objective outcomes. These findings indicate that this comprehensive approach, combining brain stimulation, a 3D mental paradigm, and cross-examined performance measures, can significantly lengthen sustained 3D attention and allow it to be compared robustly.
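As a rough illustration of one analysis step mentioned above, the sketch below averages simulated stimulus-locked EEG epochs and reads out a P300-like amplitude in a post-stimulus window; the sampling rate, epoch limits, and 300-500 ms window are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch: -200 to 800 ms around stimulus onset

# Simulated single-trial epochs (trials x samples): a P300-like positivity
# peaking near 400 ms plus background EEG noise.
n_trials = 120
p300_shape = 5e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
trials = p300_shape + rng.normal(scale=10e-6, size=(n_trials, t.size))

# ERP = average over trials; P300 amplitude = mean voltage in a 300-500 ms window.
erp = trials.mean(axis=0)
window = (t >= 0.3) & (t <= 0.5)
p300_amplitude = erp[window].mean()
print(f"P300 amplitude: {p300_amplitude * 1e6:.2f} uV")
```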
Affiliation(s)
- Shan Yang
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- NDAC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Ganbold Enkhzaya
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- NDAC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Bao-Hua Zhu
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Jian Chen
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Zhi-Ji Wang
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Department of Pediatrics, Severance Children's Hospital, Yonsei University, Seoul 03722, Republic of Korea
- Eun-Seong Kim
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- Nam-Young Kim
- RFIC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
- NDAC Center, Department of Electronic Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
5
Soto FA, Stewart RA, Hosseini S, Hays J, Beevers CG. A computational account of the mechanisms underlying face perception biases in depression. J Abnorm Psychol 2021; 130:443-454. [PMID: 34472882] [DOI: 10.1037/abn0000681]
Abstract
Here, we take a computational approach to understand the mechanisms underlying face perception biases in depression. Thirty participants diagnosed with major depressive disorder and 30 healthy control participants took part in three studies involving recognition of identity and emotion in faces. We used signal detection theory to determine whether any perceptual biases exist in depression aside from decisional biases. We found lower sensitivity to happiness in general, and lower sensitivity to both happiness and sadness with ambiguous stimuli. Our use of highly controlled face stimuli ensures that this asymmetry is truly perceptual in nature, rather than the result of studying expressions with inherently different discriminability. We found no systematic effect of depression on the perceptual interactions between face expression and identity. We also found that the decisional strategies used in our task differed between people with depression and controls, but in a way that was highly specific to the stimulus set presented. We show through simulation that the observed perceptual effects, as well as other biases reported in the literature, can be explained by a computational model in which channels encoding positive expressions are selectively suppressed.
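A small sketch of the signal detection computation implied here: sensitivity (d') and decision criterion (c) are estimated separately from hit and false-alarm rates in an emotion-detection task, so perceptual and decisional biases can be dissociated. The trial counts below are made up for illustration.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c from trial counts, with a standard log-linear
    correction (add 0.5 per cell) to avoid infinite z-scores at rates of 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    d_prime = z_hr - z_far                 # perceptual sensitivity
    criterion = -0.5 * (z_hr + z_far)      # decisional bias
    return d_prime, criterion

# Hypothetical counts for detecting happiness in ambiguous faces.
d_prime, criterion = sdt_measures(hits=70, misses=30, false_alarms=20, correct_rejections=80)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```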
6
Soto FA, Escobar K, Salan J. Adaptation aftereffects reveal how categorization training changes the encoding of face identity. J Vis 2020; 20:18. [PMID: 33064122] [PMCID: PMC7571276] [DOI: 10.1167/jov.20.10.18]
Abstract
Previous research suggests that learning to categorize faces along a novel dimension changes the perceptual representation of that dimension, increasing its discriminability, its invariance, and the information used to identify faces varying along it. A common interpretation of these results is that categorization training promotes the creation of novel dimensions, rather than simply the enhancement of already existing representations. Here, we trained a group of participants to categorize faces that varied along two morphing dimensions, one relevant to the categorization task and the other irrelevant to it. An untrained group did not receive such categorization training. In three experiments, we used face adaptation aftereffects to explore how categorization training changes the encoding of face identities at the extremes of the category-relevant dimension, and whether such training produces encoding of the category-relevant dimension as a preferred direction in face space. The pattern of results suggests that categorization training enhances the already existing norm-based coding of face identity, rather than creating novel category-relevant representations. We formalized this conclusion in a model that explains the most important results of our experiments and serves as a working hypothesis for future work in this area.
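A minimal sketch of norm-based (opponent) coding and how adaptation produces identity aftereffects, in the spirit of the model described; the two-channel readout, gain values, and stimulus values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def perceived_identity(stim, gain_pos=1.0, gain_neg=1.0):
    """Norm-based two-pool code along one identity dimension.

    stim: signed distance of a face from the average face (the norm).
    Two opponent channels respond on either side of the norm; perceived
    identity is read out from their gain-weighted difference.
    """
    pos = gain_pos * np.maximum(stim, 0)   # channel tuned to the identity-A side
    neg = gain_neg * np.maximum(-stim, 0)  # channel tuned to the identity-B side
    return pos - neg

test_face = 0.2                            # a face slightly toward identity A

baseline = perceived_identity(test_face)
# Adapting to an extreme identity-A face reduces the gain of the A-side channel,
# so the same test face yields a weaker identity-A percept (shifted toward
# identity B relative to baseline): the adaptation aftereffect.
adapted = perceived_identity(test_face, gain_pos=0.6)

print(f"baseline percept: {baseline:+.2f}, after adapting to identity A: {adapted:+.2f}")
```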
Affiliation(s)
- Fabian A Soto
- Florida International University, Department of Psychology, Miami, FL, USA
- Karla Escobar
- Florida International University, Department of Psychology, Miami, FL, USA