1
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
2
Lee M, Lori A, Langford NA, Rilling JK. The neural basis of smile authenticity judgments and the potential modulatory role of the oxytocin receptor gene (OXTR). Behav Brain Res 2023; 437:114144. [PMID: 36216140] [DOI: 10.1016/j.bbr.2022.114144]
Abstract
Accurate perception of genuine vs. posed smiles is crucial for successful social navigation in humans. While people vary in their ability to assess the authenticity of smiles, little is known about the specific biological mechanisms underlying this variation. We investigated the neural substrates of smile authenticity judgments using functional magnetic resonance imaging (fMRI). We also tested a preliminary hypothesis that a common polymorphism in the oxytocin receptor gene (OXTR) rs53576 would modulate the behavioral and neural indices of accurate smile authenticity judgments. A total of 185 healthy adult participants (Neuroimaging arm: N = 44, Behavioral arm: N = 141) determined the authenticity of dynamic facial expressions of genuine and posed smiles either with or without fMRI scanning. Correctly identified genuine vs. posed smiles activated brain areas involved with reward processing, facial mimicry, and mentalizing. Activation within the inferior frontal gyrus and dorsomedial prefrontal cortex correlated with individual differences in sensitivity (d') and response criterion (C), respectively. Our exploratory genetic analysis revealed that rs53576 G homozygotes in the neuroimaging arm had a stronger tendency to judge posed smiles as genuine than did A allele carriers and showed decreased activation in the medial prefrontal cortex when viewing genuine vs. posed smiles. Yet, OXTR rs53576 did not modulate task performance in the behavioral arm, which calls for further studies to evaluate the legitimacy of this result. Our findings extend previous literature on the biological foundations of smile authenticity judgments, particularly emphasizing the involvement of brain regions implicated in reward, facial mimicry, and mentalizing.
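The sensitivity (d') and response criterion (C) indices reported in this study are standard signal detection measures. As an illustration only — the response counts below are invented, not the study's data — they can be computed from yes/no ("genuine" vs. "posed") responses like so:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response criterion (C)
    from counts in a yes/no task (e.g., genuine vs. posed smiles).
    Uses the log-linear correction so extreme rates don't yield
    infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical observer: 40/50 genuine smiles correctly accepted,
# 15/50 posed smiles incorrectly accepted as genuine.
d, c = dprime_and_criterion(hits=40, misses=10,
                            false_alarms=15, correct_rejections=35)
```

A negative C here would indicate a liberal bias, i.e., a tendency to call smiles genuine — the direction the abstract reports for rs53576 G homozygotes.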
Affiliation(s)
- Adriana Lori
- Department of Psychiatry and Behavioral Science, USA
- Nicole A Langford
- Department of Psychiatry and Behavioral Science, USA; Nell Hodgson Woodruff School of Nursing, USA
- James K Rilling
- Department of Anthropology, USA; Department of Psychiatry and Behavioral Science, USA; Center for Behavioral Neuroscience, USA; Emory National Primate Research Center, USA; Center for Translational Social Neuroscience, USA
3
Belyk M, McGettigan C. Real-time magnetic resonance imaging reveals distinct vocal tract configurations during spontaneous and volitional laughter. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210511. [PMID: 36126659] [PMCID: PMC9489295] [DOI: 10.1098/rstb.2021.0511]
Abstract
A substantial body of acoustic and behavioural evidence points to the existence of two broad categories of laughter in humans: spontaneous laughter that is emotionally genuine and somewhat involuntary, and volitional laughter that is produced on demand. In this study, we tested the hypothesis that these are also physiologically distinct vocalizations, by measuring and comparing them using real-time magnetic resonance imaging (rtMRI) of the vocal tract. Following Ruch and Ekman (Ruch and Ekman 2001 In Emotions, qualia, and consciousness (ed. A Kaszniak), pp. 426-443), we further predicted that spontaneous laughter should be relatively less speech-like (i.e. less articulate) than volitional laughter. We collected rtMRI data from five adult human participants during spontaneous laughter, volitional laughter and spoken vowels. We report distinguishable vocal tract shapes during the vocalic portions of these three vocalization types, where volitional laughs were intermediate between spontaneous laughs and vowels. Inspection of local features within the vocal tract across the different vocalization types offers some additional support for Ruch and Ekman's predictions. We discuss our findings in light of a dual pathway hypothesis for the neural control of human volitional and spontaneous vocal behaviours, identifying tongue shape and velum lowering as potential biomarkers of spontaneous laughter to be investigated in future research. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.
Affiliation(s)
- Michel Belyk
- Department of Psychology, Edge Hill University, Ormskirk L39 4QP, UK
- Department of Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PF, UK
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PF, UK
4
The neural basis of authenticity recognition in laughter and crying. Sci Rep 2021; 11:23750. [PMID: 34887461] [PMCID: PMC8660868] [DOI: 10.1038/s41598-021-03131-z]
Abstract
Deciding whether others' emotions are genuine is essential for successful communication and social relationships. While previous fMRI studies suggested that differentiation between authentic and acted emotional expressions involves higher-order brain areas, the time course of authenticity discrimination is still unknown. To address this gap, we tested the impact of authenticity discrimination on event-related potentials (ERPs) related to emotion, motivational salience, and higher-order cognitive processing (N100, P200, and the late positive complex, LPC), using vocalised non-verbal expressions of sadness (crying) and happiness (laughter) in a 32-participant, within-subject study. Using a repeated measures 2-factor (authenticity, emotion) ANOVA, we show that the N100 amplitude was larger in response to authentic than acted vocalisations, particularly in cries, while the P200 amplitude was larger in response to acted vocalisations, particularly in laughs. We suggest these results point to two different mechanisms: (1) a larger N100 in response to authentic vocalisations is consistent with its link to emotional content and arousal (putatively larger amplitude for genuine emotional expressions); (2) a larger P200 in response to acted ones is in line with evidence relating it to motivational salience (putatively larger for ambiguous emotional expressions). Complementarily, a significant main effect of emotion was found on P200 and LPC amplitudes, in that both were larger for laughs than cries, regardless of authenticity. Overall, we provide the first electroencephalographic examination of authenticity discrimination and propose that authenticity processing of others' vocalisations is initiated early, alongside that of their emotional content or category, attesting to its evolutionary relevance for trust and bond formation.
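The repeated measures 2-factor (authenticity × emotion) ANOVA described above can be sketched with the standard library alone; in a fully within-subject 2×2 design, each effect is tested against its own effect-by-subject error term. The amplitude values below are invented for illustration and are not the study's data:

```python
from statistics import mean

def rm_anova_2x2(data):
    """Two-factor repeated-measures ANOVA for a 2x2 within-subject design.
    data[(a, b)] is a list of one score per subject (same subject order in
    every cell). Returns F ratios for both main effects and the interaction,
    each divided by its own effect-by-subject error term."""
    levels_a = sorted({a for a, _ in data})
    levels_b = sorted({b for _, b in data})
    n = len(next(iter(data.values())))          # number of subjects
    subjects = range(n)

    grand = mean(x for cell in data.values() for x in cell)
    m_a = {a: mean(x for b in levels_b for x in data[(a, b)]) for a in levels_a}
    m_b = {b: mean(x for a in levels_a for x in data[(a, b)]) for b in levels_b}
    m_s = {s: mean(data[(a, b)][s] for a in levels_a for b in levels_b)
           for s in subjects}
    m_as = {(a, s): mean(data[(a, b)][s] for b in levels_b)
            for a in levels_a for s in subjects}
    m_bs = {(b, s): mean(data[(a, b)][s] for a in levels_a)
            for b in levels_b for s in subjects}
    m_ab = {(a, b): mean(data[(a, b)]) for a in levels_a for b in levels_b}

    # Sums of squares; each main effect has 2*n observations per level.
    ss_a = 2 * n * sum((m_a[a] - grand) ** 2 for a in levels_a)
    ss_b = 2 * n * sum((m_b[b] - grand) ** 2 for b in levels_b)
    ss_ab = n * sum((m_ab[(a, b)] - m_a[a] - m_b[b] + grand) ** 2
                    for a in levels_a for b in levels_b)
    ss_axs = 2 * sum((m_as[(a, s)] - m_a[a] - m_s[s] + grand) ** 2
                     for a in levels_a for s in subjects)
    ss_bxs = 2 * sum((m_bs[(b, s)] - m_b[b] - m_s[s] + grand) ** 2
                     for b in levels_b for s in subjects)
    ss_abxs = sum((data[(a, b)][s] - m_ab[(a, b)] - m_as[(a, s)]
                   - m_bs[(b, s)] + m_a[a] + m_b[b] + m_s[s] - grand) ** 2
                  for a in levels_a for b in levels_b for s in subjects)

    df_err = n - 1  # each effect has df = 1 in a 2x2 design
    return {"A": ss_a / (ss_axs / df_err),
            "B": ss_b / (ss_bxs / df_err),
            "AxB": ss_ab / (ss_abxs / df_err)}

# Invented LPC-like amplitudes (µV) for 6 subjects per cell.
amplitudes = {
    ("authentic", "laugh"): [5.1, 4.8, 5.5, 5.0, 4.9, 5.3],
    ("authentic", "cry"):   [4.2, 4.0, 4.6, 4.1, 4.3, 4.4],
    ("acted", "laugh"):     [4.9, 4.7, 5.2, 4.8, 4.6, 5.0],
    ("acted", "cry"):       [3.8, 3.7, 4.1, 3.9, 3.6, 4.0],
}
f = rm_anova_2x2(amplitudes)
```

With a 2-level within factor, the F for each main effect equals the squared paired t statistic on the per-subject level means, which is a quick sanity check for code like this.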
5
Lavan N, Collins MRN, Miah JFM. Audiovisual identity perception from naturally-varying stimuli is driven by visual information. Br J Psychol 2021; 113:248-263. [PMID: 34490897] [DOI: 10.1111/bjop.12531]
Abstract
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only, or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli compared with audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information, as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements. While the relationship between visual and audiovisual judgements was the stronger one, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work proposing that, compared with faces, voices are an important but less salient and weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not result in better identity perception accuracy.
Affiliation(s)
- Nadine Lavan
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, UK
- Madeleine Rose Niamh Collins
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, UK
- Jannatul Firdaus Monisha Miah
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, UK
6
Pre-SMA activation and the perception of contagiousness and authenticity in laughter sounds. Cortex 2021; 143:57-68. [PMID: 34388558] [DOI: 10.1016/j.cortex.2021.06.010]
Abstract
Functional near-infrared spectroscopy and behavioural methods were used to examine the neural basis of the behavioural contagion and authenticity of laughter. We demonstrate that the processing of laughter sounds recruits networks previously shown to be related to empathy and auditory-motor mirror networks. Additionally, we found that the differences in the levels of activation in response to volitional and spontaneous laughter could predict an individual's perception of how contagious they found the laughter to be.
7
Engelberg JW, Gouzoules H. The credibility of acted screams: Implications for emotional communication research. Q J Exp Psychol (Hove) 2018; 72:1889-1902. [PMID: 30514163] [DOI: 10.1177/1747021818816307]
Abstract
Researchers have long relied on acted material to study emotional expression and perception in humans. It has been suggested, however, that certain aspects of natural expressions are difficult or impossible to produce voluntarily outside of their associated emotional contexts, and that acted expressions tend to be overly intense caricatures. From an evolutionary perspective, listeners' abilities to distinguish acted from natural expressions likely depend on the type of expression in question, the costs entailed in its production, and elements of receiver psychology. Here, we investigated these issues as they relate to human screams. We also examined whether listeners' abilities to distinguish acted from natural screams might vary as a function of individual differences in emotional processing and empathy. Using a forced-choice categorization task, we found that listeners could not distinguish acted from natural exemplars, suggesting that actors can produce dramatisations of screams resembling natural vocalisations. Intensity ratings did not differ between acted and natural screams, nor did individual differences in emotional processing significantly predict performance. Scream duration predicted both the probability that an exemplar was categorised as acted and the probability that participants classified that scream accurately. These findings are discussed with respect to potential evolutionary implications and their practical relevance to future research using acted screams.
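The finding that scream duration predicted the probability of an exemplar being categorised as acted is the kind of relationship typically modelled with logistic regression. A minimal stdlib-only sketch on invented data (not the study's stimuli or analysis code):

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-predictor logistic regression fit by batch gradient descent.
    Returns (intercept, slope) for P(y=1) = 1 / (1 + exp(-(b0 + b1*x)))."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += p - y                              # gradient w.r.t. intercept
            g1 += (p - y) * x                        # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Invented example: scream duration in seconds, and whether listeners
# judged the exemplar as acted (1) or natural (0); longer screams are
# judged "acted" more often.
durations = [0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2]
judged_acted = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(durations, judged_acted)
```

A positive fitted slope (b1 > 0) would correspond to the reported pattern: the longer the scream, the higher the modelled probability of an "acted" judgement.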
9
Neural correlates of the affective properties of spontaneous and volitional laughter types. Neuropsychologia 2016; 95:30-39. [PMID: 27940151] [DOI: 10.1016/j.neuropsychologia.2016.12.012]
Abstract
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale. While previous studies claimed a role for right STG in bipolar representation of emotional valence, we instead argue that this may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative.