1. Temudo S, Pinheiro AP. What Is Faster than Where in Vocal Emotional Perception. J Cogn Neurosci 2025; 37:239-265. [PMID: 39348115] [DOI: 10.1162/jocn_a_02251]
Abstract
Voices carry a vast amount of information about speakers (e.g., emotional state, spatial location). Neuroimaging studies postulate that spatial ("where") and emotional ("what") cues are processed by partially independent processing streams. Although behavioral evidence reveals interactions between emotion and space, the temporal dynamics of these processes in the brain, and their modulation by attention, remain unknown. We investigated whether and how spatial and emotional features interact during voice processing as a function of attention focus. Spatialized nonverbal vocalizations differing in valence (neutral, amusement, anger) were presented at different locations around the head, while listeners discriminated either the spatial location or the emotional quality of the voice. Neural activity was measured with event-related potentials (ERPs) of the electroencephalogram (EEG). Affective ratings were collected at the end of the EEG session. Emotional vocalizations elicited decreased N1 but increased P2 and late positive potential amplitudes. Interactions of space and emotion occurred at the salience detection stage: neutral vocalizations presented at right (vs. left) locations elicited increased P2 amplitudes, but no such differences were observed for emotional vocalizations. When task instructions involved emotion categorization, the P2 was increased for vocalizations presented at front (vs. back) locations. Behaviorally, only valence and arousal ratings showed emotion-space interactions. These findings suggest that emotional representations are activated earlier than spatial representations in voice processing. The perceptual prioritization of emotional cues occurred irrespective of task instructions but was not paralleled by an augmented representation of the stimulus in space. These findings support differential responding of auditory processing pathways to emotional information.
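For readers who want a concrete picture of how such component measures are typically derived, the sketch below shows one way to extract mean N1, P2, and late positive potential amplitudes per emotion condition with MNE-Python. It is not the authors' pipeline: the epochs file name, the condition labels ("neutral", "amusement", "anger"), the electrode picks, and the time windows are assumptions chosen for illustration.

```python
# Minimal sketch (not the authors' pipeline): mean ERP amplitudes per emotion
# condition in conventional N1 / P2 / LPP time windows, using MNE-Python.
# Assumes `vocalizations-epo.fif` is a preprocessed mne.Epochs file whose
# event_id contains the hypothetical labels "neutral", "amusement", "anger".
import mne

def mean_amplitude(evoked, tmin, tmax, picks=("Cz", "Pz")):
    """Average amplitude (microvolts) over a time window and a few electrodes."""
    data = evoked.copy().pick(list(picks)).crop(tmin, tmax).data  # (n_channels, n_times)
    return float(data.mean() * 1e6)

# Conventional windows in seconds; the study's exact windows may differ.
windows = {"N1": (0.08, 0.13), "P2": (0.15, 0.25), "LPP": (0.40, 0.80)}

epochs = mne.read_epochs("vocalizations-epo.fif")  # hypothetical file name
for condition in ("neutral", "amusement", "anger"):
    evoked = epochs[condition].average()
    amps = {name: round(mean_amplitude(evoked, *win), 2) for name, win in windows.items()}
    print(condition, amps)
```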
2. Olszanowski M, Frankowska N, Tołopiło A. "Rear bias" in spatial auditory perception: Attentional and affective vigilance to sounds occurring outside the visual field. Psychophysiology 2023; 60:e14377. [PMID: 37357967] [DOI: 10.1111/psyp.14377]
Abstract
The presented studies explored the rear bias phenomenon, that is, the attentional and affective bias toward sounds occurring behind the listener. Physiological and psychological reactions (i.e., fEMG, EDA/SCR, simple reaction task [SRT], and self-assessments of affect-related states) were measured in response to tones of different frequencies (Study 1) and emotional vocalizations (Study 2) presented at rear and front spatial locations. Results showed that emotional vocalizations located behind the listener facilitate reactions related to attention orientation (i.e., auricularis muscle response and simple reaction times) and evoke higher arousal, both physiological (as measured by SCR) and psychological (self-assessment scale). Importantly, the observed asymmetries were larger for negative, threat-related signals (e.g., anger) than for positive, nonthreatening ones (e.g., achievement). By contrast, there were only small differences for the relatively higher-frequency tones. The observed relationships are discussed in terms of one of the auditory system's postulated functions: monitoring the environment to quickly detect potential threats that occur outside the visual field (e.g., behind one's back).
Affiliation(s)
- Michal Olszanowski, Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Natalia Frankowska, Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Aleksandra Tołopiło, Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
3. Pinheiro AP, Sarzedas J, Roberto MS, Kotz SA. Attention and emotion shape self-voice prioritization in speech processing. Cortex 2023; 158:83-95. [PMID: 36473276] [DOI: 10.1016/j.cortex.2022.10.006]
Abstract
Both the self-voice and emotional speech are salient signals that are prioritized in perception. Surprisingly, self-voice perception has been investigated to a lesser extent than self-face perception. It therefore remains to be clarified whether self-voice prioritization is boosted by emotion, and whether self-relevance and emotion interact differently when attention is focused on who is speaking vs. what is being said. Thirty participants listened to 210 prerecorded words, spoken in their own or an unfamiliar voice and differing in emotional valence, in two tasks that manipulated the focus of attention on either speaker identity or speech emotion. Event-related potentials (ERPs) of the electroencephalogram (EEG) informed on the temporal dynamics of self-relevance, emotion, and attention effects. Words spoken in one's own voice elicited a larger N1 and late positive potential (LPP), but a smaller N400. Identity and emotion interactively modulated the P2 (self-positivity bias) and the LPP (self-negativity bias). Attention to speaker identity more strongly modulated ERP responses within 600 ms of word onset (N1, P2, N400), whereas attention to speech emotion altered the late component (LPP). However, attention did not modulate the interaction of self-relevance and emotion. These findings suggest that the self-voice is prioritized for neural processing at early sensory stages, and that both emotion and attention shape self-voice prioritization in speech processing. They also confirm involuntary processing of salient signals (self-relevance and emotion) even in situations in which attention is deliberately directed away from those cues. These findings have important implications for a better understanding of symptoms thought to arise from aberrant self-voice monitoring, such as auditory verbal hallucinations.
Affiliation(s)
- Ana P Pinheiro, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- João Sarzedas, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Magda S Roberto, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sonja A Kotz, Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
4. Pinheiro AP, Anikin A, Conde T, Sarzedas J, Chen S, Scott SK, Lima CF. Emotional authenticity modulates affective and social trait inferences from voices. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200402. [PMID: 34719249] [PMCID: PMC8558771] [DOI: 10.1098/rstb.2020.0402] Open Access
Abstract
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, whether they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and the same acoustic features predicted perceived authenticity and trustworthiness in laughter: higher pitch, greater spectral variability, and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
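As an illustration of the kind of analysis named above (Bayesian mixed models of listener ratings), the sketch below fits a simple rating model with the bambi library. It is not the paper's actual specification: the data file, the column names (trustworthiness, vocalization, spontaneity, listener), and the model formula are hypothetical.

```python
# Minimal sketch (not the paper's exact model): a Bayesian mixed model relating
# perceived trustworthiness to vocalization type and spontaneity, with
# by-listener random intercepts. Assumes a long-format CSV with hypothetical
# columns: trustworthiness, vocalization ("laugh"/"cry"),
# spontaneity ("spontaneous"/"volitional"), listener.
import pandas as pd
import bambi as bmb
import arviz as az

ratings = pd.read_csv("voice_ratings.csv")  # hypothetical file

model = bmb.Model(
    "trustworthiness ~ vocalization * spontaneity + (1|listener)",
    data=ratings,
)
fit = model.fit(draws=2000, chains=4)  # MCMC sampling via PyMC
print(az.summary(fit))                 # posterior means and credible intervals
```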
Affiliation(s)
- Ana P. Pinheiro, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Andrey Anikin, Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, 42023 Saint-Etienne, France; Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
- Tatiana Conde, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- João Sarzedas, CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Sinead Chen, National Taiwan University, Taipei City, 10617, Taiwan
- Sophie K. Scott, Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- César F. Lima, Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, 1649-026 Lisboa, Portugal
5. Li S, Jia R, Hu W, Luo J, Sun R. When and how does ambidextrous leadership influence voice? The roles of leader-subordinate gender similarity and employee emotional expression. International Journal of Human Resource Management 2021. [DOI: 10.1080/09585192.2021.1991433]
Affiliation(s)
- Shuwen Li, School of Economics and Management, Dalian University of Technology, Dalian, China
- Ruiqian Jia, School of Economics and Management, Dalian University of Technology, Dalian, China
- Wenan Hu, School of Economics and Management, Tongji University, Shanghai, China
- Jinlian Luo, Institute of Talent Development Strategy, Shandong University, Jinan, China
- Rui Sun, Chinese Academy of Personnel Science, Beijing, China
6. Zhong X, Yang Z, Yu S, Song H, Gu Z. Comparison of sound location variations in free and reverberant fields: An event-related potential study. J Acoust Soc Am 2020; 148:EL14. [PMID: 32752752] [DOI: 10.1121/10.0001489]
Abstract
This study compares event-related potentials (ERPs) elicited by variations of sound location in free and reverberant fields. Virtual sound sources located at azimuths of 0°-40° were synthesized with head-related transfer functions and binaural room impulse responses for the free and reverberant fields, respectively. The sound stimulus at 0° served as the standard in an oddball paradigm. Results show that the P3 amplitude is larger in the free field and that the acoustical condition has no significant effect on the amplitudes of the N2 and the mismatch negativity. Moreover, a linear relationship between sound angle and the amplitude of ERP components is observed.
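The virtual-source synthesis described above can be pictured as convolving a mono stimulus with a left/right head-related impulse response (HRIR) pair for the desired azimuth. The sketch below is a generic illustration rather than the study's stimulus-generation code; the stimulus file, the HRIR archive layout, and the 16-bit output format are assumptions.

```python
# Minimal sketch (not the study's stimulus code): spatialize a mono sound by
# convolving it with an HRIR pair for a given azimuth and writing a binaural WAV.
# Assumes a 16-bit mono WAV stimulus and a NumPy .npz archive of HRIRs with
# hypothetical keys such as "az20_left" / "az20_right".
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, mono = wavfile.read("tone_burst.wav")            # hypothetical stimulus file
mono = mono.astype(np.float64) / np.iinfo(np.int16).max  # scale 16-bit samples to [-1, 1]

hrirs = np.load("hrirs.npz")                          # hypothetical HRIR archive
azimuth = 20                                          # degrees; the study used 0-40 deg
left = fftconvolve(mono, hrirs[f"az{azimuth}_left"])
right = fftconvolve(mono, hrirs[f"az{azimuth}_right"])

binaural = np.stack([left, right], axis=1)
binaural /= np.max(np.abs(binaural))                  # normalize to avoid clipping
wavfile.write(f"stimulus_az{azimuth}.wav", fs, (binaural * 32767).astype(np.int16))
```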
Affiliation(s)
- Xiaoli Zhong, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, People's Republic of China
- Zihui Yang, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, People's Republic of China
- Shengfeng Yu, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, People's Republic of China
- Hao Song, School of Management, Guangdong University of Technology, Guangzhou, 510520, People's Republic of China
- Zhenghui Gu, School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510641, People's Republic of China