1
Morningstar M, Billetdeaux KA, Mattson WI, Gilbert AC, Nelson EE, Hoskinson KR. Neural response to vocal emotional intensity in youth. Cogn Affect Behav Neurosci 2025; 25:454-470. [PMID: 39300012 DOI: 10.3758/s13415-024-01224-6]
Abstract
Previous research has identified regions of the brain that are sensitive to emotional intensity in faces, with some evidence for developmental differences in this pattern of response. However, comparable understanding of how the brain tracks linear variations in emotional prosody is limited, especially in youth samples. The current study used novel stimuli (morphing emotional prosody from neutral to anger/happiness in linear increments) to investigate whether neural response to vocal emotion was parametrically modulated by emotional intensity and whether there were age-related changes in this effect. Participants aged 8-21 years (n = 56, 52% female) completed a vocal emotion recognition task, in which they identified the intended emotion in morphed recordings of vocal prosody, while undergoing functional magnetic resonance imaging. Parametric analyses of whole-brain response to morphed stimuli found that activation in the bilateral superior temporal gyrus (STG) scaled with emotional intensity in angry (but not happy) voices. Multivariate region-of-interest analyses revealed the same pattern in the right amygdala. Sensitivity to emotional intensity did not vary with participants' age. These findings provide evidence for the linear parameterization of emotional intensity in angry vocal prosody within the bilateral STG and right amygdala. Although the findings should be replicated, the current results also suggest that this pattern of neural sensitivity may not be subject to strong developmental influences.
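For readers less familiar with this analysis, the sketch below illustrates what a parametric modulation of the BOLD signal means in practice: a main event regressor plus a mean-centred intensity modulator, both convolved with a haemodynamic response function, so that the modulator's beta tests linear scaling with morph intensity. The onsets, intensity values, and HRF shape are illustrative assumptions, not the study's actual design or code.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    # Generic double-gamma haemodynamic response function (SPM-like shape).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

tr, n_scans, dt = 2.0, 200, 0.1
frame_times = np.arange(n_scans) * tr
onsets = np.array([10.0, 40.0, 70.0, 100.0, 130.0])  # seconds (hypothetical)
intensity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # morph level per trial

t_hi = np.arange(0, n_scans * tr, dt)                # high-resolution grid
stick_main = np.zeros_like(t_hi)
stick_mod = np.zeros_like(t_hi)
idx = (onsets / dt).astype(int)
stick_main[idx] = 1.0
stick_mod[idx] = intensity - intensity.mean()        # mean-centred modulator

kernel = hrf(np.arange(0, 30, dt))
main_reg = np.convolve(stick_main, kernel)[: t_hi.size]
mod_reg = np.convolve(stick_mod, kernel)[: t_hi.size]

# Down-sample to scan times: the second column's beta indexes whether the
# BOLD response scales linearly with emotional intensity.
X = np.column_stack([
    np.interp(frame_times, t_hi, main_reg),
    np.interp(frame_times, t_hi, mod_reg),
    np.ones(n_scans),
])
```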
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3L3, Canada.
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada.
- K A Billetdeaux
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- A C Gilbert
- School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- Centre for Research on Brain, Language, and Music, Montreal, Canada
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
- K R Hoskinson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
2
Hashimoto RI, Okada R, Aoki R, Nakamura M, Ohta H, Itahashi T. Functional alterations of lateral temporal cortex for processing voice prosody in adults with autism spectrum disorder. Cereb Cortex 2024; 34:bhae363. [PMID: 39270675 DOI: 10.1093/cercor/bhae363]
Abstract
The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates that individuals with autism spectrum disorder (ASD) have difficulties in recognizing emotional prosody, it remains understudied whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered when processing prosody. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls, using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from the combined ASD and control data. We observed a general response reduction to prosodic voices in two specific VPs: the left posterior temporal voice patch (TVP) and the right middle TVP. Reduced cortical responses in the right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representational similarity analysis revealed a reduced effect of emotional intensity on multivoxel activation patterns in the left anterior superior temporal cortex, but only for sad prosody. These results indicate reduced response magnitudes to vocal prosody in specific TVPs and altered intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.
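As a pointer for readers, the toy example below shows the core computation in a representational similarity analysis (RSA): a neural representational dissimilarity matrix (RDM), built from multivoxel patterns, is compared against a model RDM built from stimulus intensity. All data here are simulated placeholders, not the study's stimuli or activation patterns.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 4, 100               # e.g., 4 emotion-intensity levels
patterns = rng.normal(size=(n_conditions, n_voxels))   # simulated voxel patterns
intensity = np.array([0.25, 0.5, 0.75, 1.0])

neural_rdm = pdist(patterns, metric="correlation")         # 1 - Pearson r
model_rdm = pdist(intensity[:, None], metric="euclidean")  # intensity distances

# A strong positive rank correlation would mean the region's pattern
# geometry tracks emotional intensity.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")
```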
Affiliation(s)
- Ryu-Ichiro Hashimoto
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Rieko Okada
- Faculty of Intercultural Japanese Studies, Otemae University, 6-42 Ochayasho-cho, Nishinomiya-shi, Hyogo 662-8552, Japan
- Ryuta Aoki
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, 54 Shogoin-Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Motoaki Nakamura
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Haruhisa Ohta
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Takashi Itahashi
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
3
Ziereis A, Schacht A. Gender congruence and emotion effects in cross-modal associative learning: Insights from ERPs and pupillary responses. Psychophysiology 2023; 60:e14380. [PMID: 37387451 DOI: 10.1111/psyp.14380]
Abstract
Social and emotional cues from faces and voices are highly relevant and have reliably been demonstrated to attract attention involuntarily. However, findings are mixed as to the degree to which the association of emotional valence with faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without explicit emotion judgments of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized by gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was processed automatically. However, ERPs time-locked to the conditioned faces were mainly modulated by the task-relevant information, that is, the gender congruence of face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after removal of the auditory stimuli. These findings indicate successful associative learning in our paradigm, but the learning did not extend to the task-irrelevant dimension of emotional relevance. Therefore, cross-modal associations of emotional relevance may not be completely automatic, even though emotion was processed in the voice.
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
- Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
4
Grisendi T, Clarke S, Da Costa S. Emotional sounds in space: asymmetrical representation within early-stage auditory areas. Front Neurosci 2023; 17:1164334. [PMID: 37274197 PMCID: PMC10235458 DOI: 10.3389/fnins.2023.1164334]
Abstract
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7T fMRI, we investigated the impact of sound category (vocalizations; non-vocalizations), emotional valence (positive, neutral, negative), and spatial origin (left, center, right) on encoding in early-stage auditory areas and in the voice area. The combination of these characteristics yielded 18 conditions (2 categories × 3 valences × 3 lateralizations), which were presented in pseudo-randomized order, in blocks of 11 different sounds of the same condition, across 12 distinct 6-min runs. In addition, two localizers (tonotopy mapping and human vocalizations) were used to define regions of interest. A three-way repeated-measures ANOVA on the BOLD responses revealed bilateral significant effects and interactions in the primary auditory cortex, the lateral early-stage auditory areas, and the voice area. Positive vocalizations presented on the left side yielded greater activity in the ipsilateral and contralateral primary auditory cortex than did neutral or negative vocalizations or any other stimuli at any of the three positions. The right, but not the left, area L3 responded more strongly (i) to positive vocalizations presented ipsi- or contralaterally than to neutral or negative vocalizations presented at the same positions; and (ii) to neutral than to positive or negative non-vocalizations presented contralaterally. Furthermore, comparison with a previous study indicates that spatial cues may render emotional valence more salient within the early-stage auditory areas.
Affiliation(s)
- Tiffany Grisendi
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sandra Da Costa
- Centre d’Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
5
Vos S, Collignon O, Boets B. The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG. Brain Sci 2023; 13:162. [PMID: 36831705 PMCID: PMC9954097 DOI: 10.3390/brainsci13020162]
Abstract
Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as the gender, identity, and emotional state of the speaker. We tested whether the brain can systematically and automatically differentiate and track a periodic stream of emotional utterances embedded in a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fearful) were presented as separate conditions in separate streams. To control for the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances; scrambling preserves low-level acoustic characteristics but renders the emotional character unrecognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response to the scrambled utterances. These findings demonstrate that emotion discrimination is fast and automatic and is not merely driven by low-level perceptual features. Finally, we present a new database of short emotional utterances for vocal emotion research (EVID), together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
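In a frequency-tagging design like this, the oddball response is typically read out in the EEG amplitude spectrum at the oddball frequency (and its harmonics) relative to neighbouring noise bins. The snippet below sketches that readout on a simulated trace; the sampling rate, amplitudes, and noise window are illustrative assumptions, with only the 4 Hz base and 4/3 Hz oddball rates taken from the paper.

```python
import numpy as np

fs, dur = 512, 60.0                      # assumed sampling rate and duration
t = np.arange(0, dur, 1 / fs)
base_f, odd_f = 4.0, 4.0 / 3.0           # base and oddball stimulation rates
rng = np.random.default_rng(1)
eeg = (0.5 * np.sin(2 * np.pi * base_f * t)    # simulated base response
       + 0.1 * np.sin(2 * np.pi * odd_f * t)   # simulated oddball response
       + rng.normal(0, 1, t.size))             # noise

spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, n_neighbors=10):
    # Amplitude at the target bin divided by the mean of surrounding bins,
    # skipping the bins immediately adjacent to the target.
    i = int(np.argmin(np.abs(freqs - f)))
    neigh = np.r_[spec[i - n_neighbors:i - 1], spec[i + 2:i + n_neighbors + 1]]
    return spec[i] / neigh.mean()

print("SNR at oddball frequency:", snr_at(odd_f))
print("SNR at base frequency:   ", snr_at(base_f))
```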
Affiliation(s)
- Silke Vos
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
- Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
- Correspondence: Tel.: +32-16-37-76-83
- Olivier Collignon
- Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, 1348 Louvain-La-Neuve, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, 1007 Lausanne and 1950 Sion, Switzerland
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
- Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
6
Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. [PMID: 35296892 PMCID: PMC9890475 DOI: 10.1093/cercor/bhac095]
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
7
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. [PMID: 35323933 PMCID: PMC9527472 DOI: 10.1093/scan/nsac021]
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen’s University, 62 Arch Street, Kingston, ON K7L 3L3, Canada.
- Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
8
Hwang Y, Lee KH, Kim N, Lee J, Lee HY, Jeon JE, Lee YJ, Kim SJ. Cognitive Appraisal of Sleep and Brain Activation in Response to Sleep-Related Sounds in Healthy Adults. Nat Sci Sleep 2022; 14:1407-1416. [PMID: 35996417 PMCID: PMC9391942 DOI: 10.2147/nss.s359242]
Abstract
PURPOSE Sounds play important roles in promoting and disrupting sleep. To understand the role of sound in sleep, we must determine how the brain processes sleep-related sounds and how this processing differs across individuals. We investigated neural responses to sleep-related sounds and their associations with cognitive appraisals of sleep. PARTICIPANTS AND METHODS Forty-four healthy adults heard sleep-related and neutral sounds during functional magnetic resonance imaging using a 3T scanner. They also completed the Dysfunctional Beliefs and Attitudes about Sleep (DBAS) questionnaire, which was used to assess cognitive appraisals of sleep. We conducted a voxel-wise whole-brain analysis to compare brain activation in response to sleep-related and neutral sounds. We also examined the association between DBAS scores and brain activity in response to sleep-related sounds (vs. neutral sounds) using region-of-interest (ROI) and whole-brain correlation analyses. The ROIs included the anterior cingulate cortex (ACC), anterior insula (AI), and amygdala. RESULTS The whole-brain analysis revealed increased activation in the temporal regions and decreased activation in the ACC in response to sleep-related sounds compared to neutral sounds. The ROI and whole-brain correlation analyses showed that higher DBAS scores, indicating a negative appraisal of sleep, were significantly correlated with increased activation of the ACC, right medial prefrontal cortex, and brainstem in response to sleep-related sounds. CONCLUSION These results indicate that the temporal cortex and ACC, which are implicated in affective sound processing, may play important roles in the processing of sleep-related sounds. The positive association between neural responses to sleep-related sounds and DBAS scores suggests that negative and dysfunctional appraisals of sleep may be an important factor in individual differences in the processing of sleep-related sounds.
Affiliation(s)
- Yunjee Hwang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Kyung Hwa Lee
- Department of Psychiatry and Center for Sleep and Chronobiology, Seoul National University, College of Medicine and Hospital, Seoul, Republic of Korea
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, Seoul National University Hospital, Seoul, Republic of Korea
- Nambeom Kim
- Department of Biomedical Engineering Research Center, Gachon University, Incheon, Republic of Korea
- Jooyoung Lee
- Department of Psychiatry, Sungkyunkwan University College of Medicine, Samsung Medical Center, Seoul, Republic of Korea
- Ha Young Lee
- Department of Psychiatry and Center for Sleep and Chronobiology, Seoul National University, College of Medicine and Hospital, Seoul, Republic of Korea
- Jeong Eun Jeon
- Department of Psychiatry and Center for Sleep and Chronobiology, Seoul National University, College of Medicine and Hospital, Seoul, Republic of Korea
- Yu Jin Lee
- Department of Psychiatry and Center for Sleep and Chronobiology, Seoul National University, College of Medicine and Hospital, Seoul, Republic of Korea
- Seog Ju Kim
- Department of Psychiatry, Sungkyunkwan University College of Medicine, Samsung Medical Center, Seoul, Republic of Korea
9
Durfee AZ, Sheppard SM, Blake ML, Hillis AE. Lesion loci of impaired affective prosody: A systematic review of evidence from stroke. Brain Cogn 2021; 152:105759. [PMID: 34118500 PMCID: PMC8324538 DOI: 10.1016/j.bandc.2021.105759]
Abstract
Affective prosody, or the changes in rate, rhythm, pitch, and loudness that convey emotion, has long been implicated as a function of the right hemisphere (RH), yet there is a dearth of literature identifying the specific neural regions associated with its processing. The current systematic review aimed to evaluate the evidence on affective prosody localization in the RH. One hundred ninety articles from 1970 to February 2020 investigating affective prosody comprehension and production in patients with focal brain damage were identified via database searches. Eleven articles met inclusion criteria, passed quality reviews, and were analyzed for affective prosody localization. Acute, subacute, and chronic lesions demonstrated similar profile characteristics. Damage to localized right antero-superior (i.e., dorsal stream) regions contributed to affective prosody production impairments, whereas damage to more postero-lateral (i.e., ventral stream) regions resulted in affective prosody comprehension deficits. This review supports the view that distinct RH regions are vital for affective prosody comprehension and production, aligning with literature reporting RH activation for affective prosody processing in healthy adults as well. The impact of study design on the resulting interpretations is discussed.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States.
- Shannon M Sheppard
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Department of Communication Sciences and Disorders, Chapman University Crean College of Health and Behavioral Sciences, Irvine, CA 92618, United States
- Margaret L Blake
- Department of Communication Sciences and Disorders, University of Houston College of Liberal Arts and Social Sciences, Houston, TX 77204, United States
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States
10
Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021; 141:36-54. [PMID: 34029857 PMCID: PMC8489691 DOI: 10.1016/j.cortex.2021.04.003]
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody, and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
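For orientation, the sketch below pairs the two analysis tools named in this abstract: an L1-penalized (LASSO) regression for sparse prediction of a behavioral score from lesion and clinical predictors, and k-medoid clustering of task-performance profiles into subtypes. It runs on simulated data and uses the third-party scikit-learn-extra package for KMedoids; all variable names and dimensions are hypothetical, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn_extra.cluster import KMedoids  # from the scikit-learn-extra package

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 16))      # 20 patients x 16 lesion/clinical predictors
y = 0.8 * X[:, 3] + rng.normal(0, 0.5, 20)   # simulated behavioral score

# Sparse selection of predictors, analogous to LASSO lesion-symptom models.
lasso = LassoCV(cv=5).fit(X, y)
print("predictors retained by LASSO:", np.flatnonzero(lasso.coef_))

# Cluster patients on behavioral-task performance to identify subtypes.
scores = rng.normal(size=(20, 4))            # performance on 4 tasks
subtype = KMedoids(n_clusters=3, random_state=0).fit_predict(scores)
print("aprosodia subtype per patient:", subtype)
```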
Affiliation(s)
- Shannon M Sheppard
- Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Erin L Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alex Walker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jennifer Shea
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
11
Zhao C, Schiessl I, Wan MW, Chronaki G, Abel KM. Development of the neural processing of vocal emotion during the first year of life. Child Neuropsychol 2020; 27:333-350. [DOI: 10.1080/09297049.2020.1853090]
Affiliation(s)
- Chen Zhao
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Ingo Schiessl
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Geoffrey Jefferson Brain Research Centre, The Manchester Academic Health Science Centre, Northern Care Alliance NHS Group, University of Manchester, Manchester, UK
- Ming Wai Wan
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Georgia Chronaki
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Developmental Cognitive Neuroscience (DCN) Laboratory, School of Psychology, Faculty of Science and Technology, University of Central Lancashire, Preston, UK
- Kathryn M. Abel
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, UK
12
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020; 42:676-689. [PMID: 33073911 PMCID: PMC7814753 DOI: 10.1002/hbm.25252]
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top-down influences. Important top-down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top-down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender) using exactly the same stimulus set. Supramodal memory representations were addressed via congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention-specific and supramodal networks. Emotion classification networks included the bilateral anterior insula, pre-supplementary motor area, and right inferior frontal gyrus. They were purely attention-driven and independent of stimulus modality and congruency of the target concepts. No neural contribution of supramodal memory representations could be demonstrated for emotion classification. In contrast, gender classification relied on supramodal memory representations in the rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top-down processes, which take place in clearly distinguishable neural networks.
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany
- JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany
- JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany
- JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany
- JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany
- JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
13
Affect-biased attention and predictive processing. Cognition 2020; 203:104370. [DOI: 10.1016/j.cognition.2020.104370]
14
Where Sounds Occur Matters: Context Effects Influence Processing of Salient Vocalisations. Brain Sci 2020; 10:429. [PMID: 32640750 PMCID: PMC7407900 DOI: 10.3390/brainsci10070429]
Abstract
The social context in which a salient human vocalisation is heard shapes the affective information it conveys. However, few studies have investigated how visual contextual cues lead to differential processing of such vocalisations. The prefrontal cortex (PFC) is implicated in the processing of contextual information and the evaluation of the saliency of vocalisations. Using functional near-infrared spectroscopy (fNIRS), we investigated PFC responses of young adults (N = 18) to emotive infant and adult vocalisations while they passively viewed scenes from two categories of environmental context: a domestic environment (DE) and an outdoors environment (OE). Compared to a home setting (DE), which is associated with a fixed mental representation (e.g., one expects to see a living room in a typical house), the outdoor setting (OE) is more variable and less predictable, and thus might demand greater processing effort. In our previous study (Azhari et al., 2018), which employed the same experimental paradigm, the OE context elicited greater physiological arousal than the DE context. Similarly, we hypothesised that greater PFC activation would be observed when salient vocalisations are paired with the OE rather than the DE condition. Our findings supported this hypothesis: the left rostrolateral PFC, an area of the brain that facilitates relational integration, exhibited greater activation in the OE than in the DE condition, which suggests that greater cognitive resources are required to process outdoor situational information together with salient vocalisations. These results deepen our understanding of how contextual information differentially modulates the processing of salient vocalisations.
15
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Affiliation(s)
- Didier Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland
16
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288 DOI: 10.1016/j.neuroimage.2019.116509]
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
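The lateralization index mentioned here has a standard general form; the snippet below shows the common (L − R)/(L + R) definition over left- and right-hemisphere activation measures (e.g., suprathreshold voxel counts). This is a generic formulation, not necessarily the exact variant the authors computed.

```python
def lateralization_index(left: float, right: float) -> float:
    """LI in [-1, 1]: positive = left-lateralized, negative = right-lateralized."""
    return (left - right) / (left + right)

# e.g., suprathreshold voxel counts per hemisphere (hypothetical numbers)
print(lateralization_index(1200, 400))   # 0.5 -> strongly left-lateralized
```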
17
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. Cogn Affect Behav Neurosci 2019; 19:1418-1432. [PMID: 31515750 DOI: 10.3758/s13415-019-00742-y]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
18
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. [DOI: 10.1007/s00429-019-01912-x]
19
Koch K, Stegmaier S, Schwarz L, Erb M, Thomas M, Scheffler K, Wildgruber D, Nieratschker V, Ethofer T. CACNA1C risk variant affects microstructural connectivity of the amygdala. Neuroimage Clin 2019; 22:101774. [PMID: 30909026 PMCID: PMC6434179 DOI: 10.1016/j.nicl.2019.101774]
Abstract
Deficits in the perception of emotional prosody have been described in patients with affective disorders at the behavioral and neural levels. In the current study, we used an imaging genetics approach to examine the impact of CACNA1C, one of the most promising genetic risk factors for psychiatric disorders, on prosody processing at the behavioral, functional, and microstructural levels. Using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), we examined key areas involved in prosody processing, i.e., the amygdala and the voice areas, in a healthy population. We found stronger activation to emotional than to neutral prosody in the voice areas and the amygdala, but CACNA1C rs1006737 genotype had no influence on fMRI activity. However, significant microstructural differences (i.e., mean diffusivity) between CACNA1C rs1006737 risk allele carriers and non-carriers were found in the amygdala, but not the voice areas. These modifications in brain architecture associated with CACNA1C might reflect a neurobiological marker predisposing to affective disorders and concomitant alterations in emotion perception.
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
- Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Michael Erb
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
- Mara Thomas
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Klaus Scheffler
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
- Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany
- Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Vanessa Nieratschker
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Werner Reichardt Center for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
- Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
20
Zhao C, Chronaki G, Schiessl I, Wan MW, Abel KM. Is infant neural sensitivity to vocal emotion associated with mother-infant relational experience? PLoS One 2019; 14:e0212205. [PMID: 30811431 PMCID: PMC6392422 DOI: 10.1371/journal.pone.0212205]
Abstract
An early understanding of others' vocal emotions provides infants with a distinct advantage for eliciting appropriate care from caregivers and for navigating their social world. Consistent with this notion, an emerging literature suggests that a temporal cortical response to the prosody of emotional speech is observable in the first year of life. Furthermore, neural specialisation to vocal emotion in infancy may vary according to early experience. Neural sensitivity to emotional non-speech vocalisations was investigated in 29 six-month-old infants using functional near-infrared spectroscopy (fNIRS). Angry and happy vocalisations evoked increased activation in the temporal cortices (relative to neutral and to angry vocalisations, respectively), and the strength of the angry-minus-neutral effect was positively associated with the degree of directiveness in the mothers' play interactions with their infants. This first fNIRS study of infant vocal emotion processing implicates bilateral temporal mechanisms similar to those found in adults and suggests that infants who experience more directive caregiving or social play may more strongly or preferentially process vocal anger by six months of age.
Affiliation(s)
- Chen Zhao
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Georgia Chronaki
- Developmental Cognitive Neuroscience (DCN) Laboratory, School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Developmental Brain-Behaviour Laboratory, Psychology, University of Southampton, United Kingdom
- Ingo Schiessl
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Ming Wai Wan
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Kathryn M. Abel
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
21
Yokosawa K, Murakami Y, Sato H. Appearance and modulation of a reactive temporal-lobe 8-10-Hz tau-rhythm. Neurosci Res 2019; 150:44-50. [PMID: 30768949 DOI: 10.1016/j.neures.2019.02.002]
Abstract
The spontaneous 8- to 10-Hz "tau-rhythm" in magnetoencephalographic (MEG) recordings has been reported to originate in the auditory cortex and to be suppressed by sound. For unknown reasons, however, the tau-rhythm is often difficult to detect. In this study, we sought to characterize its emergence and auditory reactivity. Using a 306-channel MEG system with 26 right-handed participants, we delivered six-second-long natural monaural sounds with pleasant, unpleasant, or neutral emotional valence. In eight participants, a clear, sound-related bilateral suppression of the 8-10 Hz tau-rhythm occurred in the temporal areas, close to the source of the 100-ms auditory response. Moreover, these eight "tau subjects" exhibited significantly larger temporal-lobe theta-band (4-8 Hz) power over the entire experimental period compared to the remaining 18 "non-tau subjects". Because larger theta power is one of the signs of drowsiness, this result is consistent with the previously proposed idea that the tau-rhythm emerges during drowsiness. The tau-rhythm was furthermore significantly affected by emotional valence in the right hemisphere, where it was suppressed significantly more by unpleasant and neutral sounds (by 8% and 6%, respectively) than by pleasant sounds. Altogether, our results reveal characteristics of tau-rhythm appearance and modulation that have hitherto been difficult to detect non-invasively.
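The theta-band power measure that separates "tau" from "non-tau" subjects is a routine band-power computation; the sketch below shows one common way to obtain it from a single sensor's trace via Welch's method. The signal here is simulated noise, and the sampling rate is an assumed value, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                                            # assumed sampling rate (Hz)
meg = np.random.default_rng(3).normal(size=60 * fs)  # simulated 60 s sensor trace

f, psd = welch(meg, fs=fs, nperseg=4 * fs)     # 4 s windows -> 0.25 Hz resolution
theta_power = psd[(f >= 4) & (f <= 8)].mean()  # mean power in the 4-8 Hz band
print(f"mean theta-band power: {theta_power:.3e}")
```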
Affiliation(s)
- Koichi Yokosawa
- Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
- Brain Research Unit, O.V. Lounasmaa Laboratory, and MEG Core, Aalto NeuroImaging, School of Science, Aalto University, PO BOX 15100, 00076 AALTO, Finland
- Yui Murakami
- Graduate School of Health Sciences, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
- Faculty of Human Science, Department of Occupational Therapy, Hokkaido Bunkyo University, Eniwa, 061-1449, Hokkaido, Japan
- Hiroaki Sato
- Department of Health Sciences, School of Medicine, Hokkaido University, Sapporo, 060-0812, Hokkaido, Japan
22
Lindström R, Lepistö-Paisley T, Makkonen T, Reinvall O, Nieminen-von Wendt T, Alén R, Kujala T. Atypical perceptual and neural processing of emotional prosodic changes in children with autism spectrum disorders. Clin Neurophysiol 2018; 129:2411-2420. [PMID: 30278390 DOI: 10.1016/j.clinph.2018.08.018]
Abstract
OBJECTIVE The present study explored the processing of emotional speech prosody in school-aged children with autism spectrum disorders (ASD) but without marked language impairments (children with ASD [no LI]). METHODS The mismatch negativity (MMN)/late discriminative negativity (LDN), reflecting pre-attentive auditory discrimination processes, and the P3a, indexing involuntary orienting to attention-catching changes, were recorded to natural word stimuli uttered with different emotional connotations (neutral, sad, scornful, and commanding). Perceptual prosody discrimination was addressed with a behavioral sound-discrimination test. RESULTS Overall, children with ASD (no LI) were slower than typically developing control children in behaviorally discriminating the prosodic features of the speech stimuli. Further, smaller standard-stimulus event-related potentials (ERPs) and MMN/LDNs were found in children with ASD (no LI) than in controls. In addition, the amplitude of the P3a was diminished and distributed differently across the scalp in children with ASD (no LI) than in control children. CONCLUSIONS The processing of words and of changes in emotional speech prosody is impaired at various levels of information processing in school-aged children with ASD (no LI). SIGNIFICANCE The results suggest that low-level speech sound discrimination and orienting deficits might contribute to the emotional speech prosody processing impairments observed in ASD.
Affiliation(s)
- R Lindström
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
- T Lepistö-Paisley
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
- T Makkonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- O Reinvall
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
- T Nieminen-von Wendt
- Neuropsychiatric Rehabilitation and Medical Centre NeuroMental, Helsinki, Finland
- R Alén
- Department of Child Neurology, Central Finland Central Hospital, Jyväskylä, Finland
- T Kujala
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
Collapse
|
23
|
DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech. Behav Res Methods 2018; 50:323-343. [PMID: 28374144 PMCID: PMC5809549 DOI: 10.3758/s13428-017-0873-y]
Abstract
We present an open-source software platform that transforms the emotional cues expressed by speech signals using audio effects like pitch shifting, inflection, vibrato, and filtering. The emotional transformations can be applied to any audio file, but can also run in real time on live input from a microphone, with less than 20-ms latency. We anticipate that this tool will be useful for the study of emotions in psychology and neuroscience, because it enables a high level of control over the acoustical and emotional content of experimental stimuli in a variety of laboratory situations, including real-time social situations. We present results of a series of validation experiments aiming to position the tool against several methodological requirements: that transformed emotions be recognized at above-chance levels, be valid in several languages (French, English, Swedish, and Japanese), and have a naturalness comparable to that of natural speech.
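DAVID itself is a real-time platform; as a rough offline analogue of its pitch-shift effect, the snippet below raises the pitch of a recording by half a semitone with librosa. The input path is hypothetical, and this reproduces only one of the several effects (pitch shift, inflection, vibrato, filtering) the platform combines.

```python
import librosa
import soundfile as sf

# Load a (hypothetical) speech recording and shift its pitch by +50 cents;
# upward shifts are typically perceived as more positive-sounding.
y, sr = librosa.load("voice.wav", sr=None)
shifted = librosa.effects.pitch_shift(y=y, sr=sr, n_steps=0.5)
sf.write("voice_shifted.wav", shifted, sr)
```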
24
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019]
25
Martin LM, García-Rosales F, Beetz MJ, Hechavarría JC. Processing of temporally patterned sounds in the auditory cortex of Seba's short-tailed bat, Carollia perspicillata. Eur J Neurosci 2018; 46:2365-2379. [PMID: 28921742 DOI: 10.1111/ejn.13702]
Abstract
This article presents a characterization of cortical responses to artificial and natural temporally patterned sounds in the bat Carollia perspicillata, a species that produces vocalizations at rates above 50 Hz. Multi-unit activity was recorded in three different experiments. In the first experiment, amplitude-modulated (AM) pure tones were used as stimuli to drive auditory cortex (AC) units. AC units of both ketamine-anesthetized and awake bats could lock their spikes to every cycle of the stimulus modulation envelope, but only if the modulation frequency was below 22 Hz. In the second experiment, two identical communication syllables were presented at variable intervals. Suppressed responses to the lagging syllable were observed unless the second syllable followed the first one with a delay of at least 80 ms (i.e., a 12.5-Hz repetition rate). In the third experiment, natural distress vocalization sequences were used as stimuli to drive AC units. Distress sequences produced by C. perspicillata contain bouts of syllables repeated at intervals of ~60 ms (16 Hz). Within each bout, syllables are repeated at intervals as short as 14 ms (~71 Hz). Cortical units could follow the slow temporal modulation flow produced by the occurrence of multisyllabic bouts, but not the fast acoustic flow created by rapid syllable repetition within the bouts. Taken together, our results indicate that even in fast-vocalizing animals, such as bats, cortical neurons can only track the temporal structure of acoustic streams modulated at frequencies lower than 22 Hz.
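The stimulus and spike-locking analysis described above can be illustrated in a few lines of Python: an AM pure tone is synthesized, and phase locking of a spike train to the modulation envelope is quantified with vector strength. All parameter values are illustrative assumptions, not the study's settings.

import numpy as np

sr = 96000                        # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / sr)    # 1-s stimulus
fc, fm, m = 20000.0, 22.0, 1.0   # carrier, modulation frequency, depth
am_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def vector_strength(spike_times, fm):
    """1.0 = spikes perfectly locked to one modulation phase; 0.0 = no locking."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Spikes locked to every modulation cycle give a vector strength near 1
spikes = np.arange(40) / fm + 0.002   # hypothetical spike train (s)
print(vector_strength(spikes, fm))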
Collapse
Affiliation(s)
- Lisa M Martin
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| |
Collapse
|
26
|
Aryani A, Hsu CT, Jacobs AM. The Sound of Words Evokes Affective Brain Responses. Brain Sci 2018; 8:brainsci8060094. [PMID: 29789504 PMCID: PMC6025608 DOI: 10.3390/brainsci8060094] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2018] [Revised: 05/17/2018] [Accepted: 05/21/2018] [Indexed: 12/19/2022] Open
Abstract
The long history of poetry and the arts, as well as recent empirical results, suggests that the way a word sounds (e.g., soft vs. harsh) can convey affective information related to emotional responses (e.g., pleasantness vs. harshness). However, the neural correlates of the affective potential of the sound of words remain unknown. In an fMRI study involving passive listening, we focused on the affective dimension of arousal and presented words organized in two discrete groups of sublexical (i.e., sound) arousal (high vs. low), while controlling for lexical (i.e., semantic) arousal. Words that sound high-arousing, compared with their low-arousing counterparts, elicited an enhanced BOLD signal in the bilateral posterior insula, the right auditory and premotor cortex, and the right supramarginal gyrus. This finding provides the first evidence of the neural correlates of affectivity in the sound of words. Given the similarity of this neural network to that of nonverbal emotional expressions and affective prosody, our results support a unifying view in which a core neural network underlies any type of affective sound processing.
Collapse
Affiliation(s)
- Arash Aryani
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany.
| | - Chun-Ting Hsu
- Department of Psychology, Pennsylvania State University, PA 16802, USA.
| | - Arthur M Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany.
- Centre for Cognitive Neuroscience Berlin (CCNB), Freie Universität Berlin, Habelschwerdter Allee 45, D-14195 Berlin, Germany.
| |
Collapse
|
27
|
Koch K, Stegmaier S, Schwarz L, Erb M, Reinl M, Scheffler K, Wildgruber D, Ethofer T. Neural correlates of processing emotional prosody in unipolar depression. Hum Brain Mapp 2018; 39:3419-3427. [PMID: 29682814 DOI: 10.1002/hbm.24185] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 03/15/2018] [Accepted: 04/09/2018] [Indexed: 12/11/2022] Open
Abstract
Major depressive disorder (MDD) is characterized by biased emotion perception. In the auditory domain, MDD patients have been shown to exhibit attenuated processing of positive emotions expressed by speech melody (prosody). So far, no neuroimaging studies examining the neural basis of altered processing of emotional prosody in MDD are available. In this study, we addressed this issue by examining the emotion bias in MDD during evaluation of happy, neutral, and angry prosodic stimuli on a five-point Likert scale during functional magnetic resonance imaging (fMRI). As expected, MDD patients rated happy prosody as less intense than did healthy controls (HC). At the neural level, stronger activation in the middle superior temporal gyrus (STG) and the amygdala was found in all participants when processing emotional as compared with neutral prosody. MDD patients exhibited increased activation of the amygdala during prosody processing irrespective of valence, while no significant group differences were found for the STG, indicating that the altered processing of prosodic emotions in MDD occurs within the amygdala rather than in auditory areas. Concurring with the valence-specific behavioral effect of attenuated evaluation of positive prosodic stimuli, activation within the left amygdala of MDD patients correlated with ratings of happy, but not neutral or angry, prosody. Our study provides first insights into the neural basis of the reduced experience of positive information and of abnormally increased amygdala activity during prosody processing.
Collapse
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Michael Erb
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
| | - Maren Reinl
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Klaus Scheffler
- Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany.,Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany
| | - Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.,Department of Biomedical Resonance, University of Tuebingen, Tuebingen, Germany
| |
Collapse
|
28
|
Neumann K, Euler HA, Kob M, Wolff von Gudenberg A, Giraud AL, Weissgerber T, Kell CA. Assisted and unassisted recession of functional anomalies associated with dysprosody in adults who stutter. JOURNAL OF FLUENCY DISORDERS 2018; 55:120-134. [PMID: 28958627 DOI: 10.1016/j.jfludis.2017.09.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 09/04/2017] [Accepted: 09/05/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE Speech in persons who stutter (PWS) is associated with disturbed prosody (speech melody and intonation), which may impact communication. The neural correlates of PWS' altered prosody during speaking are not known, nor is how a speech-restructuring therapy affects prosody at the behavioral and cerebral levels. METHODS In this fMRI study, we explored group differences in brain activation associated with producing different kinds of prosody. Sentences were read aloud with 'neutral', instructed emotional (happy), and linguistically driven (questioning) prosody by 13 male adults who stutter (AWS) before, directly after, and at least 1 year after an effective intensive fluency-shaping treatment, by 13 typically fluent-speaking control participants (CP), and by 13 males who had spontaneously recovered from stuttering during adulthood (RAWS). These activations were related to the acoustics of speech production. RESULTS During pre-treatment prosody generation, the pars orbitalis of the left inferior frontal gyrus and the left anterior insula were activated less in AWS than in CP. The degree of hypo-activation correlated with acoustic measures of dysprosody. Paralleling the near-normalization of free-speech melody following fluency-shaping therapy, AWS normalized the inferior frontal hypo-activation, and did so sooner after treatment for generating emotional than linguistic prosody. Unassisted recovery was associated with additional recruitment of cerebellar resources. CONCLUSIONS Fluency-shaping therapy may restructure prosody so that it approaches that of typically fluent-speaking people. This process may benefit from additional training of instructed emotional and linguistic prosody, by inducing plasticity in the inferior frontal region that develops abnormally during childhood in PWS.
Collapse
Affiliation(s)
- Katrin Neumann
- Department of Phoniatrics and Pediatric Audiology, Clinic of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany.
| | - Harald A Euler
- Department of Phoniatrics and Pediatric Audiology, Clinic of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
| | - Malte Kob
- Erich-Thienhaus-Institute, University of Music Detmold, Detmold, Germany
| | | | - Anne-Lise Giraud
- Département des Neuroscience Fondamentales, Université de Genève, Switzerland
| | - Tobias Weissgerber
- Department of Audiological Acoustics, Clinic of Otorhinolaryngology, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Christian A Kell
- Brain Imaging Center and Department of Neurology, Goethe University Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
29
|
Riedel MC, Yanes JA, Ray KL, Eickhoff SB, Fox PT, Sutherland MT, Laird AR. Dissociable meta-analytic brain networks contribute to coordinated emotional processing. Hum Brain Mapp 2018; 39:2514-2531. [PMID: 29484767 DOI: 10.1002/hbm.24018] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2017] [Revised: 02/09/2018] [Accepted: 02/15/2018] [Indexed: 01/05/2023] Open
Abstract
Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli is integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks.
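The grouping step described above can be sketched as hierarchical clustering of experiment-by-voxel modeled-activation maps, cut into five clusters. The arrays below are random stand-ins for the BrainMap data, and the distance and linkage choices are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
maps = rng.random((1747, 2000))          # 1,747 experiments x flattened voxels (toy)
d = pdist(maps, metric="correlation")    # pairwise dissimilarity between experiments
Z = linkage(d, method="average")         # agglomerative hierarchical clustering
labels = fcluster(Z, t=5, criterion="maxclust")   # cut into five meta-analytic groupings
print(np.bincount(labels)[1:])           # experiments per grouping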
Collapse
Affiliation(s)
- Michael C Riedel
- Department of Physics, Florida International University, Miami, Florida
| | - Julio A Yanes
- Department of Psychology, Auburn University, Auburn, Alabama
| | - Kimberly L Ray
- Department of Psychology, University of Texas, Austin, Texas
| | - Simon B Eickhoff
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany.,Institute of Systems Neuroscience, Medical Faculty, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany
| | - Peter T Fox
- Research Imaging Institute, University of Texas Health Science Center, San Antonio, Texas.,South Texas Veterans Health Care System, San Antonio, Texas.,State Key Laboratory for Brain and Cognitive Sciences, University of Hong Kong, Hong Kong, China
| | | | - Angela R Laird
- Department of Physics, Florida International University, Miami, Florida
| |
Collapse
|
30
|
Speech Prosodies of Different Emotional Categories Activate Different Brain Regions in Adult Cortex: an fNIRS Study. Sci Rep 2018; 8:218. [PMID: 29317758 PMCID: PMC5760650 DOI: 10.1038/s41598-017-18683-2] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 12/14/2017] [Indexed: 11/12/2022] Open
Abstract
Emotional expressions of others embedded in speech prosodies are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results demonstrated several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of the superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that the categorization of emotions occurs within a high-level brain region, the frontal cortex: brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of the inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry prosody was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verified and extended previous fMRI findings in the adult brain and provide an adult reference ("developed version") of brain activation for our subsequent neonatal study.
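As a toy illustration of the condition contrasts reported above, the sketch below compares per-channel fNIRS responses between two prosody conditions with paired t-tests. The array sizes and data are made-up stand-ins, not the study's recordings or its exact statistics.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
# (participants x channels) mean oxy-Hb responses per condition, simulated
happy = rng.normal(0.3, 1.0, (24, 44))
angry = rng.normal(0.0, 1.0, (24, 44))

t, p = ttest_rel(happy, angry, axis=0)   # one paired test per channel
print(np.flatnonzero(p < 0.05))          # channels separating the two prosodies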
Collapse
|
31
|
Carminati M, Fiori-Duharcourt N, Isel F. Neurophysiological differentiation between preattentive and attentive processing of emotional expressions on French vowels. Biol Psychol 2017; 132:55-63. [PMID: 29102707 DOI: 10.1016/j.biopsycho.2017.10.013] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2017] [Revised: 10/17/2017] [Accepted: 10/30/2017] [Indexed: 12/29/2022]
Abstract
The present electrophysiological study investigated the processing of emotional prosody while minimizing as much as possible the effect of emotional information conveyed by the lexical-semantic context. Emotionally colored French vowels (i.e., happiness, sadness, fear, and neutral) were presented in a mismatch negativity (MMN) oddball paradigm. Both the MMN, an event-related potential (ERP) component thought to reflect preattentive change detection, and the P3a, an ERP marker of the involuntary orientation of attention toward deviant stimuli, were significantly modulated by the emotional deviants compared with the neutral ones. Critically, the largest amplitude (MMN, P3a) and the shortest peak latency (MMN) were observed for fear deviants, all other things being equal. Taken together, the present findings lend support to a sequential neurocognitive model of emotion processing (Scherer, 2001), which postulates, among other checks, a first stage of automatic emotion detection (MMN) followed by a second stage of subjective evaluation of the stimulus or event (P3a). Consistent with previous studies, our data suggest that among the six universal emotions, fear may have a special status, probably because of its adaptive role in the evolution of the human species.
Collapse
Affiliation(s)
- Mathilde Carminati
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France.
| | - Nicole Fiori-Duharcourt
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
| | - Frédéric Isel
- University Paris Nanterre - Paris Lumières, CNRS, UMR 7114 Models, Dynamics, Corpora, France
| |
Collapse
|
32
|
Auditory attention enhances processing of positive and negative words in inferior and superior prefrontal cortex. Cortex 2017; 96:31-45. [PMID: 28961524 DOI: 10.1016/j.cortex.2017.08.018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2016] [Revised: 03/07/2017] [Accepted: 08/08/2017] [Indexed: 11/20/2022]
Abstract
Visually presented emotional words are processed preferentially, and the effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized, and interactions between emotional content and task-induced attention are not fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive, and neutral words, to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared with passive listening increased activity in the primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in the left inferior frontal gyrus (IFG) and left SFG, in line with these regions' roles in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions, or for distinct valence-specific regional activations (i.e., negative > positive or positive > negative), was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed.
Collapse
|
33
|
Bestelmeyer PEG, Kotz SA, Belin P. Effects of emotional valence and arousal on the voice perception network. Soc Cogn Affect Neurosci 2017; 12:1351-1358. [PMID: 28449127 PMCID: PMC5597854 DOI: 10.1093/scan/nsx059] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 03/27/2017] [Accepted: 04/02/2017] [Indexed: 11/13/2022] Open
Abstract
Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This result has implications for neuroimaging design, analysis, and theoretical development. We were interested in determining the extent of this interdependence both behaviourally and neuroanatomically, as well as in teasing apart any activation that is specific to each dimension. While we found extensive overlap in activation for each dimension in traditional emotion areas (bilateral insulae, orbitofrontal cortex, amygdalae), we also found activation specific to each dimension, with characteristic relationships between modulations of these dimensions and BOLD signal change. Increases in arousal ratings were related to increased activations predominantly in voice-sensitive cortices after the variance explained by valence had been removed. In contrast, emotions of extreme valence were related to increased activations in bilateral voice-sensitive cortices, hippocampi, anterior and mid-cingulum, and medial orbito- and superior frontal regions after the variance explained by arousal had been accounted for. Our results therefore do not support a complete segregation of the brain structures underpinning the processing of affective dimensions.
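The "variance removed" logic described above can be sketched by residualizing one rating dimension against the other before it enters the model, so that each dimension's unique contribution is tested. The rating vectors below are hypothetical numbers for illustration only.

import numpy as np

def residualize(y, x):
    """Return the part of y not linearly explained by x (plus an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

valence = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, -1.5, 1.5])   # hypothetical ratings
arousal = np.array([1.8, 1.0, 0.2, 1.1, 1.9, 1.4, 1.6])

arousal_unique = residualize(arousal, valence)   # arousal with valence removed
valence_unique = residualize(valence, arousal)   # valence with arousal removed
print(arousal_unique, valence_unique)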
Collapse
Affiliation(s)
| | - Sonja A. Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Pascal Belin
- Institut des Neurosciences de La Timone, UMR 7289, CNRS & Université Aix-Marseille, Marseille, France
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- International Laboratory for Brain, Music and Sound Research, University of Montréal & McGill University, Montréal, Canada
| |
Collapse
|
34
|
Young KS, Parsons CE, Stein A, Vuust P, Craske MG, Kringelbach ML. The neural basis of responsive caregiving behaviour: Investigating temporal dynamics within the parental brain. Behav Brain Res 2017; 325:105-116. [DOI: 10.1016/j.bbr.2016.09.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2016] [Revised: 09/01/2016] [Accepted: 09/05/2016] [Indexed: 02/09/2023]
|
35
|
Shdo SM, Ranasinghe KG, Gola KA, Mielke CJ, Sukhanov PV, Miller BL, Rankin KP. Deconstructing empathy: Neuroanatomical dissociations between affect sharing and prosocial motivation using a patient lesion model. Neuropsychologia 2017; 116:126-135. [PMID: 28209520 DOI: 10.1016/j.neuropsychologia.2017.02.010] [Citation(s) in RCA: 57] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2016] [Revised: 02/11/2017] [Accepted: 02/11/2017] [Indexed: 01/10/2023]
Abstract
Affect sharing and prosocial motivation are integral parts of empathy that are conceptually and mechanistically distinct. We used a neurodegenerative disease (NDG) lesion model to examine the neural correlates of these two aspects of real-world empathic responding. The study enrolled 275 participants, including 44 healthy older controls and 231 patients diagnosed with one of five neurodegenerative diseases (75 Alzheimer's disease, 58 behavioral variant frontotemporal dementia (bvFTD), 42 semantic variant primary progressive aphasia (svPPA), 28 progressive supranuclear palsy, and 28 non-fluent variant primary progressive aphasia (nfvPPA)). Informants completed the Revised Self-Monitoring Scale's Sensitivity to the Expressive Behavior of Others (RSMS-EX) subscale and the Interpersonal Reactivity Index's Empathic Concern (IRI-EC) subscale, describing the typical empathic behavior of the participants in daily life. Using regression modeling of voxel-based morphometry from T1 brain scans preprocessed with SPM8 DARTEL, we isolated the variance independently contributed by the affect-sharing and the prosocial-motivation elements of empathy as differentially measured by the two scales. We found that the affect-sharing component uniquely correlated with volume in right>left medial and lateral temporal lobe structures, including the amygdala and insula, that support emotion recognition, emotion generation, and emotional awareness. Prosocial motivation, in contrast, involved structures such as the nucleus accumbens (NAcc), caudate head, and inferior frontal gyrus (IFG), which suggests that an individual must maintain the capacity to experience reward, to resolve ambiguity, and to inhibit their own emotional experience in order to engage effectively in spontaneous altruism as a component of their empathic response to others.
Collapse
Affiliation(s)
- Suzanne M Shdo
- Memory and Aging Center, University of California, San Francisco, USA
| | | | - Kelly A Gola
- Memory and Aging Center, University of California, San Francisco, USA
| | - Clinton J Mielke
- Memory and Aging Center, University of California, San Francisco, USA
| | - Paul V Sukhanov
- Memory and Aging Center, University of California, San Francisco, USA
| | - Bruce L Miller
- Memory and Aging Center, University of California, San Francisco, USA
| | | |
Collapse
|
36
|
Neural correlates of the affective properties of spontaneous and volitional laughter types. Neuropsychologia 2016; 95:30-39. [PMID: 27940151 DOI: 10.1016/j.neuropsychologia.2016.12.012] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 12/06/2016] [Accepted: 12/07/2016] [Indexed: 11/23/2022]
Abstract
Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and the superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence, and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), with this region responding most strongly to laughs rated at the extremes of the authenticity scale. While previous studies claimed a role for the right STG in a bipolar representation of emotional valence, we instead argue that this region may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative.
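The higher-order (quadratic) profile described above can be illustrated by regressing a regional response on linear and quadratic terms of the itemwise ratings; a positive quadratic weight means the region responds most at both ends of the scale. Data below are simulated stand-ins, not the study's measurements.

import numpy as np

rng = np.random.default_rng(1)
authenticity = rng.uniform(1, 7, 100)                            # per-laugh ratings
bold = 0.4 * (authenticity - 4) ** 2 + rng.normal(0, 0.5, 100)   # simulated U-shaped response

# Design matrix: intercept, linear term, quadratic term (ratings mean-centred)
c = authenticity - authenticity.mean()
X = np.column_stack([np.ones_like(c), c, c ** 2])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(beta)   # a positive beta[2] indicates the extremes-high quadratic profile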
Collapse
|
37
|
Tseng HH, Roiser JP, Modinos G, Falkenberg I, Samson C, McGuire P, Allen P. Corticolimbic dysfunction during facial and prosodic emotional recognition in first-episode psychosis patients and individuals at ultra-high risk. Neuroimage Clin 2016; 12:645-654. [PMID: 27747152 PMCID: PMC5053033 DOI: 10.1016/j.nicl.2016.09.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2016] [Revised: 08/22/2016] [Accepted: 09/06/2016] [Indexed: 01/17/2023]
Abstract
Emotional processing dysfunction is widely reported in patients with chronic schizophrenia and first-episode psychosis (FEP), and has been linked to functional abnormalities of corticolimbic regions. However, corticolimbic dysfunction is less studied in people at ultra-high risk for psychosis (UHR), particularly during the processing of prosodic voices. We examined corticolimbic response during an emotion recognition task in 18 UHR participants and compared them with 18 FEP patients and 21 healthy controls (HC). Emotion recognition accuracy and corticolimbic response were measured during functional magnetic resonance imaging (fMRI) using emotional dynamic facial and prosodic voice stimuli. Relative to HC, both the UHR and FEP groups showed impaired overall emotion recognition accuracy. During face trials, neither the UHR nor the FEP group showed significant differences in brain activation relative to HC; during voice trials, however, FEP patients showed reduced activation across corticolimbic networks, including the amygdala. UHR participants showed a trend toward an increased response in the caudate nucleus during the processing of emotionally valenced prosodic voices relative to HC. The results indicate that the corticolimbic dysfunction seen in FEP patients is also present, albeit to a lesser extent, in a UHR cohort, and may represent a neural substrate for emotional processing difficulties prior to the onset of florid psychosis.
Collapse
Affiliation(s)
- Huai-Hsuan Tseng
- Institute of Psychiatry, King's College London, United Kingdom
- Department of Psychiatry, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
| | - Jonathan P. Roiser
- Institute of Cognitive Neuroscience, University College London, United Kingdom
| | - Gemma Modinos
- Institute of Psychiatry, King's College London, United Kingdom
| | - Irina Falkenberg
- Institute of Psychiatry, King's College London, United Kingdom
- Philipps-University Marburg, Marburg, Germany
| | - Carly Samson
- Institute of Psychiatry, King's College London, United Kingdom
| | - Philip McGuire
- Institute of Psychiatry, King's College London, United Kingdom
| | - Paul Allen
- Institute of Psychiatry, King's College London, United Kingdom
- Department of Psychology, University of Roehampton, London, United Kingdom
| |
Collapse
|
38
|
The sound of emotions-Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782 DOI: 10.1016/j.neubiorev.2016.05.002] [Citation(s) in RCA: 117] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Revised: 05/01/2016] [Accepted: 05/04/2016] [Indexed: 12/15/2022]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In humans, these affective sounds span a repertoire of environmental sounds and human sounds such as vocalizations and music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience of these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of emotional meaning from a wide range of sounds, rather than the traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within the common network. It also highlights the importance of an extended brain network, beyond the central limbic and auditory brain systems, engaged in the processing of affective sounds.
Collapse
|
39
|
Frühholz S, van der Zwaag W, Saenz M, Belin P, Schobert AK, Vuilleumier P, Grandjean D. Neural decoding of discriminative auditory object features depends on their socio-affective valence. Soc Cogn Affect Neurosci 2016; 11:1638-49. [PMID: 27217117 DOI: 10.1093/scan/nsw066] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 05/11/2016] [Indexed: 11/12/2022] Open
Abstract
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners together with a so-called experimental 'feature elimination approach' to investigate neural decoding of three important voice features of two affective valence categories (i.e. aggressive and joyful vocalizations). We found a valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues already at the level of the auditory thalamus. Furthermore, pitch dynamics and harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations we also revealed sensitivity in the inferior frontal cortex (IFC) to the HNR and pitch dynamics. The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Furthermore, some regions partly showed a valence-dependent hypersensitivity to certain features, such as pitch dynamic sensitivity in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
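Two of the voice features named above, the fundamental frequency (f0) and the harmonics-to-noise ratio (HNR), can be estimated per frame from the normalized autocorrelation. The sketch below follows the classic autocorrelation formulation (after Boersma, 1993) and is a toy illustration, not the study's feature-extraction pipeline; the frame length and search range are assumptions.

import numpy as np

def f0_and_hnr(frame, sr, fmin=75.0, fmax=500.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]                          # normalize so lag 0 == 1
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])          # strongest periodicity
    r = ac[lag]
    hnr_db = 10 * np.log10(r / (1 - r))      # harmonics-to-noise ratio in dB
    return sr / lag, hnr_db

sr = 16000
t = np.arange(int(0.04 * sr)) / sr           # 40-ms voiced frame, simulated
voiced = np.sin(2 * np.pi * 140 * t) + 0.1 * np.random.randn(len(t))
print(f0_and_hnr(voiced, sr))                # ~140 Hz f0, positive HNR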
Collapse
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
| | - Wietske van der Zwaag
- Center for Biomedical Imaging, Ecole Polytechnique Fédérale de Lausanne 1015 Lausanne, Switzerland
| | - Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Department of Clinical Neurosciences, CHUV, 1011 Lausanne, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| | - Pascal Belin
- Department of Psychology, University of Glasgow, Glasgow G12 8QQ, UK
| | - Anne-Kathrin Schobert
- Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, University of Geneva, 1205 Geneva, Switzerland
| |
Collapse
|
40
|
Young KS, Parsons CE, Jegindoe Elmholdt EM, Woolrich MW, van Hartevelt TJ, Stevner ABA, Stein A, Kringelbach ML. Evidence for a Caregiving Instinct: Rapid Differentiation of Infant from Adult Vocalizations Using Magnetoencephalography. Cereb Cortex 2016; 26:1309-1321. [PMID: 26656998 PMCID: PMC4737615 DOI: 10.1093/cercor/bhv306] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
Crying is the most salient vocal signal of distress. The cries of a newborn infant alert adult listeners and often elicit caregiving behavior. For the parent, rapid responding to an infant in distress is an adaptive behavior, functioning to ensure offspring survival. The ability to react rapidly requires quick recognition and evaluation of stimuli followed by a co-ordinated motor response. Previous neuroimaging research has demonstrated early specialized activity in response to infant faces. Using magnetoencephalography, we found similarly early (100-200 ms) differences in neural responses to infant and adult cry vocalizations in auditory, emotional, and motor cortical brain regions. We propose that this early differential activity may help to rapidly identify infant cries and engage affective and motor neural circuitry to promote adaptive behavioral responding, before conscious awareness. These differences were observed in adults who were not parents, perhaps indicative of a universal brain-based "caregiving instinct."
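The early time-window comparison described above reduces, in its simplest form, to averaging evoked activity within the 100-200 ms window and contrasting the two vocalization types. The arrays below are simulated stand-ins for MEG evoked data; epoch length and trial counts are assumptions.

import numpy as np

times = np.arange(-100, 500) / 1000.0         # -100..499 ms epoch at 1 kHz
win = (times >= 0.100) & (times < 0.200)      # the 100-200 ms window

rng = np.random.default_rng(4)
infant = rng.normal(0.5, 1.0, (40, len(times)))   # trials x time, infant cries
adult = rng.normal(0.0, 1.0, (40, len(times)))    # trials x time, adult cries

diff = infant[:, win].mean() - adult[:, win].mean()
print(diff)   # positive values = stronger early response to infant cries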
Collapse
Affiliation(s)
- Katherine S Young
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Psychology
| | - Christine E Parsons
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Else-Marie Jegindoe Elmholdt
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Mark W Woolrich
- Oxford Centre for Human Brain Activity (OHBA), University of Oxford, Oxford, UK
| | - Tim J van Hartevelt
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Angus B A Stevner
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Oxford Centre for Human Brain Activity (OHBA), University of Oxford, Oxford, UK
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Alan Stein
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Wits/MRC Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, University of Witwatersrand, Johannesburg, South Africa
| | - Morten L Kringelbach
- Section of Child and Adolescent Psychiatry, Department of Psychiatry
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, CA, USA
| |
Collapse
|
41
|
Pannese A, Grandjean D, Frühholz S. Subcortical processing in auditory communication. Hear Res 2015; 328:67-77. [DOI: 10.1016/j.heares.2015.07.003] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2015] [Revised: 06/23/2015] [Accepted: 07/01/2015] [Indexed: 12/21/2022]
|
42
|
Jessen S, Kotz SA. Affect differentially modulates brain activation in uni- and multisensory body-voice perception. Neuropsychologia 2015; 66:134-43. [DOI: 10.1016/j.neuropsychologia.2014.10.038] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2014] [Revised: 09/22/2014] [Accepted: 10/30/2014] [Indexed: 10/24/2022]
|
43
|
Zhang D, Liu Y, Hou X, Sun G, Cheng Y, Luo Y. Discrimination of fearful and angry emotional voices in sleeping human neonates: a study of the mismatch brain responses. Front Behav Neurosci 2014; 8:422. [PMID: 25538587 PMCID: PMC4255595 DOI: 10.3389/fnbeh.2014.00422] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Accepted: 11/18/2014] [Indexed: 02/04/2023] Open
Abstract
Appropriate processing of human voices with different threat-related emotions is of evolutionarily adaptive value for the survival of individuals. Nevertheless, it is still not clear whether sensitivity to threat-related information is present at birth. Using an oddball paradigm, the current study investigated the neural correlates underlying automatic processing of the emotional voices of fear and anger in sleeping neonates. Event-related potential data showed that the neonatal brain, at fronto-central scalp sites, could discriminate fearful from angry voices: the mismatch response (MMR) was larger to the deviant stimuli (anger) than to the standard stimuli (fear). Furthermore, this fear-anger MMR discrimination was observed only when neonates were in an active sleep state. Although the neonates' sensitivity to threat-related voices is unlikely to reflect a conceptual understanding of fearful and angry emotions, this early-life discrimination may provide a foundation for the later development of emotion and social cognition.
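The oddball analysis described above amounts to a deviant-minus-standard difference wave averaged over fronto-central channels. A minimal sketch with simulated epochs is shown below; all array sizes and the channel assignment are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(2)
# (trials x channels x samples) epochs, e.g. 250 Hz over a -100..700 ms window
std_epochs = rng.normal(0.0, 1.0, (400, 8, 200))   # standard (fear) trials
dev_epochs = rng.normal(0.2, 1.0, (80, 8, 200))    # deviant (anger) trials

mmr = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)   # channels x samples
frontocentral = mmr[:4].mean(axis=0)   # average the (assumed) fronto-central rows
print(frontocentral.max())             # peak of the mismatch response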
Collapse
Affiliation(s)
- Dandan Zhang
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Yunzhe Liu
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
| | - Guoyu Sun
- Department of Pediatrics, Peking University First Hospital, Beijing, China
| | - Yawei Cheng
- Institute of Neuroscience, Yang-Ming University, Taipei, Taiwan; Department of Rehabilitation, Yang-Ming University Hospital, Ilan, Taiwan
| | - Yuejia Luo
- Institute of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China
| |
Collapse
|
44
|
Abstract
Accents provide information about the speaker's geographical, socio-economic, and ethnic background. Research in applied psychology and sociolinguistics suggests that we generally prefer our own accent to other varieties of our native language and attribute more positive traits to it. Despite the widespread influence of accents on social interactions and on educational and work settings, the neural underpinnings of this social bias toward our own accent, and what may drive it, are unexplored. We measured brain activity while participants from two different geographical backgrounds listened passively to three English accent types embedded in an adaptation design. Cerebral activity in several regions, including the bilateral amygdalae, revealed a significant interaction between the participants' own accent and the accent they listened to: while repetition of the listeners' own accent elicited an enhanced neural response, repetition of the other group's accent resulted in reduced responses classically associated with adaptation. Our findings suggest that increased social relevance of, or greater emotional sensitivity to, in-group accents may underlie the own-accent bias. Our results provide a neural marker for the bias associated with accents and show, for the first time, that the neural response to speech is partly shaped by the geographical background of the listener.
Collapse
Affiliation(s)
| | - Pascal Belin
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; International Laboratories for Brain, Music and Sound Research, Université de Montréal & McGill University, Montréal, Canada; Institut des Neurosciences de La Timone, UMR 7289, CNRS & Aix-Marseille Université, Marseille, France
| | - D Robert Ladd
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK
| |
Collapse
|
45
|
Frühholz S, Trost W, Grandjean D. The role of the medial temporal limbic system in processing emotions in voice and music. Prog Neurobiol 2014; 123:1-17. [PMID: 25291405 DOI: 10.1016/j.pneurobio.2014.09.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 09/16/2014] [Accepted: 09/29/2014] [Indexed: 01/15/2023]
Abstract
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.
Collapse
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
| | - Wiebke Trost
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
46
|
Milesi V, Cekic S, Péron J, Frühholz S, Cristinzio C, Seeck M, Grandjean D. Multimodal emotion perception after anterior temporal lobectomy (ATL). Front Hum Neurosci 2014; 8:275. [PMID: 24839437 PMCID: PMC4017134 DOI: 10.3389/fnhum.2014.00275] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2013] [Accepted: 04/14/2014] [Indexed: 11/30/2022] Open
Abstract
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.
Collapse
Affiliation(s)
- Valérie Milesi
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| | - Sezen Cekic
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| | - Julie Péron
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| | - Sascha Frühholz
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| | - Chiara Cristinzio
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, Geneva, Switzerland
| | - Margitta Seeck
- Epilepsy Unit, Department of Neurology, Geneva University Hospital, Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
47
|
Varvatsoulias G. Voice-Sensitive Areas in the Brain: A Single Participant Study Coupled With Brief Evolutionary Psychological Considerations. PSYCHOLOGICAL THOUGHT 2014. [DOI: 10.5964/psyct.v7i1.98] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
|
48
|
Vocal emotion of humanoid robots: a study from brain mechanism. ScientificWorldJournal 2014; 2014:216341. [PMID: 24587712 PMCID: PMC3920811 DOI: 10.1155/2014/216341] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2013] [Accepted: 11/14/2013] [Indexed: 11/17/2022] Open
Abstract
Driven by rapid ongoing advances in humanoid robotics, increasing attention has shifted to the emotional intelligence of AI robots, with the aim of facilitating communication between machines and human beings, especially vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanisms of vocal emotion by reviewing previous research and by developing an fMRI experiment to observe brain responses to human vocal emotion. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings.
Collapse
|
49
|
Kastein HB, Kumar VA, Kandula S, Schmidt S. Auditory pre-experience modulates classification of affect intensity: evidence for the evaluation of call salience by a non-human mammal, the bat Megaderma lyra. Front Zool 2013; 10:75. [PMID: 24341839 PMCID: PMC3866277 DOI: 10.1186/1742-9994-10-75] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2013] [Accepted: 11/16/2013] [Indexed: 11/10/2022] Open
Abstract
INTRODUCTION Immediate responses to emotional utterances in humans are determined by the acoustic structure and perceived relevance, i.e. salience, of the stimuli, and are controlled via central feedback that takes acoustic pre-experience into account. The present study explores whether the evaluation of stimulus salience in the acoustic communication of emotions is specifically human or has precursors in mammals. We created different pre-experiences by habituating bats (Megaderma lyra) to stimuli based on aggression calls or on response calls from high- or low-intensity agonistic interactions, respectively. We then presented a test stimulus of the opposite affect intensity for the same call type, and compared the modulation of response behaviour by affect intensity between the reciprocal experiments. RESULTS For aggression-call stimuli, the bats responded to the dishabituation stimuli independent of affect intensity, emphasising the attention-grabbing function of this call type. For response-call stimuli, the bats responded to a high-affect-intensity test stimulus after experiencing stimuli of low affect intensity, but transferred habituation to a low-affect-intensity test stimulus after experiencing stimuli of high affect intensity. This transfer of habituation was not due to over-habituation, as the bats responded to a frequency-shifted control stimulus. A direct comparison confirmed the asymmetric response behaviour in the reciprocal experiments. CONCLUSIONS Thus, the present study provides evidence not only for a discrimination of affect intensity, but also for an evaluation of stimulus salience, suggesting that basic assessment mechanisms involved in the perception of emotion are an ancestral trait in mammals.
Collapse
Affiliation(s)
| | | | | | - Sabine Schmidt
- Institute of Zoology, University of Veterinary Medicine Hannover Foundation, Bünteweg 17, Hannover 30559, Germany.
| |
Collapse
|
50
|
Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study. PLoS One 2013; 8:e80284. [PMID: 24278270 PMCID: PMC3835909 DOI: 10.1371/journal.pone.0080284] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2012] [Accepted: 09/26/2013] [Indexed: 11/19/2022] Open
Abstract
To study how auditory cortical processing is affected by anticipating and hearing long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting 6 s, were played in random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) presented 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as the N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period, activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the emotional and neutral sounds themselves, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds imply that, following a cue indicating the valence of an upcoming sound, auditory-cortex activity is modulated by the upcoming sound category throughout the anticipation period.
Collapse
|