1
Panico F, Luciano SM, Salzillo A, Sagliano L, Trojano L. Investigating cerebello-frontal circuits associated with emotional prosody: a double-blind tDCS and fNIRS study. Cerebellum 2024. [PMID: 39276299; DOI: 10.1007/s12311-024-01741-7]
Abstract
The emotional and cognitive cerebellum has been explored in several studies in recent years. Recent evidence suggests that the cerebellum contributes to processing emotional prosody, namely the ability to comprehend the emotional content of a given vocal utterance, likely mediated by anatomical and functional cerebello-prefrontal connections. In the present study, the involvement of a functional cerebello-prefrontal network in recognising emotional prosody was assessed by combining non-invasive anodal transcranial direct current stimulation (tDCS) over the right or left cerebellum with functional near-infrared spectroscopy (fNIRS) of the prefrontal cortex, in a double-blind, within-subject design with healthy participants. The results showed that right and, to a lesser extent, left cerebellar tDCS (compared to sham stimulation) reduced neural activation in the prefrontal cortex, while accuracy and reaction times on the vocal recognition task remained unchanged. These findings highlight functional properties of cerebello-frontal connections and the psychophysiological effects of cerebellar brain stimulation, with possible clinical applications in psychiatric and neurological conditions.
Affiliation(s)
- Francesco Panico, Sharon Mara Luciano, Alessia Salzillo, Laura Sagliano, Luigi Trojano
- University of Campania "Luigi Vanvitelli", Viale Ellittico 31, 81100 Caserta, Italy
2
Antonioni A, Raho EM, Straudi S, Granieri E, Koch G, Fadiga L. The cerebellum and the Mirror Neuron System: a matter of inhibition? From neurophysiological evidence to neuromodulatory implications. A narrative review. Neurosci Biobehav Rev 2024; 164:105830. [PMID: 39069236; DOI: 10.1016/j.neubiorev.2024.105830]
Abstract
Mirror neurons show activity during both action execution (AE) and action observation (AO). The Mirror Neuron System (MNS) may also be engaged during motor imagery (MI). Extensive research suggests that the cerebellum is interconnected with the MNS and may be critically involved in its activities. We gathered theoretical and experimental evidence on the cerebellum's role in MNS functions. The evidence shows that the cerebellum plays a major role during AO and MI, and that cerebellar lesions impair MNS functions, likely because the cerebellum, by modulating the activity of cortical inhibitory interneurons with mirror properties, contributes to the visuomotor matching that is fundamental for shaping those properties. Indeed, the cerebellum may strengthen sensory-motor patterns that minimise the discrepancy between predicted and actual outcomes, both during AE and AO. Furthermore, through its connections with the hippocampus, the cerebellum might be involved in internal simulations of motor programs during MI. Finally, as cerebellar neuromodulation might enhance its impact on MNS activity, we explored its potential neurophysiological and neurorehabilitation implications.
Affiliation(s)
- Annibale Antonioni
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy; Department of Neuroscience, Ferrara University Hospital, Ferrara 44124, Italy; Doctoral Program in Translational Neurosciences and Neurotechnologies, University of Ferrara, Ferrara 44121, Italy
- Emanuela Maria Raho
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy
- Sofia Straudi
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy; Department of Neuroscience, Ferrara University Hospital, Ferrara 44124, Italy
- Enrico Granieri
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy
- Giacomo Koch
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy; Center for Translational Neurophysiology of Speech and Communication (CTNSC), Italian Institute of Technology (IIT), Ferrara 44121, Italy; Non Invasive Brain Stimulation Unit, Istituto di Ricovero e Cura a Carattere Scientifico Santa Lucia, Rome 00179, Italy
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara 44121, Italy; Center for Translational Neurophysiology of Speech and Communication (CTNSC), Italian Institute of Technology (IIT), Ferrara 44121, Italy
3
Liu M, Teng X, Jiang J. Instrumental music training relates to intensity assessment but not emotional prosody recognition in Mandarin. PLoS One 2024; 19:e0309432. [PMID: 39213300; PMCID: PMC11364251; DOI: 10.1371/journal.pone.0309432]
Abstract
Building on research demonstrating the benefits of music training for emotional prosody recognition in nontonal languages, this study examines its unexplored influence on tonal languages. In tonal languages, the acoustic similarity between lexical tones and music, along with the dual role of pitch in conveying lexical and affective meanings, creates a unique interplay. We evaluated 72 participants, half of whom had extensive instrumental music training, with the other half serving as demographically matched controls. All participants completed an online test consisting of 210 Chinese pseudosentences, each designed to express one of five emotions: happiness, sadness, fear, anger, or neutrality. Robust statistical analyses, including effect size estimates and Bayes factors, revealed that the music and nonmusic groups exhibited similar abilities in identifying the emotional prosody of the various emotions. However, the music group gave higher intensity ratings to emotional prosodies of happiness, fear, and anger than the nonmusic group. These findings suggest that while instrumental music training is not related to emotional prosody recognition, it does appear to be related to perceived emotional intensity. This dissociation between emotion recognition and intensity evaluation adds a new piece to the puzzle of the complex relationship between music training and emotion perception in tonal languages.
Affiliation(s)
- Mengting Liu
- Department of Art, Harbin Conservatory of Music, Harbin, China
- Xiangbin Teng
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, China
4
Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024; 175:1-11. [PMID: 38691922; DOI: 10.1016/j.cortex.2024.03.009]
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA), an important social skill, but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) in previous testing of ERA. Participants judged brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory, and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional versus neutral stimuli, individuals with high (vs low) ERA showed greater activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS), and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
5
Huang YL, Chen TT, Dziobek I, Tseng HH. Mentalizing in a movie for the assessment of social cognition (MASC): the validation in a Taiwanese sample. BMC Psychol 2023; 11:287. [PMID: 37740240; PMCID: PMC10517527; DOI: 10.1186/s40359-023-01321-0]
Abstract
BACKGROUND The present study evaluated the psychometric properties of a sensitive video-based test of mentalizing skills, the Taiwanese version of the Movie for the Assessment of Social Cognition (MASC-TW). METHODS We recruited two independent samples: nonclinical participants (N = 167) and adult patients with schizophrenia (N = 41). The MASC-TW, two other social cognition measures, namely the Chinese version of the Theory of Mind (ToM) task and the Taiwanese version of the Diagnostic Analysis of Nonverbal Accuracy-2 (DANVA-TW-2), and an executive function measure, the Wisconsin Card Sorting Test (WCST), were administered to both groups. RESULTS The MASC-TW proved to be a reliable measure of mentalizing capacity, with a high Cronbach's α of 0.87. The intraclass correlation coefficient for MASC-TW total correct scores was 0.85 across three waves of data collection. Across the entire sample, MASC-TW scores correlated significantly with verbal and nonverbal scores on the ToM task and with recognition of facial and prosodic emotion on the DANVA-TW-2. Both executive function and emotion recognition emerged as noteworthy predictors of mentalizing, indicating that these two variables might play crucial roles in the development of mentalizing capacities. Finally, a receiver operating characteristic analysis revealed that the MASC-TW was the most accurate discriminator of diagnostic groups in patients with schizophrenia, supporting its validity. CONCLUSIONS Overall, the MASC-TW is an ecologically valid and useful tool for assessing mentalizing abilities in a Taiwanese population.
Affiliation(s)
- Yu-Lien Huang
- Department of Psychology, Chung Shan Medical University, Taichung, Taiwan
- Tzu-Ting Chen
- Department of Psychology, Fo Guang University, Yilan, Taiwan
- Isabel Dziobek
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Huai-Hsuan Tseng
- Institute of Behavioral Medicine, College of Medicine, National Cheng Kung University, Tainan, Taiwan; Department of Psychiatry, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
6
Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: a pilot exploratory fMRI study. Brain Cogn 2023; 165:105939. [PMID: 36549191; DOI: 10.1016/j.bandc.2022.105939]
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study were to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To this end, a pilot exploratory fMRI study was conducted with 33 participants. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations, after which they internally repeated them either with the same emotion or with expression suppressed (neutrally). The results show that inner speech has an emotional component, encoded by structures similar to those involved in overt speech. A unique role of the caudate nuclei in the suppression of expression in inner speech was also demonstrated.
Affiliation(s)
- Oleg Bronov
- Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia
7
Thomasson M, Ceravolo L, Corradi-Dell’Acqua C, Mantelli A, Saj A, Assal F, Grandjean D, Péron J. Dysfunctional cerebello-cerebral network associated with vocal emotion recognition impairments. Cereb Cortex Commun 2023; 4:tgad002. [PMID: 36726795; PMCID: PMC9883615; DOI: 10.1093/texcom/tgad002]
Abstract
Vocal emotion recognition, a key ability for assessing a speaker's emotional state, is known to be impaired following cerebellar dysfunction. Nevertheless, its possible functional integration in the large-scale brain network subtending emotional prosody recognition has yet to be explored. We administered an emotional prosody recognition task to patients with right- versus left-hemispheric cerebellar lesions and to a group of matched controls. We explored the lesional correlates of vocal emotion recognition through a network-based analysis, combining a neuropsychological approach to lesion mapping with normative brain connectome data. Results revealed impaired recognition among patients for neutral and negative prosody, with poorer sadness recognition in patients with right cerebellar lesions. Network-based lesion-symptom mapping revealed that sadness recognition performance was linked to a network connecting the cerebellum with left frontal, temporal, and parietal cortices. Moreover, when focusing solely on the subgroup of patients with right cerebellar damage, sadness recognition performance was associated with a more restricted network connecting the cerebellum to the left parietal lobe. As the left hemisphere is known to be crucial for processing short segmental information, these results suggest that a cortico-cerebellar network operates on a fine temporal scale during vocal emotion decoding.
Affiliation(s)
- Marine Thomasson
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology, University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Centre for Affective Sciences, University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland; Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, Geneva 1205, Switzerland
- Leonardo Ceravolo
- Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Centre for Affective Sciences, University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland
- Corrado Corradi-Dell’Acqua
- Theory of Pain Laboratory, Department of Psychology, Faculty of Psychology and Educational Sciences (FPSE), University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland; Geneva Neuroscience Centre, University of Geneva, Rue Michel-Servet 1, Geneva 1206, Switzerland
- Amélie Mantelli
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology, University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland
- Arnaud Saj
- Department of Psychology, University of Montreal, 90 avenue Vincent d'Indy, Montréal, Québec H2V 2S9, Canada
- Frédéric Assal
- Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, Rue Gabrielle-Perret-Gentil 4, Geneva 1205, Switzerland; Faculty of Medicine, University of Geneva, Rue Michel-Servet 1, Geneva 1206, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Centre for Affective Sciences, University of Geneva, 40 bd du Pont d’Arve, Geneva 1205, Switzerland
- Julie Péron (corresponding author)
- Clinical and Experimental Neuropsychology Laboratory, Faculté de Psychologie et des Sciences de l’Education, Université de Genève, 40 bd du Pont d’Arve, Geneva 1205, Switzerland
8
Billig AJ, Lad M, Sedley W, Griffiths TD. The hearing hippocampus. Prog Neurobiol 2022; 218:102326. [PMID: 35870677; PMCID: PMC10510040; DOI: 10.1016/j.pneurobio.2022.102326]
Abstract
The hippocampus has a well-established role in spatial and episodic memory but a broader function has been proposed including aspects of perception and relational processing. Neural bases of sound analysis have been described in the pathway to auditory cortex, but wider networks supporting auditory cognition are still being established. We review what is known about the role of the hippocampus in processing auditory information, and how the hippocampus itself is shaped by sound. In examining imaging, recording, and lesion studies in species from rodents to humans, we uncover a hierarchy of hippocampal responses to sound including during passive exposure, active listening, and the learning of associations between sounds and other stimuli. We describe how the hippocampus' connectivity and computational architecture allow it to track and manipulate auditory information - whether in the form of speech, music, or environmental, emotional, or phantom sounds. Functional and structural correlates of auditory experience are also identified. The extent of auditory-hippocampal interactions is consistent with the view that the hippocampus makes broad contributions to perception and cognition, beyond spatial and episodic memory. More deeply understanding these interactions may unlock applications including entraining hippocampal rhythms to support cognition, and intervening in links between hearing loss and dementia.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, USA
9
Newport EL, Seydell-Greenwald A, Landau B, Turkeltaub PE, Chambers CE, Martin KC, Rennert R, Giannetti M, Dromerick AW, Ichord RN, Carpenter JL, Berl MM, Gaillard WD. Language and developmental plasticity after perinatal stroke. Proc Natl Acad Sci U S A 2022; 119:e2207293119. [PMID: 36215488; PMCID: PMC9586296; DOI: 10.1073/pnas.2207293119]
Abstract
The mature human brain is lateralized for language, with the left hemisphere (LH) primarily responsible for sentence processing and the right hemisphere (RH) primarily responsible for processing suprasegmental aspects of language such as vocal emotion. However, it has long been hypothesized that in early life there is plasticity for language, allowing young children to acquire language in other cortical regions when LH areas are damaged. If true, what are the constraints on functional reorganization? Which areas of the brain can acquire language, and what happens to the functions these regions ordinarily perform? We address these questions by examining long-term outcomes in adolescents and young adults who, as infants, had a perinatal arterial ischemic stroke to the LH areas ordinarily subserving sentence processing. We compared them with their healthy age-matched siblings. All participants were tested on a battery of behavioral and functional imaging tasks. While stroke participants were impaired in some nonlinguistic cognitive abilities, their processing of sentences and of vocal emotion was normal and equal to that of their healthy siblings. In almost all stroke participants, both abilities had developed in the healthy RH. Our results provide insights into the remarkable ability of the young brain to reorganize language. Reorganization is highly constrained, with sentence processing almost always in the RH frontotemporal regions homotopic to their location in the healthy brain. This activation is somewhat segregated from RH emotion processing, suggesting that the two functions perform best when each has its own neural territory.
Affiliation(s)
a) Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057
b) MedStar National Rehabilitation Hospital, Washington, DC 20010
c) Johns Hopkins University, Baltimore, MD 21218
d) Perelman School of Medicine at the University of Pennsylvania and Children’s Hospital of Philadelphia, Philadelphia, PA 19104
e) Children’s National Hospital and Center for Neuroscience, Washington, DC 20010
- Elissa L. Newport (a, b; corresponding author)
- Anna Seydell-Greenwald (a, b)
- Barbara Landau (a, b, c)
- Peter E. Turkeltaub (a, b)
- Catherine E. Chambers (a, b)
- Kelly C. Martin (a, b)
- Rebecca Rennert (a, b)
- Margot Giannetti (a, b)
- Alexander W. Dromerick (a, b)
- Rebecca N. Ichord (d)
- Madison M. Berl (e)
- William D. Gaillard (a, b, e)
10
Crossed functional specialization between the basal ganglia and cerebellum during vocal emotion decoding: insights from stroke and Parkinson’s disease. Cogn Affect Behav Neurosci 2022; 22:1030-1043. [PMID: 35474566; PMCID: PMC9458588; DOI: 10.3758/s13415-022-01000-4]
Abstract
There is growing evidence that both the basal ganglia and the cerebellum play functional roles in emotion processing, either directly or indirectly, through their connections with cortical and subcortical structures. However, the lateralization of this complex processing in emotion recognition remains unclear. To address this issue, we investigated emotional prosody recognition in individuals with Parkinson’s disease (a model of basal ganglia dysfunction), in patients with cerebellar stroke, and in matched healthy controls (n = 24 in each group). We analysed performance according to the lateralization of the predominant brain degeneration or lesion. Results showed that right-sided dysfunction (of the basal ganglia or cerebellum) was likely to induce greater deficits than left-sided dysfunction. Moreover, deficits following left hemispheric dysfunction were only observed in cerebellar stroke patients, and these deficits resembled those observed after degeneration of the right basal ganglia. Additional analyses taking disease duration or time since stroke into account revealed a worsening of performance over time in patients with predominantly right-sided lesions. These results point to the differential but complementary involvement of the cerebellum and basal ganglia in emotional prosody decoding, with a probable hemispheric specialization according to the level of cognitive integration.
11
Elizalde Acevedo B, Olano MA, Bendersky M, Kochen S, Agüero Vera V, Chambeaud N, Gargiulo M, Sabatte J, Gargiulo Á, Alba-Ferrara L. Brain mapping of emotional prosody in patients with drug-resistant temporal epilepsy: an indicator of plasticity. Cortex 2022; 153:97-109. [DOI: 10.1016/j.cortex.2022.04.014]
Abstract
INTRODUCTION Emotional prosody, a suprasegmental component of language, is predominantly processed by right temporo-frontal areas of the cerebral cortex. In temporal lobe epilepsy (TLE), brain disturbances affecting prosody processing frequently occur. This study used fMRI to assess compensatory brain mechanisms of prosody processing in refractory TLE. METHODS Patients with focal unilateral epilepsy, right (RTLE) (N = 19) or left (LTLE) (N = 19), and healthy controls (CTRL) (N = 20) were evaluated during a prosody-decoding fMRI task. The stimuli consisted of spoken numbers delivered with different tones of voice (joy, fear, anger, neutral, and silent trials). Participants were instructed to label the emotion with a keypad. "Joy" was removed from the analysis due to a high degree of variability. A lateralization index (LI) was used to examine individual differences in the interhemispheric activations of each participant. RESULTS Behaviorally, the LTLE and RTLE groups did not differ significantly from each other or from CTRL. In the Negative Emotions versus Baseline contrast, the whole-sample analysis showed extensive activations in bilateral superior temporal gyrus, bilateral precentral and postcentral gyrus, right putamen, and left cerebellar vermis. Compared to LTLE and CTRL, RTLE activated similar areas, but to a lesser extent. The LI analysis revealed significant differences in hemispheric laterality of the temporal and parietal lobes between RTLE and the other two groups, with the RTLE group lateralized towards the left. DISCUSSION The LI indicated that, whereas the CTRL and LTLE groups recruited putative prosodic regions, the RTLE group lateralized prosody processing towards the left, recruiting contralateral nodes homotopic to the putative prosody areas. Considering that the groups did not differ in prosody task performance, the findings suggest that alternative brain nodes were recruited for the task in the RTLE group, demonstrating plasticity.
Affiliation(s)
- Bautista Elizalde Acevedo
- Instituto de Investigaciones en Medicina Traslacional (IIMT), CONICET-Universidad Austral, Derqui-Pilar, Buenos Aires, Argentina; Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina; Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina.
- María A Olano
- Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina
- Mariana Bendersky
- Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina; Laboratorio de Anatomía Viviente, 3ra Cátedra de Anatomía Normal, Facultad de Medicina, Universidad de Buenos Aires, Buenos Aires, Argentina
- Silvia Kochen
- Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina
- Valentina Agüero Vera
- Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina
- Nahuel Chambeaud
- Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina
- Mercedes Gargiulo
- Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina
- Juliana Sabatte
- Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina
- Ángel Gargiulo
- Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina
- Lucía Alba-Ferrara
- Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina; Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina
12
Morningstar M, Grannis C, Mattson WI, Nelson EE. Functional patterns of neural activation during vocal emotion recognition in youth with and without refractory epilepsy. Neuroimage Clin 2022; 34:102966. [PMID: 35182929 PMCID: PMC8859003 DOI: 10.1016/j.nicl.2022.102966] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 01/12/2022] [Accepted: 02/11/2022] [Indexed: 01/10/2023]
Abstract
Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Kingston, ON, Canada; Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States.
- C Grannis
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- W I Mattson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- E E Nelson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States
13
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. [PMID: 35323933 PMCID: PMC9527472 DOI: 10.1093/scan/nsac021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 02/25/2022] [Accepted: 03/21/2022] [Indexed: 01/09/2023] Open
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen’s University, 62 Arch Street, Kingston, ON K7L 3L3, Canada.
- Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
14
Lin RZ, Marsh EB. Abnormal singing can identify patients with right hemisphere cortical strokes at risk for impaired prosody. Medicine (Baltimore) 2021; 100:e26280. [PMID: 34115027 PMCID: PMC8202571 DOI: 10.1097/md.0000000000026280] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 05/21/2021] [Indexed: 01/04/2023] Open
Abstract
Despite lacking the aphasia seen with left hemisphere (LH) infarcts involving the middle cerebral artery territory, right hemisphere (RH) strokes can result in significant difficulties with affective prosody. These impairments may be more difficult to identify but lead to significant communication problems. We determined whether evaluation of singing can accurately identify stroke patients with cortical RH infarcts at risk for prosodic impairment who may benefit from rehabilitation. A prospective cohort of 36 patients with acute ischemic stroke was recruited. Participants underwent an experimental battery evaluating their singing, prosody comprehension, and prosody production. Singing samples were rated by 2 independent reviewers as subjectively "normal" or "abnormal," and analyzed for properties of the fundamental frequency. Relationships between infarct location, singing, and prosody performance were evaluated using t tests and chi-squared analysis. Eighty percent of participants with LH cortical strokes were unable to complete any of the tasks due to severe aphasia. For the remainder, singing ratings corresponded to stroke location for 68% of patients. Patients with RH cortical strokes demonstrated a lower mean fundamental frequency while singing than those with subcortical infarcts (176.8 vs 130.4, P = 0.02). They also made more errors on tasks of prosody comprehension (28.6 vs 16.0, P < 0.001) and production (40.4 vs 18.4, P < 0.001). Patients with RH cortical infarcts are more likely to exhibit impaired prosody comprehension and production and to demonstrate poor variation of tone when singing compared with patients with subcortical infarcts. A simple singing screen can successfully identify patients with cortical lesions and potential prosodic deficits.
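The "properties of the fundamental frequency" extracted from singing samples rest on frame-wise F0 estimation. A toy autocorrelation-based sketch, assuming mono PCM input (the function name, parameters, and thresholds are illustrative; published analyses typically use dedicated tools such as Praat):

```python
import numpy as np

def mean_f0_autocorr(x, sr, fmin=75.0, fmax=400.0, frame_len=0.04, hop=0.01):
    """Crude mean-F0 estimate via frame-wise autocorrelation peak picking."""
    n = int(frame_len * sr)                        # samples per analysis frame
    step = int(hop * sr)                           # hop size between frames
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)  # plausible pitch lags
    f0s = []
    for start in range(0, len(x) - n, step):
        frame = x[start:start + n] - np.mean(x[start:start + n])
        # one-sided autocorrelation of the frame
        ac = np.correlate(frame, frame, mode="full")[n - 1:]
        if ac[0] <= 1e-12:
            continue                               # skip (near-)silent frames
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        f0s.append(sr / lag)                       # pitch = rate / best lag
    return float(np.mean(f0s)) if f0s else 0.0

# Sanity check on a synthetic 200 Hz tone
sr = 16000
t = np.arange(0, 1.0, 1 / sr)
est = mean_f0_autocorr(np.sin(2 * np.pi * 200 * t), sr)  # ≈ 200.0
```

A group contrast like the one reported (cortical vs. subcortical mean F0) would then be a t test over per-patient means; robust voicing detection and octave-error correction are deliberately omitted here.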
Affiliation(s)
- Rebecca Z. Lin
- Department of Cognitive Science, Johns Hopkins University
- Elisabeth B. Marsh
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, USA
15
Mariana B, Carolina L, Valeria A, Bautista EA, Silvia K, Lucía AF. Functional anatomy of idiomatic expressions. Brain Topogr 2021; 34:489-503. [PMID: 33948754 DOI: 10.1007/s10548-021-00843-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 04/21/2021] [Indexed: 10/21/2022]
Abstract
Idiomatic expressions (IE) are groups of words whose meaning differs from the sum of their components. The neural mechanisms underlying their processing are still debated, especially regarding lateralization, the main structures involved, and whether this neural network is independent of the spoken language. We aimed to investigate the neural correlates of IE processing in healthy Spanish speakers. Twenty-one native speakers of Spanish were asked to select one of 4 possible meanings for IE or literal sentences. fMRI scans were performed in a 3.0T scanner and processed with SPM 12, comparing IE vs. literal sentences. Laterality indices were calculated at the group level. IE activated a bilateral, slightly right-sided network comprising the pars triangularis and areas 9 and 10; in the left hemisphere (LH), the pars orbitalis and the superior frontal, angular, and fusiform gyri; and in the right hemisphere (RH), the anterior insula and the middle frontal and superior temporal gyri. This network reveals the importance of the RH, besides traditional LH areas, for comprehending IE. This agrees with the semantic coding model: the LH activates narrow semantic fields, choosing one single meaning and ignoring others, while the RH detects distant semantic relationships, activating diffuse semantic fields. It is also in line with the configuration hypothesis: both meanings, literal and figurative, are processed simultaneously, until the literal meaning is definitively rejected and the figurative one is accepted. Processing IE requires the activation of fronto-temporal networks in both hemispheres. The results concur with previous studies in other languages, so these networks are independent of the spoken language. Understanding these mechanisms sheds light on IE processing difficulties in different clinical populations and must be considered when planning resective surgery.
Affiliation(s)
- Bendersky Mariana
- Living Anatomy Laboratory, 3rd Normal Anatomy Department, School of Medicine, Buenos Aires University, Paraguay 2155, Buenos Aires, Argentina; ENyS (Studies in Neurosciences and Complex Systems), National Scientific and Technical Research Council (CONICET), National University A. Jauretche (UNAJ), El Cruce Hospital Néstor Kirchner, Avenue Calchaquí 5402, Florencio Varela, Buenos Aires, Argentina
- Lomlomdjian Carolina
- ENyS (Studies in Neurosciences and Complex Systems), National Scientific and Technical Research Council (CONICET), National University A. Jauretche (UNAJ), El Cruce Hospital Néstor Kirchner, Avenue Calchaquí 5402, Florencio Varela, Buenos Aires, Argentina; Department of Neurology, Hospital Austral, Pilar, Argentina
- Abusamra Valeria
- School of Philosophy and Literature, National Scientific and Technical Research Council-Argentina (CONICET), Buenos Aires University, Puan 480, Buenos Aires, Argentina
- Elizalde Acevedo Bautista
- ENyS (Studies in Neurosciences and Complex Systems), National Scientific and Technical Research Council (CONICET), National University A. Jauretche (UNAJ), El Cruce Hospital Néstor Kirchner, Avenue Calchaquí 5402, Florencio Varela, Buenos Aires, Argentina; Faculty of Biomedical Science, Austral University, Mariano Acosta 1611, Pilar, Buenos Aires, Argentina; IIMT (Instituto de Investigaciones en Medicina Traslacional), CONICET-Austral University, Derqui-Pilar, Buenos Aires, Argentina
- Kochen Silvia
- ENyS (Studies in Neurosciences and Complex Systems), National Scientific and Technical Research Council (CONICET), National University A. Jauretche (UNAJ), El Cruce Hospital Néstor Kirchner, Avenue Calchaquí 5402, Florencio Varela, Buenos Aires, Argentina
- Alba-Ferrara Lucía
- ENyS (Studies in Neurosciences and Complex Systems), National Scientific and Technical Research Council (CONICET), National University A. Jauretche (UNAJ), El Cruce Hospital Néstor Kirchner, Avenue Calchaquí 5402, Florencio Varela, Buenos Aires, Argentina; Faculty of Biomedical Science, Austral University, Mariano Acosta 1611, Pilar, Buenos Aires, Argentina
16
Thomasson M, Benis D, Saj A, Voruz P, Ronchi R, Grandjean D, Assal F, Péron J. Sensory contribution to vocal emotion deficit in patients with cerebellar stroke. Neuroimage Clin 2021; 31:102690. [PMID: 34000647 PMCID: PMC8138671 DOI: 10.1016/j.nicl.2021.102690] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 04/11/2021] [Accepted: 04/29/2021] [Indexed: 11/29/2022]
Abstract
In recent years, there has been increasing evidence of cerebellar involvement in emotion processing. Difficulties in the recognition of emotion from voices (i.e., emotional prosody) have been observed following cerebellar stroke. However, the interplay between sensory and higher-order cognitive dysfunction in these deficits, as well as possible hemispheric specialization for emotional prosody processing, has yet to be elucidated. We investigated the emotional prosody recognition performances of patients with right versus left cerebellar lesions, as well as of matched controls, entering the acoustic features of the stimuli in our statistical model. We also explored the cerebellar lesion-behavior relationship, using voxel-based lesion-symptom mapping. Results revealed impairment of vocal emotion recognition in both patient subgroups, particularly for neutral or negative prosody, with a higher number of misattributions in patients with right-hemispheric stroke. Voxel-based lesion-symptom mapping showed that some emotional misattributions correlated with lesions in the right Lobules VIIb and VIII and right Crus I and II. Furthermore, a significant proportion of the variance in this misattribution was explained by acoustic features such as pitch, loudness, and spectral aspects. These results point to bilateral posterior cerebellar involvement in both the sensory and cognitive processing of emotions.
Affiliation(s)
- Marine Thomasson
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland; Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, 1205 Geneva, Switzerland
- Damien Benis
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland
- Arnaud Saj
- Department of Psychology, University of Montreal, 2900 Montreal, QC, Canada
- Philippe Voruz
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland
- Roberta Ronchi
- Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, 1205 Geneva, Switzerland; Laboratory of Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, University of Geneva, 1205 Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland
- Frédéric Assal
- Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, 1205 Geneva, Switzerland; Faculty of Medicine, University of Geneva, 1205 Geneva, Switzerland
- Julie Péron
- Clinical and Experimental Neuropsychology Laboratory, Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology and Swiss Center for Affective Sciences, University of Geneva, 1205 Geneva, Switzerland; Cognitive Neurology Unit, Department of Neurology, University Hospitals of Geneva, 1205 Geneva, Switzerland.
17
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782 DOI: 10.1016/j.pneurobio.2020.101948] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 10/12/2020] [Accepted: 11/04/2020] [Indexed: 12/24/2022]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
18
Olano MA, Elizalde Acevedo B, Chambeaud N, Acuña A, Marcó M, Kochen S, Alba-Ferrara L. Emotional salience enhances intelligibility in adverse acoustic conditions. Neuropsychologia 2020; 147:107580. [DOI: 10.1016/j.neuropsychologia.2020.107580] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Revised: 05/29/2020] [Accepted: 08/03/2020] [Indexed: 11/30/2022]
19
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310 PMCID: PMC7267943 DOI: 10.1002/hbm.24893] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 11/18/2019] [Accepted: 11/29/2019] [Indexed: 01/09/2023] Open
Abstract
Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of, and decisions on, other individuals' emotions. HIGHLIGHTS: Emotion classification involves heterogeneous perception and decision-making tasks; decision-making processes on emotions are rarely covered by existing emotion theories; we propose an evidence-based neurocognitive model of decision-making on emotions; bilateral brain processes support nonverbal decisions, left-hemisphere processes verbal decisions; the left amygdala is involved in any kind of decision on emotions.
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
20
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288 DOI: 10.1016/j.neuroimage.2019.116509] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 12/23/2019] [Accepted: 12/26/2019] [Indexed: 11/24/2022] Open
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
Collapse
21
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2019; 19:1418-1432. [PMID: 31515750 DOI: 10.3758/s13415-019-00742-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
22
Saffarian A, Shavaki YA, Shahidi GA, Jafari Z. Effect of Parkinson Disease on Emotion Perception Using the Persian Affective Voices Test. J Voice 2019; 33:580.e1-580.e9. [DOI: 10.1016/j.jvoice.2018.01.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Accepted: 01/16/2018] [Indexed: 12/01/2022]
23
Emotional prosody Stroop effect in Hindi: An event related potential study. PROGRESS IN BRAIN RESEARCH 2019. [PMID: 31196434 DOI: 10.1016/bs.pbr.2019.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register]
Abstract
Prosody processing is an important aspect of language comprehension. Previous research on emotional word-prosody conflict has shown that participants perform worse when emotional prosody and word meaning are incongruent. Studies with event-related potentials have shown a congruency effect in the N400 component. There has been no study on emotional processing in Hindi in the context of conflict between emotional word meaning and prosody. We used happy and angry words spoken with happy and angry prosody. Participants had to identify whether the word had a happy or angry meaning. The results showed a congruency effect, with worse performance in incongruent trials, indicating an emotional Stroop effect in Hindi. The ERP results showed that prosody information is detected very early, as seen in the N1 component. In addition, there was a congruency effect in the N400. These results show that prosody is processed very early and that an emotional meaning-prosody congruency effect is obtained in Hindi. Further studies are needed to investigate similarities and differences in the cognitive control associated with language processing.
24
Zhao C, Chronaki G, Schiessl I, Wan MW, Abel KM. Is infant neural sensitivity to vocal emotion associated with mother-infant relational experience? PLoS One 2019; 14:e0212205. [PMID: 30811431 PMCID: PMC6392422 DOI: 10.1371/journal.pone.0212205] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 01/29/2019] [Indexed: 12/20/2022] Open
Abstract
An early understanding of others' vocal emotions provides infants with a distinct advantage for eliciting appropriate care from caregivers and for navigating their social world. Consistent with this notion, an emerging literature suggests that a temporal cortical response to the prosody of emotional speech is observable in the first year of life. Furthermore, neural specialisation to vocal emotion in infancy may vary according to early experience. Neural sensitivity to emotional non-speech vocalisations was investigated in 29 six-month-old infants using functional near-infrared spectroscopy (fNIRS). Both angry and happy vocalisations evoked increased activation in the temporal cortices (relative to neutral and angry vocalisations, respectively), and the strength of the angry minus neutral effect was positively associated with the degree of directiveness in the mothers' play interactions with their infant. This first fNIRS study of infant vocal emotion processing implicates bilateral temporal mechanisms similar to those found in adults and suggests that infants who experience more directive caregiving or social play may more strongly or preferentially process vocal anger by six months of age.
Affiliation(s)
- Chen Zhao
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Georgia Chronaki
- Developmental Cognitive Neuroscience (DCN) Laboratory, School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Developmental Brain-Behaviour Laboratory, Psychology, University of Southampton, United Kingdom
- Ingo Schiessl
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Ming Wai Wan
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Kathryn M. Abel
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
Collapse
|
25
|
Schelinski S, von Kriegstein K. The Relation Between Vocal Pitch and Vocal Emotion Recognition Abilities in People with Autism Spectrum Disorder and Typical Development. J Autism Dev Disord 2019; 49:68-82. [PMID: 30022285 PMCID: PMC6331502 DOI: 10.1007/s10803-018-3681-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
We tested the relation between vocal emotion and vocal pitch perception abilities in adults with high-functioning autism spectrum disorder (ASD) and pairwise matched adults with typical development. The ASD group had impaired vocal but typical non-vocal pitch and vocal timbre perception abilities. The ASD group showed less accurate vocal emotion perception than the comparison group and vocal emotion perception abilities were correlated with traits and symptoms associated with ASD. Vocal pitch and vocal emotion perception abilities were significantly correlated in the comparison group only. Our results suggest that vocal emotion recognition difficulties in ASD might not only be based on difficulties with complex social tasks, but also on difficulties with processing of basic sensory features, such as vocal pitch.
Collapse
Affiliation(s)
- Stefanie Schelinski
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Technische Universität Dresden, Faculty of Psychology, Bamberger Straße 7, 01187 Dresden, Germany
| | - Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Technische Universität Dresden, Faculty of Psychology, Bamberger Straße 7, 01187 Dresden, Germany
| |
Collapse
|
26
|
Liang B, Du Y. The Functional Neuroanatomy of Lexical Tone Perception: An Activation Likelihood Estimation Meta-Analysis. Front Neurosci 2018; 12:495. [PMID: 30087589 PMCID: PMC6066585 DOI: 10.3389/fnins.2018.00495] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 07/02/2018] [Indexed: 11/13/2022] Open
Abstract
In tonal languages such as Chinese, lexical tone serves as a phonemic feature in determining word meaning. At the same time, it is close to prosody in terms of suprasegmental pitch variations and larynx-based articulation. The important yet mixed nature of lexical tone has motivated considerable research, but no consensus has been reached on its functional neuroanatomy. This meta-analysis aimed at uncovering the neural network of lexical tone perception in comparison with that of phoneme and prosody in a unified framework. Independent Activation Likelihood Estimation meta-analyses were conducted for different linguistic elements: lexical tone by native tonal language speakers, lexical tone by non-tonal language speakers, phoneme, word-level prosody, and sentence-level prosody. Results showed that lexical tone and prosody studies demonstrated more extensive activations in the right than the left auditory cortex, whereas the opposite pattern was found for phoneme studies. Only tonal language speakers consistently recruited the left anterior superior temporal gyrus (STG) for processing lexical tone, an area implicated in phoneme processing and word-form recognition. Moreover, an anterior-lateral to posterior-medial gradient of activation as a function of element timescale was revealed in the right STG, in which the activation for lexical tone lay between that for phoneme and that for prosody. Another topological pattern was shown on the left precentral gyrus (preCG), with the activation for lexical tone overlapping with that for prosody but ventral to that for phoneme. These findings provide evidence that the neural network for lexical tone perception is hybrid with those for phoneme and prosody. That is, resembling prosody, lexical tone perception, regardless of language experience, involved the right auditory cortex, with activation localized between sites engaged by phonemic and prosodic processing, suggesting a hierarchical organization of representations in the right auditory cortex. For tonal language speakers, lexical tone additionally engaged the left STG lexical mapping network, consistent with the phonemic representation. Similarly, when processing lexical tone, only tonal language speakers engaged the left preCG site implicated in prosody perception, consistent with tonal language speakers having stronger articulatory representations for lexical tone in the laryngeal sensorimotor network. A dynamic dual-stream model for lexical tone perception was proposed and discussed.
Collapse
Affiliation(s)
- Baishen Liang
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China.,Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China.,Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
27
|
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 03/16/2018] [Accepted: 04/24/2018] [Indexed: 01/05/2023]
|
28
|
Alba-Ferrara L, Kochen S, Hausmann M. Emotional Prosody Processing in Epilepsy: Some Insights on Brain Reorganization. Front Hum Neurosci 2018; 12:92. [PMID: 29593517 PMCID: PMC5859098 DOI: 10.3389/fnhum.2018.00092] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2017] [Accepted: 02/26/2018] [Indexed: 11/27/2022] Open
Abstract
Drug resistant epilepsy is one of the most complex, multifactorial and polygenic neurological syndromes. Besides its dynamicity and variability, it still provides us with a model to study the brain-behavior relationship, giving cues on the anatomy and functional representation of brain function. Given that the onset zone of focal epileptic seizures often affects different anatomical areas, cortical but limited to one hemisphere, this condition also lets us study the functional differences between the left and right cerebral hemispheres. One lateralized function in the human brain is emotional prosody, and it can be a useful ictal sign offering hints on the location of the epileptogenic zone. Despite its importance for effective communication, prosody is not considered an eloquent domain, making resective surgery on its neural correlates feasible. We performed an electronic database search (Medline and PsychINFO) from inception to July 2017 for studies about prosody in epilepsy. The search terms included "epilepsy," "seizure," "emotional prosody," and "vocal affect." This review focuses on emotional prosody processing in epilepsy, as it can give hints regarding plastic functional changes following seizures (preoperatively) and resection (postoperatively), and also as an ictal sign enabling the assessment of dynamic brain networks. Moreover, it is argued that such reorganization can help to preserve the expression and reception of emotional prosody as a central skill for developing appropriate social interactions.
Collapse
Affiliation(s)
- Lucy Alba-Ferrara
- Facultad de Ciencias Biomedicas, Austral University, Buenos Aires, Argentina.,Estudios en Neurociencias y Sistemas Complejos, Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Florencio Varela, Argentina
| | - Silvia Kochen
- Estudios en Neurociencias y Sistemas Complejos, Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Florencio Varela, Argentina
| | - Markus Hausmann
- Science Labs, Department of Psychology, Durham University, Durham, United Kingdom
| |
Collapse
|
29
|
Speech Prosodies of Different Emotional Categories Activate Different Brain Regions in Adult Cortex: an fNIRS Study. Sci Rep 2018; 8:218. [PMID: 29317758 PMCID: PMC5760650 DOI: 10.1038/s41598-017-18683-2] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 12/14/2017] [Indexed: 11/12/2022] Open
Abstract
Emotional expressions of others embedded in speech prosodies are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results demonstrated several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of the superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that categorization of emotions occurs within a high-level brain region, the frontal cortex, since the brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of the inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verified and extended previous fMRI findings in the adult brain and also provided a "developed version" of brain activation for our following neonatal study.
Collapse
|
30
|
Rosenblau G, Kliemann D, Dziobek I, Heekeren HR. Emotional prosody processing in autism spectrum disorder. Soc Cogn Affect Neurosci 2017; 12:224-239. [PMID: 27531389 PMCID: PMC5390729 DOI: 10.1093/scan/nsw118] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2015] [Accepted: 08/12/2016] [Indexed: 01/10/2023] Open
Abstract
Individuals with Autism Spectrum Disorder (ASD) are characterized by severe deficits in social communication, but the nature of their impairments in emotional prosody processing has yet to be specified. Here, we investigated emotional prosody processing in individuals with ASD and controls with novel, lifelike behavioral and neuroimaging paradigms. Compared to controls, individuals with ASD showed reduced emotional prosody recognition accuracy on a behavioral task. On the neural level, individuals with ASD displayed reduced activity of the STS, insula and amygdala for complex vs basic emotions compared to controls. Moreover, the coupling between the STS and amygdala for complex vs basic emotions was reduced in the ASD group. Finally, groups differed with respect to the relationship between brain activity and behavioral performance. Brain activity during emotional prosody processing was more strongly related to prosody recognition accuracy in ASD participants. In contrast, the coupling between STS and anterior cingulate cortex (ACC) activity predicted behavioral task performance more strongly in the control group. These results provide evidence for aberrant emotional prosody processing in individuals with ASD. They suggest that the differences in the relationship between the neural and behavioral levels in individuals with ASD may account for their observed deficits in social communication.
Collapse
Affiliation(s)
- Gabriela Rosenblau
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Berlin 14195, Germany.,Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany.,Yale Child Study Center, Yale University, 230 S. Frontage Road, New Haven, CT 06519, USA
| | - Dorit Kliemann
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Berlin 14195, Germany.,Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany.,McGovern Institute for Brain Research, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA.,Department of Neurology, Massachusetts General Hospital/Harvard Medical School, 149 Thirteenth Street, Charlestown, MA 02129, USA
| | - Isabel Dziobek
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Berlin 14195, Germany.,Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin 10099, Germany
| | - Hauke R Heekeren
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Berlin 14195, Germany.,Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany.,Dahlem Institute for Neuroimaging of Emotion, Freie Universität, Berlin, Germany
| |
Collapse
|
31
|
Rangaprakash D, Dretsch MN, Venkataraman A, Katz JS, Denney TS, Deshpande G. Identifying disease foci from static and dynamic effective connectivity networks: Illustration in soldiers with trauma. Hum Brain Mapp 2017; 39:264-287. [PMID: 29058357 DOI: 10.1002/hbm.23841] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Revised: 08/29/2017] [Accepted: 10/01/2017] [Indexed: 12/15/2022] Open
Abstract
Brain connectivity studies report group differences in pairwise connection strengths. While informative, such results are difficult to interpret since our understanding of the brain relies on region-based properties, rather than on connection information. Given that large disruptions in the brain are often caused by a few pivotal sources, we propose a novel framework to identify the sources of functional disruption from effective connectivity networks. Our approach integrates static and time-varying effective connectivity modeling in a probabilistic framework, to identify aberrant foci and the corresponding aberrant connectomics network. Using resting-state fMRI, we illustrate the utility of this novel approach in U.S. Army soldiers (N = 87) with posttraumatic stress disorder (PTSD), mild traumatic brain injury (mTBI) and combat controls. Additionally, we employed machine-learning classification to identify those significant connectivity features that possessed high predictive ability. We identified three disrupted foci (middle frontal gyrus [MFG], insula, hippocampus), and an aberrant prefrontal-subcortical-parietal network of information flow. We found the MFG to be the pivotal focus of network disruption, with aberrant strength and temporal-variability of effective connectivity to the insula, amygdala and hippocampus. These connectivities also possessed high predictive ability (giving a classification accuracy of 81%); and they exhibited significant associations with symptom severity and neurocognitive functioning. In summary, dysregulation originating in the MFG caused elevated and temporally less-variable connectivity in subcortical regions, followed by a similar effect on parietal memory-related regions. This mechanism likely contributes to the reduced control over traumatic memories leading to re-experiencing, hyperarousal and flashbacks observed in soldiers with PTSD and mTBI. Hum Brain Mapp 39:264-287, 2018. © 2017 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- D Rangaprakash
- AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA.,Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, USA
| | - Michael N Dretsch
- U.S. Army Aeromedical Research Laboratory, Fort Rucker, Alabama.,Human Dimension Division, HQ TRADOC, Fort Eustis, Virginia
| | - Archana Venkataraman
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland
| | - Jeffrey S Katz
- AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA.,Department of Psychology, Auburn University, Auburn, Alabama.,Alabama Advanced Imaging Consortium, USA
| | - Thomas S Denney
- AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA.,Department of Psychology, Auburn University, Auburn, Alabama.,Alabama Advanced Imaging Consortium, USA
| | - Gopikrishna Deshpande
- AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA.,Department of Psychology, Auburn University, Auburn, Alabama.,Alabama Advanced Imaging Consortium, USA
| |
Collapse
|
32
|
Abstract
Assessment and outcome monitoring are critical for the effective detection and treatment of mental illness. Traditional methods of capturing social, functional, and behavioral data are limited to the information that patients report back to their health care provider at selected points in time. As a result, these data are not accurate accounts of day-to-day functioning, as they are often influenced by biases in self-report. Mobile technology (mobile applications on smartphones, activity bracelets) has the potential to overcome such problems with traditional assessment and provide information about patient symptoms, behavior, and functioning in real time. Although the use of sensors and apps are widespread, several questions remain in the field regarding the reliability of off-the-shelf apps and sensors, use of these tools by consumers, and provider use of these data in clinical decision-making.
Collapse
Affiliation(s)
- Patricia A Areán
- Professor in Psychiatry and Behavioral Sciences, University of Washington, Seattle, Washington, USA
| | - Kien Hoa Ly
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Gerhard Andersson
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden ; Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
| |
Collapse
|
33
|
Adamaszek M, D'Agata F, Ferrucci R, Habas C, Keulen S, Kirkby KC, Leggio M, Mariën P, Molinari M, Moulton E, Orsi L, Van Overwalle F, Papadelis C, Priori A, Sacchetti B, Schutter DJ, Styliadis C, Verhoeven J. Consensus Paper: Cerebellum and Emotion. THE CEREBELLUM 2017; 16:552-576. [PMID: 27485952 DOI: 10.1007/s12311-016-0815-8] [Citation(s) in RCA: 334] [Impact Index Per Article: 47.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Over the past three decades, insights into the role of the cerebellum in emotional processing have substantially increased. Indeed, methodological refinements in cerebellar lesion studies and major technological advancements in the field of neuroscience are in particular responsible for an exponential growth of knowledge on the topic. It is timely to review the available data and to critically evaluate the current status of the role of the cerebellum in emotion and related domains. The main aim of this article is to present an overview of current facts and ongoing debates relating to clinical, neuroimaging, and neurophysiological findings on the role of the cerebellum in key aspects of emotion. Experts in the field of cerebellar research discuss the range of cerebellar contributions to emotion in nine topics. Topics include the role of the cerebellum in perception and recognition, forwarding and encoding of emotional information, and the experience and regulation of emotional states in relation to motor, cognitive, and social behaviors. In addition, perspectives including cerebellar involvement in emotional learning, pain, emotional aspects of speech, and neuropsychiatric aspects of the cerebellum in mood disorders are briefly discussed. Results of this consensus paper illustrate how theory and empirical research have converged to produce a composite picture of brain topography, physiology, and function that establishes the role of the cerebellum in many aspects of emotional processing.
Collapse
Affiliation(s)
- M Adamaszek
- Department of Clinical and Cognitive Neurorehabilitation, Klinik Bavaria Kreischa, An der Wolfsschlucht, 01731, Kreischa, Germany.
| | - F D'Agata
- Department of Neuroscience, University of Turin, Turin, Italy
| | - R Ferrucci
- Fondazione IRCCS Ca' Granda, Milan, Italy
- Università degli Studi di Milano, Milan, Italy
| | - C Habas
- Service de NeuroImagerie (NeuroImaging department) Centre Hospitalier national D'Ophtalmologie des 15/20, Paris, France
| | - S Keulen
- Department of Clinical and Experimental Neurolinguistics, CLIEN, Vrije Universiteit Brussel, Brussels, Belgium
- Center for Language and Cognition Groningen, Rijksuniversiteit Groningen, Groningen, The Netherlands
| | - K C Kirkby
- Psychiatry, School of Medicine, University of Tasmania, Hobart, Australia
| | - M Leggio
- I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
- Department of Psychology, Sapienza University of Rome, Rome, Italy
| | - P Mariën
- Department of Clinical and Experimental Neurolinguistics, CLIEN, Vrije Universiteit Brussel, Brussels, Belgium
- Department of Neurology and Memory Clinic, ZNA Middelheim Hospital, Antwerp, Belgium
| | - M Molinari
- I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
| | - E Moulton
- P.A.I.N. Group, Center for Pain and the Brain, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - L Orsi
- Neurologic Division 1, Department of Neuroscience and Mental Health, Città della Salute e della Scienza di Torino, Turin, Italy
| | - F Van Overwalle
- Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel, Brussels, Belgium
| | - C Papadelis
- Fetal-Neonatal Neuroimaging and Developmental Center, Boston Children's Hospital, Boston, MA, USA
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - A Priori
- Fondazione IRCCS Ca' Granda, Milan, Italy
- Università degli Studi di Milano, Milan, Italy
- III Clinica Neurologica, Polo Ospedaliero San Paolo, San Paolo, Italy
| | - B Sacchetti
- Department of Neuroscience, Section of Physiology, University of Turin, Torino, Italy
| | - D J Schutter
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - C Styliadis
- Medical School, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - J Verhoeven
- Department of Language and Communication Science, City University, London, UK
- Computational Linguistics and Psycholinguistics Research Center (CLIPS), Universiteit Antwerpen, Antwerp, Belgium
| |
Collapse
|
34
|
Jiang X, Sanford R, Pell MD. Neural systems for evaluating speaker (Un)believability. Hum Brain Mapp 2017; 38:3732-3749. [PMID: 28462535 DOI: 10.1002/hbm.23630] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Revised: 04/13/2017] [Accepted: 04/17/2017] [Indexed: 12/11/2022] Open
Abstract
Our voice provides salient cues about how confident we sound, which promotes inferences about how believable we are. However, the neural mechanisms involved in these social inferences are largely unknown. Employing functional magnetic resonance imaging, we examined the brain networks and individual differences underlying the evaluation of speaker believability from vocal expressions. Participants (n = 26) listened to statements produced in a confident, unconfident, or "prosodically unmarked" (neutral) voice, and judged how believable the speaker was on a 4-point scale. We found that frontal-temporal networks were activated for different levels of confidence, with the left superior and inferior frontal gyrus more activated for confident statements, the right superior temporal gyrus for unconfident expressions, and bilateral cerebellum for statements in a neutral voice. Based on listeners' believability judgments, we observed increased activation in the right superior parietal lobule (SPL) associated with higher believability, while increased left posterior central gyrus (PoCG) activation was associated with lower believability. A psychophysiological interaction analysis found that the anterior cingulate cortex and bilateral caudate were connected to the right SPL when higher believability judgments were made, while the supplementary motor area was connected with the left PoCG when lower believability judgments were made. Personal characteristics, such as interpersonal reactivity and the individual tendency to trust others, modulated the brain activations and the functional connectivity when making believability judgments. In sum, our data pinpoint neural mechanisms that are involved when inferring one's believability from a speaker's voice and establish ways that these mechanisms are modulated by individual characteristics of a listener. Hum Brain Mapp 38:3732-3749, 2017. © 2017 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- Xiaoming Jiang
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada
| | - Ryan Sanford
- McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
| | - Marc D Pell
- School of Communication Sciences and Disorders, McGill University, Montréal, Canada.,McConnell Brain Imaging Center, Montréal Neurological Institute, McGill University, Montréal, Canada
| |
Collapse
|
35
|
Gruber T, Grandjean D. A comparative neurological approach to emotional expressions in primate vocalizations. Neurosci Biobehav Rev 2016; 73:182-190. [PMID: 27993605 DOI: 10.1016/j.neubiorev.2016.12.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 12/01/2016] [Accepted: 12/03/2016] [Indexed: 12/20/2022]
Abstract
Different approaches from different research domains have crystallized debate over primate emotional processing and vocalizations in recent decades. On one side, researchers disagree about whether emotional states or processes in animals truly compare to those in humans. On the other, a long-held assumption is that primate vocalizations are innate communicative signals over which nonhuman primates have limited control, and that these signals mirror the emotional state of the individuals producing them, despite growing evidence of intentional production for some vocalizations. Our goal is to connect both sides of the discussion in deciphering how the emotional content of primate calls compares with emotional vocal signals in humans. We focus particularly on the neural bases of primate emotions and vocalizations to identify cerebral structures underlying emotion, vocal production, and comprehension in primates, and discuss whether particular structures or neuronal networks evolved solely for specific functions in the human brain. Finally, we propose a model to classify emotional vocalizations in primates according to four dimensions (learning, control, emotional, meaning) to allow comparing calls across species.
Collapse
Affiliation(s)
- Thibaud Gruber
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
| | - Didier Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland
| |
Collapse
|
36
|
Alba-Ferrara L, Müller-Oehring EM, Sullivan EV, Pfefferbaum A, Schulte T. Brain responses to emotional salience and reward in alcohol use disorder. Brain Imaging Behav 2016; 10:136-46. [PMID: 25875013 DOI: 10.1007/s11682-015-9374-8] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Heightened neural responsiveness of alcoholics to alcohol cues and social emotion may impede sobriety. To test mesocorticolimbic network responsivity, 10 (8 men) alcohol use disorder (AUD) patients sober for 3 weeks to 10 months and 11 (8 men) controls underwent fMRI whilst viewing pictures of alcohol and non-alcohol beverages and of emotional faces (happy, sad, angry). AUD and controls showed similarities in mesocorticolimbic activity: both groups activated fusiform for emotional faces and hippocampal and pallidum regions during alcohol picture processing. In AUD, less fusiform activity to emotional faces and more pallidum activity to alcohol pictures were associated with longer sobriety. Using graph theory-based network efficiency measures to specify the role of the mesocorticolimbic network nodes for emotion and reward in sober AUD revealed that the left hippocampus was less efficiently connected with the other task-activated network regions in AUD than controls when viewing emotional faces, while the pallidum was more efficiently connected when viewing alcohol beverages. Together our findings identified lower occipito-temporal sensitivity to emotional faces and enhanced striatal sensitivity to alcohol stimuli in AUD than controls. Considering the role of the striatum in encoding reward, its activation enhancement with longer sobriety may reflect adaptive neural changes in the first year of drinking cessation and mesocorticolimbic system vulnerability for encoding emotional salience and reward potentially affecting executive control ability and relapse propensity during abstinence.
Collapse
Affiliation(s)
- L Alba-Ferrara
- Instituto San Lazaro De Neurociencias, National Scientific and Technical Research Council (CONICET), Av. Rivadavia 1917 - C.A.B.A., Buenos Aires, Argentina.,Bioscience Division, Neuroscience Program, SRI International, 333 Ravenswood Ave, Menlo Park, 94022, CA, USA
| | - E M Müller-Oehring
- Bioscience Division, Neuroscience Program, SRI International, 333 Ravenswood Ave, Menlo Park, 94022, CA, USA.,Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Rd, Stanford, 94305, CA, USA
| | - E V Sullivan
- Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Rd, Stanford, 94305, CA, USA
| | - A Pfefferbaum
- Bioscience Division, Neuroscience Program, SRI International, 333 Ravenswood Ave, Menlo Park, 94022, CA, USA.,Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Rd, Stanford, 94305, CA, USA
| | - T Schulte
- Bioscience Division, Neuroscience Program, SRI International, 333 Ravenswood Ave, Menlo Park, 94022, CA, USA.,Palo Alto University, Pacific Graduate School of Psychology, 1791 Arastradero Rd, Palo Alto, CA, 94304, USA.
| |
Collapse
|
37
|
Perceiving emotional expressions in others: Activation likelihood estimation meta-analyses of explicit evaluation, passive perception and incidental perception of emotions. Neurosci Biobehav Rev 2016; 71:810-828. [DOI: 10.1016/j.neubiorev.2016.10.020] [Citation(s) in RCA: 62] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2016] [Revised: 09/17/2016] [Accepted: 10/24/2016] [Indexed: 01/09/2023]
|
38
|
The sound of emotions-Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782 DOI: 10.1016/j.neubiorev.2016.05.002] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Revised: 05/01/2016] [Accepted: 05/04/2016] [Indexed: 12/15/2022]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
Collapse
|
39
|
Generating an item pool for translational social cognition research: methodology and initial validation. Behav Res Methods 2015; 47:228-34. [PMID: 24719265 DOI: 10.3758/s13428-014-0464-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Existing sets of social and emotional stimuli suitable for social cognition research are limited in many ways, including size, unimodal stimulus delivery, and restriction to major universal emotions. Existing measures of social cognition could be improved by taking advantage of item response theory and adaptive testing technology to develop instruments that obtain more efficient measures of multimodal social cognition. However, for this to be possible, large pools of emotional stimuli must be obtained and validated. We present the development of a large, high-quality multimedia stimulus set produced by professional adult and child actors (ages 5 to 74) containing both visual and vocal emotional expressions. We obtained over 74,000 audiovisual recordings of a wide array of emotional and social behaviors, including the main universal emotions (happiness, sadness, anger, fear, disgust, and surprise), as well as more complex social expressions (pride, affection, sarcasm, jealousy, and shame). The actors generated a high quantity of technically superior, ecologically valid stimuli that were digitized, archived, and rated for accuracy and intensity of expressions. A subset of these facial and vocal expressions of emotion and social behavior was submitted for quantitative ratings to generate parameters for validity and discriminability. These stimuli are suitable for affective neuroscience-based psychometric tests, functional neuroimaging, and social cognitive rehabilitation programs. The purposes of this report are to describe the method of obtaining and validating this database and to make it accessible to the scientific community. We invite all those interested in participating in the use and validation of these stimuli to access them at www.med.upenn.edu/bbl/actors/index.shtml.
Collapse
|
40
|
Bilateral dorsal and ventral fiber pathways for the processing of affective prosody identified by probabilistic fiber tracking. Neuroimage 2015; 109:27-34. [DOI: 10.1016/j.neuroimage.2015.01.016] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2014] [Revised: 12/31/2014] [Accepted: 01/02/2015] [Indexed: 11/20/2022] Open
|
41
|
Fedorenko E, Hsieh PJ, Balewski Z. A possible functional localizer for identifying brain regions sensitive to sentence-level prosody. LANGUAGE, COGNITION AND NEUROSCIENCE 2015; 30:120-148. [PMID: 25642425 PMCID: PMC4306436 DOI: 10.1080/01690965.2013.861917] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Investigations of how we produce and perceive prosodic patterns are not only interesting in their own right but can inform fundamental questions in language research. We here argue that functional magnetic resonance imaging (fMRI) in general - and the functional localization approach in particular (e.g., Kanwisher et al., 1997; Saxe et al., 2006; Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, 2012) - has the potential to help address open research questions in prosody research and at the intersection of prosody and other domains. Critically, this approach can go beyond questions like "where in the brain does mental process x produce activation" and toward questions that probe the nature of the representations and computations that subserve different mental abilities. We describe one way to functionally define regions sensitive to sentence-level prosody in individual subjects. This or similar "localizer" contrasts can be used in future studies to test hypotheses about the precise contributions of prosody-sensitive brain regions to prosodic processing and cognition more broadly.
Collapse
Affiliation(s)
| | - Po-Jang Hsieh
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School
| | | |
Collapse
|
42
|
Alba-Ferrara L, Fernandez F, Salas R, de Erausquin GA. Transcranial Magnetic Stimulation and Deep Brain Stimulation in the treatment of alcohol dependence. ADDICTIVE DISORDERS & THEIR TREATMENT 2014; 13:159-169. [PMID: 25598743 PMCID: PMC4292849 DOI: 10.1097/adt.0b013e31829cf047] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Alcohol dependence is a major social, economic, and public health problem. Alcoholism can lead to damage of the gastrointestinal, nervous, cardiovascular, and respiratory systems and can be lethal, costing the health care system hundreds of billions. Despite the existence of cognitive-behavioral therapy, psychosocial interventions, and spiritually integrated treatments, alcohol dependence has a high relapse rate and poor prognosis, albeit with high interindividual variability. In this review, we discuss the use of two neuromodulation techniques, namely repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS), and their advantages and disadvantages compared with first-line pharmacological treatment for alcohol dependence. We also discuss rTMS and DBS targets for alcohol dependence treatment, considering experimental animal and human evidence, with careful consideration of the methodological issues preventing the identification of feasible targets for neuromodulation treatments, as well as of inter-individual variability factors influencing alcoholism prognosis. Lastly, we anticipate future research aiming to tailor the treatment to each individual patient by combining neurofunctional, neuroanatomical and neurodisruptive techniques to optimize the outcome.
Collapse
Affiliation(s)
- L. Alba-Ferrara
- Roskamp Laboratory of Brain Development, Modulation and Repair, Department of Psychiatry and Behavioral Neuroscience, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
| | - F. Fernandez
- Institute for Research in Psychiatry, Department of Psychiatry and Behavioral Neuroscience, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
| | - R. Salas
- Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
| | - G. A. de Erausquin
- Roskamp Laboratory of Brain Development, Modulation and Repair, Department of Psychiatry and Behavioral Neuroscience, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
| |
Collapse
|
43
|
Frühholz S, Trost W, Grandjean D. The role of the medial temporal limbic system in processing emotions in voice and music. Prog Neurobiol 2014; 123:1-17. [PMID: 25291405 DOI: 10.1016/j.pneurobio.2014.09.003] [Citation(s) in RCA: 83] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 09/16/2014] [Accepted: 09/29/2014] [Indexed: 01/15/2023]
Abstract
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.
Collapse
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
| | - Wiebke Trost
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
44
|
Servaas MN, Riese H, Renken RJ, Marsman JBC, Lambregs J, Ormel J, Aleman A. The effect of criticism on functional brain connectivity and associations with neuroticism. PLoS One 2013; 8:e69606. [PMID: 23922755 PMCID: PMC3724923 DOI: 10.1371/journal.pone.0069606] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2013] [Accepted: 06/10/2013] [Indexed: 11/18/2022] Open
Abstract
Neuroticism is a robust personality trait that constitutes a risk factor for psychopathology, especially anxiety disorders and depression. High neurotic individuals tend to be more self-critical and are overly sensitive to criticism by others. Hence, we used a novel resting-state paradigm to investigate the effect of criticism on functional brain connectivity and associations with neuroticism. Forty-eight participants completed the NEO Personality Inventory Revised (NEO-PI-R) to assess neuroticism. Next, we recorded resting state functional magnetic resonance imaging (rsfMRI) during two sessions. We manipulated the second session before scanning by presenting three standardized critical remarks through headphones, in which the subject was urged to please lie still in the scanner. A seed-based functional connectivity method and subsequent clustering were used to analyse the resting state data. Based on the reviewed literature related to criticism, we selected brain regions associated with self-reflective processing and stress-regulation as regions of interest. The findings showed enhanced functional connectivity between the clustered seed regions and brain areas involved in emotion processing and social cognition during the processing of criticism. Concurrently, functional connectivity was reduced between these clusters and brain structures related to the default mode network and higher-order cognitive control. Furthermore, individuals scoring higher on neuroticism showed altered functional connectivity between the clustered seed regions and brain areas involved in the appraisal, expression and regulation of negative emotions. These results may suggest that the criticized person is attempting to understand the beliefs, perceptions and feelings of the critic in order to facilitate flexible and adaptive social behavior. Furthermore, multiple aspects of emotion processing were found to be affected in individuals scoring higher on neuroticism during the processing of criticism, which may increase their sensitivity to negative social evaluation.
Collapse
Affiliation(s)
- Michelle Nadine Servaas
- Neuroimaging Center, Department of Neuroscience, University Medical Center Groningen/University of Groningen, Groningen, The Netherlands.
| | | | | | | | | | | | | |
Collapse
|
45
|
Paulmann S, Bleichner M, Kotz SA. Valence, arousal, and task effects in emotional prosody processing. Front Psychol 2013; 4:345. [PMID: 23801973 PMCID: PMC3689289 DOI: 10.3389/fpsyg.2013.00345] [Citation(s) in RCA: 67] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2013] [Accepted: 05/28/2013] [Indexed: 11/13/2022] Open
Abstract
Previous research suggests that emotional prosody processing is a highly rapid and complex process. In particular, it has been shown that different basic emotions can be differentiated in an early event-related brain potential (ERP) component, the P200. Often, the P200 is followed by later long-lasting ERPs such as the late positive complex. The current experiment set out to explore to what extent emotionality and arousal can modulate these previously reported ERP components. In addition, we also investigated the influence of task demands (implicit vs. explicit evaluation of stimuli). Participants listened to pseudo-sentences (sentences with no lexical content) spoken in six different emotions or in a neutral tone of voice while they either rated the arousal level of the speaker or their own arousal level. Results confirm that different emotional intonations can first be differentiated in the P200 component, reflecting a first emotional encoding of the stimulus possibly including a valence tagging process. A marginally significant arousal effect was also found in this time-window, with high-arousing stimuli eliciting a stronger P200 than low-arousing stimuli. The P200 component was followed by a long-lasting positive ERP between 400 and 750 ms. In this late time-window, both emotion and arousal effects were found. No effects of task were observed in either time-window. Taken together, the results suggest that emotion-relevant details are robustly decoded during early and late processing stages, while arousal information is only reliably taken into consideration at a later stage of processing.
Collapse
Affiliation(s)
- Silke Paulmann
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, UK
| | | | | |
Collapse
|
46
|
Alba-Ferrara L, de Erausquin GA, Hirnstein M, Weis S, Hausmann M. Emotional prosody modulates attention in schizophrenia patients with hallucinations. Front Hum Neurosci 2013; 7:59. [PMID: 23459397 PMCID: PMC3586698 DOI: 10.3389/fnhum.2013.00059] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2012] [Accepted: 02/14/2013] [Indexed: 11/13/2022] Open
Abstract
Recent findings have demonstrated that emotional prosody (EP) attracts attention involuntarily (Grandjean et al., 2008). The automatic shift of attention toward emotionally salient stimuli can be overcome by attentional control (Hahn et al., 2010). Attentional control is impaired in schizophrenia, especially in schizophrenic patients with hallucinations, because the "voices" capture attention, increasing the processing load and competing for top-down resources. The present study investigates how involuntary attention is driven by implicit EP in schizophrenia patients with auditory verbal hallucinations (AVH) and without (NAVH). Fifteen AVH patients, 12 NAVH patients and 16 healthy controls (HC) completed a dual-task dichotic listening paradigm, in which an emotional vocal outburst was paired with a neutral vocalization spoken in male and female voices. Participants were asked to report the speaker's gender while attending to either the left or right ear. NAVH patients and HC revealed shorter response times for stimuli presented to the attended left ear than the attended right ear. This laterality effect was not present in AVH patients. In addition, NAVH patients and HC showed faster responses when the EP stimulus was presented to the unattended ear, probably because of less interference between the attention-controlled gender voice identification task and involuntary EP processing. AVH patients did not benefit from presenting emotional stimuli to the unattended ear. The findings suggest that, similar to HC, NAVH patients show a right hemispheric bias for EP processing. AVH patients seem to be less lateralized for EP and therefore might be more susceptible to interfering involuntary EP processing, regardless of which ear/hemisphere receives the bottom-up input.
Collapse
Affiliation(s)
- L. Alba-Ferrara
- Department of Psychiatry and Neurosciences, Roskamp Laboratory of Brain Development, Modulation and Repair, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Department of Psychology, Durham University, Durham, UK
| | - G. A. de Erausquin
- Department of Psychiatry and Neurosciences, Roskamp Laboratory of Brain Development, Modulation and Repair, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
| | - M. Hirnstein
- Department of Psychology, Durham University, Durham, UK
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
| | - S. Weis
- Department of Psychology, Durham University, Durham, UK
| | - M. Hausmann
- Department of Psychology, Durham University, Durham, UK
| |
Collapse
|
47
|
Mitchell RLC, Ross ED. Attitudinal prosody: what we know and directions for future study. Neurosci Biobehav Rev 2013; 37:471-9. [PMID: 23384530 DOI: 10.1016/j.neubiorev.2013.01.027] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2012] [Revised: 01/16/2013] [Accepted: 01/28/2013] [Indexed: 10/27/2022]
Abstract
Prosodic aspects of speech such as pitch, duration and amplitude constitute nonverbal cues that supplement or modify the meaning of the spoken word, to provide valuable clues as to a speaker's state of mind. Prosody can thus indicate what emotion a person is feeling (emotional prosody), or their attitude towards an event, person or object (attitudinal prosody). Whilst the study of emotional prosody has gathered pace, attitudinal prosody now deserves equal attention. In social cognition, understanding attitudinal prosody is important in its own right, since it can convey powerful constructs such as confidence, persuasion, sarcasm and superiority. This review examines what prosody is, how it conveys attitudes, and which attitudes prosody can convey. The review finishes by considering the neuroanatomy associated with attitudinal prosody, and puts forward the hypothesis that this cognition is mediated by the right cerebral hemisphere, particularly posterior superior lateral temporal cortex, with an additional role for the basal ganglia, and limbic regions such as the medial prefrontal cortex and amygdala. It is suggested that further exploration of its functional neuroanatomy is greatly needed, since it could provide valuable clues about the value of current prosody nomenclature and its separability from other types of prosody at the behavioural level.
Collapse
|
48
|
Frühholz S, Grandjean D. Multiple subregions in superior temporal cortex are differentially sensitive to vocal expressions: A quantitative meta-analysis. Neurosci Biobehav Rev 2013; 37:24-35. [DOI: 10.1016/j.neubiorev.2012.11.002] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2012] [Revised: 10/08/2012] [Accepted: 11/04/2012] [Indexed: 11/16/2022]
|
49
|
Ghosh BCP, Calder AJ, Peers PV, Lawrence AD, Acosta-Cabronero J, Pereira JM, Hodges JR, Rowe JB. Social cognitive deficits and their neural correlates in progressive supranuclear palsy. Brain 2012; 135:2089-102. [PMID: 22637582 PMCID: PMC3381722 DOI: 10.1093/brain/aws128] [Citation(s) in RCA: 76] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Although progressive supranuclear palsy is defined by its akinetic rigidity, vertical supranuclear gaze palsy and falls, cognitive impairments are an important determinant of patients’ and carers’ quality of life. Here, we investigate whether there is a broad deficit of modality-independent social cognition in progressive supranuclear palsy and explore the neural correlates for these. We recruited 23 patients with progressive supranuclear palsy (using clinical diagnostic criteria, nine with subsequent pathological confirmation) and 22 age- and education-matched controls. Participants performed an auditory (voice) emotion recognition test, and a visual and auditory theory of mind test. Twenty-two patients and 20 controls underwent structural magnetic resonance imaging to analyse neural correlates of social cognition deficits using voxel-based morphometry. Patients were impaired on the voice emotion recognition and theory of mind tests but not auditory and visual control conditions. Grey matter atrophy in patients correlated with both voice emotion recognition and theory of mind deficits in the right inferior frontal gyrus, a region associated with prosodic auditory emotion recognition. Theory of mind deficits also correlated with atrophy of the anterior rostral medial frontal cortex, a region associated with theory of mind in health. We conclude that patients with progressive supranuclear palsy have a multimodal deficit in social cognition. This deficit is due, in part, to progressive atrophy in a network of frontal cortical regions linked to the integration of socially relevant stimuli and interpretation of their social meaning. This impairment of social cognition is important to consider for those managing and caring for patients with progressive supranuclear palsy.
Collapse
Affiliation(s)
- Boyd C P Ghosh
- Wessex Neurosciences Centre, Mailpoint 101, Southampton University Hospitals NHS Trust, Tremona Road, Southampton SO16 6YD, UK.
| | | | | | | | | | | | | | | |
Collapse
|
50
|
Alba-Ferrara L, Fernyhough C, Weis S, Mitchell RLC, Hausmann M. Contributions of emotional prosody comprehension deficits to the formation of auditory verbal hallucinations in schizophrenia. Clin Psychol Rev 2012; 32:244-50. [PMID: 22459787 DOI: 10.1016/j.cpr.2012.02.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2011] [Revised: 02/06/2012] [Accepted: 02/08/2012] [Indexed: 10/28/2022]
Abstract
Deficits in emotional processing have been widely described in schizophrenia. Associations of positive symptoms with poor emotional prosody comprehension (EPC) have been reported at the phenomenological, behavioral, and neural levels. This review focuses on the relation between emotional processing deficits and auditory verbal hallucinations (AVH). We explore the possibility that the relation between AVH and EPC in schizophrenia might be mediated by the disruption of a common mechanism intrinsic to auditory processing, and that, moreover, prosodic feature processing deficits play a pivotal role in the formation of AVH. The review concludes with proposing a mechanism by which AVH are constituted and showing how different aspects of our neuropsychological model can explain the constellation of subjective experiences which occur in relation to AVH.
Collapse
Affiliation(s)
- Lucy Alba-Ferrara
- Roskamp Laboratory of Brain Development, Modulation and Repair, Department of Psychiatry and Neurosciences, Morsani College of Medicine, University of South Florida, FL, USA.
| | | | | | | | | |
Collapse
|