1. Morningstar M, Billetdeaux KA, Mattson WI, Gilbert AC, Nelson EE, Hoskinson KR. Neural response to vocal emotional intensity in youth. Cogn Affect Behav Neurosci 2024. [PMID: 39300012 DOI: 10.3758/s13415-024-01224-6]
Abstract
Previous research has identified regions of the brain that are sensitive to emotional intensity in faces, with some evidence for developmental differences in this pattern of response. However, comparable understanding of how the brain tracks linear variations in emotional prosody is limited, especially in youth samples. The current study used novel stimuli (morphing emotional prosody from neutral to anger/happiness in linear increments) to investigate whether neural response to vocal emotion was parametrically modulated by emotional intensity and whether there were age-related changes in this effect. Participants aged 8-21 years (n = 56, 52% female) completed a vocal emotion recognition task, in which they identified the intended emotion in morphed recordings of vocal prosody, while undergoing functional magnetic resonance imaging. Parametric analyses of whole-brain response to morphed stimuli found that activation in the bilateral superior temporal gyrus (STG) scaled to emotional intensity in angry (but not happy) voices. Multivariate region-of-interest analyses revealed the same pattern in the right amygdala. Sensitivity to emotional intensity did not vary by participants' age. These findings provide evidence for the linear parameterization of emotional intensity in angry vocal prosody within the bilateral STG and right amygdala. Although findings should be replicated, the current results also suggest that this pattern of neural sensitivity may not be subject to strong developmental influences.
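For readers unfamiliar with parametric analyses of this kind, the sketch below illustrates the general idea of an intensity-weighted regressor entering a GLM. It is a generic illustration with assumed timings, morph levels, and HRF shape, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' pipeline): building a parametric regressor
# in which each vocal-morph event is weighted by its emotional-intensity level,
# as is typically done for parametric-modulation analyses in a GLM.
# Timing, TR, morph levels, and the HRF shape below are assumptions for illustration.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200                                 # assumed repetition time and run length
frame_times = np.arange(n_scans) * TR

onsets = np.array([10, 40, 70, 100, 130, 160])          # assumed event onsets (s)
intensity = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 0.6])    # assumed morph intensities (0-1)

# High-resolution stick functions: one unmodulated, one weighted by (mean-centred) intensity
dt = 0.1
hires_t = np.arange(0, n_scans * TR, dt)
main = np.zeros_like(hires_t)
parametric = np.zeros_like(hires_t)
idx = (onsets / dt).astype(int)
main[idx] = 1.0
parametric[idx] = intensity - intensity.mean()

# Simple double-gamma haemodynamic response function (canonical-like, illustrative)
t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, 6) - 1.0 / 6 * gamma.pdf(t, 16)
hrf /= hrf.max()

# Convolve and resample to scan times; these two columns would enter the design matrix,
# and the parametric column tests whether the BOLD response scales with intensity.
reg_main = np.interp(frame_times, hires_t, np.convolve(main, hrf)[: hires_t.size])
reg_param = np.interp(frame_times, hires_t, np.convolve(parametric, hrf)[: hires_t.size])
```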
Affiliation(s)
- M Morningstar: Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3L3, Canada; Centre for Neuroscience Studies, Queen's University, Kingston, Canada.
- K A Billetdeaux: Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA.
- W I Mattson: Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA.
- A C Gilbert: School of Communication Sciences and Disorders, McGill University, Montreal, Canada; Centre for Research on Brain, Language, and Music, Montreal, Canada.
- E E Nelson: Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA.
- K R Hoskinson: Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA.
2. Miura KW, Kudo T, Otake-Matsuura M. Web-Based Group Conversational Intervention on Cognitive Function and Comprehensive Functional Status Among Japanese Older Adults: Protocol for a 6-Month Randomized Controlled Trial. JMIR Res Protoc 2024; 13:e56608. [PMID: 38990615 PMCID: PMC11273076 DOI: 10.2196/56608]
Abstract
BACKGROUND Social communication is a key factor in maintaining cognitive function and contributes to well-being in later life. OBJECTIVE This study will examine the effects of "Photo-Integrated Conversation Moderated by Application version 2" (PICMOA-2), a web-based conversational intervention, on cognitive performance, frailty, and social and psychological indicators among community-dwelling older adults. METHODS This study is an open-label, 2-parallel-group randomized controlled trial with 1:1 allocation. Community dwellers aged 65 years and older were enrolled in the trial and divided into intervention and control groups. The intervention group receives the PICMOA-2 program, a web-based group conversation, once every 2 weeks for 6 months. The primary outcome is verbal fluency, including phonemic and semantic fluency. The secondary outcomes are other neuropsychological batteries, including the Mini-Mental State Examination, Logical Memory (immediate and delayed), and verbal paired associates, as well as comprehensive functional status evaluated by questionnaires covering frailty, social status, and well-being. The effect of the intervention will be examined using a mixed linear model. As a secondary aim, we will test whether the intervention effects vary with baseline covariates to identify effective target attributes. RESULTS Recruitment was completed in July 2023. A total of 66 participants were randomly allocated to the intervention or control group. As of January 1, 2024, the intervention is ongoing. Participants are expected to complete the intervention at the end of February 2024, and the postintervention evaluation will be conducted in March 2024. CONCLUSIONS This protocol outlines the randomized controlled trial design for evaluating the effect of a 6-month intervention with PICMOA-2. This study will provide evidence on the effectiveness of social interventions on cognitive function and identify effective target attributes for remote social intervention. TRIAL REGISTRATION UMIN Clinical Trials UMIN000050877; https://tinyurl.com/5eahsy66. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/56608.
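As an illustration of the planned analysis approach, the following sketch fits a group-by-time mixed linear model with statsmodels. The data frame, column names, and toy scores are hypothetical and do not come from the trial.

```python
# Minimal sketch (assumptions, not the trial's analysis code): a linear mixed model
# testing a group-by-time effect on verbal fluency, with random intercepts per participant.
# The data frame and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":       ["intervention", "intervention", "control", "control"] * 2,
    "time":        ["baseline", "post"] * 4,
    "fluency":     [11, 15, 12, 12, 10, 14, 13, 13],   # toy scores
})

# Random intercept for each participant; the group:time interaction term
# captures whether change from baseline differs between arms.
model = smf.mixedlm("fluency ~ group * time", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```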
Affiliation(s)
- Kumi Watanabe Miura: Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan.
- Takashi Kudo: Department of Psychiatry, Graduate School of Medicine, Osaka University, Osaka, Japan.
3. Degano G, Donhauser PW, Gwilliams L, Merlo P, Golestani N. Speech prosody enhances the neural processing of syntax. Commun Biol 2024; 7:748. [PMID: 38902370 PMCID: PMC11190187 DOI: 10.1038/s42003-024-06444-7]
Abstract
Human language relies on the correct processing of syntactic information, which is essential for successful communication between speakers. As an abstract level of language, syntax has often been studied separately from the physical form of the speech signal, thus often masking the interactions that can promote better syntactic processing in the human brain. However, behavioral and neural evidence from adults supports the idea that prosody and syntax interact, and studies in infants support the notion that prosody assists language learning. Here we analyze an MEG dataset to investigate how acoustic cues, specifically prosody, interact with syntactic representations in the brains of native English speakers. More specifically, to examine whether prosody enhances the cortical encoding of syntactic representations, we decode syntactic phrase boundaries directly from brain activity and evaluate whether this decoding is modulated by prosodic boundaries. Our findings demonstrate that the presence of prosodic boundaries improves the neural representation of phrase boundaries, indicating the facilitative role of prosodic cues in processing abstract linguistic features. This work has implications for interactive models of how the brain processes different linguistic features. Future research is needed to establish the neural underpinnings of prosody-syntax interactions in languages with different typological characteristics.
Affiliation(s)
- Giulio Degano: Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland.
- Peter W Donhauser: Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany.
- Laura Gwilliams: Department of Psychology, Stanford University, Stanford, CA, USA.
- Paola Merlo: Department of Linguistics, University of Geneva, Geneva, Switzerland; University Centre for Informatics, University of Geneva, Geneva, Switzerland.
- Narly Golestani: Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Brain and Language Lab, Cognitive Science Hub, University of Vienna, Vienna, Austria; Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria.
4. Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiol Lang (Camb) 2023; 4:575-610. [PMID: 38144236 PMCID: PMC10745132 DOI: 10.1162/nol_a_00123]
Abstract
Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults underwent functional magnetic resonance imaging while passively observing audiovisual conversations. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
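The correlation analysis in the second task is a form of intersubject correlation; the sketch below shows the general computation on synthetic time courses, not the authors' pipeline.

```python
# Schematic intersubject-correlation (ISC) computation of the kind described:
# correlate a region's time course between pairs of participants and compare pairs
# who heard the same character comprehensibly versus different characters.
# All data below are synthetic; this is not the authors' analysis code.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_subj, n_tr = 20, 150
condition = np.array([0] * 10 + [1] * 10)            # which character was comprehensible
shared = rng.standard_normal((2, n_tr))               # condition-specific signal
timecourses = shared[condition] + rng.standard_normal((n_subj, n_tr))

same_pairs, diff_pairs = [], []
for i, j in combinations(range(n_subj), 2):
    r = np.corrcoef(timecourses[i], timecourses[j])[0, 1]
    (same_pairs if condition[i] == condition[j] else diff_pairs).append(r)

print(f"mean ISC, same condition: {np.mean(same_pairs):.2f}")
print(f"mean ISC, different condition: {np.mean(diff_pairs):.2f}")
```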
5. Geraudie A, Pressman PS, Pariente J, Millanski C, Palser ER, Ratnasiri BM, Battistella G, Mandelli ML, Miller ZA, Miller BL, Sturm V, Rankin KP, Gorno-Tempini ML, Montembeault M. Expressive Prosody in Patients With Focal Anterior Temporal Neurodegeneration. Neurology 2023; 101:e825-e835. [PMID: 37400244 PMCID: PMC10449437 DOI: 10.1212/wnl.0000000000207516]
Abstract
BACKGROUND AND OBJECTIVES Progressive focal anterior temporal lobe (ATL) neurodegeneration has been historically called semantic dementia. More recently, semantic variant primary progressive aphasia (svPPA) and semantic behavioral variant frontotemporal dementia (sbvFTD) have been linked with predominant left and right ATL neurodegeneration, respectively. Nonetheless, clinical tools for an accurate diagnosis of sbvFTD are still lacking. Expressive prosody refers to the modulation of pitch, loudness, tempo, and quality of voice used to convey emotional and linguistic information and has been linked to bilateral but right-predominant frontotemporal functioning. Changes in expressive prosody can be detected with semiautomated methods and could represent a useful diagnostic marker of socioemotional functioning in sbvFTD. METHODS Participants underwent a comprehensive neuropsychological and language evaluation and a 3T MRI at the University of California San Francisco. Each participant provided a verbal description of the picnic scene from the Western Aphasia Battery. The fundamental frequency (f0) range, an acoustic measure of pitch variability, was extracted for each participant. We compared the f0 range between groups and investigated associations with an informant-rated measure of empathy, a facial emotion labeling task, and gray matter (GM) volumes using voxel-based morphometry. RESULTS Twenty-eight patients with svPPA, 18 with sbvFTD, and 18 healthy controls (HCs) were included. The f0 range was significantly different across groups: patients with sbvFTD showed a reduced f0 range in comparison with both patients with svPPA (mean difference of -1.4 ± 2.4 semitones; 95% CI -2.4 to -0.4; p < 0.005) and HCs (mean difference of -1.9 ± 3.0 semitones; 95% CI -3.0 to -0.7; p < 0.001). A higher f0 range was correlated with greater informant-rated empathy (r = 0.355; p ≤ 0.05), but not with facial emotion labeling. Finally, a lower f0 range was correlated with lower GM volume in the right superior temporal gyrus, encompassing anterior and posterior portions (p < 0.05 FWE cluster corrected). DISCUSSION Expressive prosody may be a useful clinical marker of sbvFTD. Reduced empathy is a core symptom in sbvFTD; the present results extend this to prosody, a core component of social interaction, at the intersection of speech and emotion. They also inform the long-standing debate on the lateralization of expressive prosody in the brain, highlighting the critical role of the right superior temporal lobe.
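For reference, an f0 range in semitones can be computed from a pitch contour as in the sketch below. The contour here is synthetic, a pitch tracker such as Praat would normally supply it, and the min-max definition of range is an assumption that may differ from the paper's exact measure.

```python
# Illustrative computation of an f0 range in semitones from a pitch contour.
# In practice the contour would come from a pitch tracker (e.g., Praat/parselmouth);
# here a synthetic contour is used, and the min-max definition of "range" is an
# assumption -- studies sometimes use percentiles instead.
import numpy as np

f0_hz = np.array([180, 185, 200, 0, 210, 250, 230, 0, 190, 175], dtype=float)
voiced = f0_hz[f0_hz > 0]                      # drop unvoiced frames (coded as 0 Hz)

# Distance in semitones between two frequencies: 12 * log2(f1 / f2)
f0_range_st = 12 * np.log2(voiced.max() / voiced.min())
print(f"f0 range: {f0_range_st:.1f} semitones")
```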
Affiliation(s)
- Amandine Geraudie, Peter S Pressman, Jérémie Pariente, Carly Millanski, Eleanor R Palser, Buddhika M Ratnasiri, Giovanni Battistella, Maria Luisa Mandelli, Zachary A Miller, Bruce L Miller, Virginia Sturm, Katherine P Rankin, Maria Luisa Gorno-Tempini, and Maxime Montembeault
- From the Memory and Aging Center (A.G., E.R.P., B.M.R., G.B., M.L.M., Z.A.M., B.L.M., V.S., K.P.R., M.L.G.-T., M.M.), Department of Neurology, University of California San Francisco; Neurology Department (A.G., J.P.), Toulouse University Hospital; Institut du Cerveau (ICM) (A.G.), INSERM U1127, CNRS UMR 7225, Sorbonne Université, Paris, France; Department of Neurology (P.S.P.), University of Colorado; Department of Speech, Language, and Hearing Sciences (C.M.), The University of Texas at Austin; Dyslexia Center (E.R.P., M.L.M., Z.A.M., V.S., M.L.G.-T.), Department of Neurology, University of California San Francisco; Department of Otolaryngology-Head and Neck Surgery (G.B.), Massachusetts Eye and Ear and Harvard Medical School, Boston; Douglas Research Centre (M.M.); and Department of Psychiatry (M.M.), McGill University, Montréal, Quebec, Canada.
6. Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. [PMID: 35296892 PMCID: PMC9890475 DOI: 10.1093/cercor/bhac095]
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
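The multivariate pattern analysis referred to here follows the general recipe sketched below: a classifier is trained to decode emotion categories from multivoxel patterns and evaluated with cross-validation. The data, classifier, and cross-validation scheme are illustrative assumptions rather than the study's actual pipeline.

```python
# Schematic of the general MVPA approach: train a classifier to decode emotion
# category from multivoxel activity patterns and estimate accuracy with
# cross-validation. Data are synthetic; ROI extraction, classifier choice, and
# cross-validation scheme in the actual study may differ.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300
labels = rng.integers(0, 3, n_trials)               # e.g., angry / happy / sad
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 0, :20] += 0.5                    # inject a weak class signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```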
Affiliation(s)
- Simon Leipold: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
- Daniel A Abrams: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
- Shelby Karraker: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
- Vinod Menon: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA; Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA.
7. Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022; 12:1706. [PMID: 36552167 PMCID: PMC9776349 DOI: 10.3390/brainsci12121706]
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of the delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, and offer insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
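Inter-trial phase coherence, one of the measures reported here, is conventionally defined as the length of the mean unit phase vector across trials; the sketch below computes it on synthetic phases.

```python
# Inter-trial phase coherence (ITPC) in its standard form: the length of the mean
# unit phase vector across trials, 1 = perfectly consistent phase, 0 = random phase.
# Phases here are synthetic; in an EEG analysis they would come from a time-frequency
# decomposition (e.g., wavelets) at each frequency and time point.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100
phases = rng.vonmises(mu=0.0, kappa=2.0, size=n_trials)  # clustered phases (radians)

itpc = np.abs(np.mean(np.exp(1j * phases)))
print(f"ITPC: {itpc:.2f}")
```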
Affiliation(s)
- Yi Lin: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Xinran Fan: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Yueqi Chen: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Hao Zhang: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China.
- Fei Chen: School of Foreign Languages, Hunan University, Changsha 410012, China.
- Hui Zhang: School of International Education, Shandong University, Jinan 250100, China.
- Hongwei Ding (corresponding author): Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China.
- Yang Zhang (corresponding author): Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA.
8. Guo Z, Chen F. Decoding lexical tones and vowels in imagined tonal monosyllables using fNIRS signals. J Neural Eng 2022; 19. [PMID: 36317255 DOI: 10.1088/1741-2552/ac9e1d]
Abstract
Objective. Speech is a common way of communication. Decoding verbal intent could provide a naturalistic means of communication for people with severe motor disabilities. The active brain-computer interface (BCI) speller is one of the most commonly used speech BCIs. To reduce the spelling time of Chinese words, identifying the vowels and tones embedded in imagined Chinese words is essential. Functional near-infrared spectroscopy (fNIRS) has been widely used in BCIs because it is portable, non-invasive, safe, low cost, and has a relatively high spatial resolution. Approach. In this study, an active BCI speller based on fNIRS is presented, in which tonal monosyllables combining four vowels (i.e., /a/, /i/, /o/, and /u/) with the four lexical tones of Mandarin Chinese (i.e., tones 1, 2, 3, and 4) were covertly rehearsed for 10 s. Main results. fNIRS results showed significant differences in the right superior temporal gyrus between imagined vowels with tones 2/3/4 and those with tone 1 (i.e., more activations and stronger connections to other brain regions for imagined vowels with tones 2/3/4 than for those with tone 1). Speech-related areas for tone imagery (i.e., the right hemisphere) provided the majority of the information for identifying tones, while the left hemisphere had an advantage in vowel identification. When both vowels and tones were decoded during the post-stimulus 15 s period, the average classification accuracies exceeded 40% and 70% in multiclass (i.e., four classes) and binary settings, respectively. To spell words more quickly, the time window size for decoding was reduced from 15 s to 2.5 s without a significant reduction in classification accuracy. Significance. For the first time, this work demonstrated the possibility of discriminating lexical tones and vowels in imagined tonal syllables simultaneously. In addition, the reduced decoding time window indicates that the spelling time of Chinese words could be significantly reduced in fNIRS-based BCIs.
Affiliation(s)
- Zengzhi Guo: School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, People's Republic of China; Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China.
- Fei Chen: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China.
9. Complementary hemispheric lateralization of language and social processing in the human brain. Cell Rep 2022; 41:111617. [DOI: 10.1016/j.celrep.2022.111617]
10. Newport EL, Seydell-Greenwald A, Landau B, Turkeltaub PE, Chambers CE, Martin KC, Rennert R, Giannetti M, Dromerick AW, Ichord RN, Carpenter JL, Berl MM, Gaillard WD. Language and developmental plasticity after perinatal stroke. Proc Natl Acad Sci U S A 2022; 119:e2207293119. [PMID: 36215488 PMCID: PMC9586296 DOI: 10.1073/pnas.2207293119]
Abstract
The mature human brain is lateralized for language, with the left hemisphere (LH) primarily responsible for sentence processing and the right hemisphere (RH) primarily responsible for processing suprasegmental aspects of language such as vocal emotion. However, it has long been hypothesized that in early life there is plasticity for language, allowing young children to acquire language in other cortical regions when LH areas are damaged. If true, what are the constraints on functional reorganization? Which areas of the brain can acquire language, and what happens to the functions these regions ordinarily perform? We address these questions by examining long-term outcomes in adolescents and young adults who, as infants, had a perinatal arterial ischemic stroke to the LH areas ordinarily subserving sentence processing. We compared them with their healthy age-matched siblings. All participants were tested on a battery of behavioral and functional imaging tasks. While stroke participants were impaired in some nonlinguistic cognitive abilities, their processing of sentences and of vocal emotion was normal and equal to that of their healthy siblings. In almost all of them, both of these abilities developed in the healthy RH. Our results provide insights into the remarkable ability of the young brain to reorganize language. Reorganization is highly constrained, with sentence processing almost always in the RH frontotemporal regions homotopic to its location in the healthy brain. This activation is somewhat segregated from RH emotion processing, suggesting that the two functions perform best when each has its own neural territory.
Affiliation(s)
- Elissa L. Newport (to whom correspondence may be addressed): Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010.
- Anna Seydell-Greenwald, Peter E. Turkeltaub, Catherine E. Chambers, Kelly C. Martin, Rebecca Rennert, Margot Giannetti, and Alexander W. Dromerick: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010.
- Barbara Landau: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010; Johns Hopkins University, Baltimore, MD 21218.
- Rebecca N. Ichord: Perelman School of Medicine at the University of Pennsylvania and Children's Hospital of Philadelphia, Philadelphia, PA 19104.
- Madison M. Berl: Children's National Hospital and Center for Neuroscience, Washington, DC 20010.
- William D. Gaillard: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010; Children's National Hospital and Center for Neuroscience, Washington, DC 20010.
11. Soldevila-Matías P, García-Martí G, Fuentes-Durá I, Ruiz JC, González-Navarro L, González-Vivas C, Radua J, Sanjuán J. Brain activity changes with emotional words in different stages of psychosis. Eur Psychiatry 2022; 66:e25. [PMID: 36193735 PMCID: PMC10044295 DOI: 10.1192/j.eurpsy.2022.2321]
Abstract
BACKGROUND To date, a large number of functional magnetic resonance imaging (fMRI) studies have been conducted on psychosis. However, little is known about how brain function changes across different stages of the disease when probed with an emotional auditory paradigm. Such knowledge is important for advancing our understanding of the disorder and thus creating more targeted interventions. This study aimed to investigate whether individuals with first-episode psychosis (FEP) and chronic schizophrenia show abnormal brain responses during emotional auditory processing and to compare these responses between FEP and chronic schizophrenia. METHODS Patients with FEP (n = 31) or chronic schizophrenia (n = 23) and healthy controls (HCs, n = 31) underwent an fMRI scan while presented with both emotional and nonemotional words. RESULTS Using HCs as a reference, patients with FEP showed decreased right temporal activation, while patients with chronic schizophrenia showed increased bilateral temporal activation. When comparing the patient groups, individuals with FEP showed lower frontal lobe activation. CONCLUSION To the best of our knowledge, this is the first study to use an emotional auditory paradigm in psychotic patients at different stages of the disease. Our results suggest that the temporal lobe may play a key role in the pathophysiology of psychosis, although the abnormal activation could also reflect a connectivity problem. Activation is lower in the early stage and evolves toward greater activation as patients become chronic. This study highlights the relevance of using emotional paradigms to better understand brain activation at different stages of psychosis.
Affiliation(s)
- Pau Soldevila-Matías: Department of Basic Psychology, Faculty of Psychology, University of Valencia, Valencia, Spain; Research Institute of Clinic University Hospital of Valencia (INCLIVA), Valencia, Spain; Department of Psychology, Faculty of Health Sciences, European University of Valencia, Spain.
- Gracián García-Martí: CIBERSAM, Biomedical Research Network on Mental Health Area, Madrid, Spain; Biomedical Engineering Unit/Radiology Department, Quirónsalud Hospital, Valencia, Spain.
- Inmaculada Fuentes-Durá: Research Institute of Clinic University Hospital of Valencia (INCLIVA), Valencia, Spain; CIBERSAM, Biomedical Research Network on Mental Health Area, Madrid, Spain; Department of Personality, Evaluation and Psychological Treatment, Faculty of Psychology, University of Valencia, Valencia, Spain.
- Juan Carlos Ruiz: Department of Behavioural Sciences Methodology, Faculty of Psychology, University of Valencia, Valencia, Spain.
- Carlos González-Vivas: Research Institute of Clinic University Hospital of Valencia (INCLIVA), Valencia, Spain.
- Joaquim Radua: Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centre for Psychiatric Research and Education, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom.
- Julio Sanjuán: Research Institute of Clinic University Hospital of Valencia (INCLIVA), Valencia, Spain; Department of Psychology, Faculty of Health Sciences, European University of Valencia, Spain; Department of Psychiatry, University of Valencia School of Medicine, Valencia, Spain.
12. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427 DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal.
- César F Lima: Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal.
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal.
13. Elizalde Acevedo B, Olano MA, Bendersky M, Kochen S, Agüero Vera V, Chambeaud N, Gargiulo M, Sabatte J, Gargiulo Á, Alba-Ferrara L. Brain mapping of emotional prosody in patients with drug-resistant temporal epilepsy: An indicator of plasticity. Cortex 2022; 153:97-109. [PMID: 35635861 DOI: 10.1016/j.cortex.2022.04.014]
Abstract
INTRODUCTION Emotional prosody, a suprasegmental component of language, is predominantly processed by right temporo-frontal areas of the cerebral cortex. In temporal lobe epilepsy (TLE), brain disturbances affecting prosody processing frequently occur. This research assesses compensatory brain mechanisms of prosody processing in refractory TLE using fMRI. METHODS Patients with focal unilateral epilepsy, right (RTLE; N = 19) or left (LTLE; N = 19), and healthy controls (CTRL; N = 20) were evaluated during a prosody-decoding fMRI task. The stimuli consisted of spoken numbers produced with different tones of voice (joy, fear, anger, neutral and silent trials). Participants were instructed to label the emotion with a keypad. "Joy" was removed from the analysis due to a high degree of variability. A lateralization index (LI) was used to assess individual differences in the interhemispheric activations of each participant. RESULTS Behaviorally, the LTLE and RTLE groups did not differ significantly from each other or from CTRL. In the Negative Emotions versus Baseline contrast, the whole-sample analysis showed extensive activations in the bilateral superior temporal gyrus, bilateral precentral and postcentral gyri, right putamen, and left cerebellar vermis. Compared with the LTLE and CTRL groups, the RTLE group activated similar areas, but to a lesser extent. The LI analysis revealed significant differences in hemispheric laterality of the temporal lobe and the parietal lobe between RTLE and both LTLE and CTRL, with the RTLE group lateralized toward the left, unlike the other two groups. DISCUSSION The LI indicated that, whereas the CTRL and LTLE groups recruited putative prosodic regions, the RTLE group lateralized prosody processing toward the left, recruiting contralateral nodes homotopic to the putative prosody areas. Considering that the groups did not differ in prosody task performance, the findings suggest that, in the RTLE group, alternative brain nodes were recruited for the task, demonstrating plasticity.
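The lateralization index referred to here is conventionally computed as LI = (L - R) / (L + R); the snippet below illustrates that formula with made-up activation values, and the study's exact activation measure and thresholding may differ.

```python
# Conventional lateralization index, LI = (L - R) / (L + R), where L and R are
# activation measures (e.g., suprathreshold voxel counts or summed t-values) in
# homologous left- and right-hemisphere regions. The exact measure and thresholding
# used in the study may differ; values here are illustrative.
def lateralization_index(left: float, right: float) -> float:
    """Return LI in [-1, 1]; positive = left-lateralized, negative = right-lateralized."""
    return (left - right) / (left + right)

print(lateralization_index(left=420, right=180))   # ~0.33, left-lateralized
print(lateralization_index(left=150, right=390))   # ~-0.44, right-lateralized
```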
Affiliation(s)
- Bautista Elizalde Acevedo: Instituto de Investigaciones en Medicina Traslacional (IIMT), CONICET-Universidad Austral, Derqui-Pilar, Buenos Aires, Argentina; Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina; Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina.
- María A Olano: Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina.
- Mariana Bendersky: Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina; Laboratorio de Anatomía Viviente, 3ra Cátedra de Anatomía Normal, Facultad de Medicina, Universidad de Buenos Aires, Buenos Aires, Argentina.
- Silvia Kochen: Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina.
- Valentina Agüero Vera: Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina.
- Nahuel Chambeaud: Universidad de Buenos Aires, Facultad de Psicología, Buenos Aires, Argentina.
- Mercedes Gargiulo: Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina.
- Juliana Sabatte: Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina.
- Ángel Gargiulo: Centro Integral de Salud Mental Argentino (CISMA), Buenos Aires, Argentina.
- Lucía Alba-Ferrara: Departamento de Psicología, Facultad de Ciencias Biomédicas, Universidad Austral, Pilar, Buenos Aires, Argentina; Unidad Ejecutora para el Estudio de las Neurociencias y Sistemas Complejos (ENyS), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina.
14. Zhang Y, Zhou W, Huang J, Hong B, Wang X. Neural correlates of perceived emotions in human insula and amygdala for auditory emotion recognition. Neuroimage 2022; 260:119502. [PMID: 35878727 DOI: 10.1016/j.neuroimage.2022.119502]
Abstract
The emotional status of a speaker is an important non-linguistic cue carried by the human voice, and it can be perceived by a listener in vocal communication. Understanding the neural circuits involved in processing emotions carried by the human voice is crucial for understanding the neural basis of social interaction. Previous studies have shown that the human insula and amygdala respond more selectively to emotional sounds than to non-emotional sounds. However, it is not clear whether the neural selectivity to emotional sounds in these brain structures is determined by the emotion presented by a speaker, which is associated with the acoustic properties of the sounds, or by the emotion perceived by a listener. In this study, we recorded intracranial electroencephalography (iEEG) responses to emotional human voices while subjects performed emotion recognition tasks. We found that the iEEG responses of Heschl's gyrus (HG) and the posterior insula were determined by the presented emotion, whereas the iEEG responses of the anterior insula and amygdala were driven by the perceived emotion. These results suggest that the anterior insula and amygdala play a crucial role in the conscious perception of emotions carried by the human voice.
Affiliation(s)
- Yang Zhang: Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Biomedical Engineering, Tsinghua University, Beijing 100084, PR China; Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205, United States.
- Wenjing Zhou: Department of Epilepsy Center, Tsinghua University Yuquan Hospital, Beijing 100040, PR China.
- Juan Huang: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205, United States.
- Bo Hong: Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Biomedical Engineering, Tsinghua University, Beijing 100084, PR China.
- Xiaoqin Wang: Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Biomedical Engineering, Tsinghua University, Beijing 100084, PR China; Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205, United States.
15. Wang C, Cho NS, Van Dyk K, Islam S, Raymond C, Choi J, Salamon N, Pope WB, Lai A, Cloughesy TF, Nghiemphu PL, Ellingson BM. Characterization of Cognitive Function in Survivors of Diffuse Gliomas Using Morphometric Correlation Networks. Tomography 2022; 8:1437-1452. [PMID: 35736864 PMCID: PMC9229761 DOI: 10.3390/tomography8030116]
Abstract
This pilot study investigates structural alterations and their relationships with cognitive function in survivors of diffuse gliomas. Twenty-four survivors of diffuse gliomas (mean age 44.5 ± 11.5 years), from whom high-resolution T1-weighted images, neuropsychological tests, and self-report questionnaires were obtained, were analyzed. Patients were grouped by degree of cognitive impairment, and interregional correlations of cortical thickness were computed to generate morphometric correlation networks (MCNs). The results show that the cortical thickness of the right insula (R2 = 0.3025, p = 0.0054) was negatively associated with time since the last treatment, and the cortical thickness of the left superior temporal gyrus (R2 = 0.2839, p = 0.0107) was positively associated with cognitive performance. Multiple cortical regions in the default mode, salience, and language networks were identified as predominant nodes in the MCNs of survivors of diffuse gliomas. Compared to cognitively impaired patients, cognitively non-impaired patients tended to have higher network stability in a node-removal analysis, especially when the fraction of removed nodes (out of 66 nodes in total) exceeded 55%. These findings suggest that structural networks are altered in survivors of diffuse gliomas and that their cortical structures may also be adapting to support cognitive function during survivorship.
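The two analyses named here, building a morphometric correlation network and testing its stability under node removal, can be illustrated on synthetic data as below; the threshold and stability metric are assumptions, not necessarily those used in the study.

```python
# Sketch of a morphometric correlation network (MCN): correlate cortical thickness
# between every pair of regions across subjects, then probe network stability by
# removing a growing fraction of nodes. Data are synthetic and the stability metric
# (size of the largest connected component) is one common choice, not necessarily
# the one used in the study.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 24, 66
thickness = rng.standard_normal((n_subjects, n_regions)) + 2.5   # mm, toy values

adjacency = np.corrcoef(thickness.T)              # region-by-region correlation matrix
np.fill_diagonal(adjacency, 0)
binary = np.abs(adjacency) > 0.3                  # assumed threshold

def largest_component(adj: np.ndarray) -> int:
    """Size of the largest connected component of a binary adjacency matrix."""
    n = adj.shape[0]
    unvisited, best = set(range(n)), 0
    while unvisited:
        stack, size = [unvisited.pop()], 1
        while stack:
            node = stack.pop()
            neighbors = {j for j in np.flatnonzero(adj[node]) if j in unvisited}
            unvisited -= neighbors
            stack.extend(neighbors)
            size += len(neighbors)
        best = max(best, size)
    return best

for frac in (0.25, 0.55, 0.75):                   # remove an increasing fraction of nodes
    keep = rng.choice(n_regions, size=int(n_regions * (1 - frac)), replace=False)
    sub = binary[np.ix_(keep, keep)]
    print(f"removed {frac:.0%}: largest component = {largest_component(sub)}/{len(keep)}")
```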
Collapse
Affiliation(s)
- Chencai Wang
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (C.W.); (N.S.C.); (S.I.); (C.R.)
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
| | - Nicholas S. Cho
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (C.W.); (N.S.C.); (S.I.); (C.R.)
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
- Medical Scientist Training Program, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Department of Bioengineering, Henry Samueli School of Engineering and Applied Science, University of California Los Angeles, Los Angeles, CA 90095, USA
| | - Kathleen Van Dyk
- Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, Semel Institute, University of California Los Angeles, Los Angeles, CA 90095, USA;
| | - Sabah Islam
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (C.W.); (N.S.C.); (S.I.); (C.R.)
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
| | - Catalina Raymond
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (C.W.); (N.S.C.); (S.I.); (C.R.)
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
| | - Justin Choi
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA; (J.C.); (A.L.); (T.F.C.); (P.L.N.)
| | - Noriko Salamon
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
| | - Whitney B. Pope
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
| | - Albert Lai
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA; (J.C.); (A.L.); (T.F.C.); (P.L.N.)
| | - Timothy F. Cloughesy
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA; (J.C.); (A.L.); (T.F.C.); (P.L.N.)
| | - Phioanh L. Nghiemphu
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA; (J.C.); (A.L.); (T.F.C.); (P.L.N.)
| | - Benjamin M. Ellingson
- UCLA Brain Tumor Imaging Laboratory (BTIL), Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (C.W.); (N.S.C.); (S.I.); (C.R.)
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90024, USA; (N.S.); (W.B.P.)
- Department of Bioengineering, Henry Samueli School of Engineering and Applied Science, University of California Los Angeles, Los Angeles, CA 90095, USA
- Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, Semel Institute, University of California Los Angeles, Los Angeles, CA 90095, USA;
- Department of Neurosurgery, University of California Los Angeles, Los Angeles, CA 90095, USA
- Correspondence: ; Tel.: +1-(310)-481-7572
| |
Collapse
|
16
|
Morningstar M, Grannis C, Mattson WI, Nelson EE. Functional patterns of neural activation during vocal emotion recognition in youth with and without refractory epilepsy. Neuroimage Clin 2022; 34:102966. [PMID: 35182929 PMCID: PMC8859003 DOI: 10.1016/j.nicl.2022.102966] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 01/12/2022] [Accepted: 02/11/2022] [Indexed: 01/10/2023]
Abstract
Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.
Collapse
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Kingston, ON, Canada; Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States.
| | - C Grannis
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
| | - W I Mattson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
| | - E E Nelson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States
| |
Collapse
|
17
|
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. [PMID: 35323933 PMCID: PMC9527472 DOI: 10.1093/scan/nsac021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 02/25/2022] [Accepted: 03/21/2022] [Indexed: 01/09/2023] Open
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Collapse
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen’s University, 62 Arch Street, Kingston, ON K7L 3L3, Canada. E-mail:
| | - Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
| | - Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
| |
Collapse
|
18
|
Zivan M, Gashri C, Habuba N, Horowitz-Kraus T. Reduced mother-child brain-to-brain synchrony during joint storytelling interaction interrupted by a media usage. Child Neuropsychol 2022; 28:918-937. [PMID: 35129078 DOI: 10.1080/09297049.2022.2034774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Parent-child synchrony is related to the quality of parent and child interactions and child development. One very emotionally and cognitively beneficial interaction in early childhood is Dialogic Reading (DR). Screen exposure was previously related to decreased parent-child interaction. Using a hyperscanning Electroencephalogram (EEG) method, the current study examined the neurobiological correlates for mother-child DR vs. mobile phone-interrupted DR in twenty-four white toddlers (24-42 months old, 8 girls) and their mothers. The DR-interrupted condition was related to decreased mother-child neural synchrony between the mother's language-related brain regions (left hemisphere) and the child's comprehension-related regions (right hemisphere) compared to the uninterrupted DR. This is the first neural evidence of the negative effect of parental smartphone use on parent-child interaction quality.
Collapse
Affiliation(s)
- Michal Zivan
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion.,Faculty of Education in Science and Technology, Technion - Israel Institute of Technology, Haifa, Israel
| | - Carmel Gashri
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion
| | - Nir Habuba
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion
| | - Tzipi Horowitz-Kraus
- Educational Neuroimaging Group, Faculty of Education in Science and Technology and the Faculty of Biomedical Engineering, Technion.,Faculty of Education in Science and Technology, Technion - Israel Institute of Technology, Haifa, Israel
| |
Collapse
|
19
|
Hegde S, Gothwal M, Arumugham S, Yadav R, Pal P. Deficits in emotion perception and cognition in patients with parkinson's disease: A systematic review. Ann Indian Acad Neurol 2022; 25:367-375. [PMID: 35936598 PMCID: PMC9350746 DOI: 10.4103/aian.aian_573_21] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 11/03/2021] [Accepted: 01/16/2022] [Indexed: 12/02/2022] Open
Abstract
Non-motor symptoms (NMS) are common among Parkinson's disease (PD) patients and have a significant impact on quality of life. NMS such as deficits in emotion perception have been gaining due focus in recent times. As emotion perception and cognitive functions share certain common neural substrates, it becomes pertinent to evaluate existing emotion perception deficits in view of underlying cognitive deficits. The current systematic review aimed at examining studies on emotion perception in PD over the last decade. We carried out a systematic review of 44 studies from the PubMed database. We reviewed studies examining emotion perception and associated cognitive deficits, especially executive function and visuospatial function, in PD. This review also examines how early and advanced PD differ in emotion perception deficits, and how the presence of common neuropsychiatric conditions such as anxiety, apathy, and depression, as well as neurosurgical procedures such as deep brain stimulation, affects emotion perception. The need for future research employing a comprehensive evaluation of neurocognitive functions and emotion perception is underscored, as it has a significant bearing on planning holistic intervention strategies.
Collapse
|
20
|
Costanza A, Amerio A, Aguglia A, Magnani L, Serafini G, Amore M, Merli R, Ambrosetti J, Bondolfi G, Marzano L, Berardelli I. "Hard to Say, Hard to Understand, Hard to Live": Possible Associations between Neurologic Language Impairments and Suicide Risk. Brain Sci 2021; 11:brainsci11121594. [PMID: 34942896 PMCID: PMC8699610 DOI: 10.3390/brainsci11121594] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 11/25/2021] [Accepted: 11/29/2021] [Indexed: 01/03/2023] Open
Abstract
In clinical practice, patients with language impairments often exhibit suicidal ideation (SI) and suicidal behavior (SB, covering the entire range from suicide attempts, SA, to completed suicides). However, only a few studies exist on this subject. We conducted a mini-review of the possible associations between neurologic language impairment (on the motor, comprehension, and semantic sides) and SI/SB. Based on the literature review, we hypothesized that language impairments exacerbate psychiatric comorbidities, which, in turn, aggravate language impairments. Patients trapped in this vicious cycle can develop SI/SB. The so-called “affective prosody” provides some relevant insights concerning the interaction between the different language levels and the world of emotions. This hypothesis is illustrated in a clinical presentation: the case of a 74-year-old woman who was admitted to a psychiatric emergency department (ED) after a failed SA. Having suffered an ischemic stroke two years earlier, she presented with incomplete Broca's aphasia and dysprosody, as well as generalized anxiety and depressive symptoms. We observed that her language impairments were aggravated by exacerbations of her anxiety and depressive symptoms. In this patient, who had deficits on the motor side, these exacerbations were triggered by her inability to express herself, her emotional status, and her suffering. SI was fluctuant, and one year after the SA she completed suicide. Further studies are needed to ascertain possible reciprocal and interacting associations between language impairments, psychiatric comorbidities, and SI/SB. They could enable clinicians to better understand their patients' specific suffering as brought on by language impairment, and contribute to refining suicide risk detection in this subgroup of affected patients.
Collapse
Affiliation(s)
- Alessandra Costanza
- Department of Psychiatry, Faculty of Medicine, University of Geneva (UNIGE), 1211 Geneva, Switzerland
- Correspondence: ; Tel.: +41-22-3797111
| | - Andrea Amerio
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa, 16132 Genoa, Italy; (A.A.); (A.A.); (L.M.); (G.S.); (M.A.)
- IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
| | - Andrea Aguglia
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa, 16132 Genoa, Italy; (A.A.); (A.A.); (L.M.); (G.S.); (M.A.)
- IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
| | - Luca Magnani
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa, 16132 Genoa, Italy; (A.A.); (A.A.); (L.M.); (G.S.); (M.A.)
- IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
| | - Gianluca Serafini
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa, 16132 Genoa, Italy; (A.A.); (A.A.); (L.M.); (G.S.); (M.A.)
- IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
| | - Mario Amore
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa, 16132 Genoa, Italy; (A.A.); (A.A.); (L.M.); (G.S.); (M.A.)
- IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
| | - Roberto Merli
- Mental Health and Suicide Prevention Center, Department of Mental Health, 13900 Biella, Italy;
| | - Julia Ambrosetti
- Emergency Psychiatric Unit (UAUP), Department of Psychiatry and Department of Emergency, Geneva University Hospitals (HUG), 1211 Geneva, Switzerland;
| | - Guido Bondolfi
- Department of Psychiatry, Faculty of Medicine, University of Geneva (UNIGE), 1211 Geneva, Switzerland
- Department of Psychiatry, Service of Liaison Psychiatry and Crisis Intervention (SPLIC), Geneva University Hospitals (HUG), 1211 Geneva, Switzerland;
| | - Lisa Marzano
- Faculty of Science and Technology, Middlesex University, London NW4 4BT, UK;
| | - Isabella Berardelli
- Department of Neurosciences, Mental Health and Sensory Organs, Suicide Prevention Center, Sant’Andrea Hospital, Sapienza University of Rome, 00185 Rome, Italy;
| |
Collapse
|
21
|
Zainaee S, Mahdipour R, Mahdavi Rashed M, Sobhani-Rad D. Dysgraphia and dysprosody in a patient with arteriovenous malformation: a case report. Neurocase 2021; 27:259-265. [PMID: 34106816 DOI: 10.1080/13554794.2021.1929332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Arteriovenous malformation (AVM) results from the development of abnormal connections between veins and arteries. This study reports an AVM case presenting with dysgraphia and dysprosody. After the trauma, the patient's handwriting was identified as macrographic and illegible, and written letters and verbs were neglected in free writing and dictation. Moreover, the prosody of the patient's utterances was changed. Finally, an intervention was conducted to address the writing impairments, which eventually improved. AVM can adversely affect communication opportunities and working life due to these impairments. Thus, referring such patients to speech and language pathologists seems sensible and necessary.
Collapse
Affiliation(s)
- Shahryar Zainaee
- Department of Speech Therapy, School of Paramedical Sciences, Mashhad University of Medical Sciences
| | - Ramin Mahdipour
- Department of Anatomy and Cell Biology, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
| | | | - Davood Sobhani-Rad
- Department of Speech Therapy, School of Paramedical Sciences, Mashhad University of Medical Sciences
| |
Collapse
|
22
|
O'Connell K, Marsh AA, Edwards DF, Dromerick AW, Seydell-Greenwald A. Emotion recognition impairments and social well-being following right-hemisphere stroke. Neuropsychol Rehabil 2021; 32:1337-1355. [PMID: 33615994 PMCID: PMC8379297 DOI: 10.1080/09602011.2021.1888756] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Accurately recognizing and responding to the emotions of others is essential for proper social communication and helps bind strong relationships that are particularly important for stroke survivors. Emotion recognition typically engages cortical areas that are predominantly right-lateralized including superior temporal and inferior frontal gyri - regions frequently impacted by right-hemisphere stroke. Since prior work already links right-hemisphere stroke to deficits in emotion recognition, this research aims to extend these findings to determine whether impaired emotion recognition after right-hemisphere stroke is associated with worse social well-being outcomes. Eighteen right-hemisphere stroke patients (≥6 months post-stroke) and 21 neurologically healthy controls completed a multimodal emotion recognition test (Geneva Emotion Recognition Test - Short) and reported engagement in social/non-social activities and levels of social support. Right-hemisphere stroke was associated with worse emotion recognition accuracy, though not all patients exhibited impairment. In line with hypotheses, emotion recognition impairments were associated with greater loss of social activities after stroke, an effect that could not be attributed to stroke severity or loss of non-social activities. Impairments were also linked to reduced patient-reported social support. Results implicate emotion recognition difficulties as a potential antecedent of social withdrawal after stroke and warrant future research to test emotion recognition training post-stroke.
Collapse
Affiliation(s)
- Katherine O'Connell
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, USA
| | - Abigail A Marsh
- Department of Psychology, Georgetown University, Washington, DC, USA
| | - Dorothy Farrar Edwards
- Department of Kinesiology and Medicine, University of Wisconsin-Madison, Madison, WI, USA
| | - Alexander W Dromerick
- MedStar National Rehabilitation Hospital, Washington, DC, USA.,Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
| | - Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
| |
Collapse
|
23
|
Asymmetry of Auditory-Motor Speech Processing is Determined by Language Experience. J Neurosci 2021; 41:1059-1067. [PMID: 33298537 PMCID: PMC7880293 DOI: 10.1523/jneurosci.1977-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Revised: 10/24/2020] [Accepted: 11/12/2020] [Indexed: 11/21/2022] Open
Abstract
Speech processing relies on interactions between auditory and motor systems and is asymmetrically organized in the human brain. The left auditory system is specialized for processing of phonemes, whereas the right is specialized for processing of pitch changes in speech affecting prosody. In speakers of tonal languages, however, processing of pitch (i.e., tone) changes that alter word meaning is left-lateralized indicating that linguistic function and language experience shape speech processing asymmetries. Here, we investigated the asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal and non-tonal languages. We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulation (TMS) and measured the impact of these disruptions on auditory discrimination (mismatch negativity; MMN) responses to phoneme and tone changes in sequences of syllables using electroencephalography (EEG). We found that the effect of motor disruptions on processing of tone changes differed between language groups: disruption of the right speech motor cortex suppressed responses to tone changes in non-tonal language speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes in tonal language speakers. In non-tonal language speakers, the effects of disruption of left speech motor cortex on responses to tone changes were inconclusive. For phoneme changes, disruption of left but not right speech motor cortex suppressed responses in both language groups. We conclude that the contributions of the right and left speech motor cortex to auditory speech processing are determined by the functional roles of acoustic cues in the listener's native language.SIGNIFICANCE STATEMENT The principles underlying hemispheric asymmetries of auditory speech processing remain debated. The asymmetry of processing of speech sounds is affected by low-level acoustic cues, but also by their linguistic function. By combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG), we investigated the asymmetry of motor contributions to auditory speech processing in tonal and non-tonal language speakers. We provide causal evidence that the functional role of the acoustic cues in the listener's native language affects the asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negativity (MMN) responses]. Lateralized top-down motor influences can affect asymmetry of speech processing in the auditory system.
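For readers less familiar with the MMN measure used in this study, the minimal sketch below shows one common way to quantify an MMN response: the deviant-minus-standard ERP difference wave, averaged over a post-stimulus latency window. The simulated waveforms, sampling rate, and window are assumptions for illustration, not the study's analysis parameters.

```python
# Sketch only: quantify an MMN as the mean deviant-minus-standard difference
# in an assumed 100-250 ms window, using simulated ERP waveforms.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 500                                   # Hz (assumed)
times = np.arange(-0.1, 0.4, 1 / sfreq)       # epoch from -100 to 400 ms
standard_erp = rng.normal(0.0, 0.3, times.size)
# Add a negative deflection around 170 ms to mimic a deviant response
deviant_erp = standard_erp - 1.5 * np.exp(-((times - 0.17) ** 2) / (2 * 0.03 ** 2))

difference_wave = deviant_erp - standard_erp
window = (times >= 0.10) & (times <= 0.25)    # assumed MMN latency range
mmn_amplitude = difference_wave[window].mean()
print(f"MMN amplitude: {mmn_amplitude:.2f} µV")
```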
Collapse
|
24
|
Mental Simulations of Phonological Representations Are Causally Linked to Silent Reading of Direct Versus Indirect Speech. J Cogn 2021; 4:6. [PMID: 33506172 PMCID: PMC7792465 DOI: 10.5334/joc.141] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Embodied theories propose that language is understood via mental simulations of sensory states related to perception and action. Given that direct speech (e.g., She says, "It's a lovely day!") is perceived to be more vivid than indirect speech (e.g., She says (that) it's a lovely day) in perception, recent research shows in silent reading that more vivid speech representations are mentally simulated for direct speech than for indirect speech. This 'simulated' speech is found to contain suprasegmental prosodic representations (e.g., speech prosody) but its phonological detail and its causal role in silent reading of direct speech remain unclear. Here in three experiments, I explored the phonological aspect and the causal role of speech simulations in silent reading of tongue twisters in direct speech, indirect speech and non-speech sentences. The results demonstrated greater visual tongue-twister effects (phonemic interference) during silent reading (Experiment 1) but not oral reading (Experiment 2) of direct speech as compared to indirect speech and non-speech. The tongue-twister effects in silent reading of direct speech were selectively disrupted by phonological interference (concurrent articulation) as compared to manual interference (finger tapping) (Experiment 3). The results replicated more vivid speech simulations in silent reading of direct speech, and additionally extended them to the phonological dimension. Crucially, they demonstrated a causal role of phonological simulations in silent reading of direct speech, at least in tongue-twister reading. The findings are discussed in relation to multidimensionality and task dependence of mental simulation and its mechanisms.
Collapse
|
25
|
Liao X, Sun J, Jin Z, Wu D, Liu J. Cortical Morphological Changes in Congenital Amusia: Surface-Based Analyses. Front Psychiatry 2021; 12:721720. [PMID: 35095585 PMCID: PMC8794692 DOI: 10.3389/fpsyt.2021.721720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 12/07/2021] [Indexed: 11/25/2022] Open
Abstract
Background: Congenital amusia (CA) is a rare disorder characterized by deficits in pitch perception, and many structural and functional magnetic resonance imaging studies have been conducted to better understand its neural bases. However, a structural magnetic resonance imaging analysis using a surface-based morphology method to identify regions with cortical feature abnormalities at the vertex level has not yet been performed. Methods: Fifteen participants with CA and 13 healthy controls underwent structural magnetic resonance imaging. A surface-based morphology method was used to identify anatomical abnormalities. Then, the mean values of the surface parameters were extracted from clusters showing statistically significant between-group differences and compared. Finally, Pearson's correlation analysis was used to assess the correlation between Montreal Battery of Evaluation of Amusia (MBEA) scores and the surface parameters. Results: The CA group had significantly lower MBEA scores than the healthy controls (p < 0.001). The CA group exhibited a significantly higher fractal dimension in the right caudal middle frontal gyrus and a lower sulcal depth in the right pars triangularis gyrus (p < 0.05; false discovery rate-corrected at the cluster level) compared to healthy controls. There were negative correlations between the mean fractal dimension values in the right caudal middle frontal gyrus and MBEA scores, including the mean MBEA score (r = -0.5398, p = 0.0030), scale score (r = -0.5712, p = 0.0015), contour score (r = -0.4662, p = 0.0124), interval score (r = -0.4564, p = 0.0146), rhythmic score (r = -0.5133, p = 0.0052), meter score (r = -0.3937, p = 0.0382), and memory score (r = -0.3879, p = 0.0414). There was a significant positive correlation between the mean sulcal depth in the right pars triangularis gyrus and the MBEA scores, including the mean score (r = 0.5130, p = 0.0052), scale score (r = 0.5328, p = 0.0035), interval score (r = 0.4059, p = 0.0321), rhythmic score (r = 0.5733, p = 0.0014), meter score (r = 0.5061, p = 0.0060), and memory score (r = 0.4001, p = 0.0349). Conclusion: Individuals with CA exhibit cortical morphological changes in the right hemisphere. These findings may indicate that the neural basis of speech perception and memory impairments in individuals with CA is associated with abnormalities in the right pars triangularis gyrus and middle frontal gyrus, and that these cortical abnormalities may be a neural marker of CA.
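The correlational step reported above can be illustrated with a short sketch relating a per-participant surface parameter (such as the mean fractal dimension in one cluster) to MBEA subscores via Pearson correlation. All names and data below are simulated placeholders, not the study's measurements.

```python
# Sketch only: Pearson correlations between a surface parameter and MBEA subscores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 28                           # 15 CA + 13 controls, pooled (assumed)
fractal_dimension = rng.normal(2.4, 0.05, n_participants)
mbea_subscores = {
    "scale": rng.normal(25, 3, n_participants),
    "interval": rng.normal(24, 3, n_participants),
    "rhythm": rng.normal(26, 3, n_participants),
}

for name, score in mbea_subscores.items():
    r, p = stats.pearsonr(fractal_dimension, score)
    print(f"{name}: r = {r:.3f}, p = {p:.4f}")
```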
Collapse
Affiliation(s)
- Xuan Liao
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Junjie Sun
- Department of Radiology, The Sir Run Run Shaw Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
| | - Zhishuai Jin
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
| | - DaXing Wu
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Jun Liu
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China.,Clinical Research Center for Medical Imaging in Hunan Province, Changsha, China.,Department of Radiology Quality Control Center, The Second Xiangya Hospital of Central South University, Changsha, China
| |
Collapse
|
26
|
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020; 42:676-689. [PMID: 33073911 PMCID: PMC7814753 DOI: 10.1002/hbm.25252] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 08/08/2020] [Accepted: 09/29/2020] [Indexed: 12/17/2022] Open
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top‐down influences. Important top‐down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top‐down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender), using the exactly same stimulus set. Supramodal memory representations were addressed via congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention‐specific and supramodal networks. Emotion classification networks included bilateral anterior insula, pre‐supplementary motor area, and right inferior frontal gyrus. They were pure attention‐driven and independent from stimulus modality or congruency of the target concepts. No neural contribution of supramodal memory representations could be revealed for emotion classification. In contrast, gender classification relied on supramodal memory representations in rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top‐down processes which take place in clearly distinguishable neural networks.
Collapse
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
| | - Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
| | - Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| | - Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany.,JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany.,Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
27
|
Berro DH, Lemée JM, Leiber LM, Emery E, Menei P, Ter Minassian A. Overt speech feasibility using continuous functional magnetic resonance imaging: Isolation of areas involved in phonology and prosody. J Neurosci Res 2020; 98:2554-2565. [PMID: 32896001 DOI: 10.1002/jnr.24723] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 08/05/2020] [Accepted: 08/13/2020] [Indexed: 01/20/2023]
Abstract
To avoid motion artifacts, almost all speech-related functional magnetic resonance imaging (fMRI) language tasks are performed covertly to detect language activations. This method may be difficult to execute, especially by patients with brain tumors, and does not allow the identification of phonological areas. Here, we aimed to evaluate the feasibility of an overt task. Thirty-three volunteers participated in this study. They performed two functional sessions of covert and overt generation of a short sentence semantically linked with a word. Three main contrasts were performed: Covert and Overt for the isolation of language-activated areas, and Overt > Covert for the isolation of the motor cortical activation of speech. fMRI data preprocessing was performed with and without unwarping, and with and without regression of movement parameters as confounding variables. All types of results were compared to each other. For the Overt contrast, Dice coefficients showed strong overlap between each pair of result types: 0.98 for the pair with and without unwarping, and 0.9 for the pair with and without movement parameter regression. The Overt > Covert contrast allowed isolation of motor laryngeal activations with high statistical reliability and revealed right-lateralized temporal activity related to acoustic feedback. Overt speaking during magnetic resonance imaging induced few artifacts and did not significantly affect the results, allowing the identification of areas involved in primary motor control and prosodic regulation of speech. Unwarping and motion artifact regression in the postprocessing step seem not to be necessary. Changes in the lateralization of cortical activity by overt speech should be explored before using these tasks for presurgical mapping.
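The Dice coefficient used above to compare pairs of results is a simple overlap measure, 2|A ∩ B| / (|A| + |B|); a minimal sketch, assuming thresholded binary activation masks, is shown below.

```python
# Sketch only: Dice overlap between two binary activation masks (simulated).
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient = 2|A ∩ B| / (|A| + |B|) for two boolean arrays."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
map_with_unwarp = rng.random((64, 64, 40)) > 0.8
map_without_unwarp = map_with_unwarp.copy()
flip = rng.random(map_without_unwarp.shape) > 0.99   # perturb ~1% of voxels
map_without_unwarp[flip] = ~map_without_unwarp[flip]
print(f"Dice = {dice(map_with_unwarp, map_without_unwarp):.2f}")
```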
Collapse
Affiliation(s)
- David Hassanein Berro
- Department of Neurosurgery, University Hospital of Caen Normandy, Caen, France.,Normandie Univ, UNICAEN, CEA, CNRS, ISTCT/CERVOxy Group, GIP Cyceron, Caen, France.,INSERM, CRCINA, Equipe 17, Bâtiment IRIS, Angers, France
| | - Jean-Michel Lemée
- INSERM, CRCINA, Equipe 17, Bâtiment IRIS, Angers, France.,Department of Neurosurgery, University Hospital of Angers, Angers, France
| | | | - Evelyne Emery
- Department of Neurosurgery, University Hospital of Caen Normandy, Caen, France.,INSERM, UMR-S U1237, PhIND Group, GIP Cyceron, Caen, France
| | - Philippe Menei
- INSERM, CRCINA, Equipe 17, Bâtiment IRIS, Angers, France.,Department of Neurosurgery, University Hospital of Angers, Angers, France
| | - Aram Ter Minassian
- Department of Anesthesiology, University Hospital of Angers, Angers, France.,LARIS, ISISV Team, University of Angers, Angers, France
| |
Collapse
|
28
|
Abstract
Comparative studies on brain asymmetry date back to the 19th century but then largely disappeared due to the assumption that lateralization is uniquely human. Since the reemergence of this field in the 1970s, we learned that left-right differences of brain and behavior exist throughout the animal kingdom and pay off in terms of sensory, cognitive, and motor efficiency. Ontogenetically, lateralization starts in many species with asymmetrical expression patterns of genes within the Nodal cascade that set up the scene for later complex interactions of genetic, environmental, and epigenetic factors. These take effect during different time points of ontogeny and create asymmetries of neural networks in diverse species. As a result, depending on task demands, left- or right-hemispheric loops of feedforward or feedback projections are then activated and can temporarily dominate a neural process. In addition, asymmetries of commissural transfer can shape lateralized processes in each hemisphere. It is still unclear if interhemispheric interactions depend on an inhibition/excitation dichotomy or instead adjust the contralateral temporal neural structure to delay the other hemisphere or synchronize with it during joint action. As outlined in our review, novel animal models and approaches could be established in the last decades, and they already produced a substantial increase of knowledge. Since there is practically no realm of human perception, cognition, emotion, or action that is not affected by our lateralized neural organization, insights from these comparative studies are crucial to understand the functions and pathologies of our asymmetric brain.
Collapse
Affiliation(s)
- Onur Güntürkün
- Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
| | - Felix Ströckens
- Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
| | - Sebastian Ocklenburg
- Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
| |
Collapse
|
29
|
Steber S, König N, Stephan F, Rossi S. Uncovering electrophysiological and vascular signatures of implicit emotional prosody. Sci Rep 2020; 10:5807. [PMID: 32242032 PMCID: PMC7118077 DOI: 10.1038/s41598-020-62761-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 03/18/2020] [Indexed: 11/13/2022] Open
Abstract
The capability of differentiating between various emotional states in speech displays a crucial prerequisite for successful social interactions. The aim of the present study was to investigate neural processes underlying this differentiating ability by applying a simultaneous neuroscientific approach in order to gain both electrophysiological (via electroencephalography, EEG) and vascular (via functional near-infrared-spectroscopy, fNIRS) responses. Pseudowords conforming to angry, happy, and neutral prosody were presented acoustically to participants using a passive listening paradigm in order to capture implicit mechanisms of emotional prosody processing. Event-related brain potentials (ERPs) revealed a larger P200 and an increased late positive potential (LPP) for happy prosody as well as larger negativities for angry and neutral prosody compared to happy prosody around 500 ms. FNIRS results showed increased activations for angry prosody at right fronto-temporal areas. Correlation between negativity in the EEG and activation in fNIRS for angry prosody suggests analogous underlying processes resembling a negativity bias. Overall, results indicate that mechanisms of emotional and phonological encoding (P200), emotional evaluation (increased negativities) as well as emotional arousal and relevance (LPP) are present during implicit processing of emotional prosody.
Collapse
Affiliation(s)
- Sarah Steber
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Psychology, University of Innsbruck, 6020, Innsbruck, Austria
| | - Nicola König
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Psychology, University of Innsbruck, 6020, Innsbruck, Austria
| | - Franziska Stephan
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria
- Department of Educational Psychology, Faculty of Education, University of Leipzig, 04109, Leipzig, Germany
| | - Sonja Rossi
- ICONE - Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020, Innsbruck, Austria.
| |
Collapse
|
30
|
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Collapse
Affiliation(s)
- Didier Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland
| |
Collapse
|
31
|
Lin Y, Ding H, Zhang Y. Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:896-912. [PMID: 32186969 DOI: 10.1044/2020_jslhr-19-00258] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in cross-channel auditory alone task (i.e., semantics-prosody Stroop task) and cross-modal audiovisual task (i.e., semantics-prosody-face Stroop task). Method Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention. Results Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
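The congruency (facilitation) effect examined in the accuracy and reaction-time data can be expressed as a simple difference between incongruent and congruent trials. The sketch below illustrates this computation on simulated trial-level data; the variable names and values are assumptions, not the study's dataset.

```python
# Sketch only: Stroop congruency effects on reaction time and accuracy (simulated trials).
import numpy as np

rng = np.random.default_rng(0)
n_trials = 120
congruent = rng.integers(0, 2, n_trials).astype(bool)
rt = np.where(congruent, rng.normal(650, 80, n_trials), rng.normal(720, 90, n_trials))
correct = rng.random(n_trials) < np.where(congruent, 0.96, 0.90)

rt_effect = rt[~congruent].mean() - rt[congruent].mean()             # slower when incongruent
acc_effect = correct[congruent].mean() - correct[~congruent].mean()  # more accurate when congruent
print(f"RT congruency effect: {rt_effect:.0f} ms; accuracy facilitation: {acc_effect:.2%}")
```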
Collapse
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
| | - Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Science & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
| |
Collapse
|
32
|
Nath A, Robinson M, Magnotti J, Karas P, Curry D, Paldino M. Determination of Differences in Seed-Based Resting State Functional Magnetic Resonance Imaging Language Networks in Pediatric Patients with Left- and Right-Lateralized Language: A Pilot Study. J Epilepsy Res 2019; 9:93-102. [PMID: 32509544 PMCID: PMC7251337 DOI: 10.14581/jer.19011] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Revised: 01/24/2020] [Accepted: 01/24/2020] [Indexed: 11/03/2022] Open
Abstract
Background and Purpose The current tools available for localization of expressive language, including functional magnetic resonance imaging (fMRI) and cortical stimulation mapping (CSM), require that the patient remain stationary and follow language commands with precise timing. Many pediatric epilepsy patients, however, have intact language skills but are unable to participate in these tasks due to cognitive impairments or young age. In adult subjects, there is evidence that language laterality can be determined from resting state (RS) fMRI activity; however, there are few studies on the use of RS to accurately predict language laterality in children. Methods A retrospective review of pediatric patients at Texas Children's Hospital was performed to identify patients who had undergone epilepsy surgical planning over a 3-year period with language localization using traditional methods of Wada testing, CSM, or task-based fMRI with a calculated laterality index, as well as a 7-minute RS scan available without excessive motion or noise. We computed the correlation between the activity of each subject's left and right Broca's region and that of each of 68 cortical regions. Results A group of nine patients with left-lateralized language were found to have greater voxel-wise correlations than a group of six patients with right-lateralized language between a left hemispheric Broca's region seed and the following six cortical regions: left inferior temporal, left lateral orbitofrontal, left pars triangularis, right lateral orbitofrontal, right pars orbitalis and right superior frontal regions. Conclusions In a cohort of children with epilepsy, we found that patients with left- and right-hemispheric language lateralization have different RS networks.
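The seed-based approach described in this abstract can be sketched as correlating the mean resting-state time series of a seed region with that of each cortical parcel. The example below uses simulated data and an assumed parcel count of 68; it is a generic illustration, not the authors' processing pipeline.

```python
# Sketch only: seed-to-parcel resting-state correlations with Fisher z-transform (simulated).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_parcels = 210, 68
seed_ts = rng.standard_normal(n_timepoints)               # mean time series of a Broca's-region seed
parcel_ts = rng.standard_normal((n_parcels, n_timepoints))

r = np.array([np.corrcoef(seed_ts, ts)[0, 1] for ts in parcel_ts])
z = np.arctanh(r)                                          # Fisher z for group comparisons

top = np.argsort(-z)[:6]
print("Parcels most strongly coupled to the seed:", top, z[top].round(2))
```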
Collapse
Affiliation(s)
- Audrey Nath
- Department of Pediatric Neurology, Baylor College of Medicine, Houston, TX, USA
| | - Meghan Robinson
- Core for Advanced MRI, Baylor College of Medicine, Houston, TX, USA
| | - John Magnotti
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
| | - Patrick Karas
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
| | - Daniel Curry
- Division of Pediatric Neurosurgery, Baylor College of Medicine, Houston, TX, USA
| | - Michael Paldino
- Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| |
Collapse
|
33
|
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288 DOI: 10.1016/j.neuroimage.2019.116509] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 12/23/2019] [Accepted: 12/26/2019] [Indexed: 11/24/2022] Open
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
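The lateralization index used to quantify hemispheric dominance is commonly defined as (L - R) / (L + R) over some activation measure in homologous left and right regions. The sketch below computes it from suprathreshold voxel counts with an assumed z threshold; this is a generic illustration, not necessarily the exact index used in the study.

```python
# Sketch only: lateralization index from suprathreshold voxel counts (simulated z-statistics).
import numpy as np

def lateralization_index(left_map, right_map, threshold=3.1):
    """LI = (L - R) / (L + R); positive values indicate left-lateralization."""
    left_count = np.sum(left_map > threshold)
    right_count = np.sum(right_map > threshold)
    total = left_count + right_count
    return (left_count - right_count) / total if total else 0.0

rng = np.random.default_rng(0)
left_roi_z = rng.normal(2.0, 1.0, size=5000)    # simulated z-stats, left frontotemporal ROI
right_roi_z = rng.normal(1.0, 1.0, size=5000)   # homologous right ROI
print(f"LI = {lateralization_index(left_roi_z, right_roi_z):.2f}")
```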
Collapse
|
34
|
Wagenbreth C, Kuehne M, Heinze HJ, Zaehle T. Deep Brain Stimulation of the Subthalamic Nucleus Influences Facial Emotion Recognition in Patients With Parkinson's Disease: A Review. Front Psychol 2019; 10:2638. [PMID: 31849760 PMCID: PMC6901782 DOI: 10.3389/fpsyg.2019.02638] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Accepted: 11/08/2019] [Indexed: 12/17/2022] Open
Abstract
Parkinson’s disease (PD) is a neurodegenerative disorder characterized by motor symptoms following dopaminergic depletion in the substantia nigra. Besides motor impairments, however, several non-motor detriments can have the potential to considerably impact subjectively perceived quality of life in patients. Particularly emotion recognition of facial expressions has been shown to be affected in PD, and especially the perception of negative emotions like fear, anger, or disgust is impaired. While emotion processing generally refers to automatic implicit as well as conscious explicit processing, the focus of most previous studies in PD was on explicit recognition of emotions only, while largely ignoring implicit processing deficits. Deep brain stimulation of the subthalamic nucleus (STN-DBS) is widely accepted as a therapeutic measure in the treatment of PD and has been shown to advantageously influence motor problems. Among various concomitant non-motor effects of STN-DBS, modulation of facial emotion recognition under subthalamic stimulation has been investigated in previous studies with rather heterogeneous results. Although there seems to be a consensus regarding the processing of disgust, which significantly deteriorates under STN stimulation, findings concerning emotions like fear or happiness report heterogeneous data and seem to depend on various experimental settings and measurements. In the present review, we summarized previous investigations focusing on STN-DBS influence on recognition of facial emotional expressions in patients suffering from PD. In a first step, we provide a synopsis of disturbances and problems in facial emotion processing observed in patients with PD. Second, we present findings of STN-DBS influence on facial emotion recognition and especially highlight different impacts of stimulation on implicit and explicit emotional processing.
Collapse
Affiliation(s)
- Caroline Wagenbreth
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Maria Kuehne
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Hans-Jochen Heinze
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Tino Zaehle
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| |
Collapse
|
35
|
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2019; 19:1418-1432. [PMID: 31515750 DOI: 10.3758/s13415-019-00742-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
Collapse
|
36
|
Sihvonen AJ, Särkämö T, Rodríguez-Fornells A, Ripollés P, Münte TF, Soinila S. Neural architectures of music - Insights from acquired amusia. Neurosci Biobehav Rev 2019; 107:104-114. [PMID: 31479663 DOI: 10.1016/j.neubiorev.2019.08.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 08/27/2019] [Accepted: 08/29/2019] [Indexed: 12/27/2022]
Abstract
The ability to perceive and produce music is a quintessential element of human life, present in all known cultures. Modern functional neuroimaging has revealed that music listening activates a large-scale bilateral network of cortical and subcortical regions in the healthy brain. Even the most accurate structural studies, however, do not reveal which brain areas are critical and causally linked to music processing. Such questions may be answered by analysing the effects of focal brain lesions on patients' ability to perceive music. In this sense, acquired amusia after stroke provides a unique opportunity to investigate the neural architectures crucial for normal music processing. Based on the first large-scale longitudinal studies on stroke-induced amusia using modern multi-modal magnetic resonance imaging (MRI) techniques, such as advanced lesion-symptom mapping, grey and white matter morphometry, tractography and functional connectivity, we discuss neural structures critical for music processing, consider music processing in light of the dual-stream model in the right hemisphere, and propose a neural model for acquired amusia.
Collapse
Affiliation(s)
- Aleksi J Sihvonen
- Department of Neurosciences, University of Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland.
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
| | - Antoni Rodríguez-Fornells
- Department of Cognition, University of Barcelona, Cognition & Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), Institució Catalana de recerca i Estudis Avançats (ICREA), Barcelona, Spain
| | - Pablo Ripollés
- Department of Psychology, New York University and Music and Audio Research Laboratory, New York University, USA
| | - Thomas F Münte
- Department of Neurology and Institute of Psychology II, University of Lübeck, Germany
| | - Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
| |
Collapse
|
37
|
Leo V, Sihvonen AJ, Linnavalli T, Tervaniemi M, Laine M, Soinila S, Särkämö T. Cognitive and neural mechanisms underlying the mnemonic effect of songs after stroke. NEUROIMAGE-CLINICAL 2019; 24:101948. [PMID: 31419766 PMCID: PMC6706631 DOI: 10.1016/j.nicl.2019.101948] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 04/05/2019] [Accepted: 07/19/2019] [Indexed: 01/28/2023]
Abstract
Sung melody provides a mnemonic cue that can enhance the acquisition of novel verbal material in healthy subjects. Recent evidence suggests that stroke patients, too, especially those with mild aphasia, can learn and recall novel narrative stories better when the stories are presented in sung rather than spoken format. Extending this finding, the present study explored the cognitive mechanisms underlying this effect by determining whether learning and recall of novel sung vs. spoken stories show a differential pattern of serial position effects (SPEs) and chunking effects in non-aphasic and aphasic stroke patients (N = 31) studied 6 months post-stroke. The structural neural correlates of these effects were also explored using voxel-based morphometry (VBM) and deterministic tractography (DT) analyses of structural MRI data. Non-aphasic patients showed more stable recall with reduced SPEs in the sung than in the spoken task, which was coupled with greater volume and integrity (indicated by fractional anisotropy, FA) of the left arcuate fasciculus. In contrast, compared with non-aphasic patients, aphasic patients showed a larger recency effect (better recall of the last vs. middle part of the story) and enhanced chunking (larger units of correctly recalled consecutive items) in the sung than in the spoken task. In aphasic patients, the enhanced chunking and better recall of the middle verse in the sung vs. spoken task also correlated with a better ability to perceive emotional prosody in speech. Neurally, the sung > spoken recency effect in aphasic patients was coupled with greater grey matter volume in a bilateral network of temporal, frontal, and parietal regions and with greater volume of the right inferior fronto-occipital fasciculus (IFOF). These results provide novel cognitive and neurobiological insight into how a repetitive sung melody can function as a verbal mnemonic aid after stroke.
Highlights:
- Non-aphasic stroke patients show more stable recall of sung than spoken stories.
- Aphasic patients show larger recency and chunking effects for sung vs. spoken stories.
- The left dorsal pathway mediates better recall of sung stories in non-aphasics.
- The right ventral pathway mediates better recall of sung stories in aphasics.
- Large-scale bilateral cortical networks are linked to musical mnemonics in aphasia.
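The serial position and chunking measures analyzed in this study reduce to simple summaries of item-level recall data. As a minimal sketch, assuming recall is coded as a trial-by-item boolean matrix (the paper's exact scoring of verses and chunks is not reproduced here):

```python
import numpy as np

def serial_position_curve(recall):
    """Proportion of trials on which each serial position was recalled correctly.

    recall: boolean array of shape (n_trials, n_items); True = item recalled.
    """
    return np.asarray(recall, dtype=float).mean(axis=0)

def mean_chunk_length(trial):
    """Average length of runs of consecutively recalled items within one trial."""
    runs, current = [], 0
    for hit in trial:
        if hit:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return float(np.mean(runs)) if runs else 0.0

# Toy example: 3 recall trials of a 10-item story
recall = np.array([[1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
                   [1, 0, 0, 1, 1, 0, 0, 0, 1, 1],
                   [1, 1, 1, 0, 0, 0, 1, 1, 1, 1]], dtype=bool)
print(serial_position_curve(recall))           # primacy/recency profile
print([mean_chunk_length(t) for t in recall])  # chunk size per trial
```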
Collapse
Affiliation(s)
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Neurosciences, Faculty of Medicine, University of Helsinki, Finland
| | - Tanja Linnavalli
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; CICERO Learning, University of Helsinki, Finland
| | - Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
| | - Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
| |
Collapse
|
38
|
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. [DOI: 10.1007/s00429-019-01912-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Accepted: 06/12/2019] [Indexed: 01/06/2023]
|
39
|
Saffarian A, Shavaki YA, Shahidi GA, Jafari Z. Effect of Parkinson Disease on Emotion Perception Using the Persian Affective Voices Test. J Voice 2019; 33:580.e1-580.e9. [DOI: 10.1016/j.jvoice.2018.01.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Accepted: 01/16/2018] [Indexed: 12/01/2022]
|
40
|
Emotional prosody Stroop effect in Hindi: An event related potential study. PROGRESS IN BRAIN RESEARCH 2019. [PMID: 31196434 DOI: 10.1016/bs.pbr.2019.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register]
Abstract
Prosody processing is an important aspect of language comprehension. Previous research on emotional word-prosody conflict has shown that participants perform worse when emotional prosody and word meaning are incongruent. Studies using event-related potentials have shown a congruency effect in the N400 component. There has been no study of emotional processing in Hindi in the context of conflict between emotional word meaning and prosody. We used happy and angry words spoken with happy and angry prosody. Participants had to identify whether the word had a happy or angry meaning. The results showed a congruency effect, with worse performance in incongruent trials, indicating an emotional Stroop effect in Hindi. The ERP results showed that prosody information is detected very early, as reflected in the N1 component. In addition, there was a congruency effect in the N400. These results show that prosody is processed very early and that an emotional meaning-prosody congruency effect is obtained in Hindi. Further studies are needed to investigate similarities and differences in the cognitive control associated with language processing.
Collapse
|
41
|
Tuned to voices and faces: Cerebral responses linked to social anxiety. Neuroimage 2019; 197:450-456. [PMID: 31075391 DOI: 10.1016/j.neuroimage.2019.05.018] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Revised: 04/23/2019] [Accepted: 05/06/2019] [Indexed: 11/23/2022] Open
Abstract
Voices and faces are the most common sources of threat in social anxiety (SA), where the fear of negative evaluation and social exclusion is the central element. SA is distributed along a spectrum in the general population, and its clinical manifestation, social anxiety disorder, is one of the most common anxiety disorders. While heightened cerebral responses to angry or contemptuous facial or vocal expressions are well documented, it remains unclear whether the brain of socially anxious individuals is generally more sensitive to voices and faces. Using functional magnetic resonance imaging, we investigated how SA affects the cerebral processing of voices and faces compared with various other stimulus types in a study population with widely varying SA (N = 50, 26 female). While cerebral voice sensitivity correlated positively with SA in the left temporal voice area (TVA) and the left amygdala, an association between face sensitivity and SA was observed in the right fusiform face area (FFA) and the face-processing area of the right posterior superior temporal sulcus (pSTSFA). These results demonstrate that the increase in cerebral responses associated with social anxiety is not limited to facial or vocal expressions of social threat, but that the respective sensory and emotion-processing structures are also generally tuned to voices and faces.
Collapse
|
42
|
Dondé C, Silipo G, Dias EC, Javitt DC. Hierarchical deficits in auditory information processing in schizophrenia. Schizophr Res 2019; 206:135-141. [PMID: 30551982 PMCID: PMC6526044 DOI: 10.1016/j.schres.2018.12.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2018] [Revised: 08/11/2018] [Accepted: 12/04/2018] [Indexed: 01/31/2023]
Abstract
Deficits in auditory processing contribute significantly to impaired functional outcome in schizophrenia (SZ), but the mediating factors remain under investigation. Here we evaluated two hierarchical components of early auditory processing: pitch-change detection (i.e., identifying whether 2 tones have "same" or "different" pitch), which is preferentially associated with early auditory cortex, and serial pitch-pattern detection (i.e., identifying whether 3 tones have "same" or "different" pitch, and, if "different", which one differed from the others), which also depends on auditory association regions. Deficits in pitch-change detection in SZ have been widely reported and correlate with higher-order auditory disturbances such as impaired Auditory Emotion Recognition (AER). Deficits in serial pitch-pattern discrimination have been less studied. Here, we investigated both pitch perception components, along with the integrity of AER, in SZ patients vs. controls using behavioral paradigms. We hypothesized that the deficits could be viewed as hierarchically organized in SZ, with deficits in low-level function propagating sequentially through subsequent levels of processing. Participants included 27 SZ patients and 40 controls. The magnitude of the deficits in SZ participants was large in both the pitch-change (d = 1.15) and serial pitch-pattern tasks (d = 1.21), with no significant differential task effect. The effect size of the AER deficit was extremely large (d = 2.82). In the SZ group, performance in both pitch tasks correlated significantly with impaired AER performance. However, a mediation analysis showed that serial pitch-pattern detection mediated the relationship between simpler pitch-change detection and AER in patients. Findings are consistent with hierarchical models of cognitive dysfunction in SZ, with deficits in early information processing contributing to higher-level impairments. Furthermore, findings are consistent with recent neurophysiological results suggesting similar levels of impairment for the processing of simple vs. more complex tonal stimuli in SZ.
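The mediation result reported above (serial pitch-pattern detection mediating the link between pitch-change detection and AER) follows the classic regression decomposition of a total effect into direct and indirect paths. The sketch below illustrates that decomposition on simulated data; the variable names and effect sizes are hypothetical, and the study's own inferential procedure is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three behavioral scores (simulated, not study data)
pitch_change = rng.normal(size=200)                                     # low-level tone discrimination
serial_pattern = 0.6 * pitch_change + rng.normal(size=200)              # proposed mediator
aer = 0.5 * serial_pattern + 0.1 * pitch_change + rng.normal(size=200)  # auditory emotion recognition

def ols(y, *predictors):
    """Ordinary least squares; returns coefficients with the intercept first."""
    X = np.column_stack([np.ones_like(y), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

c_total = ols(aer, pitch_change)[1]                        # total effect:  X -> Y
a = ols(serial_pattern, pitch_change)[1]                   # path a:        X -> M
b, c_direct = ols(aer, serial_pattern, pitch_change)[1:3]  # path b (M -> Y) and direct effect
print(f"total={c_total:.2f}  direct={c_direct:.2f}  indirect (a*b)={a * b:.2f}")
```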
Collapse
Affiliation(s)
- Clément Dondé
- INSERM, U1028, CNRS, UMR5292, Lyon Neuroscience Research Center, Psychiatric Disorders: from Resistance to Response Team, Lyon F-69000, France; University Lyon 1, Villeurbanne F-69000, France; Centre Hospitalier Le Vinatier, Bron, France; Nathan Kline Institute, Orangeburg, NY, USA; Dept. of Psychiatry, Columbia University Medical Center, New York, NY, USA.
| | - Gail Silipo
- Nathan Kline Institute, Orangeburg, NY, USA.
| | | | - Daniel C. Javitt
- Nathan Kline Institute, Orangeburg, NY, USA; Dept. of Psychiatry, Columbia University Medical Center, New York, NY, USA
| |
Collapse
|
43
|
Altered attentional processing of happy prosody in schizophrenia. Schizophr Res 2019; 206:217-224. [PMID: 30554811 DOI: 10.1016/j.schres.2018.11.024] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 11/17/2018] [Accepted: 11/19/2018] [Indexed: 11/21/2022]
Abstract
BACKGROUND Abnormalities in emotional prosody processing have been consistently reported in schizophrenia. Emotionally salient changes in vocal expressions attract attention in social interactions. However, it remains to be clarified how attention and emotion interact during voice processing in schizophrenia. The current study addressed this question by examining the P3b event-related potential (ERP) component. METHOD The P3b was elicited with a modified oddball task, in which frequent (p = .84) neutral stimuli were intermixed with infrequent (p = .16) task-relevant emotional (happy or angry) targets. Prosodic speech was presented in two conditions - with intelligible (semantic content condition - SCC) or unintelligible semantic content (prosody-only condition - POC). Fifteen chronic schizophrenia patients and 15 healthy controls were instructed to silently count the target vocal sounds. RESULTS Compared to controls, P3b amplitude was specifically reduced for happy prosodic stimuli in schizophrenia, irrespective of semantic status. Groups did not differ in the processing of neutral standards or angry targets. DISCUSSION The selectively reduced P3b for happy prosody in schizophrenia suggests top-down attentional resources were less strongly engaged by positive relative to negative prosody, reflecting alterations in the evaluation of the emotional salience of the voice. These results highlight the role played by higher-order processes in emotional prosody dysfunction in schizophrenia.
Collapse
|
44
|
Zhang D, Chen Y, Hou X, Wu YJ. Near-infrared spectroscopy reveals neural perception of vocal emotions in human neonates. Hum Brain Mapp 2019; 40:2434-2448. [PMID: 30697881 DOI: 10.1002/hbm.24534] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2018] [Revised: 01/19/2019] [Accepted: 01/20/2019] [Indexed: 12/20/2022] Open
Abstract
Processing affective prosody, that is, the emotional tone of a speaker, is fundamental to human communication and adaptive behaviors. Previous studies have mainly focused on adults and infants; thus, the neural mechanisms underlying the processing of affective prosody in newborns remain unclear. Here, we used near-infrared spectroscopy to examine the ability of 0-to-4-day-old neonates to discriminate emotions conveyed by speech prosody in their maternal language and in a foreign language. Happy, fearful, and angry prosodies enhanced neural activation in the right superior temporal gyrus relative to neutral prosody in the maternal but not the foreign language. Happy prosody elicited greater activation than negative prosody in the left superior frontal gyrus and the left angular gyrus, regions that have not been associated with affective prosody processing in infants or adults. These findings suggest that sensitivity to affective prosody is formed through prenatal exposure to vocal stimuli of the maternal language. Furthermore, the sensitive neural correlates appeared more distributed in neonates than in infants, indicating a high level of neural specialization between the neonatal stage and early infancy. Finally, neonates showed preferential neural responses to positive over negative prosody, which is contrary to the "negativity bias" phenomenon established in adult and infant studies.
Collapse
Affiliation(s)
- Dandan Zhang
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China; Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen, China
| | - Yu Chen
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China
| | - Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
| | - Yan Jing Wu
- Faculty of Foreign Languages, Ningbo University, Ningbo, China
| |
Collapse
|
45
|
Liu P, Cole PM, Gilmore RO, Pérez-Edgar KE, Vigeant MC, Moriarty P, Scherf KS. Young children's neural processing of their mother's voice: An fMRI study. Neuropsychologia 2019; 122:11-19. [PMID: 30528586 PMCID: PMC6334756 DOI: 10.1016/j.neuropsychologia.2018.12.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Revised: 11/13/2018] [Accepted: 12/03/2018] [Indexed: 12/20/2022]
Abstract
In addition to semantic content, human speech carries paralinguistic information that conveys important social cues such as a speaker's identity. For young children, their own mother's voice is one of the most salient vocal inputs in their daily environment. Indeed, qualities of mothers' voices have been shown to contribute to children's social development. Our knowledge of how the mother's voice is processed at the neural level, however, is limited. This study investigated whether, in young children, the voice of one's own mother modulates activation in the network of regions activated by the human voice differently than the voice of an unfamiliar mother. We collected fMRI data from 32 typically developing 7- and 8-year-olds as they listened to natural speech produced by their mother and by another child's mother. We used emotionally varied natural speech stimuli to approximate the range of children's day-to-day experience. We individually defined functional ROIs in children's voice-sensitive neural network and then independently investigated the extent to which activation in these regions is modulated by speaker identity. The bilateral posterior auditory cortex, superior temporal gyrus (STG), and inferior frontal gyrus (IFG) exhibited enhanced activation in response to the voice of one's own mother versus that of an unfamiliar mother. The findings indicate that children process the voice of their own mother uniquely, and they pave the way for future studies of how social information processing contributes to the trajectory of child social development.
Collapse
Affiliation(s)
- Pan Liu
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
| | - Pamela M Cole
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA.
| | - Rick O Gilmore
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
| | - Koraly E Pérez-Edgar
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
| | - Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
| | - Peter Moriarty
- Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA, USA
| | - K Suzanne Scherf
- Department of Psychology, Child Study Center, The Pennsylvania State University, University Park, PA, USA
| |
Collapse
|
46
|
Schirmer A. Is the voice an auditory face? An ALE meta-analysis comparing vocal and facial emotion processing. Soc Cogn Affect Neurosci 2018; 13:1-13. [PMID: 29186621 PMCID: PMC5793823 DOI: 10.1093/scan/nsx142] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Accepted: 11/19/2017] [Indexed: 11/13/2022] Open
Abstract
This meta-analysis compares the brain structures and mechanisms involved in facial and vocal emotion recognition. Neuroimaging studies contrasting emotional with neutral (face: N = 76, voice: N = 34) and explicit with implicit emotion processing (face: N = 27, voice: N = 20) were collected to shed light on stimulus and goal-driven mechanisms, respectively. Activation likelihood estimations were conducted on the full data sets for the separate modalities and on reduced, modality-matched data sets for modality comparison. Stimulus-driven emotion processing engaged large networks with significant modality differences in the superior temporal (voice-specific) and the medial temporal (face-specific) cortex. Goal-driven processing was associated with only a small cluster in the dorsomedial prefrontal cortex for voices but not faces. Neither stimulus- nor goal-driven processing showed significant modality overlap. Together, these findings suggest that stimulus-driven processes shape activity in the social brain more powerfully than goal-driven processes in both the visual and the auditory domains. Yet, whereas faces emphasize subcortical emotional and mnemonic mechanisms, voices emphasize cortical mechanisms associated with perception and effortful stimulus evaluation (e.g. via subvocalization). These differences may be due to sensory stimulus properties and highlight the need for a modality-specific perspective when modeling emotion processing in the brain.
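Activation likelihood estimation, the method used in this meta-analysis, models each reported peak as a spatial Gaussian probability and combines per-study "modeled activation" maps into a voxel-wise convergence score. The following is a deliberately simplified one-dimensional toy sketch of that combination step only; real ALE operates in 3-D, uses sample-size-dependent kernel widths, and relies on permutation-based inference, and all coordinates and parameters below are placeholders.

```python
import numpy as np

def ale_map(experiments, grid, sigma=8.0):
    """Toy 1-D activation-likelihood estimate.

    experiments: list of arrays, each holding the peak coordinates reported by one study
    grid:        1-D array of voxel coordinates
    """
    ale = np.ones_like(grid, dtype=float)
    for foci in experiments:
        # probability that this study 'activates' each voxel, given its reported peaks
        p_focus = np.exp(-((grid[:, None] - np.asarray(foci)[None, :]) ** 2) / (2 * sigma ** 2))
        ma = 1.0 - np.prod(1.0 - p_focus, axis=1)  # union over foci within the study
        ale *= (1.0 - ma)
    return 1.0 - ale                               # union over studies

grid = np.arange(0.0, 100.0)
studies = [np.array([40.0, 42.0]), np.array([41.0]), np.array([75.0])]
print(ale_map(studies, grid).round(2)[35:46])      # convergence peaks near coordinate 41
```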
Collapse
Affiliation(s)
- Annett Schirmer
- Department of Psychology and Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, Hong Kong; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| |
Collapse
|
47
|
Nevler N, Ash S, Irwin DJ, Liberman M, Grossman M. Validated automatic speech biomarkers in primary progressive aphasia. Ann Clin Transl Neurol 2018; 6:4-14. [PMID: 30656179 PMCID: PMC6331511 DOI: 10.1002/acn3.653] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2018] [Revised: 08/21/2018] [Accepted: 08/22/2018] [Indexed: 12/13/2022] Open
Abstract
Objective To automatically extract and quantify specific disease biomarkers of prosody from the acoustic properties of speech in patients with primary progressive aphasia. Methods We analyzed speech samples from 59 progressive aphasic patients (non-fluent/agrammatic = 15, semantic = 21, logopenic = 23; ages 50–85 years) and 31 matched healthy controls (ages 54–89 years). Using a novel, automated speech analysis protocol, we extracted acoustic measurements of prosody, including fundamental frequency and speech and silent pause durations, and compared these between groups. We then examined their relationships with clinical tests, gray matter atrophy, and cerebrospinal fluid analytes. Results We found a narrowed range of fundamental frequency in patients with non-fluent/agrammatic variant aphasia (mean 3.86 ± 1.15 semitones) compared with healthy controls (6.06 ± 1.95 semitones; P < 0.001) and patients with semantic variant aphasia (6.12 ± 1.77 semitones; P = 0.001). Mean pause rate was significantly increased in the non-fluent/agrammatic group (mean 61.4 ± 20.8 pauses per minute) and the logopenic group (58.7 ± 16.4 pauses per minute) compared with controls. In an exploratory analysis, narrowed fundamental frequency range was associated with atrophy in the left inferior frontal cortex. The cerebrospinal fluid level of phosphorylated tau was associated with an acoustic classifier combining fundamental frequency range and pause rate (r = 0.58, P = 0.007). Receiver operating characteristic analysis with this combined classifier distinguished non-fluent/agrammatic speakers from healthy controls (AUC = 0.94) and from semantic variant patients (AUC = 0.86). Interpretation Restricted fundamental frequency range and increased pause rate are characteristic markers of speech in non-fluent/agrammatic primary progressive aphasia. These can be extracted with automated speech analysis and are associated with left inferior frontal atrophy and cerebrospinal fluid phosphorylated tau level.
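The two acoustic markers described above (fundamental frequency range in semitones and pause rate per minute) can be approximated with open-source tools. The sketch below is illustrative only and does not reproduce the authors' validated pipeline; the pitch bounds, silence threshold, percentile-based range, and file name are assumptions.

```python
import numpy as np
import librosa

def prosody_features(path, fmin=60.0, fmax=400.0, top_db=30):
    """Rough fundamental-frequency range (semitones) and pause rate (pauses/minute)."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[np.isfinite(f0)]                                   # keep voiced frames only
    # robust range: 10th to 90th percentile, expressed in semitones
    f0_range_st = 12.0 * np.log2(np.percentile(f0, 90) / np.percentile(f0, 10))
    # pauses = gaps between non-silent stretches detected by an energy threshold
    speech_intervals = librosa.effects.split(y, top_db=top_db)
    n_pauses = max(len(speech_intervals) - 1, 0)
    pause_rate = n_pauses / (len(y) / sr / 60.0)
    return f0_range_st, pause_rate

# f0_st, ppm = prosody_features("speech_sample.wav")  # hypothetical recording
```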
Collapse
Affiliation(s)
- Naomi Nevler
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Sharon Ash
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania
| | - David J Irwin
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Mark Liberman
- Linguistic Data Consortium, Department of Linguistics, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Murray Grossman
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
48
|
Jiang X, Sanford R, Pell MD. Neural architecture underlying person perception from in-group and out-group voices. Neuroimage 2018; 181:582-597. [DOI: 10.1016/j.neuroimage.2018.07.042] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2017] [Revised: 07/04/2018] [Accepted: 07/16/2018] [Indexed: 01/02/2023] Open
|
49
|
Ocklenburg S, Packheiser J, Schmitz J, Rook N, Güntürkün O, Peterburs J, Grimshaw GM. Hugs and kisses - The role of motor preferences and emotional lateralization for hemispheric asymmetries in human social touch. Neurosci Biobehav Rev 2018; 95:353-360. [PMID: 30339836 DOI: 10.1016/j.neubiorev.2018.10.007] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2018] [Revised: 08/27/2018] [Accepted: 10/15/2018] [Indexed: 12/30/2022]
Abstract
Social touch is an important aspect of human social interaction - across all cultures, humans engage in kissing, cradling and embracing. These behaviors are necessarily asymmetric, but the factors that determine their lateralization are not well-understood. Because the hands are often involved in social touch, motor preferences may give rise to asymmetric behavior. However, social touch often occurs in emotional contexts, suggesting that biases might be modulated by asymmetries in emotional processing. Social touch may therefore provide unique insights into lateralized brain networks that link emotion and action. Here, we review the literature on lateralization of cradling, kissing and embracing with respect to motor and emotive bias theories. Lateral biases in all three forms of social touch are influenced, but not fully determined by handedness. Thus, motor bias theory partly explains side biases in social touch. However, emotional context also affects side biases, most strongly for embracing. Taken together, literature analysis reveals that side biases in social touch are most likely determined by a combination of motor and emotive biases.
Collapse
Affiliation(s)
- Sebastian Ocklenburg
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Germany.
| | - Julian Packheiser
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Germany
| | - Judith Schmitz
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Germany
| | - Noemi Rook
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Germany
| | - Onur Güntürkün
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Germany
| | - Jutta Peterburs
- Biological Psychology, Heinrich-Heine-University Düsseldorf, Germany
| | - Gina M Grimshaw
- Cognitive and Affective Neuroscience Lab, School of Psychology, Victoria University of Wellington, New Zealand
| |
Collapse
|
50
|
The P300 component decreases in a bimodal oddball task in individuals with depression: An event-related potentials study. Clin Neurophysiol 2018; 129:2525-2533. [PMID: 30366168 DOI: 10.1016/j.clinph.2018.09.012] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 08/26/2018] [Accepted: 09/11/2018] [Indexed: 11/20/2022]
Abstract
OBJECTIVE In this study, we investigated the P300 induced by auditory-visual stimulation and examined whether the P300 was differentially modulated between individuals with clinical depression and healthy controls. We hypothesized that the P300 component would differ significantly between individuals with depression and healthy individuals. Specifically, we predicted that the P300 component induced by the bimodal oddball task would differ significantly from that induced by the unimodal tasks. METHODS Forty-five individuals with depression and forty-five healthy controls participated in this study. All participants were instructed to complete three oddball tasks - auditory (A), visual (V), and bimodal (AV) - while their electroencephalographic signals were recorded. RESULTS Individuals with depression had a lower P300 amplitude and a longer latency than controls in the bimodal task. P300 amplitudes in the bimodal task were significantly higher than in the auditory or visual tasks in both groups. In the depression group, the P300 amplitude in the bimodal task was negatively correlated with Hamilton Depression Rating Scale (HAM-D) scores. CONCLUSIONS Our results, which agree with those reported previously, suggest that P300 amplitude in the bimodal task is particularly sensitive to depression. Our data also suggest that P300 amplitudes in the bimodal task may reflect the severity of depression. SIGNIFICANCE The reduced task-related ERP response in individuals with depression suggests significant impairments in stimulus integration and response functions in these individuals.
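The P300 measures discussed above (amplitude and latency in the bimodal vs. unimodal tasks) amount to locating the peak of the averaged target waveform in a post-stimulus window at a centro-parietal electrode. A minimal sketch on simulated single-electrode epochs follows; the 250-500 ms search window and single-site approach are simplifying assumptions rather than the study's settings.

```python
import numpy as np

def p300_peak(epochs, times, window=(0.25, 0.50)):
    """Peak amplitude and latency of the averaged ERP within a post-stimulus window.

    epochs: array (n_trials, n_samples) from one electrode (e.g., Pz), baseline-corrected
    times:  array (n_samples,) of time points in seconds relative to stimulus onset
    """
    erp = epochs.mean(axis=0)                        # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(erp[mask])                       # most positive point in the window
    return erp[mask][idx], times[mask][idx]

# Toy data: 60 target-trial epochs sampled at 500 Hz with a positivity near 350 ms
fs = 500
times = np.arange(-0.2, 0.8, 1 / fs)
signal = 5.0 * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))  # simulated P300 (microvolts)
epochs = signal + np.random.default_rng(1).normal(0.0, 2.0, size=(60, times.size))
print(p300_peak(epochs, times))                      # (amplitude, latency in seconds)
```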
Collapse
|