1
Neves L, Martins M, Correia AI, Castro SL, Schellenberg EG, Lima CF. Does music training improve emotion recognition and cognitive abilities? Longitudinal and correlational evidence from children. Cognition 2025; 259:106102. PMID: 40064075. DOI: 10.1016/j.cognition.2025.106102.
Abstract
Music training is widely claimed to enhance nonmusical abilities, yet causal evidence remains inconclusive. Moreover, research tends to focus on cognitive over socioemotional outcomes. In two studies, we investigated whether music training improves emotion recognition in voices and faces among school-aged children. We also examined music-training effects on musical abilities, motor skills (fine and gross), broader socioemotional functioning, and cognitive abilities including nonverbal reasoning, executive functions, and auditory memory (short-term and working memory). Study 1 (N = 110) was a 2-year longitudinal intervention conducted in a naturalistic school setting, comparing music training to basketball training (active control) and no training (passive control). Music training improved fine-motor skills and auditory memory relative to controls, but it had no effect on emotion recognition or other cognitive and socioemotional abilities. Both music and basketball training improved gross-motor skills. Study 2 (N = 192) compared children without music training to peers attending a music school. Although music training correlated with better emotion recognition in speech prosody (tone of voice), this association disappeared after controlling for socioeconomic status, musical abilities, or short-term memory. In contrast, musical abilities correlated with emotion recognition in both prosody and faces, independently of training or other confounding variables. These findings suggest that music training enhances fine-motor skills and auditory memory, but it does not causally improve emotion recognition, other cognitive abilities, or socioemotional functioning. Observed advantages in emotion recognition likely stem from preexisting musical abilities and other confounding factors such as socioeconomic status.
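The abstract does not name the statistical model, but a common way to analyse a pretest-posttest intervention with three groups, as in Study 1, is a group x time mixed-effects model with random intercepts per child. The Python sketch below uses simulated data and made-up effect sizes purely to illustrate that logic; it is not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = ["music", "basketball", "no_training"]
gains = {"music": 6.0, "basketball": 2.0, "no_training": 1.0}   # assumed, for illustration only

rows = []
for g in groups:
    for child in range(36):                        # hypothetical sample size per group
        baseline = rng.normal(50, 10)              # pretest score
        for t, time in enumerate(["pre", "post"]):
            rows.append({"child": f"{g}_{child}",
                         "group": g,
                         "time": time,
                         "score": baseline + gains[g] * t + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random intercept per child; the group x time interaction asks whether the groups
# improve at different rates from pre- to post-test.
model = smf.mixedlm("score ~ C(group, Treatment('no_training')) * C(time, Treatment('pre'))",
                    data=df, groups=df["child"]).fit()
print(model.summary())
```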
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Marta Martins
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
- E Glenn Schellenberg
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Department of Psychology, University of Toronto Mississauga, Mississauga, Canada
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal.
2
van 't Hooft JJ, Hartog WL, Braun M, Boessen D, Fieldhouse JLP, van Engelen MPE, Singleton EH, Jaschke AC, Schaefer RS, Venkatraghavan V, Barkhof F, van Harten AC, Duits FH, Schouws SNTM, Oudega ML, Warren JD, Tijms BM, Pijnenburg YAL. Musicality and social cognition in dementia: clinical and anatomical associations. Brain Commun 2024; 6:fcae429. PMID: 39678365. PMCID: PMC11642622. DOI: 10.1093/braincomms/fcae429.
Abstract
Human musicality might have co-evolved with social cognition abilities, but common neuroanatomical substrates remain largely unclear. In behavioural variant frontotemporal dementia, social cognitive abilities are profoundly impaired, whereas these are typically spared in Alzheimer's disease. If musicality indeed shares a neuroanatomical basis with social cognition, it could be hypothesized that clinical and neuroanatomical associations of musicality and social cognition should differ between these causes of dementia. We recruited 73 participants from the Amsterdam Dementia Cohort (n = 30 female; aged 50-78), of whom 23 had behavioural variant frontotemporal dementia, 22 Alzheimer's disease and 28 were healthy controls. Musicality was assessed using a music-emotion recognition test, melody, tempo, accent and tuning subscores, a musicality summed score, the identification of auditory hedonic phenotypes and music emotion induction using skin conductance responses. Social cognition was assessed across multiple levels, including emotion recognition, theory of mind, socio-emotional sensitivity and understanding of social norms. We used ANCOVA to investigate subgroup differences in musicality and social cognition and linear regressions to investigate associations between musicality and social cognition. All analyses were adjusted for age, sex, musical training and mini mental state examination. Finally, we performed voxel-based morphometry analyses on T1-weighted MRI to study whether regions for musicality and social cognition overlapped anatomically. We found that patients with behavioural variant frontotemporal dementia performed worse on music-emotion recognition (all P < 0.001) and tempo recognition (all P < 0.05) compared with Alzheimer's disease and on musicality summed score (all P = 0.02) compared to controls only. Furthermore, patients with behavioural variant frontotemporal dementia had lower mean skin conductance responses during emotion-inducing music, compared to Alzheimer's disease (all P < 0.045). Worse music emotion recognition scores were associated with worse facial emotion recognition (P < 0.0001), worse theory of mind (P = 0.0005) and worse understanding of social norms (P = 0.01). Melody and tempo recognition were associated with facial emotion recognition and theory of mind, and accent recognition was associated with the theory of mind. Music emotion recognition and tempo recognition were also associated with executive functions. Worse music emotion recognition, melody recognition, tempo recognition, facial emotion recognition and theory of mind scores were all related to atrophy in the anterior temporal regions and the fusiform gyri, which play a role in multisensory integration, and worse tempo recognition was associated with atrophy of the anterior cingulate cortex. These results support the idea that musicality and social cognition may share a neurobiological basis, which may be vulnerable in behavioural variant frontotemporal dementia.
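As a rough illustration of the two analysis types named above (ANCOVA for group differences, covariate-adjusted linear regression for associations), here is a minimal Python sketch. The file and column names are hypothetical and the specification is not necessarily the one used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("musicality_social_cognition.csv")   # hypothetical data file

# (1) ANCOVA-style test: do the diagnostic groups (bvFTD, AD, controls) differ on a
#     musicality score after adjusting for age, sex, musical training and MMSE?
ancova = smf.ols("music_emotion ~ C(diagnosis) + age + C(sex) + musical_training + mmse",
                 data=df).fit()
print(anova_lm(ancova, typ=2))

# (2) Association analysis: is music-emotion recognition related to theory of mind,
#     with the same covariates in the model?
assoc = smf.ols("theory_of_mind ~ music_emotion + age + C(sex) + musical_training + mmse",
                data=df).fit()
print(assoc.params["music_emotion"], assoc.pvalues["music_emotion"])
```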
Affiliation(s)
- Jochum J van 't Hooft
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Willem L Hartog
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Michelle Braun
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Dewi Boessen
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Jay L P Fieldhouse
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Marie-Paule E van Engelen
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Ellen H Singleton
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Artur C Jaschke
- Music Therapy, ArtEZ University of the Arts, 7511 PN Enschede, The Netherlands
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- Department of Neonatology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Cambridge Institute for Music Therapy Research, Cambridge, UK
- Rebecca S Schaefer
- Health, Medical and Neuropsychology Unit, Institute of Psychology, Leiden University, 2333 AK Leiden, The Netherlands
- Academy for Creative and Performing Arts, Leiden University, 2311 GZ Leiden, The Netherlands
- Vikram Venkatraghavan
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, 1081 HV Amsterdam, The Netherlands
- UCL Institutes of Neurology and Healthcare Engineering, University College London, UK
- Argonde C van Harten
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Flora H Duits
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Neurochemistry Lab, Department of Laboratory Medicine, Amsterdam UMC Location VUmc, 1081 HV Amsterdam, The Netherlands
- Sigfried N T M Schouws
- Department of Psychiatry, Amsterdam UMC Location Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
- GGZ, InGeest Specialized Mental Health Care, Old Age Psychiatry, 1081 JC Amsterdam, The Netherlands
- Mardien L Oudega
- Department of Psychiatry, Amsterdam UMC Location Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
- GGZ, InGeest Specialized Mental Health Care, Old Age Psychiatry, 1081 JC Amsterdam, The Netherlands
- Amsterdam Neuroscience, Mood, Anxiety, Psychosis, Sleep and Stress Program, 1081 HV Amsterdam, The Netherlands
- Jason D Warren
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College, London, UK
- Betty M Tijms
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
- Yolande A L Pijnenburg
- Department of Neurology, Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC Location VUmc, 1081 HZ Amsterdam, The Netherlands
- Amsterdam Neuroscience, Neurodegeneration, 1081 HV Amsterdam, The Netherlands
3
Hoarau C, Pralus A, Moulin A, Bedoin N, Ginzburg J, Fornoni L, Aguera PE, Tillmann B, Caclin A. Deficits in congenital amusia: Pitch, music, speech, and beyond. Neuropsychologia 2024; 202:108960. PMID: 39032629. DOI: 10.1016/j.neuropsychologia.2024.108960.
Abstract
Congenital amusia is a neurodevelopmental disorder characterized by deficits of music perception and production, which are related to altered pitch processing. The present study used a wide variety of tasks to test potential patterns of processing impairment in individuals with congenital amusia (N = 18) in comparison to matched controls (N = 19), notably classical pitch processing tests (i.e., pitch change detection, pitch direction of change identification, and pitch short-term memory tasks) together with tasks assessing other aspects of pitch-related auditory cognition, such as emotion recognition in speech, sound segregation in tone sequences, and speech-in-noise perception. Additional behavioral measures were also collected, including text reading/copying tests, visual control tasks, and a subjective assessment of hearing abilities. As expected, amusics' performance was impaired for the three pitch-specific tasks compared to controls. This deficit of pitch perception had a self-perceived impact on amusics' quality of hearing. Moreover, participants with amusia were impaired in emotion recognition in vowels compared to controls, but no group difference was observed for emotion recognition in sentences, replicating previous data. Despite pitch processing deficits, participants with amusia did not differ from controls in sound segregation and speech-in-noise perception. Text reading and visual control tests did not reveal any impairments in participants with amusia compared to controls. However, the copying test revealed more numerous eye movements and a smaller memory span in participants with amusia. These results allow us to refine the pattern of pitch processing and memory deficits in congenital amusia, thus contributing further to the understanding of pitch-related auditory cognition. Together with previous reports suggesting a comorbidity between congenital amusia and dyslexia, the findings call for further investigation of language-related abilities in this disorder, even in the absence of a neurodevelopmental language disorder diagnosis.
Affiliation(s)
- Caliani Hoarau
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France; Humans Matter, Lyon, France.
- Agathe Pralus
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France; Humans Matter, Lyon, France
- Annie Moulin
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France
- Nathalie Bedoin
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France; Université Lumière Lyon 2, Lyon, France
- Jérémie Ginzburg
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France
- Lesly Fornoni
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France
- Pierre-Emmanuel Aguera
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France
- Barbara Tillmann
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France; Laboratory for Research on Learning and Development, Université de Bourgogne, LEAD-CNRS UMR5022, Dijon, France
- Anne Caclin
- Université Claude Bernard Lyon 1, INSERM, CNRS, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, F-69500, Bron, France
4
Vigl J, Talamini F, Strauss H, Zentner M. Prosodic discrimination skills mediate the association between musical aptitude and vocal emotion recognition ability. Sci Rep 2024; 14:16462. PMID: 39014043. PMCID: PMC11252295. DOI: 10.1038/s41598-024-66889-y.
Abstract
The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
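The mediation logic described above (musical aptitude -> prosodic discrimination -> vocal emotion recognition) boils down to estimating an indirect effect as the product of two regression paths and bootstrapping its confidence interval. The sketch below assumes hypothetical column names and only shows the structure of such an analysis, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3_data.csv")   # hypothetical columns: aptitude, prosody, emo_recognition

def indirect_effect(data):
    # a-path: aptitude -> prosodic discrimination
    a = smf.ols("prosody ~ aptitude", data=data).fit().params["aptitude"]
    # b-path: prosodic discrimination -> emotion recognition, controlling for aptitude
    b = smf.ols("emo_recognition ~ prosody + aptitude", data=data).fit().params["prosody"]
    return a * b

rng = np.random.default_rng(1)
boot = []
for _ in range(2000):                              # bootstrap resamples of participants
    idx = rng.integers(0, len(df), len(df))
    boot.append(indirect_effect(df.iloc[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```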
Affiliation(s)
- Julia Vigl
- Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020, Innsbruck, Austria.
- Francesca Talamini
- Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020, Innsbruck, Austria
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020, Innsbruck, Austria
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020, Innsbruck, Austria
5
Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. PMID: 37851369. DOI: 10.1111/bjop.12684.
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition indicating that musicians excel at perceiving the melody (F0), but not the timbre of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
6
van't Hooft JJ, Benhamou E, Albero Herreros C, Jiang J, Levett B, Core LB, Requena-Komuro MC, Hardy CJD, Tijms BM, Pijnenburg YAL, Warren JD. Musical experience influences socio-emotional functioning in behavioural variant frontotemporal dementia. Front Neurol 2024; 15:1341661. PMID: 38333611. PMCID: PMC10851745. DOI: 10.3389/fneur.2024.1341661.
Abstract
Objectives: On phenotypic and neuroanatomical grounds, music exposure might potentially affect the clinical expression of behavioural variant frontotemporal dementia (bvFTD). However, this has not been clarified.
Methods: 14 consecutive patients with bvFTD fulfilling consensus diagnostic criteria were recruited via a specialist cognitive clinic. Earlier life musical experience, current musical listening habits and general socio-emotional behaviours were scored using a bespoke semi-quantitative musical survey and standardised functional scales, completed with the assistance of patients' primary caregivers. Associations of musical scores with behavioural scales were assessed using a linear regression model adjusted for age, sex, educational attainment and level of executive and general cognitive impairment.
Results: Greater earlier life musical experience was associated with significantly lower Cambridge Behavioural Inventory (Revised) scores (β ± SE = -17.2 ± 5.2; p = 0.01) and higher Modified Interpersonal Reactivity Index (MIRI) perspective-taking scores (β ± SE = 2.8 ± 1.1; p = 0.03), after adjusting for general cognitive ability. Number of hours each week currently spent listening to music was associated with higher MIRI empathic concern (β ± SE = 0.7 ± 0.21; p = 0.015) and MIRI total scores (β ± SE = 1.1 ± 0.34; p = 0.014).
Discussion: Musical experience in earlier life and potentially ongoing regular music listening may ameliorate socio-emotional functioning in bvFTD. Future work in larger cohorts is required to substantiate the robustness of this association, establish its mechanism and evaluate its clinical potential.
Affiliation(s)
- Jochum J. van’t Hooft
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Alzheimer Centre Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience—Neurodegeneration, Amsterdam, Netherlands
- Elia Benhamou
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Claudia Albero Herreros
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Jessica Jiang
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Benjamin Levett
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Lucy B. Core
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Mai-Carmen Requena-Komuro
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Chris J. D. Hardy
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Betty M. Tijms
- Alzheimer Centre Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience—Neurodegeneration, Amsterdam, Netherlands
- Yolande A. L. Pijnenburg
- Alzheimer Centre Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Amsterdam Neuroscience—Neurodegeneration, Amsterdam, Netherlands
- Jason D. Warren
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
7
Bufacchi RJ, Battaglia-Mayer A, Iannetti GD, Caminiti R. Cortico-spinal modularity in the parieto-frontal system: A new perspective on action control. Prog Neurobiol 2023; 231:102537. PMID: 37832714. DOI: 10.1016/j.pneurobio.2023.102537.
Abstract
Classical neurophysiology suggests that the motor cortex (MI) has a unique role in action control. In contrast, this review presents evidence for multiple parieto-frontal spinal command modules that can bypass MI. Five observations support this modular perspective: (i) the statistics of cortical connectivity demonstrate functionally-related clusters of cortical areas, defining functional modules in the premotor, cingulate, and parietal cortices; (ii) different corticospinal pathways originate from the above areas, each with a distinct range of conduction velocities; (iii) the activation time of each module varies depending on task, and different modules can be activated simultaneously; (iv) a modular architecture with direct motor output is faster and less metabolically expensive than an architecture that relies on MI, given the slow connections between MI and other cortical areas; (v) lesions of the areas composing parieto-frontal modules have different effects from lesions of MI. Here we provide examples of six cortico-spinal modules and functions they subserve: module 1) arm reaching, tool use and object construction; module 2) spatial navigation and locomotion; module 3) grasping and observation of hand and mouth actions; module 4) action initiation, motor sequences, time encoding; module 5) conditional motor association and learning, action plan switching and action inhibition; module 6) planning defensive actions. These modules can serve as a library of tools to be recombined when faced with novel tasks, and MI might serve as a recombinatory hub. In conclusion, the availability of locally-stored information and multiple outflow paths supports the physiological plausibility of the proposed modular perspective.
Affiliation(s)
- R J Bufacchi
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy; International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai, China
- A Battaglia-Mayer
- Department of Physiology and Pharmacology, University of Rome, Sapienza, Italy
- G D Iannetti
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy; Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), London, UK
- R Caminiti
- Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy.
8
Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023; 13:1563. PMID: 38002523. PMCID: PMC10670383. DOI: 10.3390/brainsci13111563.
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
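For readers unfamiliar with how component effects like these are quantified, a typical approach is to average each participant's ERP within the component's time window over a set of channels and then compare groups. The sketch below uses placeholder data, hypothetical channel indices and the time windows quoted above; it is not the authors' pipeline.

```python
import numpy as np
from scipy import stats

sfreq = 500                                        # Hz, assumed sampling rate
times = np.arange(-0.2, 1.0, 1 / sfreq)            # epoch from -200 to 1000 ms
windows = {"P200": (0.15, 0.25), "N400": (0.30, 0.50), "LPP": (0.50, 1.00)}
picks = [10, 11, 12]                               # hypothetical fronto-central channel indices

rng = np.random.default_rng(0)                     # placeholder participant-average ERPs
musicians = [rng.normal(0, 1, (32, times.size)) for _ in range(39)]
non_musicians = [rng.normal(0, 1, (32, times.size)) for _ in range(39)]

def mean_amplitude(erp, window):
    """Average one participant's ERP (channels x times) over a time window and channel picks."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[picks][:, mask].mean()

for name, win in windows.items():
    m = [mean_amplitude(e, win) for e in musicians]
    n = [mean_amplitude(e, win) for e in non_musicians]
    t, p = stats.ttest_ind(m, n)                   # simple between-group comparison per window
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```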
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany;
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany;
- Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany;
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
9
Tillmann B, Graves JE, Talamini F, Lévêque Y, Fornoni L, Hoarau C, Pralus A, Ginzburg J, Albouy P, Caclin A. Auditory cortex and beyond: Deficits in congenital amusia. Hear Res 2023; 437:108855. PMID: 37572645. DOI: 10.1016/j.heares.2023.108855.
Abstract
Congenital amusia is a neuro-developmental disorder of music perception and production, with the observed deficits contrasting with the sophisticated music processing reported for the general population. Musical deficits within amusia have been hypothesized to arise from altered pitch processing, with impairments in pitch discrimination and, notably, short-term memory. We here review research investigating its behavioral and neural correlates, in particular the impairments at encoding, retention, and recollection of pitch information, as well as how these impairments extend to the processing of pitch cues in speech and emotion. The impairments have been related to altered brain responses in a distributed fronto-temporal network, which can be observed also at rest. Neuroimaging studies revealed changes in connectivity patterns within this network and beyond, shedding light on the brain dynamics underlying auditory cognition. Interestingly, some studies revealed spared implicit pitch processing in congenital amusia, showing the power of implicit cognition in the music domain. Building on these findings, together with audiovisual integration and other beneficial mechanisms, we outline perspectives for training and rehabilitation and the future directions of this research domain.
Affiliation(s)
- Barbara Tillmann
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France; Laboratory for Research on Learning and Development, Université de Bourgogne, LEAD - CNRS UMR5022, Dijon, France; LEAD-CNRS UMR5022; Université Bourgogne Franche-Comté; Pôle AAFE; 11 Esplanade Erasme; 21000 Dijon, France.
- Jackson E Graves
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, Paris 75005, France
- Yohana Lévêque
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Lesly Fornoni
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Caliani Hoarau
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Agathe Pralus
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Jérémie Ginzburg
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Philippe Albouy
- CERVO Brain Research Center, School of Psychology, Laval University, Québec, G1J 2G3; International Laboratory for Brain, Music and Sound Research (BRAMS), CRBLM, Montreal QC, H2V 2J2, Canada
- Anne Caclin
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France.
10
MacGregor C, Ruth N, Müllensiefen D. Development and validation of the first adaptive test of emotion perception in music. Cogn Emot 2023; 37:284-302. PMID: 36592153. DOI: 10.1080/02699931.2022.2162003.
Abstract
The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is "happier", for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
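The core step of an IRT-based adaptive test like the adaptive MEDT is choosing the next item to maximise information at the current ability estimate. Below is a minimal sketch under a two-parameter logistic (2PL) model, with made-up item parameters; the published test may use a different model or selection rule.

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of an item at ability theta under the 2PL model."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

# Hypothetical item bank: discrimination (a) and difficulty (b) per clip pair
a_params = np.array([0.8, 1.2, 1.5, 0.9, 1.1])
b_params = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])

administered = [1]          # items already presented
theta_hat = 0.4             # current ability estimate (e.g., from an EAP update)

info = item_information(theta_hat, a_params, b_params)
info[administered] = -np.inf               # never repeat an item
next_item = int(np.argmax(info))
print(f"next item: {next_item} (information {info[next_item]:.3f})")
```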
Affiliation(s)
- Chloe MacGregor
- Department of Psychology, Goldsmiths, University of London, London, England
- Nicolas Ruth
- Institute for Cultural Management and Media, University of Music and Performing Arts Munich, Munchen, Germany
11
Shakuf V, Ben-David B, Wegner TGG, Wesseling PBC, Mentzel M, Defren S, Allen SEM, Lachmann T. Processing emotional prosody in a foreign language: the case of German and Hebrew. Journal of Cultural Cognitive Science 2022; 6:251-268. PMID: 35996660. PMCID: PMC9386669. DOI: 10.1007/s41809-022-00107-x.
Abstract
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, we investigated the perception of emotional prosody in Hebrew and German by listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Israeli speakers rated Hebrew sentences spoken with four different emotional prosodies (anger, fear, happiness, sadness) or with neutral prosody. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants’ agreement on how much the sentence conveyed each of four discrete emotions (anger, fear, happiness and sadness). In Experiment 2, 30 native speakers of German and 24 Israeli native speakers of Hebrew who had no knowledge of German rated sentences from the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings were similar between the groups. These findings show that individuals are able to identify emotions in a foreign language even if they do not have access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for “wrong” perception. This adds to accumulating evidence in the literature on the universality of emotional prosody.
12
Zhang G, Shao J, Zhang C, Wang L. The Perception of Lexical Tone and Intonation in Whispered Speech by Mandarin-Speaking Congenital Amusics. J Speech Lang Hear Res 2022; 65:1331-1348. PMID: 35377182. DOI: 10.1044/2021_jslhr-21-00345.
Abstract
Purpose: A fundamental feature of human speech is variation, including the manner of phonation, as exemplified in the case of whispered speech. In this study, we employed whispered speech to examine an unresolved issue about congenital amusia, a neurodevelopmental disorder of musical pitch processing, which also affects speech pitch processing such as lexical tone and intonation perception. The controversy concerns whether amusia is a pitch-processing disorder or can affect speech processing beyond pitch.
Method: We examined lexical tone and intonation recognition in 19 Mandarin-speaking amusics and 19 matched controls in phonated and whispered speech, where fundamental frequency (f0) information is either present or absent.
Results: The results revealed that the performance of congenital amusics was inferior to that of controls in lexical tone identification in both phonated and whispered speech. These impairments were also detected in identifying intonation (statements/questions) in phonated and whispered modes. Across the experiments, regression models revealed that f0 and non-f0 (duration, intensity, and formant frequency) acoustic cues predicted tone and intonation recognition in phonated speech, whereas non-f0 cues predicted tone and intonation recognition in whispered speech. There were significant differences between amusics and controls in the use of both f0 and non-f0 cues.
Conclusion: The results provided the first evidence that the impairments of amusics in lexical tone and intonation identification prevail into whispered speech, and support the hypothesis that the deficits of amusia extend beyond pitch processing.
Supplemental material: https://doi.org/10.23641/asha.19302275
Affiliation(s)
- Gaoyuan Zhang
- Department of Chinese Language and Literature, Peking University, Beijing, China
- Jing Shao
- Department of English Language and Literature, Hong Kong Baptist University, Hong Kong SAR, China
- Caicai Zhang
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lan Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
13
The Time Course of Emotional Authenticity Detection in Nonverbal Vocalizations. Cortex 2022; 151:116-132. DOI: 10.1016/j.cortex.2022.02.016.
14
Abstract
Objective: The ability to recognize others' emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer's disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities.
Method: In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness.
Results: MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living.
Conclusions: These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.
15
Pinheiro AP, Anikin A, Conde T, Sarzedas J, Chen S, Scott SK, Lima CF. Emotional authenticity modulates affective and social trait inferences from voices. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200402. PMID: 34719249. PMCID: PMC8558771. DOI: 10.1098/rstb.2020.0402.
Abstract
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
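To make the modelling approach concrete: trait ratings of this kind are typically analysed with mixed models that include by-listener random effects. The sketch below is a simplified frequentist stand-in (statsmodels) for the Bayesian mixed models reported in the paper, with hypothetical file and column names, and it illustrates the structure rather than the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial with columns
# listener, vocalization ("laugh"/"cry"), authenticity ("spontaneous"/"volitional"), trustworthiness
ratings = pd.read_csv("voice_ratings.csv")

model = smf.mixedlm(
    "trustworthiness ~ C(vocalization) * C(authenticity)",
    data=ratings,
    groups=ratings["listener"],            # random intercept per listener
).fit()
print(model.summary())
```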
Affiliation(s)
- Ana P. Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche em Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, 42023 Saint-Etienne, France
- Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Sinead Chen
- National Taiwan University, Taipei City, 10617 Taiwan
- Sophie K. Scott
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- César F. Lima
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, 1649-026 Lisboa, Portugal
16
Neves L, Martins M, Correia AI, Castro SL, Lima CF. Associations between vocal emotion recognition and socio-emotional adjustment in children. R Soc Open Sci 2021; 8:211412. PMID: 34804582. PMCID: PMC8595998. DOI: 10.1098/rsos.211412.
Abstract
The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Marta Martins
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
- César F. Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Institute of Cognitive Neuroscience, University College London, London, UK
17
Lima CF, Arriaga P, Anikin A, Pires AR, Frade S, Neves L, Scott SK. Authentic and posed emotional vocalizations trigger distinct facial responses. Cortex 2021; 141:280-292. PMID: 34102411. DOI: 10.1016/j.cortex.2021.04.015.
Abstract
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
Affiliation(s)
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK.
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France; Division of Cognitive Science, Lund University, Lund, Sweden
- Ana Rita Pires
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sofia Frade
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Leonor Neves
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
18
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users. J Clin Med 2021; 10:jcm10102093. PMID: 34068067. PMCID: PMC8152499. DOI: 10.3390/jcm10102093.
Abstract
In the case of hearing loss, cochlear implants (CI) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about their poor perception of their auditory environment. Aiming to assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. In order to test the potential benefit of visual cues for pitch processing, the three pitch tests included half of the trials with visual indications to perform the task. We tested 10 normal-hearing (NH) participants with material being presented as original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities compared to the original sounds. Similarly, the CI users had deficits for small differences in the pitch change detection task and emotion recognition, as well as a decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives about how to enhance pitch perception capacities using visual cues.
19
Cheung YL, Zhang C, Zhang Y. Emotion processing in congenital amusia: the deficits do not generalize to written emotion words. Clin Linguist Phon 2021; 35:101-116. PMID: 31986915. DOI: 10.1080/02699206.2020.1719209.
Abstract
Congenital amusia is a lifelong impairment in musical ability. Individuals with amusia are found to show reduced sensitivity to emotion recognition in speech prosody and silent facial expressions, implying a possible cross-modal emotion-processing deficit. However, it is not clear whether the observed deficits are primarily confined to socio-emotional contexts, where visual cues (facial expression) often co-occur with auditory cues (emotion prosody) to express intended emotions, or extend to linguistic emotion processing. In order to better understand the mechanism underlying the emotion-processing deficit in individuals with amusia, we examined whether reduced sensitivity to emotional processing extends to the recognition of the emotion category and valence of written words in individuals with amusia. Twenty Cantonese speakers with amusia and 17 controls were tested in three experiments: (1) emotion prosody rating, in which participants rated how much each spoken sentence expressed each of four emotions on 7-point rating scales; (2) written word emotion recognition, in which participants recognized the emotion of written emotion words; and (3) written word valence judgment, in which participants judged the valence of written words. Results showed that participants with amusia performed significantly less accurately than controls in emotion prosody recognition; in contrast, the two groups showed no significant difference in accuracy rates in both written word tasks (emotion recognition and valence judgment). The results indicate that the impairment of individuals with amusia in emotion processing may not generalize to linguistic emotion processing in written words, implying that the emotion deficit is likely to be restricted to socio-emotional contexts in individuals with amusia.
Affiliation(s)
- Yi Lam Cheung
- School of Management, Cranfield University, Cranfield, UK
| | - Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, SAR, China
- Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, SAR, China
| | - Yubin Zhang
- Department of Linguistics, University of Southern California, Los Angeles, California, USA
| |
Collapse
|
20
|
Fernandez NB, Vuilleumier P, Gosselin N, Peretz I. Influence of Background Musical Emotions on Attention in Congenital Amusia. Front Hum Neurosci 2021; 14:566841. [PMID: 33568976 PMCID: PMC7868440 DOI: 10.3389/fnhum.2020.566841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 11/30/2020] [Indexed: 11/13/2022] Open
Abstract
Congenital amusia in its most common form is a disorder characterized by a deficit in musical pitch processing. Although pitch is involved in conveying emotion in music, the implications of pitch deficits for musical emotion judgements are still under debate. Relatedly, both limited and spared musical emotion recognition have been reported in amusia under conditions where emotion cues were not determined by musical mode or dissonance. Additionally, assumed links between musical abilities and visuo-spatial attention processes need further investigation in individuals with congenital amusia. Here, we therefore test to what extent musical emotions can influence attentional performance. Fifteen adults with congenital amusia and fifteen healthy controls matched for age and education were assessed in three attentional conditions, executive control (distractor inhibition), alerting, and orienting (spatial shift), while music expressing joy, tenderness, sadness, or tension was presented. Visual target detection in the amusic participants was in the normal range for both accuracy and response times relative to controls. Moreover, in both groups, music exposure produced facilitating effects on selective attention that appeared to be driven by the arousal dimension of the musical emotional content, with faster correct target detection during joyful than during sad music. These findings corroborate the idea that the pitch-processing deficits associated with congenital amusia do not impede other cognitive domains, particularly visual attention. Furthermore, our study uncovers an intact influence of music and its emotional content on the attentional abilities of amusic individuals. The results highlight the domain selectivity of the pitch disorder in congenital amusia, which largely spares the development of visual attention and affective systems.
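As a concrete illustration of the behavioural comparison behind the key result (faster correct target detection during joyful than sad music), a minimal analysis sketch is given below: per-participant mean correct response times in each music condition, compared with a paired t-test. The long-format layout and column names are assumptions for illustration, not the authors' actual analysis pipeline.

```python
import pandas as pd
from scipy import stats

def joy_vs_sad_rt(trials: pd.DataFrame):
    """`trials`: one row per trial with columns participant, music_emotion,
    correct (bool), and rt_ms (response time in milliseconds)."""
    correct = trials[trials["correct"]]                       # keep correct trials only
    per_subj = (correct.groupby(["participant", "music_emotion"])["rt_ms"]
                       .mean()
                       .unstack("music_emotion"))             # one column per emotion
    t, p = stats.ttest_rel(per_subj["joy"], per_subj["sadness"])
    return per_subj[["joy", "sadness"]].mean(), t, p          # condition means + paired test
```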
Collapse
Affiliation(s)
- Natalia B Fernandez
- Laboratory of Behavioral Neurology and Imaging of Cognition, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland; Swiss Center of Affective Sciences, Department of Psychology, University of Geneva, Geneva, Switzerland
| | - Patrik Vuilleumier
- Laboratory of Behavioral Neurology and Imaging of Cognition, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland; Swiss Center of Affective Sciences, Department of Psychology, University of Geneva, Geneva, Switzerland
| | - Nathalie Gosselin
- International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
| | - Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
| |
Collapse
|
21
|
Liao X, Sun J, Jin Z, Wu D, Liu J. Cortical Morphological Changes in Congenital Amusia: Surface-Based Analyses. Front Psychiatry 2021; 12:721720. [PMID: 35095585 PMCID: PMC8794692 DOI: 10.3389/fpsyt.2021.721720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 12/07/2021] [Indexed: 11/25/2022] Open
Abstract
Background: Congenital amusia (CA) is a rare disorder characterized by deficits in pitch perception, and many structural and functional magnetic resonance imaging studies have been conducted to better understand its neural bases. However, a structural magnetic resonance imaging analysis using a surface-based morphology method to identify regions with cortical feature abnormalities at the vertex level has not yet been performed. Methods: Fifteen participants with CA and 13 healthy controls underwent structural magnetic resonance imaging. A surface-based morphology method was used to identify anatomical abnormalities. Then, the mean values of the surface parameters were extracted from the clusters showing statistically significant between-group differences and compared. Finally, Pearson's correlation analysis was used to assess the correlation between Montreal Battery of Evaluation of Amusia (MBEA) scores and the surface parameters. Results: The CA group had significantly lower MBEA scores than the healthy controls (p < 0.001). Compared to healthy controls, the CA group exhibited a significantly higher fractal dimension in the right caudal middle frontal gyrus and a lower sulcal depth in the right pars triangularis gyrus (p < 0.05; false discovery rate-corrected at the cluster level). There were negative correlations between the mean fractal dimension values in the right caudal middle frontal gyrus and the MBEA scores, including the mean MBEA score (r = -0.5398, p = 0.0030), scale score (r = -0.5712, p = 0.0015), contour score (r = -0.4662, p = 0.0124), interval score (r = -0.4564, p = 0.0146), rhythmic score (r = -0.5133, p = 0.0052), meter score (r = -0.3937, p = 0.0382), and memory score (r = -0.3879, p = 0.0414). There were also significant positive correlations between the mean sulcal depth in the right pars triangularis gyrus and the MBEA scores, including the mean score (r = 0.5130, p = 0.0052), scale score (r = 0.5328, p = 0.0035), interval score (r = 0.4059, p = 0.0321), rhythmic score (r = 0.5733, p = 0.0014), meter score (r = 0.5061, p = 0.0060), and memory score (r = 0.4001, p = 0.0349). Conclusion: Individuals with CA exhibit cortical morphological changes in the right hemisphere. These findings may indicate that the neural basis of speech perception and memory impairments in individuals with CA is associated with abnormalities in the right pars triangularis gyrus and middle frontal gyrus, and that these cortical abnormalities may be a neural marker of CA.
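The brain-behaviour correlations reported above amount to Pearson correlations between a cluster-level cortical measure and the MBEA subscores. A minimal sketch of that kind of analysis is shown below, with an optional Benjamini-Hochberg correction across the family of subscores; the data layout and column names are illustrative assumptions rather than the authors' code.

```python
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def correlate_with_mbea(df, measure, mbea_cols):
    """`df`: one row per participant; `measure` names the cortical metric column
    (e.g., mean fractal dimension of a cluster); `mbea_cols` lists MBEA subscore columns."""
    rows = []
    for col in mbea_cols:
        r, p = pearsonr(df[measure], df[col])
        rows.append({"mbea_score": col, "r": r, "p_uncorrected": p})
    out = pd.DataFrame(rows)
    # Benjamini-Hochberg false-discovery-rate correction across the subscores
    out["p_fdr"] = multipletests(out["p_uncorrected"], method="fdr_bh")[1]
    return out
```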
Collapse
Affiliation(s)
- Xuan Liao
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Junjie Sun
- Department of Radiology, The Sir Run Run Shaw Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
| | - Zhishuai Jin
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
| | - DaXing Wu
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Jun Liu
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China; Clinical Research Center for Medical Imaging in Hunan Province, Changsha, China; Department of Radiology Quality Control Center, The Second Xiangya Hospital of Central South University, Changsha, China
| |
Collapse
|
22
|
Vilaverde RF, Correia AI, Lima CF. Higher trait mindfulness is associated with empathy but not with emotion recognition abilities. ROYAL SOCIETY OPEN SCIENCE 2020; 7:192077. [PMID: 32968498 PMCID: PMC7481693 DOI: 10.1098/rsos.192077] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Accepted: 06/12/2020] [Indexed: 06/11/2023]
Abstract
Mindfulness involves an intentional and non-judgemental attention or awareness of present-moment experiences. It can be cultivated by meditation practice or present as an inherent disposition or trait. Higher trait mindfulness has been associated with improved emotional skills, but evidence comes primarily from studies on emotion regulation. It remains unclear whether improvements extend to other aspects of emotional processing, namely the ability to recognize emotions in others. In the current study, 107 participants (M age = 25.48 years) completed a measure of trait mindfulness, the Five Facet Mindfulness Questionnaire, and two emotion recognition tasks. These tasks required participants to categorize emotions in facial expressions and in speech prosody (modulations of the tone of voice). They also completed an empathy questionnaire and attention tasks. We found that higher trait mindfulness was associated positively with cognitive empathy, but not with the ability to recognize emotions. In fact, Bayesian analyses provided substantial evidence for the null hypothesis, both for emotion recognition in faces and in speech. Moreover, no associations were observed between mindfulness and attention performance. These findings suggest that the positive effects of trait mindfulness on emotional processing do not extend to emotion recognition abilities.
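The phrase "substantial evidence for the null hypothesis" refers to Bayes factors. A minimal sketch of how such evidence can be quantified for a correlation is given below, using pingouin's default Bayes factor for a Pearson r; the priors and exact procedure used in the study are not assumed here.

```python
import pingouin as pg

def null_evidence_for_correlation(r: float, n: int) -> dict:
    """Convert an observed Pearson r and sample size into default Bayes factors."""
    bf10 = float(pg.bayesfactor_pearson(r, n))   # evidence for H1 (an association) over H0
    return {"BF10": bf10, "BF01": 1.0 / bf10}    # BF01 > 3 is often read as substantial null evidence

# Example with a hypothetical near-zero correlation in a sample of 107 participants:
# null_evidence_for_correlation(r=0.05, n=107)
```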
Collapse
Affiliation(s)
| | | | - César F. Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Avenida das Forças Armadas, 1649-026 Portugal
| |
Collapse
|
23
|
Pralus A, Fornoni L, Bouet R, Gomot M, Bhatara A, Tillmann B, Caclin A. Emotional prosody in congenital amusia: Impaired and spared processes. Neuropsychologia 2019; 134:107234. [DOI: 10.1016/j.neuropsychologia.2019.107234] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 08/12/2019] [Accepted: 10/16/2019] [Indexed: 12/15/2022]
|
24
|
MacGregor C, Müllensiefen D. The Musical Emotion Discrimination Task: A New Measure for Assessing the Ability to Discriminate Emotions in Music. Front Psychol 2019; 10:1955. [PMID: 31551857 PMCID: PMC6736617 DOI: 10.3389/fpsyg.2019.01955] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 08/08/2019] [Indexed: 11/13/2022] Open
Abstract
Previous research has shown that levels of musical training and emotional engagement with music are associated with an individual's ability to decode the intended emotional expression from a music performance. The present study aimed to assess traits and abilities that might influence emotion recognition, and to create a new test of emotion discrimination ability. The first experiment investigated musical features that influenced the difficulty of the stimulus items (length, type of melody, instrument, target-/comparison emotion) to inform the creation of a short test of emotion discrimination. The second experiment assessed the contribution of individual differences measures of emotional and musical abilities as well as psychoacoustic abilities. Finally, the third experiment established the validity of the new test against other measures currently used to assess similar abilities. Performance on the Musical Emotion Discrimination Task (MEDT) was significantly associated with high levels of self-reported emotional engagement with music as well as with performance on a facial emotion recognition task. Results are discussed in the context of a process model for emotion discrimination in music and psychometric properties of the MEDT are provided. The MEDT is freely available for research use.
Collapse
Affiliation(s)
- Chloe MacGregor
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
| | - Daniel Müllensiefen
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
| |
Collapse
|
25
|
Neves L, Cordeiro C, Scott SK, Castro SL, Lima CF. High emotional contagion and empathy are associated with enhanced detection of emotional authenticity in laughter. Q J Exp Psychol (Hove) 2018; 71:2355-2363. [PMID: 30362411 DOI: 10.1177/1747021817741800] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Nonverbal vocalisations such as laughter pervade social interactions, and the ability to accurately interpret them is an important skill. Previous research has probed the general mechanisms supporting vocal emotional processing, but the factors that determine individual differences in this ability remain poorly understood. Here, we ask whether the propensity to resonate with others' emotions (as measured by trait levels of emotional contagion and empathy) relates to the ability to perceive different types of laughter. We focus on emotional authenticity detection in spontaneous and voluntary laughs: spontaneous laughs reflect a less controlled and genuinely felt emotion, and voluntary laughs reflect a more deliberate communicative act (e.g., polite agreement). In total, 119 participants evaluated the authenticity and contagiousness of spontaneous and voluntary laughs and completed two self-report measures of resonance with others' emotions: the Emotional Contagion Scale and the Empathic Concern scale of the Interpersonal Reactivity Index. We found that higher scores on these measures predict enhanced ability to detect laughter authenticity. We further observed that perceived contagion responses during listening to laughter significantly relate to authenticity detection. These findings suggest that resonating with others' emotions provides a mechanism for processing complex aspects of vocal emotional information.
Collapse
Affiliation(s)
- Leonor Neves
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| | - Carolina Cordeiro
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
| | - São Luís Castro
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
| | - César F Lima
- Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| |
Collapse
|
26
|
Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia. Cortex 2017; 97:255-273. [DOI: 10.1016/j.cortex.2017.09.028] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 06/08/2017] [Accepted: 09/29/2017] [Indexed: 11/17/2022]
|
27
|
Harrison PMC, Collins T, Müllensiefen D. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation. Sci Rep 2017; 7:3618. [PMID: 28620165 PMCID: PMC5472621 DOI: 10.1038/s41598-017-03586-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2017] [Accepted: 05/02/2017] [Indexed: 11/30/2022] Open
Abstract
Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test’s viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
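Two of the techniques named above, the two-parameter logistic (2PL) item response model and maximum-information item selection (the core of computerised adaptive testing), can be illustrated with the generic IRT machinery below; this is a sketch of the standard formulas, not the test's actual item bank or calibration.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2PL model: probability of a correct response for items with
    discrimination a and difficulty b, at latent ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(theta, a, b, administered):
    """Adaptive selection: pick the unadministered item with maximum
    Fisher information at the current ability estimate theta."""
    p = p_correct_2pl(theta, a, b)
    info = a**2 * p * (1.0 - p)            # 2PL item information function
    info[list(administered)] = -np.inf     # exclude items already given
    return int(np.argmax(info))

# Example with a small hypothetical item bank:
a = np.array([1.2, 0.8, 1.5, 1.0])         # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.2])        # difficulties
# next_item(theta=0.3, a=a, b=b, administered={0})
```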
Collapse
Affiliation(s)
- Peter M C Harrison
- Queen Mary University of London, School of Electronic Engineering and Computer Science, London, E1 4NS, United Kingdom; Goldsmiths, University of London, Department of Psychology, London, SE14 6NW, United Kingdom
| | - Tom Collins
- Lehigh University, Department of Psychology, Bethlehem, PA, 18015, USA; Music Artificial Intelligence Algorithms, Inc., Davis, CA, 95617, USA
| | - Daniel Müllensiefen
- Goldsmiths, University of London, Department of Psychology, London, SE14 6NW, United Kingdom
| |
Collapse
|