1. Lo CY, Zendel BR, Baskent D, Boyle C, Coffey E, Gagne N, Habibi A, Harding E, Keijzer M, Kreutz G, Maat B, Schurig E, Sharma M, Dang C, Gilmore S, Henshaw H, McKay CM, Good A, Russo FA. Speech-in-noise, psychosocial, and heart rate variability outcomes of group singing or audiobook club interventions for older adults with unaddressed hearing loss: A SingWell Project multisite, randomized controlled trial, registered report protocol. PLoS One 2024; 19:e0314473. PMID: 39630812. PMCID: PMC11616889. DOI: 10.1371/journal.pone.0314473.
Abstract
BACKGROUND Unaddressed age-related hearing loss is highly prevalent among older adults and has negative consequences for speech-in-noise perception and psychosocial wellbeing. There is promising evidence that group singing may enhance speech-in-noise perception and psychosocial wellbeing. However, robust evidence is lacking, primarily because the literature is based on small sample sizes, single-site studies, and few randomized controlled trials. To address these concerns, this SingWell Project study uses an appropriately powered, multisite, randomized controlled trial design with a robust preplanned statistical analysis. OBJECTIVE To explore whether group singing may improve speech-in-noise perception and psychosocial wellbeing for older adults with unaddressed hearing loss. METHODS We designed an international, multisite, randomized controlled trial to explore the benefits of group singing for adults aged 60 years and older with unaddressed hearing loss (registered at clinicaltrials.gov, ID: NCT06580847). After undergoing an eligibility screening process and completing an information and consent form, 210 participants will be recruited and randomly assigned to either a group singing or an audiobook club (control) intervention for a training period of 12 weeks. The study has multiple testing timepoints, broadly categorized as macro (i.e., pre- and post-measures across the 12 weeks) or micro (i.e., pre- and post-measures across a weekly training session). Macro measures include behavioural measures of speech and music perception and psychosocial questionnaires. Micro measures include psychosocial questionnaires and heart-rate variability. HYPOTHESES We hypothesize that group singing will be more effective than the audiobook club control at improving speech perception and psychosocial outcomes for adults aged 60 years and older with unaddressed hearing loss.
Affiliation(s)
- Chi Yhun Lo: Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Deniz Baskent: Faculty of Medicine, University of Groningen, Groningen, GR, Netherlands
- Christian Boyle: College of Nursing and Health Sciences, Flinders University, Adelaide, SA, Australia
- Emily Coffey: Department of Psychology, Concordia University, Montreal, QC, Canada
- Nathan Gagne: Department of Psychology, Concordia University, Montreal, QC, Canada
- Assal Habibi: Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States of America
- Ellie Harding: Faculty of Arts, University of Groningen, Groningen, GR, Netherlands
- Merel Keijzer: Faculty of Arts, University of Groningen, Groningen, GR, Netherlands
- Gunter Kreutz: Institute of Music, Carl von Ossietzky University of Oldenburg, Oldenburg, NI, Germany
- Bert Maat: Department of Otorhinolaryngology, University of Groningen, Groningen, GR, Netherlands
- Eva Schurig: Institute of Music, Carl von Ossietzky University of Oldenburg, Oldenburg, NI, Germany
- Mridula Sharma: College of Nursing and Health Sciences, Flinders University, Adelaide, SA, Australia
- Carmen Dang: Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Sean Gilmore: Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Helen Henshaw: NIHR Nottingham Biomedical Research Centre, Hearing Sciences, School of Medicine, Mental Health and Clinical Neurosciences, University of Nottingham, Nottingham, United Kingdom
- Arla Good: Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Frank A. Russo: Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
2. Cui AX, Kraeutner SN, Kepinska O, Motamed Yeganeh N, Hermiston N, Werker JF, Boyd LA. Musical Sophistication and Multilingualism: Effects on Arcuate Fasciculus Characteristics. Hum Brain Mapp 2024; 45:e70035. PMID: 39360580. PMCID: PMC11447524. DOI: 10.1002/hbm.70035.
Abstract
The processing of auditory stimuli which are structured in time is thought to involve the arcuate fasciculus, the white matter tract which connects the temporal cortex and the inferior frontal gyrus. Research has indicated effects of both musical and language experience on the structural characteristics of the arcuate fasciculus. Here, we investigated in a sample of n = 84 young adults whether continuous conceptualizations of musical and multilingual experience related to structural characteristics of the arcuate fasciculus, measured using diffusion tensor imaging. Probabilistic tractography was used to identify the dorsal and ventral parts of the white matter tract. Linear regressions indicated that different aspects of musical sophistication related to the arcuate fasciculus' volume (emotional engagement with music), volumetric asymmetry (musical training and music perceptual abilities), and fractional anisotropy (music perceptual abilities). Our conceptualization of multilingual experience, accounting for participants' proficiency in reading, writing, understanding, and speaking different languages, was not related to the structural characteristics of the arcuate fasciculus. We discuss our results in the context of other research on hemispheric specializations and a dual-stream model of auditory processing.
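For orientation only, the per-tract regression analysis summarized above can be sketched in Python; this is not the authors' code, and the file and column names (af_metrics.csv, af_volume_total, emotional_engagement, and so on) are assumptions, not their data.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-participant table of arcuate fasciculus (AF) metrics and
    # musical-sophistication subscores (all column names are illustrative).
    df = pd.read_csv("af_metrics.csv")

    # Volumetric asymmetry as a laterality index: (left - right) / (left + right).
    df["af_volume_li"] = (df["af_volume_left"] - df["af_volume_right"]) / (
        df["af_volume_left"] + df["af_volume_right"]
    )

    # Regress each tract characteristic on sophistication subscales, mirroring the
    # "different aspects of musical sophistication" analyses described above.
    for outcome in ["af_volume_total", "af_volume_li", "af_fa_mean"]:
        fit = smf.ols(
            f"{outcome} ~ emotional_engagement + musical_training + perceptual_abilities",
            data=df,
        ).fit()
        print(outcome, fit.params.round(3).to_dict())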
Affiliation(s)
- Anja-Xiaoxing Cui: Department of Musicology, University of Vienna, Vienna, Austria; Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Sarah N Kraeutner: Department of Psychology, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Olga Kepinska: Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria; Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
- Negin Motamed Yeganeh: Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
- Nancy Hermiston: School of Music, University of British Columbia, Vancouver, British Columbia, Canada
- Janet F Werker: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Lara A Boyd: Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
3. Chen Y, Tierney A, Pfordresher PQ. Speech-to-song transformation in perception and production. Cognition 2024; 254:105933. PMID: 39270521. DOI: 10.1016/j.cognition.2024.105933.
Abstract
The speech-to-song transformation is an illusion in which certain spoken phrases are perceived as more song-like after being repeated several times. The present study addresses whether this perceptual transformation leads to a corresponding change in how accurately participants imitate pitch/time patterns in speech. We used illusion-inducing (illusion) and non-inducing (control) spoken phrases as stimuli. In each trial, one stimulus was presented eight times in succession. Participants were asked to reproduce the phrase and rate how music-like the phrase sounded after the first and final (eighth) repetitions. The ratings of illusion stimuli reflected more song-like perception after the final repetition than the first repetition, but the ratings of control stimuli did not change over repetitions. The results from imitative production mirrored the perceptual effects: pitch matching of illusion stimuli improved from the first to the final repetition, but pitch matching of control stimuli did not improve. These findings reveal a consistent pattern of speech-to-song transformation in both perception and production, suggesting that the distinction between music and language may be more malleable than originally thought.
Affiliation(s)
- Yan Chen: Department of Psychology, University at Buffalo, State University of New York, Buffalo, USA
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Peter Q Pfordresher: Department of Psychology, University at Buffalo, State University of New York, Buffalo, USA
4. Cheng S, Wang J, Luo R, Hao N. Brain to brain musical interaction: A systematic review of neural synchrony in musical activities. Neurosci Biobehav Rev 2024; 164:105812. PMID: 39029879. DOI: 10.1016/j.neubiorev.2024.105812.
Abstract
The use of hyperscanning technology has revealed the neural mechanisms underlying multi-person interaction in musical activities. However, there is currently a lack of integration among various research findings. This systematic review aims to provide a comprehensive understanding of the social dynamics and brain synchronization in music activities through the analysis of 32 studies. The findings illustrate a strong correlation between inter-brain synchronization (IBS) and various musical activities, with the frontal, central, parietal, and temporal lobes as the primary regions involved. The application of hyperscanning not only advances theoretical research but also holds practical significance in enhancing the effectiveness of music-based interventions in therapy and education. The review also utilizes Predictive Coding Models (PCM) to provide a new perspective for interpreting neural synchronization in music activities. To address the limitations of current research, future studies could integrate multimodal data, adopt novel technologies, use non-invasive techniques, and explore additional research directions.
Affiliation(s)
- Shate Cheng, Jiayi Wang, Ruiyi Luo, Ning Hao: Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Key Laboratory of Philosophy and Social Science of Anhui Province on Adolescent Mental Health and Crisis Intelligence Intervention, Hefei Normal University, Hefei 200062, China
5. Sihvonen AJ, Pitkäniemi A, Siponkoski ST, Kuusela L, Martínez-Molina N, Laitinen S, Särkämö ER, Pekkola J, Melkas S, Schlaug G, Sairanen V, Särkämö T. Structural Neuroplasticity Effects of Singing in Chronic Aphasia. eNeuro 2024; 11:ENEURO.0408-23.2024. PMID: 38688718. PMCID: PMC11091951. DOI: 10.1523/eneuro.0408-23.2024.
Abstract
Singing-based treatments of aphasia can improve language outcomes, but the neural benefits of group-based singing in aphasia are unknown. Here, we set out to determine the structural neuroplasticity changes underpinning group-based singing-induced treatment effects in chronic aphasia. Twenty-eight patients with at least mild nonfluent poststroke aphasia were randomized into two groups that received a 4-month multicomponent singing intervention (singing group) or standard care (control group). High-resolution T1 images and multishell diffusion-weighted MRI data were collected at two time points (baseline and 5 months). Structural gray matter (GM) and white matter (WM) neuroplasticity changes were assessed using language network region of interest-based voxel-based morphometry (VBM) and quantitative anisotropy-based connectometry, and their associations with improved language outcomes (Western Aphasia Battery Naming and Repetition) were evaluated. Connectometry analyses showed that, compared with the control group, the singing group showed enhanced structural WM connectivity in the left arcuate fasciculus (AF) and corpus callosum, as well as in the frontal aslant tract (FAT), superior longitudinal fasciculus, and corticostriatal tract bilaterally. Moreover, in VBM, the singing group showed GM volume increase in the left inferior frontal cortex (Brodmann area 44) compared with the control group. The neuroplasticity effects in the left BA44, AF, and FAT correlated with improved naming abilities after the intervention. These findings suggest that, in poststroke aphasia, singing can bring about structural neuroplasticity changes in left frontal language areas and in bilateral language pathways, which underpin treatment-induced improvement in speech production.
Affiliation(s)
- Aleksi J Sihvonen: Cognitive Brain Research Unit and Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland; School of Health and Rehabilitation Sciences, Queensland Aphasia Research Centre and UQ Centre for Clinical Research, The University of Queensland, Brisbane QLD 4072, Australia; Department of Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland
- Anni Pitkäniemi: Cognitive Brain Research Unit and Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
- Sini-Tuuli Siponkoski: Cognitive Brain Research Unit and Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
- Linda Kuusela: HUS Helsinki Medical Imaging Center, Helsinki University Hospital, Helsinki 00029, Finland
- Noelia Martínez-Molina: Cognitive Brain Research Unit and Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
- Johanna Pekkola: HUS Helsinki Medical Imaging Center, Helsinki University Hospital, Helsinki 00029, Finland
- Susanna Melkas: Department of Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland
- Gottfried Schlaug: Department of Neurology, UMass Medical School, Springfield, Massachusetts 01655; Department of Biomedical Engineering and Institute of Applied Life Sciences, UMass Amherst, Amherst, Massachusetts 01655
- Viljami Sairanen: HUS Helsinki Medical Imaging Center, Helsinki University Hospital, Helsinki 00029, Finland
- Teppo Särkämö: Cognitive Brain Research Unit and Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
6. Jacobs S, Izzetoglu M, Holtzer R. The impact of music making on neural efficiency & dual-task walking performance in healthy older adults. Neuropsychology, Development, and Cognition. Section B, Aging, Neuropsychology and Cognition 2024; 31:438-456. PMID: 36999570. PMCID: PMC10544664. DOI: 10.1080/13825585.2023.2195615.
Abstract
Music making is linked to improved cognition and related neuroanatomical changes in children and adults; however, this has been relatively under-studied in aging. The purpose of this study was to assess neural, cognitive, and physical correlates of music making in aging using a dual-task walking (DTW) paradigm. Study participants (N = 415) were healthy adults aged 65 years or older, including musicians (n = 70) who were identified by current weekly engagement in musical activity. A DTW paradigm consisting of single- and dual-task conditions, as well as portable neuroimaging (functional near-infrared spectroscopy), was administered. Outcome measures included neural activation in the prefrontal cortex assessed across task conditions by recording changes in oxygenated hemoglobin, cognitive performance, and gait velocity. Linear mixed effects models examined the impact of music making on outcome measures in addition to moderating their change between task conditions. Across participants (53.3% women; 76 ± 6.55 years), neural activation increased from single- to dual-task conditions (p < 0.001); however, musicians demonstrated attenuated activation between a single cognitive interference task and dual-task walking (p = 0.014). Musicians also displayed significantly smaller decline in behavioral performance (p < 0.001) from single- to dual-task conditions and faster gait overall (p = 0.014). Given evidence of lower prefrontal cortex activation in the context of similar or improved behavioral performance, results indicate the presence of enhanced neural efficiency in older adult musicians. Furthermore, improved dual-task performance in older adult musicians was observed. Results have important clinical implications for healthy aging, as executive functioning plays an essential role in maintaining functional ability in older adulthood.
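A rough sketch of the kind of linear mixed effects model described above, for readers who want to see its shape; this is illustrative only, not the authors' code, and the data file and column names (dtw_fnirs_long.csv, hbo, condition, musician) are assumptions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format table: one row per participant x task condition,
    # with prefrontal oxygenated-hemoglobin (HbO) change, accuracy, and gait velocity.
    df = pd.read_csv("dtw_fnirs_long.csv")

    # Does musician status moderate the change in prefrontal activation from
    # single-task to dual-task conditions? Random intercept per participant.
    model = smf.mixedlm(
        "hbo ~ C(condition) * musician + age + gender",
        data=df,
        groups=df["participant_id"],
    ).fit()
    print(model.summary())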
Affiliation(s)
- Sydney Jacobs: Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, NY, USA
- Meltem Izzetoglu: Department of Electrical and Computer Engineering, Villanova University, Villanova, PA, USA
- Roee Holtzer: Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, NY, USA; Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
7. Tang L, Xu Y, Yang S, Meng X, Du B, Sun C, Liu L, Dong Q, Nan Y. Mandarin-Speaking Amusics' Online Recognition of Tone and Intonation. Journal of Speech, Language, and Hearing Research 2024; 67:1107-1116. PMID: 38470842. DOI: 10.1044/2024_jslhr-23-00520.
Abstract
PURPOSE Congenital amusia is a neurogenetic disorder of musical pitch processing. Its linguistic consequences have been examined separately for speech intonations and lexical tones. However, in a tonal language such as Chinese, the processing of intonations and lexical tones interacts during online speech perception. Whether and how the musical pitch disorder might affect linguistic pitch processing during online speech perception remains unknown. METHOD We investigated this question with intonation (question vs. statement) and lexical tone (rising Tone 2 vs. falling Tone 4) identification tasks using the same set of sentences, comparing behavioral and event-related potential measurements between Mandarin-speaking amusics and matched controls. We specifically focused on the amusics without behavioral lexical tone deficits (the majority, i.e., pure amusics). RESULTS Results showed that, despite relatively normal performance on the word-level lexical tone test, pure amusics demonstrated poorer recognition than controls during sentence tone and intonation identification. Compared to controls, pure amusics had larger N400 amplitudes for question stimuli during the tone task and smaller P600 amplitudes during the intonation task. CONCLUSION These data indicate that the musical pitch disorder affects both tone and intonation processing during sentence processing, even for pure amusics, whose lexical tone processing was intact when tested with words.
Affiliation(s)
- Lirong Tang, Yangxiaoxue Xu, Shiting Yang, Xiangyun Meng, Boqi Du, Chen Sun, Li Liu, Qi Dong, Yun Nan: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
8. Pitkäniemi A, Särkämö T, Siponkoski ST, Brownsett SLE, Copland DA, Sairanen V, Sihvonen AJ. Hodological organization of spoken language production and singing in the human brain. Commun Biol 2023; 6:779. PMID: 37495670. PMCID: PMC10371982. DOI: 10.1038/s42003-023-05152-y.
Abstract
Theories of the neural relationship between speech and singing range from shared neural circuitry to reliance on opposite hemispheres. Yet, hodological studies exploring their shared and distinct neural networks remain scarce. In this study, we combine a white matter connectometry approach with a comprehensive and naturalistic appraisal of verbal expression during spoken language production and singing in a sample of individuals with post-stroke aphasia. Our results reveal that both spoken language production and singing are mainly supported by the left hemisphere language network and projection pathways. However, while spoken language production mostly engaged dorsal and ventral streams of speech processing, singing was associated primarily with the left ventral stream. These findings provide evidence that speech and singing share core neuronal circuitry within the left hemisphere, while distinct ventral stream contributions explain frequently observed dissociations in aphasia. Moreover, the results suggest prerequisite biomarkers for successful singing-based therapeutic interventions.
Affiliation(s)
- Anni Pitkäniemi: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki, Finland
- Teppo Särkämö: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki, Finland
- Sini-Tuuli Siponkoski: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki, Finland
- Sonia L E Brownsett: Queensland Aphasia Research Centre, Brisbane, QLD, Australia; School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia; Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, VIC, Australia
- David A Copland: Queensland Aphasia Research Centre, Brisbane, QLD, Australia; School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia; Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, VIC, Australia
- Viljami Sairanen: BABA Center, Pediatric Research Center, Department of Clinical Neurophysiology, Children's Hospital, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Aleksi J Sihvonen: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki, Finland; Queensland Aphasia Research Centre, Brisbane, QLD, Australia; School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia; Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, VIC, Australia; Department of Neurology, Helsinki University Hospital and Department of Neurosciences, University of Helsinki, Helsinki, Finland
9. Amateur singing benefits speech perception in aging under certain conditions of practice: behavioural and neurobiological mechanisms. Brain Struct Funct 2022; 227:943-962. PMID: 35013775. DOI: 10.1007/s00429-021-02433-2.
Abstract
Limited evidence has shown that practising musical activities in aging, such as choral singing, could lessen age-related speech perception in noise (SPiN) difficulties. However, the robustness and underlying mechanism of action of this phenomenon remain unclear. In this study, we used surface-based morphometry combined with a moderated mediation analytic approach to examine whether singing-related plasticity in auditory and dorsal speech stream regions is associated with better SPiN capabilities. 36 choral singers and 36 non-singers aged 20-87 years underwent cognitive, auditory, and SPiN assessments. Our results provide important new insights into experience-dependent plasticity by revealing that, under certain conditions of practice, amateur choral singing is associated with age-dependent structural plasticity within auditory and dorsal speech regions, which is associated with better SPiN performance in aging. Specifically, the conditions of practice that were associated with benefits on SPiN included frequent weekly practice at home, several hours of weekly group singing practice, singing in multiple languages, and having received formal singing training. These results suggest that amateur choral singing is associated with improved SPiN through a dual mechanism involving auditory processing and auditory-motor integration and may be dose dependent, with more intense singing associated with greater benefit. Our results, thus, reveal that the relationship between singing practice and SPiN is complex, and underscore the importance of considering singing practice behaviours in understanding the effects of musical activities on the brain-behaviour relationship.
10. What do less accurate singers remember? Pitch-matching ability and long-term memory for music. Atten Percept Psychophys 2021; 84:260-269. PMID: 34796466. DOI: 10.3758/s13414-021-02391-1.
Abstract
We have only a partial understanding of how people remember nonverbal information such as melodies. Although once learned, melodies can be retained well over long periods of time, remembering newly presented melodies is on average quite difficult. People vary considerably, however, in their level of success in both memory situations. Here, we examine a skill we anticipated would be correlated with memory for melodies: the ability to accurately reproduce pitches. Such a correlation would constitute evidence that melodic memory involves at least covert sensorimotor codes. Experiment 1 looked at episodic memory for new melodies among nonmusicians, both overall and with respect to the Vocal Memory Advantage (VMA): the superiority in remembering melodies presented as sung on a syllable compared to rendered on an instrument. Although we replicated the VMA, our prediction that better pitch matchers would have a larger VMA was not supported, although there was a modest correlation with memory for melodies presented in a piano timbre. Experiment 2 examined long-term memory for the starting pitch of familiar recorded music. Participants selected the starting note of familiar songs on a keyboard, without singing. Nevertheless, we found that better pitch-matchers were more accurate in reproducing the correct starting note. We conclude that sensorimotor coding may be used in storing and retrieving exact melodic information, but is not so useful during early encounters with melodies, as initial coding seems to involve more derived properties such as pitch contour and tonality.
11. Wang L, Pfordresher PQ, Jiang C, Liu F. Individuals with autism spectrum disorder are impaired in absolute but not relative pitch and duration matching in speech and song imitation. Autism Res 2021; 14:2355-2372. PMID: 34214243. DOI: 10.1002/aur.2569.
Abstract
Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls on absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and time errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. For speech stimuli, children imitated spoken pitch more accurately than adults, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, which involve independent implementation of relative versus absolute features. LAY SUMMARY: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear. By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitative deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
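The absolute-versus-relative distinction reported above can be made concrete with a small sketch (illustrative only, not the authors' analysis; it assumes pitches have already been reduced to note-level values in semitones):

    import numpy as np

    def pitch_matching_errors(target, produced):
        """Mean absolute pitch error and mean relative (interval) error, in semitones."""
        t = np.asarray(target, dtype=float)
        p = np.asarray(produced, dtype=float)
        absolute_error = np.mean(np.abs(p - t))                    # absolute matching
        relative_error = np.mean(np.abs(np.diff(p) - np.diff(t)))  # interval (relative) matching
        return absolute_error, relative_error

    # A melody imitated 3 semitones too high: poor absolute, perfect relative matching.
    print(pitch_matching_errors([60, 62, 64, 65], [63, 65, 67, 68]))  # (3.0, 0.0)

The same logic carries over to duration matching, with note durations in place of pitches.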
Affiliation(s)
- Li Wang: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Peter Q Pfordresher: Department of Psychology, University at Buffalo, State University of New York, Buffalo, New York, USA
- Cunmei Jiang: Music College, Shanghai Normal University, Shanghai, China
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
12. Shekari E, Goudarzi S, Shahriari E, Joghataei MT. Extreme capsule is a bottleneck for ventral pathway. IBRO Neurosci Rep 2021; 10:42-50. PMID: 33861816. PMCID: PMC8019950. DOI: 10.1016/j.ibneur.2020.11.002.
Abstract
In the neuroscience literature, the extreme capsule is considered a white matter tract. Nevertheless, it is not clear whether the extreme capsule is itself an association fiber pathway or only a bottleneck through which other association fibers pass. Based on our review of the anatomical position, connectivity, and cognitive role of the bundles in the extreme capsule, and on dissection data, it can be argued that the extreme capsule is probably a bottleneck for the passage of the uncinate fasciculus (UF) and the inferior fronto-occipital fasciculus (IFOF), and that these fasciculi are responsible for the corresponding roles in language processing.
Affiliation(s)
- Ehsan Shekari: Department of Advanced Technologies in Medicine, Iran University of Medical Science, Tehran, Iran
- Sepideh Goudarzi: Department of Pharmacology, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran
- Elahe Shahriari: Department of Physiology, Faculty of Medicine, Iran University of Medical Science, Tehran, Iran
- Mohammad Taghi Joghataei (corresponding author): Department of Advanced Technologies in Medicine, Iran University of Medical Science, Tehran, Iran
14. Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. PMID: 33690177. PMCID: PMC8738965. DOI: 10.1088/1741-2552/abecf0.
Abstract
Objective. Categorical perception (CP) of audio is critical to understand how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e. differentiates phonetic prototypes from ambiguous speech sounds). Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e. prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses on left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions [including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)] that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, motor cortex) were necessary to describe later decision stages (300-800 ms) of categorization, but these areas were highly associated with the strength of listeners' categorical hearing (i.e. slope of behavioral identification functions). Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
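As a schematic of the time-resolved decoding approach summarized above (a toy sketch under assumed placeholder inputs, not the authors' pipeline; their stability selection of source regions is omitted):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Assumed inputs: X holds trials x channels x time points of ERP data,
    # y codes prototypical (1) vs. ambiguous (0) vowel tokens; placeholders here.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 64, 100))
    y = rng.integers(0, 2, size=200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

    # Decode the speech category separately at each time point to ask *when*
    # categories become decodable, analogous to the analysis described above.
    accuracy = np.array([
        cross_val_score(clf, X[:, :, t], y, cv=5).mean()
        for t in range(X.shape[2])
    ])
    print("peak decoding accuracy:", accuracy.max().round(3))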
Affiliation(s)
- Md Sultan Mahmud: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, United States of America
15. Vaquero L, Ramos-Escobar N, Cucurell D, François C, Putkinen V, Segura E, Huotilainen M, Penhune V, Rodríguez-Fornells A. Arcuate fasciculus architecture is associated with individual differences in pre-attentive detection of unpredicted music changes. Neuroimage 2021; 229:117759. PMID: 33454403. DOI: 10.1016/j.neuroimage.2021.117759.
Abstract
The mismatch negativity (MMN) is an event-related brain potential (ERP) elicited by unpredicted sounds presented in a sequence of repeated auditory stimuli. The neural sources of the MMN have been previously attributed to a fronto-temporo-parietal network which crucially overlaps with the so-called auditory dorsal stream, involving inferior and middle frontal, inferior parietal, and superior and middle temporal regions. These cortical areas are structurally connected by the arcuate fasciculus (AF), a three-branch pathway supporting the feedback-feedforward loop involved in auditory-motor integration, auditory working memory, storage of acoustic templates, as well as comparison and update of those templates. Here, we characterized the individual differences in the white-matter macrostructural properties of the AF and explored their link to the electrophysiological marker of passive change detection gathered in a melodic multifeature MMN-EEG paradigm in 26 healthy young adults without musical training. Our results show that left fronto-temporal white-matter connectivity plays an important role in the pre-attentive detection of rhythm modulations within a melody. Previous studies have shown that this AF segment is also critical for language processing and learning. This strong coupling between structure and function in auditory change detection might be related to life-time linguistic (and possibly musical) exposure and experiences, as well as to timing processing specialization of the left auditory cortex. To the best of our knowledge, this is the first time that neurophysiological (EEG) indices and brain white-matter connectivity indices from DTI tractography have been studied together. Thus, the present results, although still exploratory, add to the existing evidence on the importance of studying the constraints imposed on cognitive functions by the underlying structural connectivity.
Affiliation(s)
- Lucía Vaquero: Laboratory of Cognitive and Computational Neuroscience, Complutense University of Madrid and Polytechnic University of Madrid, Campus Científico y Tecnológico de la UPM, Pozuelo de Alarcón, 28223 Madrid, Spain
- Neus Ramos-Escobar: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- David Cucurell: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Clément François: Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Vesa Putkinen: Turku PET Centre, University of Turku, Turku, Finland
- Emma Segura: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Minna Huotilainen: Cicero Learning and Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Virginia Penhune: Penhune Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Antoni Rodríguez-Fornells: Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
16. Rini J, Ochoa J. Mapping musical automatism: Further insights from epileptic high-frequency oscillation analysis. Neurology and Clinical Neuroscience 2020; 8:177-182. PMID: 33425352. PMCID: PMC7793560. DOI: 10.1111/ncn3.12375.
Abstract
As ictal semiology is increasingly understood to arise from epileptogenic networks, high-frequency oscillation propagation patterns are helping elucidate networks relevant for surgical planning. Musical automatisms, a well-documented but very rare phenomenon of epilepsy, have yet to be examined as a manifestation of high-frequency propagation in the public literature. In our current study, we report a rare case of intractable epilepsy with ictal humming whose epileptogenic zone was associated with the non-dominant left anterior medial temporal region. Mapping our case's ictal semiology and high-frequency propagation pattern both facilitated treatment and further supported prior observations that the rare phenomena of musical automatisms localize to a non-dominant frontal-temporal network rather than a specific cortical territory.
Affiliation(s)
- James Rini: Behavioral Neurology, Memory and Aging Center, University of California, San Francisco, San Francisco, CA, USA
- Juan Ochoa: Department of Neurology, University of South Alabama Medical Center, Mobile, AL, USA
17. Belfi AM, Loui P. Musical anhedonia and rewards of music listening: current advances and a proposed model. Ann N Y Acad Sci 2019; 1464:99-114. PMID: 31549425. DOI: 10.1111/nyas.14241.
Abstract
Music frequently elicits intense emotional responses, a phenomenon that has been scrutinized from multiple disciplines that span the sciences and arts. While most people enjoy music and find it rewarding, there is substantial individual variability in the experience and degree of music-induced reward. Here, we review current work on the neural substrates of hedonic responses to music. In particular, we focus the present review on specific musical anhedonia, a selective lack of pleasure from music. Based on evidence from neuroimaging, neuropsychology, and brain stimulation studies, we derive a neuroanatomical model of the experience of pleasure during music listening. Our model posits that hedonic responses to music are the result of connectivity between structures involved in auditory perception as a predictive process, and those involved in the brain's dopaminergic reward system. We conclude with open questions and implications of this model for future research on why humans appreciate music.
Affiliation(s)
- Amy M Belfi: Department of Psychological Science, Missouri University of Science and Technology, Rolla, Missouri
- Psyche Loui: Department of Music and Department of Psychology, Northeastern University, Boston, Massachusetts
18. Sihvonen AJ, Särkämö T, Rodríguez-Fornells A, Ripollés P, Münte TF, Soinila S. Neural architectures of music - Insights from acquired amusia. Neurosci Biobehav Rev 2019; 107:104-114. PMID: 31479663. DOI: 10.1016/j.neubiorev.2019.08.023.
Abstract
The ability to perceive and produce music is a quintessential element of human life, present in all known cultures. Modern functional neuroimaging has revealed that music listening activates a large-scale bilateral network of cortical and subcortical regions in the healthy brain. Even the most accurate structural studies do not reveal which brain areas are critical and causally linked to music processing. Such questions may be answered by analysing the effects of focal brain lesions on patients' ability to perceive music. In this sense, acquired amusia after stroke provides a unique opportunity to investigate the neural architectures crucial for normal music processing. Based on the first large-scale longitudinal studies on stroke-induced amusia using modern multi-modal magnetic resonance imaging (MRI) techniques, such as advanced lesion-symptom mapping, grey and white matter morphometry, tractography and functional connectivity, we discuss neural structures critical for music processing, consider music processing in light of the dual-stream model in the right hemisphere, and propose a neural model for acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen: Department of Neurosciences, University of Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Teppo Särkämö: Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Antoni Rodríguez-Fornells: Department of Cognition, University of Barcelona, Cognition & Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Pablo Ripollés: Department of Psychology, New York University and Music and Audio Research Laboratory, New York University, USA
- Thomas F Münte: Department of Neurology and Institute of Psychology II, University of Lübeck, Germany
- Seppo Soinila: Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
19. Leo V, Sihvonen AJ, Linnavalli T, Tervaniemi M, Laine M, Soinila S, Särkämö T. Cognitive and neural mechanisms underlying the mnemonic effect of songs after stroke. NeuroImage: Clinical 2019; 24:101948. PMID: 31419766. PMCID: PMC6706631. DOI: 10.1016/j.nicl.2019.101948.
Abstract
Sung melody provides a mnemonic cue that can enhance the acquisition of novel verbal material in healthy subjects. Recent evidence suggests that stroke patients, too, especially those with mild aphasia, can learn and recall novel narrative stories better when they are presented in sung than spoken format. Extending this finding, the present study explored the cognitive mechanisms underlying this effect by determining whether learning and recall of novel sung vs. spoken stories show a differential pattern of serial position effects (SPEs) and chunking effects in non-aphasic and aphasic stroke patients (N = 31) studied 6 months post-stroke. The structural neural correlates of these effects were also explored using voxel-based morphometry (VBM) and deterministic tractography (DT) analyses of structural MRI data. Non-aphasic patients showed more stable recall with reduced SPEs in the sung than spoken task, which was coupled with greater volume and integrity (indicated by fractional anisotropy, FA) of the left arcuate fasciculus. In contrast, compared to non-aphasic patients, the aphasic patients showed a larger recency effect (better recall of the last vs. middle part of the story) and enhanced chunking (larger units of correctly recalled consecutive items) in the sung than spoken task. In aphasics, the enhanced chunking and better recall of the middle verse in the sung vs. spoken task also correlated with better ability to perceive emotional prosody in speech. Neurally, the sung > spoken recency effect in aphasic patients was coupled with greater grey matter volume in a bilateral network of temporal, frontal, and parietal regions and also greater volume of the right inferior fronto-occipital fasciculus (IFOF). These results provide novel cognitive and neurobiological insight on how a repetitive sung melody can function as a verbal mnemonic aid after stroke.
Highlights:
- Non-aphasic stroke patients show more stable recall of sung than spoken stories.
- Aphasic patients show larger recency and chunking effects for sung vs. spoken stories.
- The left dorsal pathway mediates better recall of sung stories in non-aphasics.
- The right ventral pathway mediates better recall of sung stories in aphasics.
- Large-scale bilateral cortical networks are linked to musical mnemonics in aphasia.
Collapse
Affiliation(s)
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Neurosciences, Faculty of Medicine, University of Helsinki, Finland
| | - Tanja Linnavalli
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; CICERO Learning, University of Helsinki, Finland
| | - Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
| | - Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
| |
Collapse
|
20
|
Hartwigsen G, Scharinger M, Sammler D. Editorial: Modulating Cortical Dynamics in Language, Speech and Music. Front Integr Neurosci 2018; 12:58. [PMID: 30538623 PMCID: PMC6277569 DOI: 10.3389/fnint.2018.00058] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 11/06/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Gesa Hartwigsen
- Research Group Modulation of Language Networks, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Mathias Scharinger
- Phonetics Research Group, Department of German Linguistics, Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Marburg, Germany
| | - Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
21
|
Loui P. Rapid and flexible creativity in musical improvisation: review and a model. Ann N Y Acad Sci 2018; 1423:138-145. [PMID: 29577331 DOI: 10.1111/nyas.13628] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Revised: 01/05/2018] [Accepted: 01/11/2018] [Indexed: 11/30/2022]
Abstract
Creativity has been defined as the ability to produce output that is novel, useful, beneficial, and desired by an audience. But what is musical creativity, and relatedly, to what extent does creativity depend on domain-general or domain-specific neural and cognitive processes? To what extent can musical creativity be taught? To answer these questions using a reductionist scientific approach, we must attempt to isolate the creative process as it pertains to music. Recent work in the neuroscience of creativity has turned to musical improvisation as a window into the real-time musical creative process in the brain. Here, I provide an overview of recent research in the neuroscience of musical improvisation, focusing especially on multimodal neuroimaging studies. This research informs a model of creativity as a combination of generative and reactive processes that coordinate their functions to give rise to perpetually novel and aesthetically rewarding improvised musical output.
Collapse
Affiliation(s)
- Psyche Loui
- Department of Psychology and Program in Neuroscience & Behavior, Wesleyan University, Middletown, Connecticut
| |
Collapse
|
22
|
Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia. Cortex 2017; 97:255-273. [DOI: 10.1016/j.cortex.2017.09.028] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 06/08/2017] [Accepted: 09/29/2017] [Indexed: 11/17/2022]
|
23
|
Loui P, Patterson S, Sachs ME, Leung Y, Zeng T, Przysinda E. White Matter Correlates of Musical Anhedonia: Implications for Evolution of Music. Front Psychol 2017; 8:1664. [PMID: 28993748 PMCID: PMC5622186 DOI: 10.3389/fpsyg.2017.01664] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2017] [Accepted: 09/11/2017] [Indexed: 12/27/2022] Open
Abstract
Recent theoretical advances in the evolution of music posit that affective communication is an evolutionary function of music through which the mind and brain are transformed. A rigorous test of this view should entail examining the neuroanatomical mechanisms for affective communication of music, specifically by comparing individual differences in the general population with a special population who lacks specific affective responses to music. Here we compare white matter connectivity in BW, a case with severe musical anhedonia, with a large sample of control subjects who exhibit normal variability in reward sensitivity to music. We show for the first time that structural connectivity within the reward system can predict individual differences in musical reward in a large population, but that specific patterns of connectivity between auditory and reward systems distinguish an extreme case of specific musical anhedonia. Results support and extend the Mixed Origins of Music theory by identifying multiple neural pathways through which music might operate as an affective signaling system.
Collapse
Affiliation(s)
- Psyche Loui
- Music, Imaging and Neural Dynamics Lab, Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, United States
| | - Sean Patterson
- Music, Imaging and Neural Dynamics Lab, Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, United States
| | - Matthew E. Sachs
- Department of Psychology, Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
| | - Yvonne Leung
- Music, Imaging and Neural Dynamics Lab, Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, United States
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
| | - Tima Zeng
- Music, Imaging and Neural Dynamics Lab, Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, United States
| | - Emily Przysinda
- Music, Imaging and Neural Dynamics Lab, Department of Psychology, Program in Neuroscience and Behavior, Wesleyan University, Middletown, CT, United States
| |
Collapse
|
24
|
Sihvonen AJ, Särkämö T, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S. Functional neural changes associated with acquired amusia across different stages of recovery after stroke. Sci Rep 2017; 7:11390. [PMID: 28900231 PMCID: PMC5595783 DOI: 10.1038/s41598-017-11841-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2017] [Accepted: 08/30/2017] [Indexed: 11/09/2022] Open
Abstract
Brain damage causing acquired amusia disrupts the functional music processing system, creating a unique opportunity to investigate the critical neural architectures of musical processing in the brain. In this longitudinal fMRI study of stroke patients (N = 41) with a 6-month follow-up, we used natural vocal music (sung with lyrics) and instrumental music stimuli to uncover brain activation and functional network connectivity changes associated with acquired amusia and its recovery. In the acute stage, amusic patients exhibited decreased activation in right superior temporal areas compared to non-amusic patients during instrumental music listening. During the follow-up, the activation deficits expanded to comprise a widespread bilateral frontal, temporal, and parietal network. The amusics showed smaller activation deficits to vocal music, suggesting preserved processing of singing in the amusic brain. Compared to non-recovered amusics, recovered amusics showed increased activation to instrumental music in bilateral frontoparietal areas at 3 months and in right middle and inferior frontal areas at 6 months. Amusia recovery was also associated with increased functional connectivity in right and left frontoparietal attention networks to instrumental music. Overall, our findings reveal the dynamic nature of deficient activation and connectivity patterns in acquired amusia and highlight the role of dorsal networks in amusia recovery.
Collapse
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, 20520, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
| | - Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, 10003, NY, USA
| | - Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
| | - Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, 20521, Turku, Finland
| | - Riitta Parkkola
- Department of Radiology, Turku University and Turku University Hospital, 20521, Turku, Finland
| | - Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain
| | - Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, 20521, Turku, Finland
| |
Collapse
|
25
|
Sensorimotor Mismapping in Poor-pitch Singing. J Voice 2017; 31:645.e23-645.e32. [DOI: 10.1016/j.jvoice.2017.02.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2016] [Revised: 02/22/2017] [Accepted: 02/24/2017] [Indexed: 11/19/2022]
|
26
|
Verbal and musical short-term memory: Variety of auditory disorders after stroke. Brain Cogn 2017; 113:10-22. [PMID: 28088063 DOI: 10.1016/j.bandc.2017.01.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2016] [Revised: 01/01/2017] [Accepted: 01/02/2017] [Indexed: 12/28/2022]
Abstract
Auditory cognitive deficits after stroke may concern language and/or music processing, resulting in aphasia and/or amusia. The aim of the present study was to assess the potential deficits of auditory short-term memory for verbal and musical material after stroke and their underlying cerebral correlates with a Voxel-based Lesion Symptom Mapping (VLSM) approach. Patients with an ischemic stroke in the right (N=10) or left (N=10) middle cerebral artery territory and matched control participants (N=14) were tested with a detailed neuropsychological assessment including global cognitive functions, music perception and language tasks. All participants then performed verbal and musical auditory short-term memory (STM) tasks that were implemented in the same way for both materials. Participants had to indicate whether series of four words or four tones, presented in pairs, were the same or different. To detect domain-general STM deficits, they also had to perform a visual STM task. Behavioral results showed that patients had lower performance on the STM tasks in comparison with control participants, regardless of the material (words, tones, visual) and the lesion side. The individual patient data showed a double dissociation, with some patients exhibiting verbal deficits without musical deficits and others showing the reverse pattern. Exploratory VLSM analyses suggested that dorsal pathways are involved in verbal (phonetic), musical (melodic), and visual STM, while the ventral auditory pathway is involved in musical STM.
Collapse
|
27
|
Neural networks for harmonic structure in music perception and action. Neuroimage 2016; 142:454-464. [DOI: 10.1016/j.neuroimage.2016.08.025] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2016] [Revised: 06/30/2016] [Accepted: 08/15/2016] [Indexed: 01/21/2023] Open
|
28
|
Peretz I. Neurobiology of Congenital Amusia. Trends Cogn Sci 2016; 20:857-867. [DOI: 10.1016/j.tics.2016.09.002] [Citation(s) in RCA: 82] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Revised: 08/26/2016] [Accepted: 09/06/2016] [Indexed: 01/05/2023]
|
29
|
Liu F, Chan AHD, Ciocca V, Roquet C, Peretz I, Wong PCM. Pitch perception and production in congenital amusia: Evidence from Cantonese speakers. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:563. [PMID: 27475178 PMCID: PMC4958102 DOI: 10.1121/1.4955182] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/28/2015] [Revised: 06/14/2016] [Accepted: 06/21/2016] [Indexed: 06/06/2023]
Abstract
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Although they demonstrated higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general, language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
Collapse
Affiliation(s)
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading RG6 6AL, United Kingdom
| | - Alice H D Chan
- Division of Linguistics and Multilingual Studies, School of Humanities and Social Sciences, Nanyang Technological University, Singapore 637332, Singapore
| | - Valter Ciocca
- School of Audiology and Speech Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Catherine Roquet
- International Laboratory for Brain, Music and Sound Research, Université de Montréal, Montreal, Quebec, Canada
| | - Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research, Université de Montréal, Montreal, Quebec, Canada
| | - Patrick C M Wong
- Department of Linguistics and Modern Languages and Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
30
|
Neural Mechanisms Underlying Musical Pitch Perception and Clinical Applications Including Developmental Dyslexia. Curr Neurol Neurosci Rep 2016; 15:51. [PMID: 26092314 DOI: 10.1007/s11910-015-0574-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensorimotor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing, the behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia.
Collapse
|
31
|
Belyk M, Pfordresher PQ, Liotti M, Brown S. The Neural Basis of Vocal Pitch Imitation in Humans. J Cogn Neurosci 2015; 28:621-35. [PMID: 26696298 DOI: 10.1162/jocn_a_00914] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Vocal imitation is a phenotype that is unique to humans among all primate species, and so an understanding of its neural basis is critical in explaining the emergence of both speech and song in human evolution. Two principal neural models of vocal imitation have emerged from a consideration of nonhuman animals. One hypothesis suggests that putative mirror neurons in the inferior frontal gyrus pars opercularis of Broca's area may be important for imitation. An alternative hypothesis derived from the study of songbirds suggests that the corticostriate motor pathway performs sensorimotor processes that are specific to vocal imitation. Using fMRI with a sparse event-related sampling design, we investigated the neural basis of vocal imitation in humans by comparing imitative vocal production of pitch sequences with both nonimitative vocal production and pitch discrimination. The strongest difference between these tasks was found in the putamen bilaterally, providing a striking parallel to the role of the analogous region in songbirds. Other areas preferentially activated during imitation included the orofacial motor cortex, Rolandic operculum, and SMA, which together outline the corticostriate motor loop. No differences were seen in the inferior frontal gyrus. The corticostriate system thus appears to be the central pathway for vocal imitation in humans, as predicted from an analogy with songbirds.
Collapse
|