1
Lesiuk T, Dillon K, Ripani G, Iliadis I, Perez G, Levin B, Sun X, McIntosh R. Fractional amplitude of low-frequency fluctuations during music-evoked autobiographical memories in neurotypical older adults. Front Neurosci 2025; 18:1479150. [PMID: 39917247 PMCID: PMC11800146 DOI: 10.3389/fnins.2024.1479150]
Abstract
Introduction: Researchers have shown that music-evoked autobiographical memories (MEAMs) can stimulate long-term memory mechanisms while requiring little retrieval effort, and may therefore be used in promising non-pharmacological interventions to mitigate memory deficits. Despite an increasing number of studies on MEAMs, few researchers have explored how MEAMs are bound in the brain.
Methods: In the current study, activation indexed by the fractional amplitude of low-frequency fluctuations (fALFF) during familiar and unfamiliar MEAM retrieval was compared in a sample of 24 healthy older adults. Additionally, we aimed to investigate the impact of age-related gray matter volume (GMV) reduction in key regions associated with MEAM-related activation. In addition to a T1 structural scan, neuroimaging data were collected while participants listened to familiar music (MEAM retrieval) versus unfamiliar music.
Results: When listening to familiar compared to unfamiliar music, greater fALFF activation patterns were observed in the right parahippocampal gyrus, controlling for age and GMV. The current findings for the familiar (MEAM) condition have implications for cognitive aging, as persons experiencing age-related memory decline are particularly susceptible to volumetric reduction in the parahippocampal cortex. Post-hoc analyses to explore correlations between brain activity and the content of MEAMs were performed using the text analysis program Linguistic Inquiry and Word Count.
Discussion: Our findings suggest that MEAM-related activation of the parahippocampal cortex is evident in normative older adults. However, it is yet to be determined whether such brain states are attainable in older adult populations diagnosed with mild cognitive impairment and/or prodromal Alzheimer's disease.
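For readers unfamiliar with the fALFF measure used above, the sketch below computes it for a single voxel's BOLD time series as the ratio of amplitude in a low-frequency band (0.01-0.08 Hz) to the amplitude over the full spectrum; the TR, band edges, and synthetic data are illustrative assumptions, not the study's parameters.

```python
# Minimal fALFF sketch (assumed TR and band edges; not the authors' pipeline).
import numpy as np

def falff(bold, tr=2.0, low=0.01, high=0.08):
    """Fractional ALFF: amplitude in [low, high] Hz divided by total amplitude."""
    bold = np.asarray(bold, dtype=float)
    bold = bold - bold.mean()                     # remove mean before the FFT
    freqs = np.fft.rfftfreq(bold.size, d=tr)      # frequency axis in Hz
    amp = np.abs(np.fft.rfft(bold))               # amplitude spectrum
    band = (freqs >= low) & (freqs <= high)
    total = amp[freqs > 0].sum()                  # exclude the DC bin
    return amp[band].sum() / total if total > 0 else np.nan

# Example with synthetic data: 200 volumes at TR = 2 s
rng = np.random.default_rng(0)
ts = rng.standard_normal(200) + np.sin(2 * np.pi * 0.05 * np.arange(200) * 2.0)
print(falff(ts, tr=2.0))
```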
Affiliation(s)
- Teresa Lesiuk
- Department of Music Therapy, Frost School of Music, University of Miami, Coral Gables, FL, United States
- Kaitlyn Dillon
- Department of Psychology, University of Miami, Coral Gables, FL, United States
- Giulia Ripani
- Department of Music Therapy, Frost School of Music, University of Miami, Coral Gables, FL, United States
- Ioannis Iliadis
- Department of Music Therapy, Frost School of Music, University of Miami, Coral Gables, FL, United States
- Gabriel Perez
- Department of Music Therapy, Frost School of Music, University of Miami, Coral Gables, FL, United States
- Bonnie Levin
- Department of Neurology, Miller School of Medicine, University of Miami, Miami, FL, United States
- Xiaoyan Sun
- Department of Neurology, Miller School of Medicine, University of Miami, Miami, FL, United States
- Roger McIntosh
- Department of Psychology, University of Miami, Coral Gables, FL, United States
2
Bedford O, Noly-Gandon A, Ara A, Wiesman AI, Albouy P, Baillet S, Penhune V, Zatorre RJ. Human Auditory-Motor Networks Show Frequency-Specific Phase-Based Coupling in Resting-State MEG. Hum Brain Mapp 2025; 46:e70045. [PMID: 39757971 DOI: 10.1002/hbm.70045]
Abstract
Perception and production of music and speech rely on auditory-motor coupling, a mechanism which has been linked to temporally precise oscillatory coupling between auditory and motor regions of the human brain, particularly in the beta frequency band. Recently, brain imaging studies using magnetoencephalography (MEG) have also shown that accurate auditory temporal predictions specifically depend on phase coherence between auditory and motor cortical regions. However, it is not yet clear whether this tight oscillatory phase coupling is an intrinsic feature of the auditory-motor loop, or whether it is only elicited by task demands. Further, we do not know if phase synchrony is uniquely enhanced in the auditory-motor system compared to other sensorimotor modalities, or to which degree it is amplified by musical training. In order to resolve these questions, we measured the degree of phase locking between motor regions and auditory or visual areas in musicians and non-musicians using resting-state MEG. We derived phase locking values (PLVs) and phase transfer entropy (PTE) values from 90 healthy young participants. We observed significantly higher PLVs across all auditory-motor pairings compared to all visuomotor pairings in all frequency bands. The pairing with the highest degree of phase synchrony was right primary auditory cortex with right ventral premotor cortex, a connection which has been highlighted in previous literature on auditory-motor coupling. Additionally, we observed that auditory-motor and visuomotor PLVs were significantly higher across all structures in the right hemisphere, and we found the highest differences between auditory and visual PLVs in the theta, alpha, and beta frequency bands. Last, we found that the theta and beta bands exhibited a preference for a motor-to-auditory PTE direction and that the alpha and gamma bands exhibited the opposite preference for an auditory-to-motor PTE direction. Taken together, these findings confirm our hypotheses that motor phase synchrony is significantly enhanced in auditory compared to visual cortical regions at rest, that these differences are highest across the theta-beta spectrum of frequencies, and that there exist alternating information flow loops across auditory-motor structures as a function of frequency. In our view, this supports the existence of an intrinsic, time-based coupling for low-latency integration of sounds and movements which involves synchronized phasic activity between primary auditory cortex with motor and premotor cortical areas.
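As a rough illustration of the phase-locking value (PLV) reported above, the sketch below computes PLV between two band-limited signals via the Hilbert transform; the sampling rate, band edges, and synthetic signals are placeholder assumptions, not the authors' MEG source-space pipeline.

```python
# Minimal PLV sketch between two sensor/source time series (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(13.0, 30.0)):
    """Phase-locking value of x and y after band-pass filtering (e.g., beta band)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))        # instantaneous phase of x
    phy = np.angle(hilbert(filtfilt(b, a, y)))        # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phx - phy))))  # length of mean phase-difference vector

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.3) + 0.5 * rng.standard_normal(t.size)  # phase-shifted copy
print(plv(x, y, fs))   # close to 1 for consistently phase-locked signals
```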
Affiliation(s)
- Oscar Bedford
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
- Alix Noly-Gandon
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
- Alberto Ara
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
- Alex I Wiesman
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Philippe Albouy
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, Quebec, Canada
- Sylvain Baillet
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Virginia Penhune
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
- Department of Psychology, Concordia University, Montréal, Quebec, Canada
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montréal, Quebec, Canada
3
Yusif Rodriguez N, Ahuja A, Basu D, McKim TH, Desrochers TM. Different Subregions of Monkey Lateral Prefrontal Cortex Respond to Abstract Sequences and Their Components. J Neurosci 2024; 44:e1353242024. [PMID: 39379151 PMCID: PMC11580767 DOI: 10.1523/jneurosci.1353-24.2024]
Abstract
Sequential information permeates daily activities, such as when watching for the correct series of buildings to determine when to get off the bus or train. These sequences include periodicity (the spacing of the buildings), the identity of the stimuli (the kind of house), and higher-order more abstract rules that may not depend on the exact stimulus (e.g., house, house, house, business). Previously, we found that the posterior fundus of area 46 in the monkey lateral prefrontal cortex (LPFC) responds to rule changes in such abstract visual sequences. However, it is unknown if this region responds to other components of the sequence, i.e., image periodicity and identity, in isolation. Further, it is unknown if this region dissociates from other, more ventral LPFC subregions that have been associated with sequences and their components. To address these questions, we used awake functional magnetic resonance imaging in three male macaque monkeys during two no-report visual tasks. One task contained abstract visual sequences, and the other contained no visual sequences but maintained the same image periodicity and identities. We found the fundus of area 46 responded only to abstract sequence rule violations. In contrast, the ventral bank of area 46 responded to changes in image periodicity and identity, but not changes in the abstract sequence. These results suggest a functional specialization within anatomical substructures of LPFC to signal different kinds of stimulus regularities. This specialization may provide key scaffolding to identify abstract patterns and construct complex models of the world for daily living.
Affiliation(s)
- Aarit Ahuja
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Debaleena Basu
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Theresa H McKim
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Theresa M Desrochers
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Department of Psychiatry and Human Behavior, Brown University, Providence, Rhode Island 02912
- Robert J. and Nancy D. Carney Institute for Brain Sciences, Brown University, Providence, Rhode Island 02912
4
Malekmohammadi A, Cheng G. Music Familiarization Elicits Functional Connectivity Between Right Frontal/Temporal and Parietal Areas in the Theta and Alpha Bands. Brain Topogr 2024; 38:2. [PMID: 39367155 PMCID: PMC11452474 DOI: 10.1007/s10548-024-01081-z]
Abstract
Frequent listening to unfamiliar music excerpts forms functional connectivity in the brain as music becomes familiar and memorable. However, where these connections spectrally arise in the cerebral cortex during music familiarization has yet to be determined. This study investigates electrophysiological changes in phase-based functional connectivity recorded with electroencephalography (EEG) from twenty participants while they passively listened three times to initially unknown classical music excerpts. Functional connectivity is evaluated by measuring phase synchronization, using the weighted phase lag index (WPLI) in different frequency bands, between all pairwise combinations of EEG electrodes across all repetitions (via repeated-measures ANOVA) and between every two repetitions of listening to unknown music. The results indicate increased phase synchronization during gradual short-term familiarization between the right frontal and the right parietal areas in the theta and alpha bands. In addition, increased phase synchronization is found between the right temporal and right parietal areas in the theta band during gradual music familiarization. Overall, this study explores short-term music familiarization effects on neural responses by revealing that repetitions form phasic coupling in the theta and alpha bands in the right hemisphere during passive listening.
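To make the weighted phase lag index (WPLI) concrete, the simplified sketch below estimates it for one electrode pair from the imaginary part of the trial-wise cross-spectrum after band-pass filtering; the trial structure, frequency band, and data are illustrative assumptions rather than the authors' exact analysis.

```python
# Simplified WPLI sketch for one channel pair across trials (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def wpli(x_trials, y_trials, fs, band=(5.0, 9.0)):
    """Weighted phase lag index from trial-wise analytic signals (e.g., theta band)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    imags = []
    for x, y in zip(x_trials, y_trials):
        ax = hilbert(filtfilt(b, a, x))            # analytic signal, channel 1
        ay = hilbert(filtfilt(b, a, y))            # analytic signal, channel 2
        imags.append(np.imag(ax * np.conj(ay)))    # imaginary part of the cross-spectrum
    imx = np.concatenate(imags)
    denom = np.mean(np.abs(imx))
    return np.abs(np.mean(imx)) / denom if denom > 0 else 0.0

fs = 250.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
trials_x = [np.sin(2 * np.pi * 7 * t) + rng.standard_normal(t.size) for _ in range(30)]
trials_y = [np.sin(2 * np.pi * 7 * t + 1.0) + rng.standard_normal(t.size) for _ in range(30)]
print(wpli(trials_x, trials_y, fs))
```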
Affiliation(s)
- Alireza Malekmohammadi
- Electrical Engineering, Institute for Cognitive Systems, Technical University of Munich, 80333, Munich, Germany.
- Gordon Cheng
- Electrical Engineering, Institute for Cognitive Systems, Technical University of Munich, 80333, Munich, Germany
5
Rodriguez NY, Ahuja A, Basu D, McKim TH, Desrochers TM. Different subregions of monkey lateral prefrontal cortex respond to abstract sequences and their components. bioRxiv 2024:2024.02.13.580192. [PMID: 38405897 PMCID: PMC10888850 DOI: 10.1101/2024.02.13.580192]
Abstract
Sequential information permeates daily activities, such as when watching for the correct series of buildings to determine when to get off the bus or train. These sequences include periodicity (the spacing of the buildings), the identity of the stimuli (the kind of house), and higher-order more abstract rules that may not depend on the exact stimulus (e.g. house, house, house, business). Previously, we found that the posterior fundus of area 46 in the monkey lateral prefrontal cortex (LPFC) responds to rule changes in such abstract visual sequences. However, it is unknown if this region responds to other components of the sequence, i.e., image periodicity and identity, in isolation. Further, it is unknown if this region dissociates from other, more ventral LPFC subregions that have been associated with sequences and their components. To address these questions, we used awake functional magnetic resonance imaging in three male macaque monkeys during two no-report visual tasks. One task contained abstract visual sequences, and the other contained no visual sequences but maintained the same image periodicity and identities. We found the fundus of area 46 responded only to abstract sequence rule violations. In contrast, the ventral bank of area 46 responded to changes in image periodicity and identity, but not changes in the abstract sequence. These results suggest a functional specialization within anatomical substructures of LPFC to signal different kinds of stimulus regularities. This specialization may provide key scaffolding to identify abstract patterns and construct complex models of the world for daily living. Significance Statement Daily tasks, such as a bus commute, require tracking or monitoring your place (same, same, same, different building) until your stop. Sequence components such as rule, periodicity (timing), and item identity are involved in this process. While prior work located responses to sequence rule changes to area 46 of monkey lateral prefrontal cortex (LPFC) using awake monkey fMRI, less was known about other components. We found that LPFC subregions differentiated between sequence components. Area 46 posterior fundus responded to abstract visual sequence rule changes, but not to changes in image periodicity or identity. The converse was true for the more ventral, adjacent shoulder region. These results suggest that interactions between adjacent LPFC subregions provide key scaffolding for complex daily behaviors.
Affiliation(s)
- Aarit Ahuja
- Department of Neuroscience, Brown University
- Debaleena Basu
- Department of Neuroscience, Brown University
- Department of Biosciences and Bioengineering, IIT Bombay, Mumbai, Maharashtra, India
- Theresa H McKim
- Department of Biology & Institute for Neuroscience, University of Nevada, Reno
- Theresa M Desrochers
- Department of Neuroscience, Brown University
- Department of Psychiatry and Human Behavior, Brown University
- Robert J. and Nancy D. Carney Institute for Brain Sciences, Brown University
6
Della Vedova G, Proverbio AM. Neural signatures of imaginary motivational states: desire for music, movement and social play. Brain Topogr 2024; 37:806-825. [PMID: 38625520 PMCID: PMC11393278 DOI: 10.1007/s10548-024-01047-1]
Abstract
The literature has demonstrated the potential for detecting accurate electrical signals that correspond to the will or intention to move, as well as decoding the thoughts of individuals who imagine houses, faces or objects. This investigation examines the presence of precise neural markers of imagined motivational states by combining electrophysiological and neuroimaging methods. Twenty participants were instructed to vividly imagine the desire to move, listen to music or engage in social activities. Their EEG was recorded from 128 scalp sites and analysed using individual standardized Low-Resolution Brain Electromagnetic Tomographies (LORETAs) in the N400 time window (400-600 ms). The activation of 1056 voxels was examined in relation to the three motivational states. The most active dipoles were grouped in eight regions of interest (ROI), including Occipital, Temporal, Fusiform, Premotor, Frontal, OBF/IF, Parietal, and Limbic areas. The statistical analysis revealed that all motivational imaginary states engaged the right hemisphere more than the left hemisphere. Distinct markers were identified for the three motivational states. Specifically, the right temporal area was more relevant for "Social Play", the orbitofrontal/inferior frontal cortex for listening to music, and the left premotor cortex for the "Movement" desire. This outcome is encouraging in terms of the potential use of neural indicators in the realm of brain-computer interfaces, for interpreting the thoughts and desires of individuals with locked-in syndrome.
Affiliation(s)
- Giada Della Vedova
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano, Bicocca, Italy
- Alice Mado Proverbio
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano, Bicocca, Italy.
- NeuroMI, Milan Center for Neuroscience, Milan, Italy.
- Department of Psychology of University of Milano-Bicocca, Piazza dell'Ateneo nuovo 1, Milan, 20162, Italy.
7
Vuong V, Hewan P, Perron M, Thaut MH, Alain C. The neural bases of familiar music listening in healthy individuals: An activation likelihood estimation meta-analysis. Neurosci Biobehav Rev 2023; 154:105423. [PMID: 37839672 DOI: 10.1016/j.neubiorev.2023.105423]
Abstract
Accumulating evidence suggests that the neural activations during music listening differ as a function of familiarity with the excerpts. However, the implicated brain areas are unclear. After an extensive literature search, we conducted an Activation Likelihood Estimation analysis on 23 neuroimaging studies (232 foci, 364 participants) to identify consistently activated brain regions when healthy adults listen to familiar music, compared to unfamiliar music or an equivalent condition. The results revealed a left cortical-subcortical co-activation pattern comprising three significant clusters localized to the supplementary motor areas (BA 6), inferior frontal gyrus (IFG, BA 44), and the claustrum/insula. Our results are discussed in a predictive coding framework, whereby temporal expectancies and familiarity may drive motor activations, even in the absence of any overt movement. Though the IFG is conventionally associated with syntactic violation, our observed activation there may support a recent proposal of its involvement in a network that subserves both violation and prediction. Finally, the claustrum/insula plays an integral role in auditory processing, functioning as a hub that integrates sensory and limbic information to (sub)cortical structures.
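For orientation, activation likelihood estimation (ALE) can be caricatured as: model each study's reported foci as Gaussian probability blobs, combine them into one modeled activation (MA) map per study, and take the voxel-wise union across studies. The toy sketch below does this on a small grid with an arbitrary kernel width; it omits the sample-size-dependent kernels, null distributions, and cluster-level inference used by real ALE software.

```python
# Toy ALE sketch: union of per-study modeled activation maps (not a real meta-analysis).
import numpy as np

def gaussian_blob(shape, center, sigma):
    """3-D Gaussian probability map centered on one reported focus (voxel coordinates)."""
    zz, yy, xx = np.indices(shape)
    d2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    return np.exp(-d2 / (2 * sigma**2))

def ale_map(studies, shape=(20, 20, 20), sigma=2.0):
    """MA map per study = max over its foci; ALE = 1 - prod(1 - MA) across studies."""
    ma_maps = []
    for foci in studies:
        blobs = np.stack([gaussian_blob(shape, f, sigma) for f in foci])
        ma_maps.append(blobs.max(axis=0))          # modeled activation map for this study
    ma = np.stack(ma_maps)
    return 1.0 - np.prod(1.0 - ma, axis=0)         # voxel-wise union across studies

# Two hypothetical studies, each reporting a few foci in voxel space
studies = [[(10, 10, 10), (5, 5, 5)], [(10, 11, 9), (15, 15, 15)]]
ale = ale_map(studies)
print(ale.max(), np.unravel_index(ale.argmax(), ale.shape))
```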
Affiliation(s)
- Veronica Vuong
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada.
- Patrick Hewan
- Department of Psychology, York University, Toronto, ON M3J 1P3, Canada
- Maxime Perron
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Michael H Thaut
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada; Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada
- Claude Alain
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
8
Malekmohammadi A, Ehrlich SK, Cheng G. Modulation of theta and gamma oscillations during familiarization with previously unknown music. Brain Res 2023; 1800:148198. [PMID: 36493897 DOI: 10.1016/j.brainres.2022.148198]
Abstract
Repeated listening to unknown music leads to gradual familiarization with musical sequences. Passively listening to musical sequences could involve an array of dynamic neural responses in reaching familiarization with the musical excerpts. This study elucidates the dynamic brain response and its variation over time by investigating the electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10 s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations are monitored by time-frequency analyses for all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low-beta: 13-21 Hz, high-beta: 21-32 Hz, and gamma: 32-50 Hz). Time-frequency analyses reveal sustained theta event-related desynchronization (ERD) in the frontal-midline and the left pre-frontal electrodes, which decreased gradually from the first to the third repetition of the same excerpts (frontal-midline: 57.90 %, left-prefrontal: 75.93 %). Similarly, sustained gamma ERD decreased in the frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47 %, left-frontal: 90.88 %, right-frontal: 87.74 %). During familiarization, the decrease in theta ERD is greater in the first part (1-5 s), whereas the decrease in gamma ERD is greater in the second part (5-9 s) of the music excerpts. The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming unfamiliar sequences.
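As a sketch of the event-related desynchronization (ERD) measure analysed above, the code below band-pass filters one EEG channel, extracts band power from the Hilbert envelope, and expresses it as a percent change from a pre-stimulus baseline; the band edges, sampling rate, baseline window, and data are illustrative assumptions.

```python
# Minimal ERD/ERS sketch for one channel and one frequency band (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_percent(signal, fs, band=(5.0, 9.0), baseline=(0.0, 1.0)):
    """Percent power change relative to a baseline window; negative values = ERD."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2      # instantaneous band power
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[i0:i1].mean()                                  # baseline power
    return 100.0 * (power - ref) / ref

fs = 250.0
t = np.arange(0, 11, 1 / fs)                 # 1 s baseline + 10 s music excerpt
rng = np.random.default_rng(3)
theta = np.sin(2 * np.pi * 7 * t) * np.where(t < 1, 1.0, 0.6)  # amplitude drop after onset
eeg = theta + 0.3 * rng.standard_normal(t.size)
erd = erd_percent(eeg, fs)
print(erd[int(2 * fs):].mean())              # mean % change during listening (negative = ERD)
```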
Affiliation(s)
- Alireza Malekmohammadi
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany.
- Stefan K Ehrlich
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany
- Gordon Cheng
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany
9
Lu L, Han M, Zou G, Zheng L, Gao JH. Common and distinct neural representations of imagined and perceived speech. Cereb Cortex 2022; 33:6486-6493. [PMID: 36587299 DOI: 10.1093/cercor/bhac519]
Abstract
Humans excel at constructing mental representations of speech streams in the absence of external auditory input: the internal experience of speech imagery. Elucidating the neural processes underlying speech imagery is critical to understanding this higher-order brain function in humans. Here, using functional magnetic resonance imaging, we investigated the shared and distinct neural correlates of imagined and perceived speech by asking participants to listen to poems articulated by a male voice (perception condition) and to imagine hearing poems spoken by that same voice (imagery condition). We found that compared to baseline, speech imagery and perception activated overlapping brain regions, including the bilateral superior temporal gyri and supplementary motor areas. The left inferior frontal gyrus was more strongly activated by speech imagery than by speech perception, suggesting functional specialization for generating speech imagery. Although more research with a larger sample size and a direct behavioral indicator is needed to clarify the neural systems underlying the construction of complex speech imagery, this study provides valuable insights into the neural mechanisms of the closely associated but functionally distinct processes of speech imagery and perception.
Affiliation(s)
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Meizhen Han
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Guangyuan Zou
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Li Zheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
10
The possibility of an impetus heuristic. Psychon Bull Rev 2022; 29:2015-2033. [PMID: 35705791 DOI: 10.3758/s13423-022-02130-z]
Abstract
Evidence consistent with a belief in impetus is drawn from studies of naïve physics, perception of causality, perception of force, and representational momentum, and the possibility of an impetus heuristic is discussed. An impetus heuristic suggests the motion path of an object that was previously constrained or influenced by an external source (e.g., object, force) appears to exhibit the same constraint or influence even after that constraint or influence is removed. Impetus is not a valid physical principle, but use of an impetus heuristic can in some circumstances provide approximately correct predictions regarding future object motion, and such predictions require less cognitive effort and resources than would predictions based upon objective physical principles. The relationship of an impetus heuristic to naïve impetus theory and to objective physical principles is discussed, and use of an impetus heuristic significantly challenges claims that causality or force can be visually perceived. Alternatives to an impetus heuristic are considered, and potential boundary conditions and falsification of the impetus notion are discussed. Overall, use of an impetus heuristic offers a parsimonious explanation for findings across a wide range of perceptual domains and could potentially be extended to more metaphorical types of motion.
11
Wang F, Huang X, Zeb S, Liu D, Wang Y. Impact of Music Education on Mental Health of Higher Education Students: Moderating Role of Emotional Intelligence. Front Psychol 2022; 13:938090. [PMID: 35783702 PMCID: PMC9240095 DOI: 10.3389/fpsyg.2022.938090]
Abstract
Music education is one of humankind's most universal forms of expression and communication, and it can be found in the daily lives of people of all ages and cultures all over the world. As university life is a time when students are exposed to a great deal of stress, it can have a negative impact on their mental health. Therefore, it is critical to intervene at this stage in their life so that they are prepared to deal with the pressures they will face in the future. The aim of this study was to see how music education affects university students' mental health, with emotional intelligence functioning as a moderator. The participants in this research were graduate students pursuing degrees in music education. A non-probability convenience sampling technique was used to collect the data from 265 students studying in different public and private Chinese universities. The data were gathered at a single point in time, and therefore the study is cross-sectional. The data were collected from January 2022 until the end of March 2022. Many universities had been closed because of COVID-19, so data were also gathered online through email. The data were analyzed quantitatively using the partial least squares (PLS)–structural equation modeling (SEM) technique. The findings backed up the hypotheses. The results revealed that there is a significant effect of music education on students' mental health. Also, emotional intelligence as a moderator significantly and positively moderates the relationship between music education and students' mental health. Music has numerous physiological aspects, and listening to it on a daily basis may be beneficial to general health and well-being. Furthermore, musicians and music students with a high level of emotional intelligence have a better chance of not just performing well in school, college and university or in the music industry, but also of maintaining and improving their mental health.
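The moderation analysis described above (emotional intelligence moderating the effect of music education on mental health) boils down to a regression with an interaction term. The sketch below illustrates that logic on simulated data with ordinary least squares as a simple stand-in for the authors' PLS-SEM; all variable names and values are hypothetical.

```python
# Simple moderation sketch (OLS with an interaction term; stand-in for PLS-SEM).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 265
music_edu = rng.normal(size=n)                       # hypothetical music-education score
ei = rng.normal(size=n)                              # hypothetical emotional-intelligence score
# Outcome with main effects and a positive interaction (i.e., moderation)
mental_health = 0.4 * music_edu + 0.3 * ei + 0.25 * music_edu * ei + rng.normal(size=n)
df = pd.DataFrame({"mental_health": mental_health, "music_edu": music_edu, "ei": ei})

model = smf.ols("mental_health ~ music_edu * ei", data=df).fit()
print(model.params[["music_edu", "ei", "music_edu:ei"]])   # interaction term = moderation effect
```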
Affiliation(s)
- Feng Wang
- South China University of Technology, Guangzhou, China
- Sadaf Zeb
- Department of Professional Psychology, Bahria University, Islamabad, Pakistan
- Correspondence: Sadaf Zeb
- Dan Liu
- Public Basic Teaching Department, Guangzhou Traffic and Transportation Vocational School, Guangzhou, China
- Yue Wang
- South China University of Technology, Guangzhou, China
12
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
13
Whitehead JC, Armony JL. Intra-individual Reliability of Voice- and Music-elicited Responses and their Modulation by Expertise. Neuroscience 2022; 487:184-197. [PMID: 35182696 DOI: 10.1016/j.neuroscience.2022.02.011]
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits a high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially-relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada.
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
14
Attention Control and Audiomotor Processes Underlying Anticipation of Musical Themes while Listening to Familiar Sonata-Form Pieces. Brain Sci 2022; 12:brainsci12020261. [PMID: 35204024 PMCID: PMC8870438 DOI: 10.3390/brainsci12020261]
Abstract
When listening to music, people are excited by the musical cues immediately before rewarding passages. More generally, listeners attend to the antecedent cues of a salient musical event irrespective of its emotional valence. The present study used functional magnetic resonance imaging to investigate the behavioral and cognitive mechanisms underlying the cued anticipation of the main theme’s recurrence in sonata form. Half of the main themes in the musical stimuli were of a joyful character, half a tragic character. Activity in the premotor cortex suggests that around the main theme’s recurrence, the participants tended to covertly hum along with music. The anterior thalamus, pre-supplementary motor area (preSMA), posterior cerebellum, inferior frontal junction (IFJ), and auditory cortex showed increased activity for the antecedent cues of the themes, relative to the middle-last part of the themes. Increased activity in the anterior thalamus may reflect its role in guiding attention towards stimuli that reliably predict important outcomes. The preSMA and posterior cerebellum may support sequence processing, fine-grained auditory imagery, and fine adjustments to humming according to auditory inputs. The IFJ might orchestrate the attention allocation to motor simulation and goal-driven attention. These findings highlight the attention control and audiomotor components of musical anticipation.
15
Ahmadi ZZ, DiBacco ML, Pearl PL. Speech Motor Function and Auditory Perception in Succinic Semialdehyde Dehydrogenase Deficiency: Toward Pre-Supplementary Motor Area (SMA) and SMA-Proper Dysfunctions. J Child Neurol 2021; 36:1210-1217. [PMID: 33757330 DOI: 10.1177/08830738211001210]
Abstract
This study reviews the fundamental roles of the pre-supplementary motor area (pre-SMA) and SMA-proper in speech-motor functions and auditory perception in succinic semialdehyde dehydrogenase (SSADH) deficiency. We comprehensively searched the databases PubMed and Google Scholar and the electronic journals of Springer, ProQuest, and ScienceDirect, using the keywords SSADHD, SMA, auditory perception, speech, and motor combined with the AND operator. Transcranial magnetic stimulation has emerged as a means of assessing excitatory/inhibitory M1 functions, but its role in pre-SMA and SMA-proper dysfunction remains unknown. There was a lack of data on resting-state and task-based functional magnetic resonance imaging (MRI), with a focus on passive and active tasks for both speech and music, in terms of analysis of the SMA-related cortex and its connections. Children with SSADH deficiency likely experience a dysfunction in connectivity between SMA portions and cortical and subcortical areas, contributing to disabilities in speech-motor functions and auditory perception. Early diagnosis of auditory-motor disabilities in children with SSADH deficiency by neuroimaging techniques invites opportunities for utilizing sensory-motor integration in future interventional strategies.
Affiliation(s)
- Zohreh Ziatabar Ahmadi
- Department of Speech Therapy, School of Rehabilitation, Babol University of Medical Sciences, Babol, I.R. Iran
- Melissa L DiBacco
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Phillip L Pearl
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
16
Regev M, Halpern AR, Owen AM, Patel AD, Zatorre RJ. Mapping Specific Mental Content during Musical Imagery. Cereb Cortex 2021; 31:3622-3640. [PMID: 33749742 DOI: 10.1093/cercor/bhab036]
Abstract
Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are compared to those during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) same but while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.
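The inter-subject correlation (ISC) analysis mentioned above can be summarized as correlating each subject's regional time course with the average time course of all other subjects. The sketch below shows a leave-one-out version on a synthetic subjects-by-time array; it is a schematic of the general method, not the authors' melody-specific reinstatement analysis.

```python
# Leave-one-out inter-subject correlation (ISC) sketch for one region (illustrative).
import numpy as np

def isc_loo(data):
    """data: (n_subjects, n_timepoints). Returns one ISC value per subject."""
    n = data.shape[0]
    out = np.empty(n)
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)   # average of all other subjects
        out[s] = np.corrcoef(data[s], others)[0, 1]
    return out

rng = np.random.default_rng(5)
shared = rng.standard_normal(300)                          # stimulus-driven shared component
subjects = shared + 0.8 * rng.standard_normal((20, 300))   # 20 subjects, 300 time points
print(isc_loo(subjects).mean())                            # mean ISC across subjects
```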
Affiliation(s)
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Adrian M Owen
- Brain and Mind Institute, Department of Psychology and Department of Physiology and Pharmacology, Western University, London, ON N6A 5B7, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
- Aniruddh D Patel
- Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program; Department of Psychology, Tufts University, Medford, MA 02155, USA
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
17
Schölderle T, Haas E, Ziegler W. Dysarthria syndromes in children with cerebral palsy. Dev Med Child Neurol 2021; 63:444-449. [PMID: 32970343 DOI: 10.1111/dmcn.14679]
Abstract
AIM: To investigate whether dysarthria syndromes acquired in adulthood can also be observed in children with cerebral palsy (CP) and, if so, whether they align with children's CP subtypes.
METHOD: Twenty-six children with CP participated (mean age 7y 8mo [SD 1y 2mo], range 5y 1mo-9y 10mo; 16 males and 10 females). Speech samples were elicited in a computer-based game and were analysed using the auditory perceptual criteria of the Bogenhausen Dysarthria Scales (BoDyS). For statistical classification, three comparison groups of adults with standard dysarthria syndromes (i.e. spastic, hyperkinetic, and ataxic) were used. Their BoDyS data were entered into a mixture discriminant analysis, with data from the comparison groups as the training sample and those from the children with CP as the test sample. Results were related to findings in a group of adults with CP.
RESULTS: Among the children with CP, most had spastic dysarthria (n=14), while fewer had ataxic (n=9) or hyperkinetic (n=3) dysarthria. However, syndrome allocations were significantly more ambiguous than in adults with CP. For 11 children, their dysarthria syndromes did not align with their CP subtype.
INTERPRETATION: Dysarthria syndromes are less clear cut in children than in adults with CP because of a number of developmental factors.
WHAT THIS PAPER ADDS: Children with cerebral palsy (CP) show diverse patterns of dysarthric symptoms. Dysarthria syndromes do not seem to manifest fully during childhood. Dysarthria syndrome and CP subtype may not align in children with CP.
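The classification step described above (training on adult dysarthria syndromes, then assigning each child a syndrome) can be illustrated with one Gaussian mixture model per class, a simple way to realize mixture-discriminant-style classification; the features and group means below are simulated stand-ins for BoDyS scores, not the study's data.

```python
# Sketch of mixture-discriminant-style classification: one Gaussian mixture per syndrome.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
syndromes = ["spastic", "hyperkinetic", "ataxic"]
# Simulated adult training data: 40 speakers per syndrome, 4 BoDyS-like features
train = {s: rng.normal(loc=i, scale=1.0, size=(40, 4)) for i, s in enumerate(syndromes)}

# Fit one mixture model per syndrome (class-conditional densities, equal priors assumed)
models = {s: GaussianMixture(n_components=2, random_state=0).fit(X) for s, X in train.items()}

def classify(x):
    """Assign the syndrome whose mixture gives the highest log-likelihood."""
    scores = {s: m.score_samples(x.reshape(1, -1))[0] for s, m in models.items()}
    return max(scores, key=scores.get)

children = rng.normal(loc=0.3, scale=1.2, size=(26, 4))     # simulated child test sample
labels = [classify(c) for c in children]
print({s: labels.count(s) for s in syndromes})
```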
Affiliation(s)
- Theresa Schölderle
- Clinical Neuropsychology Research Group, Institute for Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
- Elisabet Haas
- Clinical Neuropsychology Research Group, Institute for Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
- Wolfram Ziegler
- Clinical Neuropsychology Research Group, Institute for Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
18
Martín-Fernández J, Burunat I, Modroño C, González-Mora JL, Plata-Bello J. Music Style Not Only Modulates the Auditory Cortex, but Also Motor Related Areas. Neuroscience 2021; 457:88-102. [PMID: 33465413 DOI: 10.1016/j.neuroscience.2021.01.012]
Abstract
The neuroscience of music has recently attracted significant attention, but the effect of music style on the activation of auditory-motor regions has not been explored. The aim of the present study is to analyze the differences in brain activity during passive listening to non-vocal excerpts of four different music genres (classical, reggaeton, electronic and folk). A functional magnetic resonance imaging (fMRI) experiment was performed. Twenty-eight participants with no musical training were included in the study. They had to passively listen to music excerpts of the above genres during fMRI acquisition. Imaging analysis was performed at the whole-brain-level and in auditory-motor regions of interest (ROIs). Furthermore, the musical competence of each participant was measured and its relationship with brain activity in the studied ROIs was analyzed. The whole brain analysis showed higher brain activity during reggaeton listening than the other music genres in auditory-related areas. The ROI-analysis showed that reggaeton led to higher activity not only in auditory related areas, but also in some motor related areas, mainly when it was compared with classical music. A positive relationship between the melodic-Music Ear Test (MET) score and brain activity during reggaeton listening was identified in some auditory and motor related areas. The findings revealed that listening to different music styles in musically inexperienced subjects elicits different brain activity in auditory and motor related areas. Reggaeton was, among the studied music genres, the one that evoked the highest activity in the auditory-motor network. These findings are discussed in connection with acoustic analyses of the musical stimuli.
Affiliation(s)
- Jesús Martín-Fernández
- Hospital Universitario Nuestra Señora de La Candelaria (Department of Neurosurgery), Spain
- Iballa Burunat
- Finnish Centre for Interdisciplinary Music Research, Department of Music, Art and Culture Studies, University of Jyväskylä, Finland
- Cristián Modroño
- University of La Laguna (Department of Basic Medical Sciences), Spain
- Julio Plata-Bello
- Hospital Universitario de Canarias (Department of Neurosurgery), Spain.
19
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782 DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
20
Aman L, Picken S, Andreou LV, Chait M. Sensitivity to temporal structure facilitates perceptual analysis of complex auditory scenes. Hear Res 2020; 400:108111. [PMID: 33333425 PMCID: PMC7812374 DOI: 10.1016/j.heares.2020.108111]
Abstract
- Perception relies on sensitivity to predictable structure in the environment.
- We used artificial acoustic scenes to investigate this in the auditory modality.
- Listeners track the temporal structure of multiple concurrent acoustic streams.
- Sensitivity to predictable structure supports auditory scene analysis, even when scenes are complex.
- Benefit of regularity observed even when listeners are unaware of the predictable structure.
The notion that sensitivity to the statistical structure of the environment is pivotal to perception has recently garnered considerable attention. Here we investigated this issue in the context of hearing. Building on previous work (Sohoglu and Chait, 2016a; elife), stimuli were artificial ‘soundscapes’ populated by multiple (up to 14) simultaneous streams (‘auditory objects’) comprised of tone-pip sequences, each with a distinct frequency and pattern of amplitude modulation. Sequences were either temporally regular or random. We show that listeners’ ability to detect abrupt appearance or disappearance of a stream is facilitated when scene streams were characterized by a temporally regular fluctuation pattern. The regularity of the changing stream as well as that of the background (non-changing) streams contribute independently to this effect. Remarkably, listeners benefit from regularity even when they are not consciously aware of it. These findings establish that perception of complex acoustic scenes relies on the availability of detailed representations of the regularities automatically extracted from multiple concurrent streams.
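To make the stimulus construction above concrete, the sketch below generates a miniature 'soundscape' of concurrent tone-pip streams, each with its own carrier frequency and either regular or randomly jittered pip onsets; the durations, rates, and frequencies are arbitrary choices, not the study's stimulus parameters.

```python
# Toy soundscape generator: concurrent regular vs. random tone-pip streams (illustrative).
import numpy as np

def tone_pip(freq, dur, fs):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)   # ramped tone pip

def stream(freq, regular, scene_dur=4.0, rate=8.0, fs=16000, rng=None):
    """One stream: pips at a fixed rate (regular) or at random onset times (irregular)."""
    if rng is None:
        rng = np.random.default_rng()
    n_pips = int(scene_dur * rate)
    onsets = np.arange(n_pips) / rate
    if not regular:
        onsets = np.sort(rng.uniform(0, scene_dur - 0.05, n_pips))  # jittered onsets
    out = np.zeros(int(scene_dur * fs))
    pip = tone_pip(freq, 0.05, fs)
    for on in onsets:
        i = int(on * fs)
        out[i:i + pip.size] += pip[: out.size - i]                  # add pip, truncate at end
    return out

rng = np.random.default_rng(7)
freqs = np.geomspace(300, 3000, 6)                 # six concurrent streams
scene = sum(stream(f, regular=(k % 2 == 0), rng=rng) for k, f in enumerate(freqs))
print(scene.shape)
```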
Affiliation(s)
- Lucie Aman
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK; Department of Psychiatry, University of Cambridge, Cambridge, UK
- Samantha Picken
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
- Lefkothea-Vasiliki Andreou
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK; Vocational Lyceum of Zakynthos, Ministry of Education, Research and Religious Affairs, Zakynthos, Greece
- Maria Chait
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK.
21
Endestad T, Godøy RI, Sneve MH, Hagen T, Bochynska A, Laeng B. Mental Effort When Playing, Listening, and Imagining Music in One Pianist's Eyes and Brain. Front Hum Neurosci 2020; 14:576888. [PMID: 33192407 PMCID: PMC7593683 DOI: 10.3389/fnhum.2020.576888]
Abstract
We investigated "musical effort" with an internationally renowned classical pianist while playing, listening, and imagining music. We used pupillometry as an objective measure of mental effort and fMRI as an exploratory method of effort with the same musical pieces. We also compared a group of non-professional pianists and non-musicians using pupillometry, and a small group of non-musicians with fMRI. This combined approach of psychophysiology and neuroimaging revealed the cognitive work during different musical activities. We found that pupil diameters were largest when "playing" (regardless of whether there was sound produced or not) compared to conditions with no movement (i.e., "listening" and "imagery"). We found positive correlations between pupil diameters of the professional pianist during different conditions with the same piano piece (i.e., normal playing, silenced playing, listening, imagining), which might indicate similar degrees of load on cognitive resources as well as an intimate link between the motor imagery of sound-producing body motions and gestures. We also confirmed that musical imagery had a strong commonality with music listening in both pianists and musically naïve individuals. Neuroimaging provided evidence for a relationship between noradrenergic (NE) activity and mental workload or attentional intensity within the domain of music cognition. We found effort-related activity in the superior part of the locus coeruleus (LC) and, similarly to the pupil, the listening and imagery conditions engaged the LC-NE network less than the motor condition. The pianists attended more intensively to the most difficult piece than the non-musicians, since they showed larger pupils for the most difficult piece. Non-musicians were the most engaged by the music listening task, suggesting that the amount of attention allocated for the same task may follow a hierarchy of expertise, demanding less attentional effort in experts or performers than in novices. In the professional pianist, we found only weak evidence for a commonality between subjective effort (as rated measure-by-measure) and the objective effort gauged with pupil diameter during listening. We suggest that psychophysiological methods like pupillometry can index mental effort in a manner that is not available to subjective awareness or introspection.
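Pupillometric effort indices such as those above are usually derived after blink handling and baseline correction. The sketch below linearly interpolates over missing (blink) samples and expresses pupil diameter as change from a pre-trial baseline; the sampling rate, window lengths, and data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal pupillometry preprocessing sketch: blink interpolation + baseline correction.
import numpy as np

def preprocess_pupil(diam, fs, baseline_s=0.5):
    """diam: 1-D pupil trace with NaNs during blinks. Returns baseline-corrected trace."""
    d = np.asarray(diam, dtype=float)
    idx = np.arange(d.size)
    good = ~np.isnan(d)
    d = np.interp(idx, idx[good], d[good])          # linear interpolation across blinks
    baseline = d[: int(baseline_s * fs)].mean()     # pre-trial baseline
    return d - baseline                             # dilation relative to baseline

fs = 60.0
rng = np.random.default_rng(8)
trace = 3.0 + 0.2 * rng.standard_normal(600)        # ~10 s of pupil data (mm)
trace[120:140] = np.nan                             # a simulated blink
corrected = preprocess_pupil(trace, fs)
print(corrected[int(2 * fs):].mean())               # mean dilation after trial onset
```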
Affiliation(s)
- Tor Endestad
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Helgelandssykehuset, Mosjøen, Norway
| | - Rolf Inge Godøy
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
| | | | - Thomas Hagen
- Department of Psychology, University of Oslo, Oslo, Norway
| | - Agata Bochynska
- Department of Psychology, University of Oslo, Oslo, Norway
- Department of Psychology, New York University, New York, NY, United States
| | - Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
| |
|
22
|
Vocal-motor interference eliminates the memory advantage for vocal melodies. Brain Cogn 2020; 145:105622. [PMID: 32949847 DOI: 10.1016/j.bandc.2020.105622] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Revised: 08/21/2020] [Accepted: 08/30/2020] [Indexed: 11/21/2022]
Abstract
Spontaneous motor cortical activity during passive perception of action has been interpreted as a sensorimotor simulation of the observed action. There is currently interest in how sensorimotor simulation can support higher-level cognitive functions, such as memory, but this is relatively unexplored in the auditory domain. In the present study, we examined whether the established memory advantage for vocal melodies over non-vocal melodies is attributable to stronger sensorimotor simulation during perception of vocal relative to non-vocal action. Participants listened to 24 unfamiliar folk melodies presented in vocal or piano timbres. These were encoded during three interference conditions: whispering (vocal-motor interference), tapping (non-vocal motor interference), and no-interference. Afterwards, participants heard the original 24 melodies presented among 24 foils and judged whether melodies were old or new. A vocal-memory advantage was found in the no-interference and tapping conditions; however, the advantage was eliminated in the whispering condition. This suggests that sensorimotor simulation during the perception of vocal melodies is responsible for the observed vocal-memory advantage.
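Old/new recognition performance in designs like this is often summarized with the signal-detection index d′ computed from hit and false-alarm rates. A small sketch of that computation follows; the counts are illustrative, not the study's data, and the study itself may have used a different accuracy measure.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a log-linear correction so that
    rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts per interference condition (24 old + 24 new melodies).
conditions = {
    "no-interference": dict(hits=19, misses=5, false_alarms=6, correct_rejections=18),
    "tapping":         dict(hits=18, misses=6, false_alarms=7, correct_rejections=17),
    "whispering":      dict(hits=14, misses=10, false_alarms=9, correct_rejections=15),
}
for name, counts in conditions.items():
    print(f"{name:16s} d' = {d_prime(**counts):.2f}")
```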
|
23
|
Abstract
Neurotheology is an emerging academic discipline that examines mind-brain relationships in terms of the inter-relatedness of neuroscience, spirituality, and religion. Neurotheology originated from brain-scan studies that revealed specific correlations between certain religious thoughts and localized activated brain areas known as “God Spots.” This relatively young scholarly discipline lacks clear consensus on its definition, ideology, purpose, or prospects for future research. Of special interest is the consideration of the next steps using brain scans to develop this field of research. This review proposes nine categories of future research that could build on the foundation laid by the prior discoveries of God Spots. Specifically, this analysis identifies some sparsely addressed issues that could be usefully explored with new kinds of brain-scan studies: neural network operations, the cognitive neuroscience of prayer, biology of belief, measures of religiosity, role of the self, learning and memory, religious and secular cognitive commonalities, static and functional anatomy, and recruitment of neural processing circuitry. God Spot research is poised to move beyond observation to robust hypothesis generation and testing.
|
24
|
de Kerangal M, Vickers D, Chait M. The effect of healthy aging on change detection and sensitivity to predictable structure in crowded acoustic scenes. Hear Res 2020; 399:108074. [PMID: 33041093 DOI: 10.1016/j.heares.2020.108074] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Revised: 08/01/2020] [Accepted: 09/01/2020] [Indexed: 01/25/2023]
Abstract
The auditory system plays a critical role in supporting our ability to detect abrupt changes in our surroundings. Here we study how this capacity is affected in the course of healthy aging. Artificial acoustic 'scenes', populated by multiple concurrent streams of pure tones ('sources'), were used to capture the challenges of listening in complex acoustic environments. Two scene conditions were included: REG scenes consisted of sources characterized by a regular temporal structure. Matched RAND scenes contained sources which were temporally random. Changes, manifested as the abrupt disappearance of one of the sources, were introduced in a subset of the trials, and participants ('young' group N = 41, age 20-38 years; 'older' group N = 41, age 60-82 years) were instructed to monitor the scenes for these events. Previous work demonstrated that young listeners exhibit better change detection performance in REG scenes, reflecting sensitivity to temporal structure. Here we sought to determine: (1) Whether 'baseline' change detection ability (i.e. in RAND scenes) is affected by age. (2) Whether aging affects listeners' sensitivity to temporal regularity. (3) How change detection capacity relates to listeners' hearing and cognitive profile (a battery of tests that capture hearing and cognitive abilities hypothesized to be affected by aging). The results demonstrated that healthy aging is associated with reduced sensitivity to abrupt scene changes in RAND scenes, but that performance does not correlate with age or standard audiological measures such as pure tone audiometry or speech-in-noise performance. Remarkably, older listeners' change detection performance improved substantially (up to the level exhibited by young listeners) in REG relative to RAND scenes. This suggests that the ability to extract and track the regularity associated with scene sources, even in crowded acoustic environments, is relatively preserved in older listeners.
Affiliation(s)
- Mathilde de Kerangal
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
| | - Deborah Vickers
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK; Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, UK
| | - Maria Chait
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK.
| |
|
25
|
Toiviainen P, Burunat I, Brattico E, Vuust P, Alluri V. The chronnectome of musical beat. Neuroimage 2020; 216:116191. [DOI: 10.1016/j.neuroimage.2019.116191] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 09/10/2019] [Accepted: 09/11/2019] [Indexed: 01/03/2023] Open
|
26
|
Archakov D, DeWitt I, Kuśmierek P, Ortiz-Rios M, Cameron D, Cui D, Morin EL, VanMeter JW, Sams M, Jääskeläinen IP, Rauschecker JP. Auditory representation of learned sound sequences in motor regions of the macaque brain. Proc Natl Acad Sci U S A 2020; 117:15242-15252. [PMID: 32541016 PMCID: PMC7334521 DOI: 10.1073/pnas.1915610117] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.
Affiliation(s)
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
| | - Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - Daniel Cameron
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - Ding Cui
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - Elyse L Morin
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
| | - John W VanMeter
- Center for Functional and Molecular Imaging, Georgetown University Medical Center, Washington, DC 20057
| | - Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
| | - Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
| | - Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057;
| |
|
27
|
Fasano MC, Glerean E, Gold BP, Sheng D, Sams M, Vuust P, Rauschecker JP, Brattico E. Inter-subject Similarity of Brain Activity in Expert Musicians After Multimodal Learning: A Behavioral and Neuroimaging Study on Learning to Play a Piano Sonata. Neuroscience 2020; 441:102-116. [PMID: 32569807 DOI: 10.1016/j.neuroscience.2020.06.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 06/11/2020] [Accepted: 06/14/2020] [Indexed: 11/26/2022]
Abstract
Human behavior is inherently multimodal and relies on sensorimotor integration. This is evident when pianists exhibit activity in motor and premotor cortices, as part of a dorsal pathway, while listening to a familiar piece of music, or when naïve participants learn to play simple patterns on the piano. Here we investigated the interaction between multimodal learning and dorsal-stream activity over the course of four weeks in ten skilled pianists by adopting a naturalistic data-driven analysis approach. We presented the pianists with audio-only, video-only and audiovisual recordings of a piano sonata during functional magnetic resonance imaging (fMRI) before and after they had learned to play the sonata by heart for a total of four weeks. We followed the learning process and its outcome with questionnaires administered to the pianists, one piano instructor following their training, and seven external expert judges. The similarity of the pianists' brain activity during stimulus presentations was examined before and after learning by means of inter-subject correlation (ISC) analysis. After learning, an increased ISC was found in the pianists while watching the audiovisual performance, particularly in motor and premotor regions of the dorsal stream. While these brain structures have previously been associated with learning simple audio-motor sequences, our findings are the first to suggest their involvement in learning a complex and demanding audiovisual-motor task. Moreover, the most motivated learners and the best performers of the sonata showed ISC in the dorsal stream and in the reward brain network.
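Inter-subject correlation reduces, at its core, to the average pairwise correlation of subjects' time courses per voxel or region. The sketch below illustrates that computation on synthetic region time series; it is not the authors' pipeline, which operated on whole-brain fMRI data.

```python
import numpy as np
from itertools import combinations

def isc(data):
    """Mean pairwise Pearson correlation across subjects.
    `data` has shape (n_subjects, n_timepoints, n_regions)."""
    n_sub, _, n_reg = data.shape
    out = np.zeros(n_reg)
    for r in range(n_reg):
        rs = [np.corrcoef(data[i, :, r], data[j, :, r])[0, 1]
              for i, j in combinations(range(n_sub), 2)]
        out[r] = np.mean(rs)
    return out

# Toy example: 10 subjects, 200 timepoints, 3 regions; region 0 carries a shared signal.
rng = np.random.default_rng(0)
shared = rng.normal(size=200)
data = rng.normal(size=(10, 200, 3))
data[:, :, 0] += shared          # common stimulus-driven component
print(np.round(isc(data), 2))    # region 0 should show the highest ISC
```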
Affiliation(s)
- Maria C Fasano
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
| | - Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
| | - Benjamin P Gold
- Montreal Neurological Institute, McGill University, Montréal, Canada
| | - Dana Sheng
- Department of Neuroscience, Georgetown University Medical Center, Washington, USA
| | - Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Computer Science, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto University School of Science, Espoo, Finland
| | - Peter Vuust
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
| | - Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, USA; Institute for Advanced Study, TUM, Munich, Germany
| | - Elvira Brattico
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy.
| |
|
28
|
Bashwiner DM, Bacon DK, Wertz CJ, Flores RA, Chohan MO, Jung RE. Resting state functional connectivity underlying musical creativity. Neuroimage 2020; 218:116940. [PMID: 32422402 DOI: 10.1016/j.neuroimage.2020.116940] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 04/28/2020] [Accepted: 05/08/2020] [Indexed: 10/24/2022] Open
Abstract
While the behavior of "being musically creative" (improvising, composing, songwriting, etc.) is undoubtedly a complex and highly variable one, recent neuroscientific investigation has offered significant insight into the neural underpinnings of many of the creative processes contributing to such behavior. A previous study from our research group (Bashwiner et al., 2016), which examined two aspects of brain structure as a function of creative musical experience, found significantly increased cortical surface area or subcortical volume in regions of the default-mode network, a motor planning network, and a "limbic" network. The present study sought to determine how these regions coordinate with one another and with other regions of the brain in a large number of participants (n = 218) during a task-neutral period, i.e., during the "resting state." Deriving from the previous study's results a set of eleven regions of interest (ROIs), the present study analyzed the resting-state functional connectivity (RSFC) from each of these seed regions as a function of creative musical experience (assessed via our Musical Creativity Questionnaire). Of the eleven ROIs investigated, nine showed significant correlations with a total of 22 clusters throughout the brain, the most significant being located in bilateral cerebellum, right inferior frontal gyrus, midline thalamus (particularly the mediodorsal nucleus), and medial premotor regions. These results support prior reports (by ourselves and others) implicating regions of the default-mode, executive, and motor-planning networks in musical creativity, while additionally, and somewhat unexpectedly, suggesting a potentially much larger role for the salience network than has been previously reported in studies of musical creativity.
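Seed-based resting-state functional connectivity of this general kind is computed by correlating each seed ROI's mean time series with other voxels and Fisher z-transforming the correlations before group statistics. A minimal sketch under those assumptions, with synthetic data:

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between one seed time series (n_timepoints,)
    and many voxel time series (n_voxels, n_timepoints), returned as
    Fisher z values suitable for group-level statistics."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=1, keepdims=True)) / voxel_ts.std(axis=1, keepdims=True)
    r = vox @ seed / len(seed)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

# Toy data: one seed ROI and 5000 'voxels' over 300 timepoints.
rng = np.random.default_rng(0)
seed_ts = rng.normal(size=300)
voxel_ts = rng.normal(size=(5000, 300))
voxel_ts[:100] += 0.5 * seed_ts          # a coupled cluster of voxels
z_map = seed_connectivity(seed_ts, voxel_ts)
print(z_map[:100].mean(), z_map[100:].mean())
```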
Affiliation(s)
- David M Bashwiner
- University of New Mexico, Department of Music, MSC04-2570, 1 University of New Mexico, Albuquerque, NM, 87131, USA.
| | - Donna K Bacon
- University of New Mexico, Department of Music, MSC04-2570, 1 University of New Mexico, Albuquerque, NM, 87131, USA; Brain and Behavioral Associates, 1014 Lomas Boulevard NW, Albuquerque, NM, 87102, USA; University of New Mexico, Department of Psychology, MXC03-2220, 1 University of New Mexico, Albuquerque, NM, 87131, USA
| | - Christopher J Wertz
- Brain and Behavioral Associates, 1014 Lomas Boulevard NW, Albuquerque, NM, 87102, USA
| | - Ranee A Flores
- Brain and Behavioral Associates, 1014 Lomas Boulevard NW, Albuquerque, NM, 87102, USA
| | - Muhammad O Chohan
- University of New Mexico, Health Sciences Center SOM, Department of Neurosurgery, MSC10-5615, 1 University of New Mexico, Albuquerque, NM, 87131, USA
| | - Rex E Jung
- Brain and Behavioral Associates, 1014 Lomas Boulevard NW, Albuquerque, NM, 87102, USA; University of New Mexico, Department of Psychology, MXC03-2220, 1 University of New Mexico, Albuquerque, NM, 87131, USA; University of New Mexico, Department of Neurosurgery, MSC10-5615, 1 University of New Mexico, Albuquerque, NM, 87131, USA
| |
|
29
|
Gelding RW, Harrison PMC, Silas S, Johnson BW, Thompson WF, Müllensiefen D. An efficient and adaptive test of auditory mental imagery. PSYCHOLOGICAL RESEARCH 2020; 85:1201-1220. [PMID: 32356009 PMCID: PMC8049941 DOI: 10.1007/s00426-020-01322-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2019] [Accepted: 03/14/2020] [Indexed: 11/27/2022]
Abstract
The ability to silently hear music in the mind has been argued to be fundamental to musicality. Objective measurements of this subjective imagery experience are needed if this link between imagery ability and musicality is to be investigated. However, previous tests of musical imagery either rely on self-report, rely on melodic memory, or do not cater to a range of abilities. The Pitch Imagery Arrow Task (PIAT) was designed to address these shortcomings; however, it is impractically long. In this paper, we shorten the PIAT using adaptive testing and automatic item generation. We interrogate the cognitive processes underlying the PIAT through item response modelling. The result is an efficient online test of auditory mental imagery ability (adaptive Pitch Imagery Arrow Task: aPIAT) that takes 8 min to complete, is adaptive to each participant's individual ability, and so can be used to test participants with a range of musical backgrounds. Performance on the aPIAT showed positive moderate-to-strong correlations with measures of non-musical and musical working memory, self-reported musical training, and general musical sophistication. Ability on the task was best predicted by the ability to maintain and manipulate tones in mental imagery, as well as to resist perceptual biases that can lead to incorrect responses. As such, the aPIAT is an ideal tool with which to investigate the relationship between pitch imagery ability and musicality.
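The adaptive procedure can be illustrated with a generic item-response-theory loop: estimate ability, administer the most informative remaining item, and update the estimate after each response. The sketch below uses a one-parameter (Rasch) model and a grid-based maximum-likelihood update; the item bank, model, and test length are illustrative assumptions, not the aPIAT's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)
item_difficulty = np.linspace(-3, 3, 40)      # hypothetical calibrated item bank
true_theta = 0.8                              # simulated participant ability

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

grid = np.linspace(-4, 4, 161)                # ability grid for ML estimation
log_lik = np.zeros_like(grid)
administered, theta_hat = set(), 0.0

for _ in range(15):                           # 15-item adaptive test
    # Pick the unused item with maximum Fisher information p(1-p) at theta_hat.
    p_items = p_correct(theta_hat, item_difficulty)
    info = p_items * (1 - p_items)
    info[list(administered)] = -np.inf
    item = int(np.argmax(info))
    administered.add(item)

    correct = rng.random() < p_correct(true_theta, item_difficulty[item])

    # Update the grid log-likelihood and the ability estimate.
    p = p_correct(grid, item_difficulty[item])
    log_lik += np.log(p if correct else 1 - p)
    theta_hat = grid[np.argmax(log_lik)]

print(f"estimated ability: {theta_hat:.2f} (true: {true_theta})")
```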
Affiliation(s)
- Rebecca W. Gelding
- Department of Cognitive Science, Macquarie University, Sydney, Australia
| | - Peter M. C. Harrison
- School of Electronic Engineering and Computer Science, Queen Mary, University of London, London, UK
- Department of Psychology, Goldsmiths, University of London, London, UK
| | - Sebastian Silas
- Department of Psychology, Goldsmiths, University of London, London, UK
| | - Blake W. Johnson
- Department of Cognitive Science, Macquarie University, Sydney, Australia
| | | | | |
|
30
|
Abstract
Syntax, the structure of sentences, enables humans to express an infinite range of meanings through finite means. The neurobiology of syntax has been intensely studied but with little consensus. Two main candidate regions have been identified: the posterior inferior frontal gyrus (pIFG) and the posterior middle temporal gyrus (pMTG). Integrating research in linguistics, psycholinguistics, and neuroscience, we propose a neuroanatomical framework for syntax that attributes distinct syntactic computations to these regions in a unified model. The key theoretical advances are adopting a modern lexicalized view of syntax in which the lexicon and syntactic rules are intertwined, and recognizing a computational asymmetry in the role of syntax during comprehension and production. Our model postulates a hierarchical lexical-syntactic function to the pMTG, which interconnects previously identified speech perception and conceptual-semantic systems in the temporal and inferior parietal lobes, crucial for both sentence production and comprehension. These relational hierarchies are transformed via the pIFG into morpho-syntactic sequences, primarily tied to production. We show how this architecture provides a better account of the full range of data and is consistent with recent proposals regarding the organization of phonological processes in the brain.
Affiliation(s)
- William Matchin
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, 29208, USA
| | - Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, 92697, USA
- Department of Language Science, University of California, Irvine, Irvine, CA, 92697, USA
| |
|
31
|
Proverbio AM, Benedetto F, Guazzone M. Shared neural mechanisms for processing emotions in music and vocalizations. Eur J Neurosci 2019; 51:1987-2007. [DOI: 10.1111/ejn.14650] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 11/21/2019] [Accepted: 12/05/2019] [Indexed: 12/21/2022]
Affiliation(s)
- Alice Mado Proverbio
- Department of Psychology University of Milano‐Bicocca Milan Italy
- Milan Center for Neuroscience Milan Italy
| | - Francesco Benedetto
- Department of Psychology University of Milano‐Bicocca Milan Italy
- Milan Center for Neuroscience Milan Italy
| | - Martina Guazzone
- Department of Psychology University of Milano‐Bicocca Milan Italy
- Milan Center for Neuroscience Milan Italy
| |
|
32
|
Krishnan S, Lima CF, Evans S, Chen S, Guldner S, Yeff H, Manly T, Scott SK. Beatboxers and Guitarists Engage Sensorimotor Regions Selectively When Listening to the Instruments They can Play. Cereb Cortex 2019; 28:4063-4079. [PMID: 30169831 PMCID: PMC6188551 DOI: 10.1093/cercor/bhy208] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2017] [Accepted: 08/04/2018] [Indexed: 12/31/2022] Open
Abstract
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.
Affiliation(s)
- Saloni Krishnan
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, UK
| | - César F Lima
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, Lisboa, Portugal
| | - Samuel Evans
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Psychology, University of Westminster, 115 New Cavendish Street, London, UK
| | - Sinead Chen
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
| | - Stella Guldner
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Graduate School of Economic and Social Sciences (GESS), University of Mannheim, Mannheim, Germany
| | - Harry Yeff
- Get Involved Ltd, 3 Loughborough Street, London, UK
| | - Tom Manly
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, UK
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
| |
|
33
|
Gelding RW, Thompson WF, Johnson BW. Musical imagery depends upon coordination of auditory and sensorimotor brain activity. Sci Rep 2019; 9:16823. [PMID: 31727968 PMCID: PMC6856354 DOI: 10.1038/s41598-019-53260-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 10/28/2019] [Indexed: 11/09/2022] Open
Abstract
Recent magnetoencephalography (MEG) studies have established that sensorimotor brain rhythms are strongly modulated during mental imagery of musical beat and rhythm, suggesting that motor regions of the brain are important for temporal aspects of musical imagery. The present study examined whether these rhythms also play a role in non-temporal aspects of musical imagery including musical pitch. Brain function was measured with MEG from 19 healthy adults while they performed a validated musical pitch imagery task and two non-imagery control tasks with identical temporal characteristics. A 4-dipole source model probed activity in bilateral auditory and sensorimotor cortices. Significantly greater β-band modulation was found during imagery compared to control tasks of auditory perception and mental arithmetic. Imagery-induced β-modulation showed no significant differences between auditory and sensorimotor regions, which may reflect a tightly coordinated mode of communication between these areas. Directed connectivity analysis in the θ-band revealed that the left sensorimotor region drove left auditory region during imagery onset. These results add to the growing evidence that motor regions of the brain are involved in the top-down generation of musical imagery, and that imagery-like processes may be involved in musical perception.
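Band-limited modulation of the sort reported here is typically quantified by band-pass filtering the signal and taking the envelope of the analytic signal. A minimal sketch with SciPy follows, assuming a made-up sampling rate and a single synthetic channel rather than the study's MEG source model.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000  # sampling rate in Hz (assumed)

def band_power_envelope(x, low=15.0, high=30.0, fs=FS, order=4):
    """Beta-band (15-30 Hz) amplitude envelope via band-pass filter + Hilbert transform."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, x)
    return np.abs(hilbert(filtered))

# Toy comparison: imagery epoch vs control epoch (synthetic data).
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
control = rng.normal(size=t.size)
imagery = control + 0.5 * np.sin(2 * np.pi * 22 * t)   # extra 22-Hz component

env_c = band_power_envelope(control).mean()
env_i = band_power_envelope(imagery).mean()
print(f"beta envelope, control: {env_c:.2f}  imagery: {env_i:.2f}")
```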
Affiliation(s)
- Rebecca W Gelding
- Department of Cognitive Science, Macquarie University, Sydney, NSW, 2109, Australia.
| | - William F Thompson
- Department of Psychology, Macquarie University, Sydney, NSW, 2109, Australia
| | - Blake W Johnson
- Department of Cognitive Science, Macquarie University, Sydney, NSW, 2109, Australia
| |
|
34
|
Lumaca M, Kleber B, Brattico E, Vuust P, Baggio G. Functional connectivity in human auditory networks and the origins of variation in the transmission of musical systems. eLife 2019; 8:48710. [PMID: 31658945 PMCID: PMC6819097 DOI: 10.7554/elife.48710] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Accepted: 10/09/2019] [Indexed: 02/02/2023] Open
Abstract
Music producers, whether original composers or performers, vary in their ability to acquire and faithfully transmit music. This form of variation may serve as a mechanism for the emergence of new traits in musical systems. In this study, we aim to investigate whether individual differences in the social learning and transmission of music relate to intrinsic neural dynamics of auditory processing systems. We combined auditory and resting-state functional magnetic resonance imaging (fMRI) with an interactive laboratory model of cultural transmission, the signaling game, in an experiment with a large cohort of participants (N=51). We found that the degree of interhemispheric resting-state functional connectivity (rs-FC) within fronto-temporal auditory networks predicts—weeks after scanning—learning, transmission, and structural modification of an artificial tone system. Our study introduces neuroimaging into cultural transmission research and points to specific neural auditory processing mechanisms that constrain and drive variation in the cultural transmission and regularization of musical systems.
Affiliation(s)
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
| | - Boris Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
| | - Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
| | - Giosue Baggio
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
| |
|
35
|
Lifshitz-Ben-Basat A, Fostick L. Music-related abilities among readers with dyslexia. ANNALS OF DYSLEXIA 2019; 69:318-334. [PMID: 31446571 DOI: 10.1007/s11881-019-00185-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2019] [Accepted: 08/14/2019] [Indexed: 06/10/2023]
Abstract
Research suggests that a central difficulty in dyslexia may be impaired rapid temporal processing. Good temporal processing is also needed for musical perception, which relies on the ability to detect rapid changes. Our study is the first to measure the perception of adults with and without dyslexia on all three dimensions of music (rhythm, pitch, and spectrum), as well as their capacity for auditory imagery and detection of slow changes, while controlling for working memory. Participants were undergraduate students, aged 20-35 years: 26 readers with dyslexia and 30 typical readers. Participants completed a battery of tests measuring aptitude for recognizing the similarity/difference in tone pitch or rhythm, spectral resolution, vividness/control of auditory imagination, the ability to detect slow changes in auditory stimuli, and working memory. As expected, readers with dyslexia showed poorer performance in pitch and rhythm than controls, but outperformed them in spectral perception. The data for each test was analyzed separately while controlling for the letter-number sequencing score. No differences between groups were found in slow-change detection or auditory imagery. Our results demonstrated that rapid temporal processing appears to be the main difficulty of readers with dyslexia, who demonstrated poorer performance when stimuli were presented quickly rather than slowly and better performance on a task when no temporal component was involved. These findings underscore the need for further study of temporal processing in readers with dyslexia. Remediation of temporal processing deficits may unmask the preserved or even superior abilities of people with dyslexia, leading to enhanced ability in all areas that utilize the temporal component.
Affiliation(s)
| | - Leah Fostick
- Department of Communication Disorders, Ariel University, Ariel, Israel
| |
|
36
|
Neural Correlates of Music Listening and Recall in the Human Brain. J Neurosci 2019; 39:8112-8123. [PMID: 31501297 DOI: 10.1523/jneurosci.1468-18.2019] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2018] [Revised: 08/13/2019] [Accepted: 08/14/2019] [Indexed: 11/21/2022] Open
Abstract
Previous neuroimaging studies have identified various brain regions that are activated by music listening or recall. However, little is known about how these brain regions represent the time course and temporal features of music during listening and recall. Here we analyzed neural activity in different brain regions associated with music listening and recall using electrocorticography recordings obtained from 10 epilepsy patients of both genders implanted with subdural electrodes. Electrocorticography signals were recorded while subjects were listening to familiar instrumental music or recalling the same music pieces by imagery. During the onset phase (0-500 ms), music listening initiated cortical activity in high-gamma band in the temporal lobe and supramarginal gyrus, followed by the precentral gyrus and the inferior frontal gyrus. In contrast, during music recall, the high-gamma band activity first appeared in the inferior frontal gyrus and precentral gyrus, and then spread to the temporal lobe, showing a reversed temporal sequential order. During the sustained phase (after 500 ms), delta band and high-gamma band responses in the supramarginal gyrus, temporal and frontal lobes dynamically tracked the intensity envelope of the music during listening or recall with distinct temporal delays. During music listening, the neural tracking by the frontal lobe lagged behind that of the temporal lobe; whereas during music recall, the neural tracking by the frontal lobe preceded that of the temporal lobe. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall and provide important insights into music processing by the human brain. SIGNIFICANCE STATEMENT: Understanding how the brain analyzes, stores, and retrieves music remains one of the most challenging problems in neuroscience. By analyzing direct neural recordings obtained from the human brain, we observed dispersed and overlapping brain regions associated with music listening and recall. Music listening initiated cortical activity in high-gamma band starting from the temporal lobe and ending at the inferior frontal gyrus. A reversed temporal flow was observed in high-gamma response during music recall. Neural responses of frontal and temporal lobes dynamically tracked the intensity envelope of music that was presented or imagined during listening or recall. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall.
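Neural tracking of an intensity envelope "with distinct temporal delays" can be illustrated by cross-correlating a neural envelope with the stimulus envelope over a range of lags and taking the lag at the peak. The sketch below uses synthetic signals and an assumed envelope sampling rate; the study's ECoG pipeline is more involved.

```python
import numpy as np

FS = 100  # envelope sampling rate in Hz (assumed, after downsampling)

def peak_lag(neural_env, stim_env, max_lag_s=0.5, fs=FS):
    """Return (best lag in seconds, correlation at that lag); a positive lag
    means the neural envelope follows the stimulus envelope."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    n = (neural_env - neural_env.mean()) / neural_env.std()
    s = (stim_env - stim_env.mean()) / stim_env.std()
    rs = []
    for lag in lags:
        if lag >= 0:
            rs.append(np.mean(n[lag:] * s[:len(s) - lag]))
        else:
            rs.append(np.mean(n[:lag] * s[-lag:]))
    rs = np.array(rs)
    best = int(np.argmax(rs))
    return lags[best] / fs, rs[best]

# Toy example: a neural envelope that follows the stimulus by about 120 ms.
rng = np.random.default_rng(0)
stim = np.convolve(rng.normal(size=3000), np.ones(20) / 20, mode="same")
neural = np.roll(stim, int(0.12 * FS)) + 0.5 * rng.normal(size=stim.size)
print(peak_lag(neural, stim))   # expect roughly (0.12, high r)
```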
|
37
|
Siman-Tov T, Granot RY, Shany O, Singer N, Hendler T, Gordon CR. Is there a prediction network? Meta-analytic evidence for a cortical-subcortical network likely subserving prediction. Neurosci Biobehav Rev 2019; 105:262-275. [PMID: 31437478 DOI: 10.1016/j.neubiorev.2019.08.012] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 07/25/2019] [Accepted: 08/17/2019] [Indexed: 01/24/2023]
Abstract
Predictive coding is an increasingly influential and ambitious concept in neuroscience viewing the brain as a 'hypothesis testing machine' that constantly strives to minimize prediction error, the gap between its predictions and the actual sensory input. Despite the invaluable contribution of this framework to the formulation of brain function, its neuroanatomical foundations have not been fully defined. To address this gap, we conducted activation likelihood estimation (ALE) meta-analysis of 39 neuroimaging studies of three functional domains (action perception, language and music) inherently involving prediction. The ALE analysis revealed a widely distributed brain network encompassing regions within the inferior and middle frontal gyri, anterior insula, premotor cortex, pre-supplementary motor area, temporoparietal junction, striatum, thalamus/subthalamus and the cerebellum. This network is proposed to subserve domain-general prediction and its relevance to motor control, attention, implicit learning and social cognition is discussed in light of the predictive coding scheme. Better understanding of the presented network may help advance treatments of neuropsychiatric conditions related to aberrant prediction processing and promote cognitive enhancement in healthy individuals.
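Activation likelihood estimation can be sketched compactly: each reported focus is modeled as a 3-D Gaussian blob, blobs are combined within a study into a modeled activation map by a voxelwise union, and the maps are combined across studies the same way. The toy example below uses a small grid and a fixed smoothing width; real ALE scales the kernel by study sample size and adds permutation-based inference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (40, 48, 40)   # toy voxel grid (not MNI space)
SIGMA = 2.0           # Gaussian width in voxels (real ALE scales this by study N)

def ma_map(foci):
    """Modeled activation map for one study: voxelwise union of Gaussian blobs."""
    ma = np.zeros(GRID)
    for focus in foci:
        blob = np.zeros(GRID)
        blob[tuple(focus)] = 1.0
        blob = gaussian_filter(blob, SIGMA)
        blob /= blob.max()                     # scale to a probability-like peak of 1
        ma = 1.0 - (1.0 - ma) * (1.0 - blob)   # union of probabilities
    return ma

# Toy meta-analysis: three 'studies', each a list of (x, y, z) foci.
studies = [
    [(20, 24, 20), (22, 25, 19)],
    [(21, 23, 21)],
    [(10, 10, 10), (20, 24, 20)],
]
ale = np.zeros(GRID)
for foci in studies:
    ale = 1.0 - (1.0 - ale) * (1.0 - ma_map(foci))   # union across studies
print("peak ALE value:", round(float(ale.max()), 3))
```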
Affiliation(s)
- Tali Siman-Tov
- Sagol Brain Institute Tel Aviv, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel.
| | - Roni Y Granot
- Musicology Department, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Ofir Shany
- Sagol Brain Institute Tel Aviv, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
| | - Neomi Singer
- Sagol Brain Institute Tel Aviv, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; Montreal Neurological Institute, Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec, Canada
| | - Talma Hendler
- Sagol Brain Institute Tel Aviv, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| | - Carlos R Gordon
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; Department of Neurology, Meir Medical Center, Kfar Saba, Israel
| |
|
38
|
Pitch-specific contributions of auditory imagery and auditory memory in vocal pitch imitation. Atten Percept Psychophys 2019; 81:2473-2481. [PMID: 31286436 DOI: 10.3758/s13414-019-01799-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Vocal imitation guides both music and language development. Despite the developmental significance of this behavior, a sizable minority of individuals are inaccurate at vocal pitch imitation. Although previous research suggested that inaccurate pitch imitation results from deficient sensorimotor associations between pitch perception and vocal motor planning, the cognitive processes involved in sensorimotor translation are not clearly defined. In the present research, we investigated the roles of basic cognitive processes in the vocal imitation of pitch, as well as the degree to which these processes rely on pitch-specific resources. In the present study, participants completed a battery of pitch and verbal tasks to measure pitch perception, pitch and verbal auditory imagery, pitch and verbal auditory short-term memory, and pitch imitation ability. Information on participants' music background was collected, as well. Pitch imagery, pitch short-term memory, pitch discrimination ability, and musical experience were unique predictors of pitch imitation ability. Furthermore, pitch imagery was a partial mediator of the relationship between pitch short-term memory and pitch imitation ability. These results indicate that vocal imitation recruits cognitive processes that rely on at least partially separate neural resources for pitch and verbal representations.
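The partial mediation reported here can be illustrated with a standard indirect-effect computation: regress the mediator on the predictor (path a), regress the outcome on predictor and mediator (path b), and bootstrap the product a*b. The sketch below uses synthetic data and placeholder variable names, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150
stm = rng.normal(size=n)                                        # pitch short-term memory (X)
imagery = 0.6 * stm + rng.normal(scale=0.8, size=n)             # pitch imagery (M)
imitation = 0.3 * stm + 0.5 * imagery + rng.normal(scale=0.8, size=n)  # imitation accuracy (Y)

def slopes(y, predictors):
    """OLS fit with intercept; returns the slope coefficients only."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def indirect(x, m, y):
    a = slopes(m, [x])[0]          # X -> M
    b = slopes(y, [x, m])[1]       # M -> Y controlling for X
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect(stm[s], imagery[s], imitation[s]))
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(stm, imagery, imitation):.3f}, 95% CI {ci.round(3)}")
```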
|
39
|
Green B, Jääskeläinen IP, Sams M, Rauschecker JP. Distinct brain areas process novel and repeating tone sequences. BRAIN AND LANGUAGE 2018; 187:104-114. [PMID: 30278992 DOI: 10.1016/j.bandl.2018.09.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Revised: 10/03/2017] [Accepted: 09/23/2018] [Indexed: 06/08/2023]
Abstract
The auditory dorsal stream has been implicated in sensorimotor integration and concatenation of sequential sound events, both being important for processing of speech and music. The auditory ventral stream, by contrast, is characterized as subserving sound identification and recognition. We studied the respective roles of the dorsal and ventral streams, including recruitment of basal ganglia and medial temporal lobe structures, in the processing of tone sequence elements. A sequence was presented incrementally across several runs during functional magnetic resonance imaging in humans, and we compared activation by sequence elements when heard for the first time ("novel") versus when the elements were repeating ("familiar"). Our results show a shift in tone-sequence-dependent activation from posterior-dorsal cortical areas and the basal ganglia during the processing of less familiar sequence elements towards anterior and ventral cortical areas and the medial temporal lobe after the encoding of highly familiar sequence elements into identifiable auditory objects.
Affiliation(s)
- Brannon Green
- Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA.
| | - Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, Finland
| | - Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO Espoo, Finland
| | - Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO Espoo, Finland; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany.
| |
|
40
|
Recruitment of the motor system during music listening: An ALE meta-analysis of fMRI data. PLoS One 2018; 13:e0207213. [PMID: 30452442 PMCID: PMC6242316 DOI: 10.1371/journal.pone.0207213] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Accepted: 10/26/2018] [Indexed: 12/04/2022] Open
Abstract
Several neuroimaging studies have shown that listening to music activates brain regions that reside in the motor system, even when there is no overt movement. However, many of these studies report the activation of varying motor system areas that include the primary motor cortex, supplementary motor area, dorsal and ventral pre-motor areas and parietal regions. In order to examine what specific roles are played by various motor regions during music perception, we used activation likelihood estimation (ALE) to conduct a meta-analysis of neuroimaging literature on passive music listening. After an extensive search of the literature, 42 studies were analyzed, comprising a total of 386 unique subjects contributing 694 activation foci. As expected, auditory activations were found in the bilateral superior temporal gyrus, transverse temporal gyrus, insula, pyramis, bilateral precentral gyrus, and bilateral medial frontal gyrus. We also saw the widespread activation of motor networks including left and right lateral premotor cortex, right primary motor cortex, and the left cerebellum. These results suggest a central role of the motor system in music and rhythm perception. We discuss these findings in the context of the Action Simulation for Auditory Prediction (ASAP) model and other predictive coding accounts of brain function.
|
41
|
Agustus JL, Golden HL, Callaghan MF, Bond RL, Benhamou E, Hailstone JC, Weiskopf N, Warren JD. Melody Processing Characterizes Functional Neuroanatomy in the Aging Brain. Front Neurosci 2018; 12:815. [PMID: 30524219 PMCID: PMC6262413 DOI: 10.3389/fnins.2018.00815] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Accepted: 10/19/2018] [Indexed: 11/13/2022] Open
Abstract
The functional neuroanatomical mechanisms underpinning cognition in the normal older brain remain poorly defined, but have important implications for understanding the neurobiology of aging and the impact of neurodegenerative diseases. Auditory processing is an attractive model system for addressing these issues. Here, we used fMRI of melody processing to investigate auditory pattern processing in normal older individuals. We manipulated the temporal (rhythmic) structure and familiarity of melodies in a passive listening, 'sparse' fMRI protocol. A distributed cortico-subcortical network was activated by auditory stimulation compared with silence; and within this network, we identified separable signatures of anisochrony processing in bilateral posterior superior temporal lobes; melodic familiarity in bilateral anterior temporal and inferior frontal cortices; and melodic novelty in bilateral temporal and left parietal cortices. Left planum temporale emerged as a 'hub' region functionally partitioned for processing different melody dimensions. Activation of Heschl's gyrus by auditory stimulation correlated with the integrity of underlying cortical tissue architecture, measured using multi-parameter mapping. Our findings delineate neural substrates for analyzing perceptual and semantic properties of melodies in normal aging. Melody (auditory pattern) processing may be a useful candidate paradigm for assessing cerebral networks in the older brain and potentially, in neurodegenerative diseases of later life.
Affiliation(s)
- Jennifer L. Agustus
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Hannah L. Golden
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Martina F. Callaghan
- Wellcome Trust Centre for Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Rebecca L. Bond
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Julia C. Hailstone
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Nikolaus Weiskopf
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Jason D. Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| |
|
42
|
Jiang J, Liu F, Zhou L, Jiang C. The neural basis for understanding imitation-induced musical meaning: The role of the human mirror system. Behav Brain Res 2018; 359:362-369. [PMID: 30458161 DOI: 10.1016/j.bbr.2018.11.020] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2018] [Revised: 11/12/2018] [Accepted: 11/13/2018] [Indexed: 11/28/2022]
Abstract
Music can convey meanings by imitating phenomena of the extramusical world, and these imitation-induced musical meanings can be understood by listeners. Although the human mirror system (HMS) is implicated in imitation, little is known about the HMS's role in making sense of meaning that derives from musical imitation. To answer this question, we used fMRI to examine listeners' brain activities during the processing of imitation-induced musical meaning with a cross-modal semantic priming paradigm. Eleven normal individuals and 11 individuals with congenital amusia, a neurodevelopmental disorder of musical processing, participated in the experiment. Target pictures with either an upward or downward movement were primed by semantically congruent or incongruent melodic sequences characterized by the direction of pitch change (upward or downward). When contrasting the incongruent with the congruent condition between the two groups, we found greater activations in the left supramarginal gyrus/inferior parietal lobule and inferior frontal gyrus in normals but not in amusics. The implications of these findings in terms of the role of the HMS in understanding imitation-induced musical meaning are discussed.
Affiliation(s)
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, China
| | - Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, China
| | - Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China; Institute of Psychology, Shanghai Normal University, Shanghai, China.
| |
|
43
|
Pruitt TA, Halpern AR, Pfordresher PQ. Covert singing in anticipatory auditory imagery. Psychophysiology 2018; 56:e13297. [PMID: 30368823 DOI: 10.1111/psyp.13297] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Revised: 09/16/2018] [Accepted: 09/19/2018] [Indexed: 11/29/2022]
Abstract
To date, several fMRI studies reveal activation in motor planning areas during musical auditory imagery. We addressed whether such activations may give rise to peripheral motor activity, termed subvocalization or covert singing, using surface electromyography. Sensors placed on extrinsic laryngeal muscles, facial muscles, and a control site on the bicep measured muscle activity during auditory imagery that preceded singing, as well as during the completion of a visual imagery task. Greater activation was found in laryngeal and lip muscles for auditory than for visual imagery tasks, whereas no differences across tasks were found for other sensors. Furthermore, less accurate singers exhibited greater laryngeal activity during auditory imagery than did more accurate singers. This suggests that subvocalization may be used as a strategy to facilitate auditory imagery, which appears to be degraded in inaccurate singers. Taken together, these results suggest that subvocalization may play a role in anticipatory auditory imagery, and possibly as a way of supplementing motor associations with auditory imagery.
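Surface-EMG evidence for covert singing is usually quantified as band-passed amplitude (for example, RMS) within a task window. A minimal sketch comparing two task windows on one channel follows, with an assumed sampling rate and filter band rather than the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # EMG sampling rate in Hz (assumed)

def emg_rms(x, low=20.0, high=450.0, fs=FS):
    """Band-pass the raw EMG and return the RMS amplitude of the window."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, x)
    return np.sqrt(np.mean(filtered ** 2))

# Toy comparison: a laryngeal channel during auditory vs visual imagery windows.
rng = np.random.default_rng(0)
visual_imagery = rng.normal(scale=1.0, size=4 * FS)
auditory_imagery = rng.normal(scale=1.3, size=4 * FS)   # slightly more activity
print(f"RMS visual:   {emg_rms(visual_imagery):.2f}")
print(f"RMS auditory: {emg_rms(auditory_imagery):.2f}")
```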
Affiliation(s)
- Tim A Pruitt
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York
| | - Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, Pennsylvania
| | - Peter Q Pfordresher
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York
| |
|
44
|
Brown RM, Penhune VB. Efficacy of Auditory versus Motor Learning for Skilled and Novice Performers. J Cogn Neurosci 2018; 30:1657-1682. [PMID: 30156505 DOI: 10.1162/jocn_a_01309] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Humans must learn a variety of sensorimotor skills, yet the relative contributions of sensory and motor information to skill acquisition remain unclear. Here we compare the behavioral and neural contributions of perceptual learning to that of motor learning, and we test whether these contributions depend on the expertise of the learner. Pianists and nonmusicians learned to perform novel melodies on a piano during fMRI scanning in four learning conditions: listening (auditory learning), performing without auditory feedback (motor learning), performing with auditory feedback (auditory-motor learning), or observing visual cues without performing or listening (cue-only learning). Visual cues were present in every learning condition and consisted of musical notation for pianists and spatial cues for nonmusicians. Melodies were performed from memory with no visual cues and with auditory feedback (recall) five times during learning. Pianists showed greater improvements in pitch and rhythm accuracy at recall during auditory learning compared with motor learning. Nonmusicians demonstrated greater rhythm improvements at recall during auditory learning compared with all other learning conditions. Pianists showed greater primary motor response at recall during auditory learning compared with motor learning, and response in this region during auditory learning correlated with pitch accuracy at recall and with auditory-premotor network response during auditory learning. Nonmusicians showed greater inferior parietal response during auditory compared with auditory-motor learning, and response in this region correlated with pitch accuracy at recall. Results suggest an advantage for perceptual learning compared with motor learning that is both general and expertise-dependent. This advantage is hypothesized to depend on feedforward motor control systems that can be used during learning to transform sensory information into motor production.
|
45
|
Not All Predictions Are Equal: "What" and "When" Predictions Modulate Activity in Auditory Cortex through Different Mechanisms. J Neurosci 2018; 38:8680-8693. [PMID: 30143578 DOI: 10.1523/jneurosci.0369-18.2018] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 07/22/2018] [Accepted: 07/26/2018] [Indexed: 11/21/2022] Open
Abstract
Using predictions based on environmental regularities is fundamental for adaptive behavior. While it is widely accepted that predictions across different stimulus attributes (e.g., time and content) facilitate sensory processing, it is unknown whether predictions across these attributes rely on the same neural mechanism. Here, to elucidate the neural mechanisms of predictions, we combine invasive electrophysiological recordings (human electrocorticography in 4 females and 2 males) with computational modeling while manipulating predictions about content ("what") and time ("when"). We found that "when" predictions increased evoked activity over motor and prefrontal regions both at early (∼180 ms) and late (430-450 ms) latencies. "What" predictability, however, increased evoked activity only over prefrontal areas late in time (420-460 ms). Beyond these dissociable influences, we found that "what" and "when" predictability interactively modulated the amplitude of early (165 ms) evoked responses in the superior temporal gyrus. We modeled the observed neural responses using biophysically realistic neural mass models, to better understand whether "what" and "when" predictions tap into similar or different neurophysiological mechanisms. Our modeling results suggest that "what" and "when" predictability rely on complementary neural processes: "what" predictions increased short-term plasticity in auditory areas, whereas "when" predictability increased synaptic gain in motor areas. Thus, content and temporal predictions engage complementary neural mechanisms in different regions, suggesting domain-specific prediction signaling along the cortical hierarchy. Encoding predictions through different mechanisms may endow the brain with the flexibility to efficiently signal different sources of predictions, weight them by their reliability, and allow for their encoding without mutual interference. SIGNIFICANCE STATEMENT: Predictions of different stimulus features facilitate sensory processing. However, it is unclear whether predictions of different attributes rely on similar or different neural mechanisms. By combining invasive electrophysiological recordings of cortical activity with experimental manipulations of participants' predictions about content and time of acoustic events, we found that the two types of predictions had dissociable influences on cortical activity, both in terms of the regions involved and the timing of the observed effects. Further, our biophysical modeling analysis suggests that predictability of content and time rely on complementary neural processes: short-term plasticity in auditory areas and synaptic gain in motor areas, respectively. This suggests that predictions of different features are encoded with complementary neural mechanisms in different brain regions.
|
46
|
Karlaftis VM, Wang R, Shen Y, Tino P, Williams G, Welchman AE, Kourtzi Z. White-Matter Pathways for Statistical Learning of Temporal Structures. eNeuro 2018; 5:ENEURO.0382-17.2018. [PMID: 30027110 PMCID: PMC6051593 DOI: 10.1523/eneuro.0382-17.2018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2017] [Revised: 04/21/2018] [Accepted: 04/23/2018] [Indexed: 02/02/2023] Open
Abstract
Extracting the statistics of event streams in natural environments is critical for interpreting current events and predicting future ones. The brain is known to rapidly find structure and meaning in unfamiliar streams of sensory experience, often by mere exposure to the environment (i.e., without explicit feedback). Yet, we know little about the brain pathways that support this type of statistical learning. Here, we test whether changes in white-matter (WM) connectivity due to training relate to our ability to extract temporal regularities. By combining behavioral training and diffusion tensor imaging (DTI), we demonstrate that humans adapt to the environment's statistics as they change over time from simple repetition to probabilistic combinations. In particular, we show that learning relates to the decision strategy that individuals adopt when extracting temporal statistics. We next test for learning-dependent changes in WM connectivity and ask whether they relate to individual variability in decision strategy. Our DTI results provide evidence for dissociable WM pathways that relate to individual strategy: extracting the exact sequence statistics (i.e., matching) relates to connectivity changes between caudate and hippocampus, while selecting the most probable outcomes in a given context (i.e., maximizing) relates to connectivity changes between prefrontal, cingulate and basal ganglia (caudate, putamen) regions. Thus, our findings provide evidence for distinct cortico-striatal circuits that show learning-dependent changes of WM connectivity and support individual ability to learn behaviorally-relevant statistics.
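The matching-versus-maximizing distinction the authors relate to distinct white-matter pathways can be made concrete with a small simulation. The sketch below estimates first-order conditional statistics from a probabilistic symbol sequence and compares the expected prediction accuracy of a probability-matching responder with that of a maximizing one; the sequence, symbols, and function names are illustrative assumptions, not the study's stimuli or analysis code.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def conditional_stats(seq):
    """First-order conditional probabilities P(next | current) estimated from a sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a][b] += 1
    return {ctx: {sym: n / sum(nxt.values()) for sym, n in nxt.items()}
            for ctx, nxt in counts.items()}

def predict(probs, strategy):
    """Matching samples in proportion to the learned statistics; maximizing always picks the mode."""
    symbols, p = zip(*probs.items())
    if strategy == "maximize":
        return symbols[int(np.argmax(p))]
    return rng.choice(symbols, p=p)

# Illustrative sequence: context 'A' is followed by 'B' 80% of the time, 'C' otherwise
seq = ["A"]
for _ in range(4000):
    seq.append(rng.choice(["B", "C"], p=[0.8, 0.2]) if seq[-1] == "A" else "A")

stats = conditional_stats(seq)
for strategy in ("match", "maximize"):
    outcomes = rng.choice(["B", "C"], p=[0.8, 0.2], size=5000)
    correct = np.mean([predict(stats["A"], strategy) == o for o in outcomes])
    print(strategy, round(float(correct), 2))   # matching ~0.68, maximizing ~0.80
```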
Affiliation(s)
- Vasilis M. Karlaftis
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom CB2 3EB
| | - Rui Wang
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom CB2 3EB
- Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101
| | - Yuan Shen
- Department of Computing and Technology, Nottingham Trent University, Nottingham, NG11 8NS, United Kingdom
- School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom
| | - Peter Tino
- School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom
| | - Guy Williams
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom
| | - Andrew E. Welchman
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom CB2 3EB
| | - Zoe Kourtzi
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom CB2 3EB
| |
|
47
|
Predictability of what or where reduces brain activity, but a bottleneck occurs when both are predictable. Neuroimage 2018; 167:224-236. [DOI: 10.1016/j.neuroimage.2016.06.001] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Revised: 05/31/2016] [Accepted: 06/01/2016] [Indexed: 11/22/2022] Open
|
48
|
Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018; 98:262-268. [PMID: 29183630 PMCID: PMC5771843 DOI: 10.1016/j.cortex.2017.10.020] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 08/19/2017] [Accepted: 10/12/2017] [Indexed: 10/18/2022]
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has been accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there aren't really more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
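The phrase "internal models in feedback control systems" refers to the idea that an efference copy of a motor command is used to predict its sensory consequences, with the prediction error driving correction and learning. The toy sketch below illustrates that loop with a one-parameter forward model; it is a generic illustration under simple linear assumptions, not a model proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Toy internal forward model (illustrative only): predicts the sensory
    consequence of a motor command from an efference copy and learns from
    the sensory prediction error."""
    def __init__(self, gain_estimate=0.5, lr=0.1):
        self.gain_estimate = gain_estimate   # current estimate of the command-to-sensation mapping
        self.lr = lr

    def predict(self, command):
        return self.gain_estimate * command  # efference-copy-based prediction

    def update(self, command, sensory_feedback):
        error = sensory_feedback - self.predict(command)   # sensory prediction error
        self.gain_estimate += self.lr * error * command    # LMS-style correction
        return error

true_gain = 1.3                              # unknown mapping realized by the periphery (made up)
model = ForwardModel()
for trial in range(100):
    command = rng.uniform(0.5, 1.5)
    model.update(command, true_gain * command)

print(round(model.gain_estimate, 2))         # approaches the true gain of 1.3
```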
Affiliation(s)
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany.
| |
|
49
|
Leaver AM, Wade B, Vasavada M, Hellemann G, Joshi SH, Espinoza R, Narr KL. Fronto-Temporal Connectivity Predicts ECT Outcome in Major Depression. Front Psychiatry 2018; 9:92. [PMID: 29618992 PMCID: PMC5871748 DOI: 10.3389/fpsyt.2018.00092] [Citation(s) in RCA: 51] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Accepted: 03/06/2018] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Electroconvulsive therapy (ECT) is arguably the most effective available treatment for severe depression. Recent studies have used MRI data to predict clinical outcome to ECT and other antidepressant therapies. One challenge facing such studies is selecting from among the many available metrics, which characterize complementary and sometimes non-overlapping aspects of brain function and connectomics. Here, we assessed the ability of aggregated, functional MRI metrics of basal brain activity and connectivity to predict antidepressant response to ECT using machine learning. METHODS A radial support vector machine was trained using arterial spin labeling (ASL) and blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) metrics from n = 46 (26 female, mean age 42) depressed patients prior to ECT (majority right-unilateral stimulation). Image preprocessing was applied using standard procedures, and metrics included cerebral blood flow in ASL, and regional homogeneity, fractional amplitude of low-frequency modulations, and graph theory metrics (strength, local efficiency, and clustering) in BOLD data. A 5-repeated 5-fold cross-validation procedure with nested feature-selection validated model performance. Linear regressions were applied post hoc to aid interpretation of discriminative features. RESULTS The range of balanced accuracy in models performing statistically above chance was 58-68%. Here, prediction of non-responders was slightly higher than for responders (maximum performance 74 and 64%, respectively). Several features were consistently selected across cross-validation folds, mostly within frontal and temporal regions. Among these were connectivity strength among: a fronto-parietal network [including left dorsolateral prefrontal cortex (DLPFC)], motor and temporal networks (near ECT electrodes), and/or subgenual anterior cingulate cortex (sgACC). CONCLUSION Our data indicate that pattern classification of multimodal fMRI metrics can successfully predict ECT outcome, particularly for individuals who will not respond to treatment. Notably, connectivity with networks highly relevant to ECT and depression were consistently selected as important predictive features. These included the left DLPFC and the sgACC, which are both targets of other neurostimulation therapies for depression, as well as connectivity between motor and right temporal cortices near electrode sites. Future studies that probe additional functional and structural MRI metrics and other patient characteristics may further improve the predictive power of these and similar models.
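A pipeline of this general shape (RBF-kernel SVM, repeated stratified 5-fold cross-validation, feature selection nested inside each training fold, balanced accuracy as the score) can be sketched with scikit-learn. The synthetic data, the SelectKBest step, and all hyperparameter values below are placeholders; they are not the authors' features or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for n = 46 patients x many fMRI-derived features (not real data)
X, y = make_classification(n_samples=46, n_features=200, n_informative=10,
                           weights=[0.5, 0.5], random_state=0)

# Feature selection lives inside the pipeline, so it is re-fit within every
# training fold ("nested" with respect to the outer cross-validation).
clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),   # placeholder selection method and k
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```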
Affiliation(s)
- Amber M Leaver
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California Los Angeles, Los Angeles, CA, United States
| | - Benjamin Wade
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California Los Angeles, Los Angeles, CA, United States
| | - Megha Vasavada
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California Los Angeles, Los Angeles, CA, United States
| | - Gerhard Hellemann
- Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Shantanu H Joshi
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California Los Angeles, Los Angeles, CA, United States
| | - Randall Espinoza
- Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Katherine L Narr
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California Los Angeles, Los Angeles, CA, United States; Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, United States
| |
|
50
|
Tanaka S, Kirino E. Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance. Front Hum Neurosci 2017; 11:606. [PMID: 29311870 PMCID: PMC5732967 DOI: 10.3389/fnhum.2017.00606] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Accepted: 11/28/2017] [Indexed: 11/18/2022] Open
Abstract
The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information, distributed across several regions of the brain to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data that were acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex is likely for the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices was potentially involved with the translation of thought planning into motor programs. Therefore, the reconfiguration of the SMA network observed in this study is considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network constructs "the internal representation of music performance" by integrating multimodal information required for the performance.
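The core comparison, SMA connectivity during imagined performance versus rest, can be illustrated with a seed-based correlation sketch: Pearson correlations between an SMA time series and target regions, Fisher z-transformed and compared across subjects with a paired test. The synthetic time series, ROI labels, and the choice of a paired t-test below are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 20, 200
rois = ["M1", "IPL", "pSTG", "DLPFC"]   # illustrative target regions

def connectivity(seed, targets):
    """Fisher z-transformed Pearson correlation between a seed and each target time series."""
    r = np.array([np.corrcoef(seed, t)[0, 1] for t in targets])
    return np.arctanh(r)

# Synthetic example: target ROIs share more signal with the SMA seed during
# imagined performance than during rest (purely illustrative data).
z_task, z_rest = [], []
for _ in range(n_subjects):
    sma = rng.standard_normal(n_timepoints)
    task_targets = [0.5 * sma + rng.standard_normal(n_timepoints) for _ in rois]
    rest_targets = [0.1 * sma + rng.standard_normal(n_timepoints) for _ in rois]
    z_task.append(connectivity(sma, task_targets))
    z_rest.append(connectivity(sma, rest_targets))

t, p = stats.ttest_rel(np.array(z_task), np.array(z_rest), axis=0)
for roi, ti, pi in zip(rois, t, p):
    print(f"SMA-{roi}: t = {ti:.2f}, p = {pi:.1e}")
```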
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
| | - Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Department of Psychiatry, Juntendo Shizuoka Hospital, Shizuoka, Japan
| |
|