1
Whittaker HT, Khayyat L, Fortier-Lavallée J, Laverdière M, Bélanger C, Zatorre RJ, Albouy P. Information-based rhythmic transcranial magnetic stimulation to accelerate learning during auditory working memory training: a proof-of-concept study. Front Neurosci 2024; 18:1355565. PMID: 38638697; PMCID: PMC11024337; DOI: 10.3389/fnins.2024.1355565. Received 12/14/2023; accepted 03/14/2024.
Abstract
Introduction: Rhythmic transcranial magnetic stimulation (rhTMS) has been shown to enhance auditory working memory manipulation, specifically by boosting theta oscillatory power in the dorsal auditory pathway during task performance. It remains unclear whether these enhancements (i) persist beyond the period of stimulation, (ii) can accelerate learning, and (iii) accumulate over several days of stimulation. In the present study, we investigated the lasting behavioral and electrophysiological effects of applying rhTMS over the left intraparietal sulcus (IPS) throughout seven sessions of cognitive training on an auditory working memory task. Methods: A limited sample of 14 neurologically healthy participants took part in the training protocol with an auditory working memory task while being stimulated with either theta (5 Hz) rhTMS or sham TMS. Electroencephalography (EEG) was recorded before, throughout five training sessions, and after the end of training to assess the effects of rhTMS on behavioral performance and on oscillatory entrainment of the dorsal auditory network. Results: We show that this combined approach enhances theta oscillatory activity within the fronto-parietal network and improves auditory working memory performance. Compared to individuals who received sham stimulation, cognitive training can be accelerated when combined with optimized rhTMS, and task performance benefits can outlast the training period by ~3 days. Furthermore, theta oscillatory power within the recruited dorsal auditory network increases during training, and sustained EEG changes can be observed ~3 days following stimulation. Discussion: The present study, while underpowered for definitive statistical analyses, serves to improve our understanding of the causal dynamic interactions supporting auditory working memory.
Our results constitute an important proof of concept for the potential translational impact of non-invasive brain stimulation protocols and provide preliminary data for developing optimized rhTMS and training protocols that could be implemented in clinical populations.
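The theta-power effects described above are the kind of quantity one reads off an EEG power spectrum. As a point of reference only, here is a minimal sketch of estimating relative theta-band (4-8 Hz) power from a single channel with Welch's method on synthetic data; the sampling rate, band edges, and test signal are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 250                               # Hz, assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# synthetic "EEG": a 5 Hz theta component buried in broadband noise
eeg = 1.5 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # 0.5 Hz frequency resolution
theta = (f >= 4) & (f <= 8)
rel_theta = psd[theta].sum() / psd.sum()     # theta power as a fraction of total
print(f"relative theta power: {rel_theta:.2f}")
```

In a real analysis this would be computed per channel and per session, and compared across the rhTMS and sham groups.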
Affiliation(s)
- Heather T. Whittaker
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- Lina Khayyat
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- Megan Laverdière
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
- Carole Bélanger
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
- Robert J. Zatorre
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- Philippe Albouy
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
2
Trost W, Trevor C, Fernandez N, Steiner F, Frühholz S. Live music stimulates the affective brain and emotionally entrains listeners in real time. Proc Natl Acad Sci U S A 2024; 121:e2316306121. PMID: 38408255; DOI: 10.1073/pnas.2316306121. Received 09/21/2023; accepted 01/18/2024.
Abstract
Music is powerful in conveying emotions and triggering affective brain mechanisms. However, affective brain responses in previous studies have been rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, in contrast, can be dynamic and adaptive, and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a closed-loop neurofeedback setup for studying emotional responses to live music. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared to recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.
Affiliation(s)
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Caitlyn Trevor
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Natalia Fernandez
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Florence Steiner
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
- Department of Psychology, University of Oslo, Oslo 0373, Norway
3
Naghibi N, Jahangiri N, Khosrowabadi R, Eickhoff CR, Eickhoff SB, Coull JT, Tahmasian M. Embodying Time in the Brain: A Multi-Dimensional Neuroimaging Meta-Analysis of 95 Duration Processing Studies. Neuropsychol Rev 2024; 34:277-298. PMID: 36857010; PMCID: PMC10920454; DOI: 10.1007/s11065-023-09588-1. Received 03/08/2022; accepted 10/05/2022.
Abstract
Time is an omnipresent aspect of almost everything we experience internally or in the external world. The experience of time occurs through such an extensive set of contextual factors that, after decades of research, a unified understanding of its neural substrates is still elusive. In this study, following recent best-practice guidelines, we conducted a coordinate-based meta-analysis of 95 carefully selected neuroimaging papers on duration processing. We categorized the included papers into 14 classes of temporal features according to six categorical dimensions. Then, using the activation likelihood estimation (ALE) technique, we investigated the convergent activation patterns of each class with a cluster-level family-wise error correction at p < 0.05. The regions most consistently activated across the various timing contexts were the pre-SMA and bilateral insula, consistent with an embodied theory of timing in which abstract representations of duration are rooted in sensorimotor and interoceptive experience, respectively. Moreover, class-specific patterns of activation could be roughly divided according to whether participants were timing auditory sequential stimuli, which additionally activated the dorsal striatum and SMA-proper, or visual single-interval stimuli, which additionally activated the right middle frontal and inferior parietal cortices. We conclude that temporal cognition is so entangled with our everyday experience that timing stereotypically common combinations of stimulus characteristics reactivates the sensorimotor systems with which they were first experienced.
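The ALE statistic referenced above has a simple core: each reported activation focus is blurred into a probabilistic "modeled activation" map, and the maps are combined as a probabilistic union across foci. The toy 2D sketch below shows only that core (fixed kernel width, no sample-size weighting or permutation thresholding, unlike real ALE implementations such as GingerALE).

```python
import numpy as np

def modeled_activation(shape, foci, sigma):
    """Union of Gaussian 'modeled activation' maps, one per reported focus.
    This is the core of the ALE statistic; real ALE uses sample-size-dependent
    3D kernels and permutation-based cluster thresholding."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), -1)
    ale = np.zeros(shape)
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(-1)
        ma = np.exp(-d2 / (2 * sigma ** 2))  # peak probability of 1 at the focus
        ale = 1 - (1 - ale) * (1 - ma)       # probabilistic union across foci
    return ale

# two nearby foci converge to a high ALE value; far-away voxels stay near zero
ale = modeled_activation((20, 20), foci=[(9, 9), (11, 11)], sigma=2.0)
```

Convergence across experiments shows up as voxels where many such kernels overlap, which is then tested against a null distribution of randomly placed foci.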
Affiliation(s)
- Narges Naghibi
- Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Nadia Jahangiri
- Faculty of Psychology & Education, Allameh Tabataba'i University, Tehran, Iran
- Reza Khosrowabadi
- Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Claudia R Eickhoff
- Institute of Neuroscience and Medicine Research, Structural and functional organisation of the brain (INM-1), Jülich Research Center, Jülich, Germany
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, Düsseldorf, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
- Jennifer T Coull
- Laboratoire de Neurosciences Cognitives (UMR 7291), Aix-Marseille Université & CNRS, Marseille, France
- Masoud Tahmasian
- Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
4
Nourski KV, Steinschneider M, Rhone AE, Berger JI, Dappen ER, Kawasaki H, Howard MA III. Intracranial electrophysiology of spectrally degraded speech in the human cortex. Front Hum Neurosci 2024; 17:1334742. PMID: 38318272; PMCID: PMC10839784; DOI: 10.3389/fnhum.2023.1334742. Received 11/07/2023; accepted 12/28/2023.
Abstract
Introduction: Cochlear implants (CIs) are the treatment of choice for severe to profound hearing loss. Variability in CI outcomes persists despite advances in technology and is attributed in part to differences in cortical processing. Studying these differences in CI users is technically challenging. Spectrally degraded stimuli presented to normal-hearing individuals approximate the input to the central auditory system in CI users. This study used intracranial electroencephalography (iEEG) to investigate cortical processing of spectrally degraded speech. Methods: Participants were adult neurosurgical epilepsy patients. Stimuli were the utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands) or presented without vocoding, in a two-alternative forced-choice task. Cortical activity was recorded using depth and subdural iEEG electrodes. Electrode coverage included auditory core in posteromedial Heschl's gyrus (HGPM), superior temporal gyrus (STG), ventral and dorsal auditory-related areas, and prefrontal and sensorimotor cortex. Analysis focused on high gamma (70-150 Hz) power augmentation and alpha (8-14 Hz) suppression. Results: Task performance was at chance with 1-2 spectral bands and near ceiling for clear stimuli. Performance was variable with 3-4 bands, permitting identification of good and poor performers. There was no relationship between task performance and participants' demographic, audiometric, neuropsychological, or clinical profiles. Several response patterns were identified based on magnitude and differences between stimulus conditions. HGPM responded strongly to all stimuli. A preference for clear speech emerged within non-core auditory cortex. Good performers typically had strong responses to all stimuli along the dorsal stream, including posterior STG, supramarginal, and precentral gyri; a minority of sites in STG and supramarginal gyrus preferred vocoded stimuli. In poor performers, responses were typically restricted to clear speech. Alpha suppression was more pronounced in good performers. In contrast, poor performers exhibited greater involvement of the posterior middle temporal gyrus when listening to clear speech. Discussion: Responses to noise-vocoded speech provide insights into potential factors underlying CI outcome variability. The results emphasize differences in the balance of neural processing along the dorsal and ventral streams between good and poor performers, identify specific cortical regions that may have diagnostic and prognostic utility, and suggest potential targets for neuromodulation-based CI rehabilitation strategies.
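Noise vocoding, used above to spectrally degrade the stimuli, follows a standard recipe: filter the signal into a few bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. A generic sketch follows; the band edges, filter order, and test signal are assumptions, not the study's exact stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands, lo=100.0, hi=4000.0):
    """Spectrally degrade a signal with a noise vocoder: split into
    log-spaced bands, take each band's Hilbert envelope, and use it to
    modulate band-limited noise. Generic sketch, not the study's stimuli."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.normal(0, 1, x.size)
    out = np.zeros_like(x, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))  # band amplitude envelope
        out += env * sosfiltfilt(sos, noise)        # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)      # peak-normalize

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)          # stand-in for a speech token
degraded = noise_vocode(tone, fs, n_bands=4)
```

With fewer bands, less spectral detail survives, which is why 1-2 bands yield chance performance while 3-4 bands straddle the intelligibility threshold.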
Affiliation(s)
- Kirill V. Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Ariane E. Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Joel I. Berger
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Emily R. Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Matthew A. Howard III
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, United States
5
Rauschecker JP, Afsahi RK. Anatomy of the auditory cortex then and now. J Comp Neurol 2023; 531:1883-1892. PMID: 38010215; PMCID: PMC10872810; DOI: 10.1002/cne.25560. Received 01/31/2023; revised 08/29/2023; accepted 10/13/2023.
Abstract
Using neuroanatomical investigations in the macaque, Deepak Pandya and his colleagues have established the framework for auditory cortex organization, with subdivisions into core and belt areas. This has aided subsequent neurophysiological and imaging studies in monkeys and humans, and a nomenclature building on Pandya's work has also been adopted by the Human Connectome Project. The foundational work by Pandya and his colleagues is highlighted here in the context of subsequent and ongoing studies on the functional anatomy and physiology of auditory cortex in primates, including humans, and their relevance for understanding cognitive aspects of speech and language.
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
- Rosstin K Afsahi
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
6
Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. PMID: 37992062; PMCID: PMC10664868; DOI: 10.1371/journal.pone.0286512. Received 08/19/2022; accepted 05/17/2023.
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
Affiliation(s)
- Paula L. Plaza
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Laurent Renier
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Stephanie Rosemann
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Anne G. De Volder
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Josef P. Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
7
Wang B, Xu X, Niu Y, Wu C, Wu X, Chen J. EEG-based auditory attention decoding with audiovisual speech for hearing-impaired listeners. Cereb Cortex 2023; 33:10972-10983. PMID: 37750333; DOI: 10.1093/cercor/bhad325. Received 01/31/2023; revised 08/21/2023; accepted 08/22/2023.
Abstract
Auditory attention decoding (AAD) can be used to determine the attended speaker during an auditory selective attention task, but the auditory factors modulating AAD remain unclear for hearing-impaired (HI) listeners. In this study, scalp electroencephalogram (EEG) was recorded with an auditory selective attention paradigm in which HI listeners were instructed to attend to one of two simultaneous speech streams, with or without congruent visual input (articulation movements), and at a high or low target-to-masker ratio (TMR). Behavioral hearing tests (i.e., audiogram, speech reception threshold, temporal modulation transfer function) were used to assess listeners' individual auditory abilities. The results showed that both visual input and increasing TMR significantly enhanced the cortical tracking of the attended speech and AAD accuracy. Further analysis revealed that the audiovisual (AV) gain in attended-speech cortical tracking was significantly correlated with listeners' auditory amplitude modulation (AM) sensitivity, and the TMR gain in attended-speech cortical tracking was significantly correlated with listeners' hearing thresholds. Temporal response function analysis revealed that subjects with higher AM sensitivity demonstrated more AV gain over the right occipitotemporal and bilateral frontocentral scalp electrodes.
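The temporal response function analysis mentioned above is, in its usual encoding-model form, a ridge-regularized regression of the EEG on time-lagged copies of the stimulus envelope. A minimal sketch with synthetic data follows; the lag count, regularization strength, and ground-truth kernel are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def trf_ridge(stim, eeg, n_lags, lam=1.0):
    """Estimate a temporal response function mapping a stimulus envelope
    to EEG via ridge regression over time lags (the usual encoding-model
    form of cortical-tracking analyses; a sketch, not the authors' pipeline)."""
    # design matrix: column k holds the stimulus delayed by k samples
    X = np.stack([np.roll(stim, k) for k in range(n_lags)], axis=1)
    X[:n_lags] = 0  # zero out samples that wrapped around
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(1)
stim = rng.normal(size=2000)                 # stand-in for a speech envelope
true_trf = np.array([0.0, 1.0, 0.5, 0.25])   # assumed ground-truth kernel
eeg = np.convolve(stim, true_trf)[:2000] + 0.1 * rng.normal(size=2000)
w = trf_ridge(stim, eeg, n_lags=4, lam=1.0)
```

With enough data, `w` recovers the kernel that generated the EEG; comparing such kernels across conditions (AV vs. audio-only, high vs. low TMR) yields the gains reported above.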
Affiliation(s)
- Bo Wang
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Xiran Xu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Yadong Niu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- Chao Wu
- School of Nursing, Peking University, Beijing 100191, China
- Xihong Wu
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, College of Future Technology, Beijing 100871, China
- Jing Chen
- Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), School of Intelligence Science and Technology, Peking University, Beijing 100871, China
- National Biomedical Imaging Center, College of Future Technology, Beijing 100871, China
8
Hikishima K, Tsurugizawa T, Kasahara K, Hayashi R, Takagi R, Yoshinaka K, Nitta N. Functional ultrasound reveals effects of MRI acoustic noise on brain function. Neuroimage 2023; 281:120382. PMID: 37734475; DOI: 10.1016/j.neuroimage.2023.120382. Received 07/27/2023; revised 09/02/2023; accepted 09/18/2023.
Abstract
Loud acoustic noise from the scanner during functional magnetic resonance imaging (fMRI) can affect the functional connectivity (FC) observed in the resting state, but the exact effect of MRI acoustic noise on resting-state FC is not well understood. Functional ultrasound (fUS) is a neuroimaging method that visualizes brain activity based on relative cerebral blood volume (rCBV), a neurovascular coupling response similar to that measured by fMRI, but without the audible acoustic noise. In this study, we investigated the effects of different acoustic noise levels (silent, 80 dB, and 110 dB) on FC by measuring resting-state fUS (rsfUS) in awake mice in an environment similar to fMRI measurement. We then compared the results to those of resting-state fMRI (rsfMRI) conducted using an 11.7 Tesla scanner. RsfUS experiments revealed a significant reduction in FC between the retrosplenial dysgranular and auditory cortices (0.56 ± 0.07 at silence vs. 0.05 ± 0.05 at 110 dB, p = .01) and a significant increase in FC anticorrelation between the infralimbic and motor cortices (-0.21 ± 0.08 at silence vs. -0.47 ± 0.04 at 110 dB, p = .017) as acoustic noise increased from silence to 80 dB and 110 dB, with FC patterns becoming more consistent between rsfUS and rsfMRI under the louder noise conditions. Event-related auditory stimulation experiments using fUS showed strong positive rCBV changes (16.5% ± 2.9% at 110 dB) in the auditory cortex and negative rCBV changes (-6.7% ± 0.8% at 110 dB) in the motor cortex, both constituents of the brain network that was altered by the presence of acoustic noise in the resting-state experiments. Anticorrelation between constituent brain regions of the default mode network (such as the infralimbic cortex) and those of task-positive sensorimotor networks (such as the motor cortex) is known to be an important feature of brain network antagonism and has been studied as a biological marker of brain dysfunction and disease.
This study suggests that attention should be paid to the acoustic noise level when using rsfMRI to evaluate the anticorrelation between the default mode network and task-positive sensorimotor network.
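The FC and anticorrelation values quoted above (e.g., 0.56 at silence, -0.47 at 110 dB) are Pearson correlations between regional time series. A minimal sketch with synthetic signals follows; the region names and noise levels are invented for illustration.

```python
import numpy as np

def functional_connectivity(ts):
    """Resting-state FC as the pairwise Pearson correlation matrix of
    regional time series (rows = regions, columns = time points)."""
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=n)                  # common fluctuation
roi_a = shared + 0.5 * rng.normal(size=n)    # two positively coupled regions
roi_b = shared + 0.5 * rng.normal(size=n)
roi_c = -shared + 0.5 * rng.normal(size=n)   # anticorrelated with a and b
fc = functional_connectivity(np.stack([roi_a, roi_b, roi_c]))
```

Comparing such matrices across noise conditions (silent vs. 80 dB vs. 110 dB) is what reveals the reduction and anticorrelation effects reported above.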
Affiliation(s)
- Keigo Hikishima
- Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-2-1 Namiki, Tsukuba, Ibaraki 305-8564, Japan
- Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Okinawa 904-0495, Japan
- Tomokazu Tsurugizawa
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Higashi, Tsukuba 305-8568, Japan
- Kazumi Kasahara
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Higashi, Tsukuba 305-8568, Japan
- Ryusuke Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Higashi, Tsukuba 305-8568, Japan
- Ryo Takagi
- Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-2-1 Namiki, Tsukuba, Ibaraki 305-8564, Japan
- Kiyoshi Yoshinaka
- Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-2-1 Namiki, Tsukuba, Ibaraki 305-8564, Japan
- Naotaka Nitta
- Health and Medical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-2-1 Namiki, Tsukuba, Ibaraki 305-8564, Japan
9
Betancourt A, Pérez O, Gámez J, Mendoza G, Merchant H. Amodal population clock in the primate medial premotor system for rhythmic tapping. Cell Rep 2023; 42:113234. PMID: 37838944; DOI: 10.1016/j.celrep.2023.113234. Received 12/29/2022; revised 08/09/2023; accepted 09/24/2023.
Abstract
The neural substrate for beat extraction and response entrainment to rhythms is not fully understood. Here we analyze the activity of medial premotor neurons in monkeys performing isochronous tapping guided by brief flashing stimuli or auditory tones. The population dynamics shared the following properties across modalities: the circular dynamics of the neural trajectories form a regenerating loop for every produced interval; the trajectories converge to a similar region of state space at tapping times, resetting the clock; and the tempo of the synchronized tapping is encoded in the trajectories by a combination of amplitude modulation and temporal scaling. Notably, the modality induces displacement of the neural trajectories into auditory and visual subspaces without greatly altering the time-keeping mechanism. These results suggest that the interaction between the medial premotor cortex's amodal internal representation of pulse and a modality-specific external input generates a neural rhythmic clock whose dynamics govern the execution of rhythmic tapping across senses.
Affiliation(s)
- Abraham Betancourt
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Oswaldo Pérez
- Escuela Nacional de Estudios Superiores, Unidad Juriquilla, UNAM, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Jorge Gámez
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Germán Mendoza
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro 76230, México
10
Banks MI, Krause BM, Berger DG, Campbell DI, Boes AD, Bruss JE, Kovach CK, Kawasaki H, Steinschneider M, Nourski KV. Functional geometry of auditory cortical resting state networks derived from intracranial electrophysiology. PLoS Biol 2023; 21:e3002239. PMID: 37651504; PMCID: PMC10499207; DOI: 10.1371/journal.pbio.3002239. Received 11/28/2022; revised 09/13/2023; accepted 07/07/2023.
Abstract
Understanding central auditory processing critically depends on defining the underlying auditory cortical networks and their relationship to the rest of the brain. We addressed these questions using resting-state functional connectivity derived from human intracranial electroencephalography. Mapping recording sites into a low-dimensional space, where proximity represents functional similarity, revealed a hierarchical organization. At a fine scale, a group of auditory cortical regions excluded several higher-order auditory areas and segregated maximally from the prefrontal cortex. At a mesoscale, the proximity of limbic structures to the auditory cortex suggested a limbic stream that parallels the classically described ventral and dorsal auditory processing streams. The identities of global hubs in anterior temporal and cingulate cortex depended on frequency band, consistent with diverse roles in semantic and cognitive processing. At a macroscale, observed hemispheric asymmetries were not specific to speech and language networks. This approach can be applied to multivariate brain data with respect to development, behavior, and disorders.
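Mapping recording sites into a space where proximity reflects functional similarity can be done in several ways; one generic choice is classical multidimensional scaling on distances derived from the connectivity matrix. The toy sketch below uses an invented similarity matrix with two clusters of sites; the paper's actual embedding method may differ.

```python
import numpy as np

def embed_similarity(sim, n_dims=2):
    """Classical multidimensional scaling: convert a similarity matrix to
    squared distances, double-center, and take the top eigenvectors, so that
    nearby points in the embedding are functionally similar sites.
    One generic choice of embedding, not necessarily the paper's method."""
    d2 = (1 - sim) ** 2                        # squared distances from similarity
    n = sim.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ d2 @ j                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# two clusters of sites: high similarity within, low across
sim = np.array([[1.0, 0.9, 0.1, 0.1],
                [0.9, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.9],
                [0.1, 0.1, 0.9, 1.0]])
coords = embed_similarity(sim)
```

In the embedding, sites 0 and 1 land close together and far from sites 2 and 3, which is the property used above to read off fine-scale groupings, mesoscale streams, and global hubs.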
Affiliation(s)
- Matthew I. Banks: Department of Anesthesiology and Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, United States of America
- Bryan M. Krause: Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- D. Graham Berger: Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Declan I. Campbell: Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Aaron D. Boes: Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Joel E. Bruss: Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher K. Kovach: Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Hiroto Kawasaki: Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Mitchell Steinschneider: Department of Neurology and Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Kirill V. Nourski: Department of Neurosurgery and Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa, United States of America
11. Damera SR, Chang L, Nikolov PP, Mattei JA, Banerjee S, Glezer LS, Cox PH, Jiang X, Rauschecker JP, Riesenhuber M. Evidence for a Spoken Word Lexicon in the Auditory Ventral Stream. Neurobiol Lang 2023; 4:420-434. PMID: 37588129; PMCID: PMC10426387; DOI: 10.1162/nol_a_00108.
Abstract
The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
Affiliation(s)
- Srikanth R. Damera: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Lillian Chang: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Plamen P. Nikolov: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- James A. Mattei: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Suneel Banerjee: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Laurie S. Glezer: Department of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Patrick H. Cox: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Xiong Jiang: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Josef P. Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
12. Price BH, Jensen CM, Khoudary AA, Gavornik JP. Expectation violations produce error signals in mouse V1. Cereb Cortex 2023; 33:8803-8820. PMID: 37183176; PMCID: PMC10321125; DOI: 10.1093/cercor/bhad163.
Abstract
Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation for this, namely that plasticity shapes cortical circuits to encode spatiotemporal predictions and that subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of an experimental paradigm called "sequence learning." We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. Neural responses to expected stimuli were suppressed in a late window (100-150 ms) after stimulus onset following training, whereas responses to novel stimuli were not. Substituting a novel stimulus for a familiar one led to increases in firing that persisted for at least 300 ms. Omitting predictable stimuli in trained animals also led to increased firing at the expected time of stimulus onset. Finally, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
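The final result in this abstract, decoding time within the sequence from spiking data, can be sketched with a simple cross-validated nearest-centroid decoder. The paper's MbTDR analysis is more elaborate; everything below, including the synthetic firing-rate data and its dimensions, is a hypothetical illustration of the decoding idea only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bins, n_neurons = 40, 8, 30

# Synthetic data: each time bin within the sequence evokes a distinct
# population firing pattern, corrupted by trial-to-trial noise
templates = rng.standard_normal((n_bins, n_neurons))
X = templates[None, :, :] + 0.5 * rng.standard_normal((n_trials, n_bins, n_neurons))

# Split trials into train/test halves and compute one centroid per time bin
train, test = X[:20], X[20:]
centroids = train.mean(axis=0)           # (bins, neurons) mean pattern per bin

# Decode each test-trial bin as the nearest training centroid
dists = ((test[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
decoded = dists.argmin(-1)               # (trials, bins) predicted bin index
accuracy = (decoded == np.arange(n_bins)).mean()
print(accuracy)
```

If population activity carries reliable temporal information, as the abstract reports for trained V1, decoding accuracy will sit well above the 1/`n_bins` chance level.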
Affiliation(s)
- Byron H Price: Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA; Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
- Cambria M Jensen: Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Anthony A Khoudary: Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA
- Jeffrey P Gavornik: Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA 02215, USA; Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA
13. Choi D, Yeung HH, Werker JF. Sensorimotor foundations of speech perception in infancy. Trends Cogn Sci 2023; S1364-6613(23)00124-9. PMID: 37302917; DOI: 10.1016/j.tics.2023.05.007.
Abstract
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners' ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.
Affiliation(s)
- Dawoon Choi: Department of Psychology, Yale University, New Haven, CT, USA
- H Henny Yeung: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Janet F Werker: Department of Psychology, University of British Columbia, Vancouver, BC, Canada
14. Yu R, Han B, Wu X, Wei G, Zhang J, Ding M, Wen X. Dual-functional network regulation underlies the central executive system in working memory. Neuroscience 2023; S0306-4522(23)00245-2. PMID: 37286158; DOI: 10.1016/j.neuroscience.2023.05.025.
Abstract
The frontoparietal network (FPN) and cingulo-opercular network (CON) may exert top-down regulation corresponding to the central executive system (CES) in working memory (WM); however, contributions and regulatory mechanisms remain unclear. We examined network interaction mechanisms underpinning the CES by depicting CON- and FPN-mediated whole-brain information flow in WM. We used datasets from participants performing verbal and spatial working memory tasks, divided into encoding, maintenance, and probe stages. We used general linear models to obtain task-activated CON and FPN nodes to define regions of interest (ROI); an online meta-analysis defined alternative ROIs for validation. We calculated whole-brain functional connectivity (FC) maps seeded by CON and FPN nodes at each stage using beta sequence analysis. We used Granger causality analysis to obtain the connectivity maps and assess task-level information flow patterns. For verbal working memory, the CON functionally connected positively and negatively to task-dependent and task-independent networks, respectively, at all stages. FPN FC patterns were similar only in the encoding and maintenance stages. The CON elicited stronger task-level outputs. Main effects were: stable CON→FPN, CON→DMN, CON→visual areas, FPN→visual areas, and phonological areas→FPN. The CON and FPN both up-regulated task-dependent and down-regulated task-independent networks during encoding and probing. Task-level output was slightly stronger for the CON. CON→FPN, CON→DMN, visual areas→CON, and visual areas→FPN showed consistent effects. The CON and FPN might together underlie the CES's neural basis and achieve top-down regulation through information interaction with other large-scale functional networks, and the CON may be a higher-level regulatory core in WM.
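The Granger-causal logic used above to assess information flow, asking whether one signal's past improves prediction of another signal beyond that signal's own past, can be sketched with least-squares autoregression. This is a two-signal toy example on synthetic data, not the authors' network-level analysis; the coupling coefficients and model order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 2000, 2  # number of samples, autoregressive model order

# Synthetic signals: x drives y with a one-sample lag
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.standard_normal()

def ar_residual_var(target, predictors, p):
    """Least-squares AR(p) fit of target on predictors' pasts; residual variance."""
    rows = []
    for t in range(p, len(target)):
        rows.append(np.concatenate([s[t - p:t] for s in predictors]))
    A, b = np.array(rows), target[p:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

# Granger causality x -> y: does adding x's past improve prediction of y?
var_reduced = ar_residual_var(y, [y], p)      # y's own past only
var_full = ar_residual_var(y, [y, x], p)      # y's past plus x's past
gc_x_to_y = np.log(var_reduced / var_full)    # > 0 means x Granger-causes y
print(gc_x_to_y > 0)
```

Applied pairwise between seed time courses, this kind of comparison is what yields directed maps such as CON→FPN or CON→DMN in the abstract above.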
Affiliation(s)
- Renshu Yu: Department of Psychology and Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
- Bukui Han: Department of Psychology and Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
- Xia Wu: School of Artificial Intelligence, Beijing Normal University, Beijing 100093, China
- Guodong Wei: Department of Psychology and Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
- Junhui Zhang: Department of Psychology and Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China
- Mingzhou Ding: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiaotong Wen: Department of Psychology and Laboratory of the Department of Psychology, Renmin University of China, Beijing 100872, China; Interdisciplinary Platform of Philosophy and Cognitive Science, Renmin University of China, Beijing 100872, China
15. Rolls ET, Rauschecker JP, Deco G, Huang CC, Feng J. Auditory cortical connectivity in humans. Cereb Cortex 2023; 33:6207-6227. PMID: 36573464; PMCID: PMC10422925; DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA; Institute for Advanced Study, Technical University, Munich, Germany
- Gustavo Deco: Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
16. Kral A. Hearing and Cognition in Childhood. Laryngorhinootologie 2023; 102:S3-S11. PMID: 37130527; PMCID: PMC10184669; DOI: 10.1055/a-1973-5087.
Abstract
The human brain shows extensive development of the cerebral cortex after birth. This development is profoundly altered by the absence of auditory input: the maturation of cortical synapses in the auditory system is delayed and their degradation is increased. Recent work shows that the synapses responsible for corticocortical processing of stimuli, and for their embedding into multisensory interactions and cognition, are particularly affected. Since the brain is heavily reciprocally interconnected, congenital deafness manifests not only as deficits in auditory processing but also in cognitive (non-auditory) functions, which are affected differently across individuals. This calls for individualized approaches to the therapy of deafness in childhood.
Affiliation(s)
- Andrej Kral: Institut für AudioNeuroTechnologie (VIANNA) & Abt. für experimentelle Otologie, Exzellenzcluster Hearing4All, Medizinische Hochschule Hannover, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, Australia
17. Zhang L, Wang X, Alain C, Du Y. Successful aging of musicians: Preservation of sensorimotor regions aids audiovisual speech-in-noise perception. Sci Adv 2023; 9:eadg7056. PMID: 37126550; PMCID: PMC10132752; DOI: 10.1126/sciadv.adg7056.
Abstract
Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed comparable performance to young nonmusicians. Notably, older musicians retained similar neural specificity of speech representations in sensorimotor areas to young nonmusicians, while older nonmusicians showed degraded neural representations. In the same region, older musicians showed higher neural alignment to young nonmusicians than older nonmusicians, which was associated with their training intensity. In older nonmusicians, the degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions and greater deactivation in the angular gyrus than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that musicianship-related benefit in audiovisual speech-in-noise processing is rooted in preserving youth-like representations in sensorimotor regions.
Affiliation(s)
- Lei Zhang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiuyi Wang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M8V 2S4, Canada
- Yi Du: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China; Chinese Institute for Brain Research, Beijing 102206, China
18. Danieli K, Guyon A, Bethus I. Episodic Memory formation: A review of complex Hippocampus input pathways. Prog Neuropsychopharmacol Biol Psychiatry 2023; 126:110757. PMID: 37086812; DOI: 10.1016/j.pnpbp.2023.110757.
Abstract
Memories of everyday experiences involve the encoding of a rich and dynamic representation of present objects and their contextual features. Traditionally, the resulting mnemonic trace is referred to as Episodic Memory, i.e. the "what", "where" and "when" of a lived episode. The journey for such memory trace encoding begins with the perceptual data of an experienced episode handled in sensory brain regions. The information is then streamed to cortical areas located in the ventral medial temporal lobe, which produces multi-modal representations concerning either the objects (in the perirhinal cortex) or the spatial and contextual features (in the parahippocampal region) of the episode. Then, this high-level data is gated through the entorhinal cortex and forwarded to the hippocampal formation, where all the pieces get bound together. Eventually, the resulting encoded neural pattern is relayed back to the neocortex for stable consolidation. This review will detail these different stages and provide a systematic overview of the major cortical streams toward the hippocampus relevant for Episodic Memory encoding.
Affiliation(s)
- Alice Guyon: Université Côte d'Azur, Neuromod Institute, France; Université Côte d'Azur, CNRS UMR 7275, IPMC, Valbonne, France
- Ingrid Bethus: Université Côte d'Azur, Neuromod Institute, France; Université Côte d'Azur, CNRS UMR 7275, IPMC, Valbonne, France
19. Popov T, Gips B, Weisz N, Jensen O. Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention. Cereb Cortex 2023; 33:3478-3489. PMID: 35972419; PMCID: PMC10068281; DOI: 10.1093/cercor/bhac285.
Abstract
Spatially selective modulation of alpha power (8-14 Hz) is a robust finding in electrophysiological studies of visual attention, and has been recently generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.
Affiliation(s)
- Tzvetan Popov: Methods of Plasticity Research, Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Psychology, University of Konstanz, Konstanz, Germany
- Bart Gips: NATO Science and Technology Organization Centre for Maritime Research and Experimentation (CMRE), La Spezia 19126, Italy
- Nathan Weisz: Centre for Cognitive Neuroscience and Department of Psychology, University of Salzburg, Salzburg, Austria
- Ole Jensen: School of Psychology, University of Birmingham, Birmingham, UK
20. Stein J. Theories about Developmental Dyslexia. Brain Sci 2023; 13:208. PMID: 36831750; PMCID: PMC9954267; DOI: 10.3390/brainsci13020208.
Abstract
Despite proving its usefulness for over a century, the concept of developmental dyslexia (DD) is currently in severe disarray because of the recent introduction of the phonological theory of its causation. Since mastering the phonological principle is essential for all reading, failure to do so cannot be used to distinguish DD from the many other causes of such failure. To overcome this problem, many new psychological, signal detection, and neurological theories have been introduced recently. All these new theories converge on the idea that DD is fundamentally caused by impaired signalling of the timing of the visual and auditory cues that are essential for reading. These are provided by large 'magnocellular' neurones which respond rapidly to sensory transients. The evidence for this conclusion is overwhelming. Especially convincing are intervention studies that have shown that improving magnocellular function improves dyslexic children's reading, together with cohort studies that have demonstrated that the magnocellular timing deficit is present in infants who later become dyslexic, long before they begin learning to read. The converse of the magnocellular deficit in dyslexics may be that they gain parvocellular abundance. This may often impart the exceptional 'holistic' talents that have been ascribed to them and that society needs to nurture.
Affiliation(s)
- John Stein: Department of Physiology, Anatomy & Genetics, Oxford University, Oxford OX1 3PT, UK
21. Moberly AC, Afreen H, Schneider KJ, Tamati TN. Preoperative Reading Efficiency as a Predictor of Adult Cochlear Implant Outcomes. Otol Neurotol 2022; 43:e1100-e1106. PMID: 36351224; PMCID: PMC9694592; DOI: 10.1097/mao.0000000000003722.
Abstract
Hypotheses: 1) Scores of reading efficiency (the Test of Word Reading Efficiency, Second Edition) obtained in adults before cochlear implant surgery will predict speech recognition outcomes 6 months after surgery; and 2) cochlear implantation will lead to improvements in language processing, as measured through reading efficiency, from preimplantation to postimplantation.
Background: Adult cochlear implant (CI) users display remarkable variability in speech recognition outcomes. "Top-down" processing, the use of cognitive resources to make sense of degraded speech, contributes to speech recognition abilities in CI users. One area that has received little attention is the efficiency of lexical and phonological processing. In this study, a visual measure of word and nonword reading efficiency, relying on lexical and phonological processing respectively, was investigated for its ability to predict CI speech recognition outcomes, as well as to identify any improvements after implantation.
Methods: Twenty-four postlingually deaf adult CI candidates were tested on the Test of Word Reading Efficiency, Second Edition preoperatively and again 6 months post-CI. Speech recognition was also assessed 6 months post-CI across a battery of word and sentence recognition tests.
Results: Preoperative nonword reading scores were moderately predictive of sentence recognition outcomes, but real-word reading scores were not; word recognition scores were predicted by neither. No improvement in either word or nonword reading efficiency was demonstrated 6 months post-CI.
Conclusion: Phonological processing, as measured by Test of Word Reading Efficiency, Second Edition nonword reading, predicts to a moderate degree 6-month sentence recognition outcomes in adult CI users. Reading efficiency did not improve after implantation, although this could be because of the relatively short duration of CI use.
Affiliation(s)
- Aaron C Moberly: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Hajera Afreen: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Kara J Schneider: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
22. Stein J. The visual basis of reading and reading difficulties. Front Neurosci 2022; 16:1004027. PMID: 36507333; PMCID: PMC9728103; DOI: 10.3389/fnins.2022.1004027.
Abstract
Most of our knowledge about the neural networks mediating reading has derived from studies of developmental dyslexia (DD). For much of the 20th century, DD was diagnosed on the basis of finding a discrepancy between children's unexpectedly low reading and spelling scores and their normal or high oral and non-verbal reasoning ability. This discrepancy criterion has now been replaced by the claim that the main feature of dyslexia is a phonological deficit, and it is now argued that we should test for this to identify dyslexia. However, grasping the phonological principle is essential for all learning to read, so every poor reader will show a phonological deficit. The phonological theory does not explain why dyslexic people, in particular, fail; this phonological criterion therefore makes it impossible to distinguish DD from any of the many other causes of reading failure. Currently, therefore, there is no agreement about precisely how we should identify it. Yet, if we understood the specific neural pathways that underlie failure to acquire phonological skills specifically in people with dyslexia, we should be able to develop reliable means of identifying it. An important, though not the only, cause in people with dyslexia is impaired development of the brain's rapid visual temporal processing systems; these are required for accurately sequencing the order of the letters in a word. Such temporal, "transient," processing is carried out primarily by a distinct set of "magnocellular" (M-) neurones in the visual system, and the development of these has been found to be impaired in many people with dyslexia. Likewise, auditory sequencing of the sounds in a word is mediated by the auditory temporal processing system, whose development is impaired in many dyslexics. Together these two deficits can therefore explain their problems with acquiring the phonological principle.
Assessing poor readers' visual and auditory temporal processing skills should enable dyslexia to be reliably distinguished from other causes of reading failure, and will suggest principled ways of helping these children learn to read, such as sensory training, yellow or blue filters, or omega-3 fatty acid supplements. This will enable us to diagnose DD with confidence, and thus to develop educational plans targeted to exploit each individual child's strengths and compensate for his weaknesses.
23. Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception. Curr Biol 2022; 32:3971-3986.e4. PMID: 35973430; DOI: 10.1016/j.cub.2022.07.047.
Abstract
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers' locations and voices modulate the neural representations of attended and unattended speech, are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice only appeared in the auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which could be further tuned by top-down attention to the location and voice of the talker.
|
24
|
Wuang YP, Wang CC, Tsai HY, Wan YT. The neural substrates of visual organization in children and adolescents: An fMRI study. APPLIED NEUROPSYCHOLOGY. CHILD 2022; 11:307-319. [PMID: 32898443 DOI: 10.1080/21622965.2020.1815536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Deficient visual organization ability not only indicates possible brain dysfunction but also affects an individual's daily activities. This study aimed to use functional magnetic resonance imaging (fMRI) to investigate the neural network contributing to visual organization abilities in children and adolescents. A two-choice version of the Hooper Visual Organization Test (T-HVOT) was adapted as the fMRI task for the present study. The effects of age and gender on overall visual perceptual functions and related neural foundations were also analyzed. Seventy children and adolescents were administered the Test of Visual Perceptual Skills-Third Edition, and 41 completed the fMRI scans. The whole-brain fMRI mapping results showed cortical activation of multiple brain areas related to visual organization. The greatest cortical activity was seen in the middle occipital gyrus, middle temporal gyrus, middle frontal gyrus and inferior frontal gyrus, and the two age groups showed significant differences in cortical activation patterns as well. Gender had no significant effect on visual perceptual functions or on the related cortical activation patterns. Overall visual perception functions improved with age, and the different cortical activation patterns indicated that the two groups adopted different strategies while performing visual organization tasks. The sensitivity and spatial resolution of fMRI allowed us to make specific conclusions about the cortical regions involved in visual organization function and to provide a reference for objectively judging rehabilitative outcomes.
Affiliation(s)
- Yee-Pay Wuang
- Department of Occupational Therapy, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chih-Chung Wang
- Department of Rehabilitation Medicine, Kaohsiung Medical University Chung-Ho Memorial Hospital, Kaohsiung, Taiwan
- Hsien-Yu Tsai
- Department of Occupational Therapy, Kaohsiung Medical University, Kaohsiung, Taiwan
- Yi-Ting Wan
- Department of Occupational Therapy, Kaohsiung Medical University, Kaohsiung, Taiwan
|
25
|
Johansson C, Folgerø PO. Is Reduced Visual Processing the Price of Language? Brain Sci 2022; 12:brainsci12060771. [PMID: 35741656 PMCID: PMC9221435 DOI: 10.3390/brainsci12060771] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/06/2022] [Accepted: 06/08/2022] [Indexed: 02/01/2023] Open
Abstract
We suggest a later timeline for full language capabilities in Homo sapiens, placing the emergence of language over 200,000 years after the emergence of our species. The late Paleolithic period saw several significant changes. Homo sapiens became more gracile and gradually lost significant brain volume. Detailed realistic cave paintings disappeared completely, and iconic/symbolic ones appeared at other sites. This may indicate a shift in perceptual abilities, away from an accurate perception of the present. Language in modern humans interacts with vision; one example is the McGurk effect. Studies show that artistic abilities may improve when language-related brain areas are damaged or temporarily knocked out. Language relies on many pre-existing non-linguistic functions. We suggest that an overwhelming flow of perceptual information, vision in particular, was an obstacle to language, as is sometimes implied in autism with relative language impairment. We systematically review the recent research literature investigating the relationship between language and perception. We see homologues of language-relevant brain functions predating language. Recent findings show brain lateralization for communicative gestures in other primates without language, supporting the idea that a language-ready brain may be overwhelmed by raw perception, thus blocking overt language from evolving. We find support in converging evidence for a change in neural organization away from raw perception, thus pushing the emergence of language closer in time. A recent origin of language makes it possible to investigate the genetic origins of language.
|
26
|
Rolls ET, Deco G, Huang CC, Feng J. The human language effective connectome. Neuroimage 2022; 258:119352. [PMID: 35659999 DOI: 10.1016/j.neuroimage.2022.119352] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 05/31/2022] [Indexed: 01/07/2023] Open
Abstract
To advance understanding of brain networks involved in language, the effective connectivity between 26 cortical regions implicated in language by a community analysis and 360 cortical regions was measured in 171 humans from the Human Connectome Project, and complemented with functional connectivity and diffusion tractography, all using the HCP multimodal parcellation atlas. A (semantic) network (Group 1) involving inferior cortical regions of the superior temporal sulcus cortex (STS) with the adjacent inferior temporal visual cortex TE1a and temporal pole TG, and the connected parietal PGi region, has effective connectivity with inferior temporal visual cortex (TE) regions; with parietal PFm, which also has visual connectivity; with posterior cingulate cortex memory-related regions; with the frontal pole, orbitofrontal cortex, and medial prefrontal cortex; with the dorsolateral prefrontal cortex; and with areas 44 and 45 as output regions. It is proposed that this system can build, in its temporal lobe (STS and TG) and parietal (PGi and PGs) parts, semantic representations of objects incorporating especially their visual and reward properties. Another (semantic) network (Group 3) involving superior regions of the superior temporal sulcus cortex and more superior temporal lobe regions including STGa, auditory A5, TPOJ1, the STV and the Peri-Sylvian Language area (PSL) has effective connectivity with auditory areas (A1, A4, A5, Pbelt); with relatively early visual areas involved in motion, e.g., MT and MST, and faces/words (FFC); with somatosensory regions (frontal opercular FOP, insula and parietal PF); with other TPOJ regions; and with the inferior frontal gyrus regions (IFJa and IFSp).
It is proposed that this system builds semantic representations specialising in auditory and related facial motion information useful in theory of mind and somatosensory / body image information, with outputs directed not only to regions 44 and 45, but also to premotor 55b and midcingulate premotor cortex. Both semantic networks (Groups 1 and 3) have access to the hippocampal episodic memory system via parahippocampal TF. A third largely frontal network (Group 2) (44, 45, 47l; 55b; the Superior Frontal Language region SFL; and including temporal pole TGv) receives effective connectivity from the two semantic systems, and is implicated in syntax and speech output.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Department of Information and Communication Technologies, Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
|
27
|
Sereno MI, Sood MR, Huang RS. Topological Maps and Brain Computations From Low to High. Front Syst Neurosci 2022; 16:787737. [PMID: 35747394 PMCID: PMC9210993 DOI: 10.3389/fnsys.2022.787737] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 03/29/2022] [Indexed: 01/02/2023] Open
Abstract
We first briefly summarize data from microelectrode studies on visual maps in non-human primates and other mammals, and characterize differences among the features of the approximately topological maps in the three main sensory modalities. We then explore the almost 50% of human neocortex that contains straightforward topological visual, auditory, and somatomotor maps by presenting a new parcellation as well as a movie atlas of cortical area maps on the FreeSurfer average surface, fsaverage. Third, we review data on moveable map phenomena as well as a recent study showing that cortical activity during sensorimotor actions may involve spatially locally coherent traveling wave and bump activity. Finally, by analogy with remapping phenomena and sensorimotor activity, we speculate briefly on the testable possibility that coherent localized spatial activity patterns might be able to ‘escape’ from topologically mapped cortex during ‘serial assembly of content’ operations such as scene and language comprehension, to form composite ‘molecular’ patterns that can move across some cortical areas and possibly return to topologically mapped cortex to generate motor output there.
Affiliation(s)
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Correspondence: Martin I. Sereno
- Mariam Reeny Sood
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Macau, Macao SAR, China
|
28
|
Zhang L, Du Y. Lip movements enhance speech representations and effective connectivity in auditory dorsal stream. Neuroimage 2022; 257:119311. [PMID: 35589000 DOI: 10.1016/j.neuroimage.2022.119311] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 05/09/2022] [Accepted: 05/11/2022] [Indexed: 11/25/2022] Open
Abstract
Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation to aid phoneme identification. Meanwhile, lip movements enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and supramarginal gyrus (SMG). Moreover, neural representations of place of articulation and voicing features were promoted differentially by lip movements in these regions, with voicing enhanced in Broca's area while place of articulation was better encoded in left ventral premotor cortex and SMG. Next, dynamic causal modeling (DCM) analysis showed that such local changes were accompanied by strengthened effective connectivity along the dorsal stream. Moreover, the neurite orientation dispersion of the left arcuate fasciculus, the structural backbone of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight into speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China; Chinese Institute for Brain Research, Beijing 102206, China
|
29
|
Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022; 17:e0263509. [PMID: 35421095 PMCID: PMC9009652 DOI: 10.1371/journal.pone.0263509] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 01/21/2022] [Indexed: 12/02/2022] Open
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
|
30
|
Rus-Oswald OG, Benner J, Reinhardt J, Bürki C, Christiner M, Hofmann E, Schneider P, Stippich C, Kressig RW, Blatow M. Musicianship-Related Structural and Functional Cortical Features Are Preserved in Elderly Musicians. Front Aging Neurosci 2022; 14:807971. [PMID: 35401149 PMCID: PMC8990841 DOI: 10.3389/fnagi.2022.807971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 01/25/2022] [Indexed: 11/13/2022] Open
Abstract
Background Professional musicians are a model population for exploring basic auditory function, sensorimotor and multisensory integration, and training-induced neuroplasticity. The brain of musicians exhibits distinct structural and functional cortical features; however, little is known about how these features evolve during aging. This multiparametric study aimed to examine the functional and structural neural correlates of lifelong musical practice in elderly professional musicians. Methods Sixteen young musicians, 16 elderly musicians (age >70), and 15 elderly non-musicians participated in the study. We assessed gray matter metrics at the whole-brain and region of interest (ROI) levels using high-resolution magnetic resonance imaging (MRI) with the Freesurfer automatic segmentation and reconstruction pipeline. We used BrainVoyager semiautomated segmentation to explore individual auditory cortex morphotypes. Furthermore, we evaluated functional blood oxygenation level-dependent (BOLD) activations in auditory and non-auditory regions by functional MRI (fMRI) with an attentive tone-listening task. Finally, we performed discriminant function analyses based on structural and functional ROIs. Results A general reduction of gray matter metrics distinguished the elderly from the young subjects at the whole-brain level, corresponding to widespread natural brain atrophy. Age- and musicianship-dependent structural correlations revealed group-specific differences in several clusters including superior, middle, and inferior frontal as well as perirolandic areas. In addition, the elderly musicians exhibited increased gyrification of the auditory cortex, as did the young musicians. During fMRI, the elderly non-musicians activated predominantly auditory regions, whereas the elderly musicians co-activated a much broader network of auditory association areas, primary and secondary motor areas, and prefrontal and parietal regions, similar to, albeit more weakly than, the young musicians.
Also, group-specific age- and musicianship-dependent functional correlations were observed in the frontal and parietal regions. Moreover, discriminant function analysis could separate the groups with high accuracy based on a set of specific structural and functional, mainly temporal and occipital, ROIs. Conclusion Despite naturally occurring senescence, the elderly musicians maintained musicianship-specific structural and functional cortical features. The identified structural and functional brain regions discriminating elderly musicians from non-musicians might be of relevance for the aging musician's brain. To what extent lifelong musical activity may have a neuroprotective impact needs to be addressed further in larger longitudinal studies.
Affiliation(s)
- Oana G. Rus-Oswald
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zürich, Switzerland
- University Department of Geriatric Medicine FELIX PLATTER, Basel, Switzerland
- Correspondence: Oana G. Rus-Oswald
- Jan Benner
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Julia Reinhardt
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zürich, Switzerland
- Division of Diagnostic and Interventional Neuroradiology, Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Cardiology and Cardiovascular Research Institute Basel, University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Orthopedic Surgery and Traumatology, University Hospital of Basel, University of Basel, Basel, Switzerland
- Céline Bürki
- University Department of Geriatric Medicine FELIX PLATTER, Basel, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Elke Hofmann
- Academy of Music, University of Applied Sciences and Arts Northwestern Switzerland (FHNW), Basel, Switzerland
- Peter Schneider
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Christoph Stippich
- Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Reto W. Kressig
- University Department of Geriatric Medicine FELIX PLATTER, Basel, Switzerland
- Maria Blatow
- Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
|
31
|
Engel A, Hoefle S, Monteiro MC, Moll J, Keller PE. Neural Correlates of Listening to Varying Synchrony Between Beats in Samba Percussion and Relations to Feeling the Groove. Front Neurosci 2022; 16:779964. [PMID: 35281511 PMCID: PMC8915847 DOI: 10.3389/fnins.2022.779964] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 01/20/2022] [Indexed: 12/02/2022] Open
Abstract
Listening to samba percussion often elicits feelings of pleasure and the desire to move with the beat - an experience sometimes referred to as "feeling the groove" - as well as social connectedness. Here we investigated the effects of performance timing in a Brazilian samba percussion ensemble on listeners' experienced pleasantness and the desire to move/dance in a behavioral experiment, as well as on neural processing as assessed via functional magnetic resonance imaging (fMRI). Participants listened to different excerpts of samba percussion produced by multiple instruments that either were "in sync", with no additional asynchrony between instrumental parts other than what is usual in naturalistic recordings, or were presented "out of sync" by delaying the snare drums (by 28, 55, or 83 ms). Results of the behavioral experiment showed increasing pleasantness and desire to move/dance with increasing synchrony between instruments. Analysis of hemodynamic responses revealed stronger bilateral brain activity in the supplementary motor area, the left premotor area, and the left middle frontal gyrus with increasing synchrony between instruments. Listening to "in sync" percussion thus strengthens audio-motor interactions by recruiting motor-related brain areas involved in rhythm processing and beat perception to a higher degree. Such motor-related activity may form the basis for "feeling the groove" and the associated desire to move to music. Furthermore, in an exploratory analysis we found that participants who reported stronger emotional responses to samba percussion in everyday life showed higher activity in the subgenual cingulate cortex, an area involved in prosocial emotions, social group identification and social bonding.
Affiliation(s)
- Annerose Engel
- Cognitive and Behavioral Neuroscience Unit, D’Or Institute for Research and Education, Rio de Janeiro, Brazil
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- Sebastian Hoefle
- Cognitive and Behavioral Neuroscience Unit, D’Or Institute for Research and Education, Rio de Janeiro, Brazil
- Marina Carneiro Monteiro
- Cognitive and Behavioral Neuroscience Unit, D’Or Institute for Research and Education, Rio de Janeiro, Brazil
- Jorge Moll
- Cognitive and Behavioral Neuroscience Unit, D’Or Institute for Research and Education, Rio de Janeiro, Brazil
- Peter E. Keller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus, Denmark
|
32
|
Whitehead JC, Armony JL. Intra-individual Reliability of Voice- and Music-elicited Responses and their Modulation by Expertise. Neuroscience 2022; 487:184-197. [PMID: 35182696 DOI: 10.1016/j.neuroscience.2022.02.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 01/19/2022] [Accepted: 02/10/2022] [Indexed: 10/19/2022]
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of the intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that the music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
|
33
|
Giampiccolo D, Duffau H. Controversy over the temporal cortical terminations of the left arcuate fasciculus: a reappraisal. Brain 2022; 145:1242-1256. [PMID: 35142842 DOI: 10.1093/brain/awac057] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 12/19/2021] [Accepted: 01/20/2022] [Indexed: 11/12/2022] Open
Abstract
The arcuate fasciculus has been considered a major dorsal fronto-temporal white matter pathway linking frontal language production regions with auditory perception in the superior temporal gyrus, the so-called Wernicke's area. In line with this tradition, both historical and contemporary models of language function have assigned primacy to superior temporal projections of the arcuate fasciculus. However, classical anatomical descriptions and emerging behavioural data are at odds with this assumption. On one hand, fronto-temporal projections to Wernicke's area may not be unique to the arcuate fasciculus. On the other hand, dorsal stream language deficits have been reported also for damage to middle, inferior and basal temporal gyri which may be linked to arcuate disconnection. These findings point to a reappraisal of arcuate projections in the temporal lobe. Here, we review anatomical and functional evidence regarding the temporal cortical terminations of the left arcuate fasciculus by incorporating dissection and tractography findings with stimulation data using cortico-cortical evoked potentials and direct electrical stimulation mapping in awake patients. Firstly, we discuss the fibers of the arcuate fasciculus projecting to the superior temporal gyrus and the functional rostro-caudal gradient in this region where both phonological encoding and auditory-motor transformation may be performed. Caudal regions within the temporoparietal junction may be involved in articulation and associated with temporoparietal projections of the third branch of the superior longitudinal fasciculus, while more rostral regions may support encoding of acoustic phonetic features, supported by arcuate fibres. We then move to examine clinical data showing that multimodal phonological encoding is facilitated by projections of the arcuate fasciculus to superior, but also middle, inferior and basal temporal regions. 
Hence, we discuss how projections of the arcuate fasciculus may allow acoustic (middle-posterior superior and middle temporal gyri), visual (posterior inferior temporal/fusiform gyri, comprising the visual word form area) and lexical (anterior-middle inferior temporal/fusiform gyri, in the basal temporal language area) information in the temporal lobe to be processed, encoded and translated into a dorsal phonological route to the frontal lobe. Finally, we point out the surgical implications of this model in terms of the prediction and avoidance of neurological deficits.
Affiliation(s)
- Davide Giampiccolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University Hospital, Verona, Italy; Institute of Neuroscience, Cleveland Clinic London, Grosvenor Place, London, UK; Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK; Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
- Hugues Duffau
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Center, Montpellier, France; Team "Neuroplasticity, Stem Cells and Low-grade Gliomas," INSERM U1191, Institute of Genomics of Montpellier, University of Montpellier, Montpellier, France
34
Alipour A, Beggs JM, Brown JW, James TW. A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022; 16:149-165. [PMID: 35126775 PMCID: PMC8807798 DOI: 10.1007/s11571-021-09703-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 06/26/2021] [Accepted: 07/14/2021] [Indexed: 02/03/2023] Open
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without.
Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the longer memory requirement. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESNs), however, did not replicate these results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
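The contrast the abstract draws between an LSTM's per-unit gating and a LiESN's single network-wide memory coefficient can be sketched directly. The minimal reservoir below is a hedged illustration, not the paper's model: network sizes, seeds, and leak values are my assumptions, and the point shown is only that the global leak rate alone sets how long an input impulse keeps echoing in the reservoir state.

```python
import numpy as np

# Minimal leaky-integrator echo state network (LiESN) sketch. Memory here is
# governed by one network-wide leak coefficient `alpha`, unlike an LSTM's
# learned per-unit gates. All sizes and parameters are illustrative.

rng = np.random.default_rng(0)

class LiESN:
    def __init__(self, n_in, n_res, alpha, spectral_radius=0.9):
        self.alpha = alpha  # global leak rate: smaller alpha -> longer memory
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale recurrent weights to the desired spectral radius.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W

    def run(self, inputs):
        """Drive the reservoir with an input sequence; return the state trajectory."""
        x = np.zeros(self.W.shape[0])
        states = []
        for u in inputs:
            pre = np.tanh(self.W_in @ u + self.W @ x)
            x = (1 - self.alpha) * x + self.alpha * pre  # leaky integration
            states.append(x.copy())
        return np.array(states)

def retention(states, step=40):
    """Fraction of the impulse response still echoing at `step`."""
    return np.linalg.norm(states[step]) / np.linalg.norm(states[0])

# Probe memory with a single impulse and watch how long it echoes.
impulse = np.zeros((50, 1))
impulse[0, 0] = 1.0
slow = LiESN(1, 100, alpha=0.02).run(impulse)  # long time constant
fast = LiESN(1, 100, alpha=1.0).run(impulse)   # no leak: ordinary ESN
print(f"retention: slow={retention(slow):.3f}, fast={retention(fast):.3f}")
```

Because the leak is a single scalar shared by all units, the only way to change the memory span is to change `alpha` globally, which is the limitation the authors cite when explaining why the LiESN did not replicate the LSTM's fine-grained memory effects.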
Affiliation(s)
- Abolfazl Alipour
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
- John M Beggs
- Program in Neuroscience, Indiana University, Bloomington, IN USA
- Department of Physics, Indiana University, Bloomington, IN USA
- Joshua W Brown
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
35
Tian X, Liu Y, Guo Z, Cai J, Tang J, Chen F, Zhang H. Cerebral Representation of Sound Localization Using Functional Near-Infrared Spectroscopy. Front Neurosci 2022; 15:739706. [PMID: 34970110 PMCID: PMC8712652 DOI: 10.3389/fnins.2021.739706] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/09/2021] [Indexed: 11/30/2022] Open
Abstract
Sound localization is an essential part of auditory processing. However, the cortical representation of identifying the direction of sound sources presented in the sound field, as measured with functional near-infrared spectroscopy (fNIRS), is currently unknown. Therefore, in this study, we used fNIRS to investigate the cerebral representation of different sound sources. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts with two intensity levels (48 dB/58 dB) were randomly applied via five loudspeakers (–90°/–30°/0°/+30°/+90°). Sound localization task performance was collected, and simultaneous signals from auditory processing cortical fields were recorded and analyzed using a support vector machine (SVM). The results showed average classification accuracies of 73.6, 75.6, and 77.4% at –90°/0°, 0°/+90°, and –90°/+90° with high intensity, and 70.6, 73.6, and 78.6% with low intensity. An increase of oxyhemoglobin was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxyhemoglobin (oxy-Hb) response showed different neural activity patterns between the lateral and front sources in the AC and dlPFC. Our results may serve as a basic contribution for further research on the use of fNIRS in spatial auditory studies.
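The decoding logic of the study (classify sound-source direction from multichannel oxy-Hb features) can be sketched in a few lines. The paper used an SVM; to keep this sketch dependency-free, a nearest-centroid linear decoder with leave-one-out cross-validation stands in for it. Trial counts, channel counts, and the feature values below are synthetic assumptions, not study data.

```python
import numpy as np

# Cross-validated direction decoding from simulated oxy-Hb features.
# A nearest-centroid decoder is substituted for the paper's SVM; the
# cross-validation logic (hold out one trial, fit on the rest) is the same.

rng = np.random.default_rng(1)

def simulate_trials(n_trials, n_channels, offset):
    """Synthetic oxy-Hb feature vectors for one source direction."""
    return rng.normal(loc=offset, scale=1.0, size=(n_trials, n_channels))

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid decoder."""
    correct = 0
    idx = np.arange(len(y))
    for i in idx:
        mask = idx != i
        c0 = X[mask & (y == 0)].mean(axis=0)  # centroid of class 0 (held-out trial excluded)
        c1 = X[mask & (y == 1)].mean(axis=0)  # centroid of class 1
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

X = np.vstack([simulate_trials(20, 24, -0.4),   # trials for the -90° source
               simulate_trials(20, 24, +0.4)])  # trials for the +90° source
y = np.repeat([0, 1], 20)
print(f"decoding accuracy: {loo_accuracy(X, y):.2f}")  # chance level is 0.50
```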
Affiliation(s)
- Xuexin Tian
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yimeng Liu
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zengzhi Guo
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Jieqing Cai
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Jie Tang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Physiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Southern Medical University, Guangzhou, China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongzheng Zhang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China
36
Tada M, Kirihara K, Ishishita Y, Takasago M, Kunii N, Uka T, Shimada S, Ibayashi K, Kawai K, Saito N, Koshiyama D, Fujioka M, Araki T, Kasai K. Global and Parallel Cortical Processing Based on Auditory Gamma Oscillatory Responses in Humans. Cereb Cortex 2021; 31:4518-4532. [PMID: 33907804 PMCID: PMC8408476 DOI: 10.1093/cercor/bhab103] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 03/27/2021] [Accepted: 03/28/2021] [Indexed: 11/13/2022] Open
Abstract
Gamma oscillations are physiological phenomena that reflect perception and cognition, and involve parvalbumin-positive γ-aminobutyric acid-ergic interneuron function. The auditory steady-state response (ASSR) is the most robust index for gamma oscillations, and it is impaired in patients with neuropsychiatric disorders such as schizophrenia and autism. Although ASSR reduction is known to vary in terms of frequency and time, the neural mechanisms are poorly understood. We obtained high-density electrocorticography recordings from a wide area of the cortex in 8 patients with refractory epilepsy. In an ASSR paradigm, click sounds were presented at frequencies of 20, 30, 40, 60, 80, 120, and 160 Hz. We performed time-frequency analyses and analyzed intertrial coherence, event-related spectral perturbation, and high-gamma oscillations. We demonstrate that the ASSR is globally distributed among the temporal, parietal, and frontal cortices. The ASSR was composed of time-dependent neural subcircuits differing in frequency tuning. Importantly, the frequency tuning characteristics of the late-latency ASSR varied between the temporal/frontal and parietal cortex, suggestive of differentiation along parallel auditory pathways. This large-scale survey of the cortical ASSR could serve as a foundation for future studies of the ASSR in patients with neuropsychiatric disorders.
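Intertrial coherence, one of the measures in this ASSR analysis, has a compact definition: at a given frequency and latency it is the magnitude of the mean unit phase vector across trials, so 1 indicates perfect phase-locking to the click train and values near 0 indicate random phase. The sketch below illustrates only that definition; the per-trial phases are synthetic, not electrocorticography data.

```python
import numpy as np

# Intertrial coherence (ITC): project each trial's phase onto the unit
# circle, average the resulting unit vectors, and take the magnitude.

rng = np.random.default_rng(2)

def itc(phases):
    """Intertrial coherence from an array of per-trial phases (radians)."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

n_trials = 100
locked = rng.normal(loc=0.0, scale=0.2, size=n_trials)   # tight phase-locking
scattered = rng.uniform(-np.pi, np.pi, size=n_trials)    # no phase-locking

print(f"locked: {itc(locked):.2f}, scattered: {itc(scattered):.2f}")
```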
Affiliation(s)
- Mariko Tada
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Kenji Kirihara
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Yohei Ishishita
- Department of Neurosurgery, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi 329-0498, Japan
- Megumi Takasago
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Naoto Kunii
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Takanori Uka
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi 409-3898, Japan
- Seijiro Shimada
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Kenji Ibayashi
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Kensuke Kawai
- Department of Neurosurgery, Jichi Medical University, 3311-1 Yakushiji, Shimotsuke, Tochigi 329-0498, Japan
- Nobuhito Saito
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Daisuke Koshiyama
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Mao Fujioka
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Tsuyoshi Araki
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Kiyoto Kasai
- Department of Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
37
Gao Q, Xiang Y, Zhang J, Luo N, Liang M, Gong L, Yu J, Cui Q, Sepulcre J, Chen H. A reachable probability approach for the analysis of spatio-temporal dynamics in the human functional network. Neuroimage 2021; 243:118497. [PMID: 34428571 DOI: 10.1016/j.neuroimage.2021.118497] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 08/06/2021] [Accepted: 08/20/2021] [Indexed: 12/25/2022] Open
Abstract
The dynamic architecture of the human brain has been consistently observed. However, there is still limited modeling work to elucidate how neuronal circuits are hierarchically and flexibly organized in functional systems. Here we proposed a reachable probability approach based on non-homogeneous Markov chains, to characterize all possible connectivity flows and the hierarchical structure of brain functional systems at the dynamic level. We proved at the theoretical level the convergence of the functional brain network system, and demonstrated that this approach is able to detect network steady states across connectivity structure, particularly in areas of the default mode network. We further explored the dynamically hierarchical functional organization centered at the primary sensory cortices. We observed smaller optimal reachable steps to their local functional regions, and differentiated patterns in larger optimal reachable steps for primary perceptual modalities. The reachable paths with the largest and second largest transition probabilities between primary sensory seeds via multisensory integration regions were also tracked to explore the flexibility and plasticity of the multisensory integration. The present work provides a novel approach to depict both the stable and flexible hierarchical connectivity organization of the human brain.
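The Markov-chain machinery behind the reachable-probability idea can be sketched concretely: treat a row-normalized connectivity matrix as a transition matrix, read k-step reachability off its matrix powers, and compute the stationary distribution, i.e., the steady state the paper proves the system converges to. The 4-node connectivity matrix below is a toy assumption, not empirical brain data, and the paper's actual construction uses non-homogeneous chains.

```python
import numpy as np

# Reachability and steady state for a homogeneous Markov chain built from a
# toy "connectivity" matrix (illustrative values only).

W = np.array([[0.0, 0.8, 0.1, 0.1],
              [0.6, 0.0, 0.3, 0.1],
              [0.2, 0.2, 0.0, 0.6],
              [0.1, 0.1, 0.7, 0.1]])
P = W / W.sum(axis=1, keepdims=True)  # rows become transition probabilities

# Entry (i, j) of P^k is the probability of reaching node j from node i in
# exactly k steps, aggregating over all possible connectivity flows.
P5 = np.linalg.matrix_power(P, 5)

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

print(np.round(P5, 3))
print(np.round(pi, 3))  # invariant under the chain: pi @ P == pi
```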
Affiliation(s)
- Qing Gao
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Yu Xiang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiabao Zhang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Ning Luo
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Minfeng Liang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lisha Gong
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiali Yu
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Qian Cui
- School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jorge Sepulcre
- Gordon Center for Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
- Huafu Chen
- High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiology, First Affiliated Hospital to Army Medical University, Chongqing 400038, China.
38
Regev M, Halpern AR, Owen AM, Patel AD, Zatorre RJ. Mapping Specific Mental Content during Musical Imagery. Cereb Cortex 2021; 31:3622-3640. [PMID: 33749742 DOI: 10.1093/cercor/bhab036] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Revised: 02/02/2021] [Accepted: 02/05/2021] [Indexed: 11/12/2022] Open
Abstract
Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are to those evoked during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery, as it does during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) the same while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor-to-sensory influences in auditory processing.
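The inter-subject correlation analysis mentioned above has a simple core: correlate each subject's regional time course with the average time course of the remaining subjects (leave-one-out). The sketch below uses synthetic stand-ins with a shared stimulus-locked component; subject and time-point counts are my assumptions, not the study's fMRI data.

```python
import numpy as np

# Leave-one-out inter-subject correlation (ISC) on synthetic time courses:
# every subject carries the same stimulus-locked signal plus private noise.

rng = np.random.default_rng(4)
n_subj, n_tr = 10, 120
shared = rng.normal(size=n_tr)                               # stimulus-locked signal
data = shared + rng.normal(scale=1.0, size=(n_subj, n_tr))   # plus subject noise

def isc(data):
    """Mean leave-one-out inter-subject correlation."""
    rs = []
    for i in range(len(data)):
        others = np.delete(data, i, axis=0).mean(axis=0)  # average of the rest
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return float(np.mean(rs))

print(f"mean ISC: {isc(data):.2f}")  # well above 0 when a shared signal exists
```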
Affiliation(s)
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Adrian M Owen
- Brain and Mind Institute, Department of Psychology and Department of Physiology and Pharmacology, Western University, London, ON N6A 5B7, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
- Aniruddh D Patel
- Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program; Department of Psychology, Tufts University, Medford, MA 02155, USA
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
39
Cannon J. Expectancy-based rhythmic entrainment as continuous Bayesian inference. PLoS Comput Biol 2021; 17:e1009025. [PMID: 34106918 PMCID: PMC8216548 DOI: 10.1371/journal.pcbi.1009025] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 06/21/2021] [Accepted: 04/29/2021] [Indexed: 11/18/2022] Open
Abstract
When presented with complex rhythmic auditory stimuli, humans are able to track underlying temporal structure (e.g., a "beat"), both covertly and with their movements. This capacity goes far beyond that of a simple entrained oscillator, drawing on contextual and enculturated timing expectations and adjusting rapidly to perturbations in event timing, phase, and tempo. Previous modeling work has described how entrainment to rhythms may be shaped by event timing expectations, but sheds little light on any underlying computational principles that could unify the phenomenon of expectation-based entrainment with other brain processes. Inspired by the predictive processing framework, we propose that the problem of rhythm tracking is naturally characterized as a problem of continuously estimating an underlying phase and tempo based on precise event times and their correspondence to timing expectations. We present two inference problems formalizing this insight: PIPPET (Phase Inference from Point Process Event Timing) and PATIPPET (Phase and Tempo Inference). Variational solutions to these inference problems resemble previous "Dynamic Attending" models of perceptual entrainment, but introduce new terms representing the dynamics of uncertainty and the influence of expectations in the absence of sensory events. These terms allow us to model multiple characteristics of covert and motor human rhythm tracking not addressed by other models, including sensitivity of error corrections to inter-event interval and perceived tempo changes induced by event omissions. We show that positing these novel influences in human entrainment yields a range of testable behavioral predictions. Guided by recent neurophysiological observations, we attempt to align the phase inference framework with a specific brain implementation. 
We also explore the potential of this normative framework to guide the interpretation of experimental data and serve as building blocks for even richer predictive processing and active inference models of timing.
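The core intuition behind this framework can be sketched in a deliberately simplified, discrete-time form: carry a phase estimate and its uncertainty forward at an assumed tempo, and at each observed event apply a precision-weighted (Kalman-style) correction toward the nearest expected beat. The real PIPPET model is a continuous-time variational filter; every parameter below is an illustrative assumption, not the paper's.

```python
# Toy beat tracker: uncertainty-weighted phase correction at event times.

def track_beats(event_times, period=0.5, q=1e-3, r=1e-3):
    """Track beat phase (in beats) through a list of event times (seconds)."""
    phase, var = 0.0, 0.05   # phase estimate and its uncertainty
    t_prev = 0.0
    errors = []              # pre-correction distance to the nearest beat
    for t in event_times:
        dt = t - t_prev
        t_prev = t
        phase += dt / period                 # advance at the assumed tempo
        var += q * dt                        # uncertainty grows between events
        nearest_beat = round(phase)          # events are expected on integer beats
        errors.append(abs(phase - nearest_beat))
        k = var / (var + r)                  # gain: trust event vs. internal clock
        phase += k * (nearest_beat - phase)  # correct toward the expectation
        var *= 1 - k                         # each event reduces uncertainty
    return errors

# Metronome slightly faster than the assumed period: the corrections keep the
# tracking error bounded instead of letting it drift without limit.
events = [0.48 * (i + 1) for i in range(20)]
errs = track_beats(events)
print(f"max error after settling: {max(errs[5:]):.3f} beats")
```

Without the correction step, the 0.02 s per-beat tempo mismatch would accumulate until the tracker slipped a full beat; the precision weighting is what lets corrections stay proportional to accumulated uncertainty, loosely mirroring the role of expectation strength in the full model.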
Affiliation(s)
- Jonathan Cannon
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
40
Rogenmoser L, Arnicane A, Jäncke L, Elmer S. The left dorsal stream causally mediates the tone labeling in absolute pitch. Ann N Y Acad Sci 2021; 1500:122-133. [PMID: 34046902 PMCID: PMC8518498 DOI: 10.1111/nyas.14616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 05/03/2021] [Accepted: 05/07/2021] [Indexed: 11/29/2022]
Abstract
Absolute pitch (AP) refers to the ability to effortlessly identify given pitches without any reference. Correlative evidence suggests that the left posterior dorsolateral prefrontal cortex (DLPFC) is responsible for the process underlying pitch labeling in AP. Here, we measured the sight-reading performance of right-handed AP possessors and matched controls under cathodal and sham transcranial direct current stimulation of the left DLPFC. The participants were instructed to report notations as accurately and as fast as possible by playing with their right hand on a piano. The notations were simultaneously presented with distracting auditory stimuli that either matched or mismatched them in different semitone degrees. Unlike the controls, AP possessors revealed an interference effect in that they responded slower in mismatching conditions than in the matching one. Under cathodal stimulation, this interference effect disappeared. These findings confirm that the pitch-labeling process underlying AP occurs automatically and is largely nonsuppressible when triggered by tone exposure. The improvement of the AP possessors' sight-reading performance in response to the suppression of the left DLPFC with cathodal stimulation confirms a causal relationship between this brain structure and pitch labeling.
Affiliation(s)
- Lars Rogenmoser
- Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Andra Arnicane
- Auditory Research Group Zurich (ARGZ), Division of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Auditory Research Group Zurich (ARGZ), Division of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Stefan Elmer
- Auditory Research Group Zurich (ARGZ), Division of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
41
Yang Y, Weiss PH, Fink GR, Chen Q. Hand preference for the visual and auditory modalities in humans. Sci Rep 2021; 11:7868. [PMID: 33846508 PMCID: PMC8041834 DOI: 10.1038/s41598-021-87396-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Accepted: 03/22/2021] [Indexed: 02/01/2023] Open
Abstract
The sensory dominance effect refers to the phenomenon that one sensory modality more frequently receives preferential processing (and eventually dominates consciousness and behavior) over and above other modalities. Hand dominance, in turn, is an innate aspect of the human motor system. To investigate how the sensory dominance effect interacts with hand dominance, we applied an adapted Colavita paradigm and recruited a large cohort of healthy right-handed participants (n = 119). While the visual dominance effect in bimodal trials was observed for the whole group, about half of the right-handers (48%) showed a visual preference, i.e., their dominant hand effect manifested in responding to the visual stimuli. By contrast, 39% of the right-handers exhibited an auditory preference, i.e., the dominant hand effect occurred for the auditory responses. The remaining participants (13%) did not show any dominant hand preference for either visual or auditory responses. For the first time, the current behavioral data revealed that human beings possess a characteristic and persistent preferential link between different sensory modalities and the dominant vs. non-dominant hand. Whenever this preferential link between the sensory and motor systems was engaged, one dominance effect peaked when the other was at its best performance.
Affiliation(s)
- Yuqian Yang
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Str., 52428 Jülich, Germany
- Peter H. Weiss
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Str., 52428 Jülich, Germany; Department of Neurology, University Hospital Cologne and Faculty of Medicine, University of Cologne, 50937 Cologne, Germany
- Gereon R. Fink
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Str., 52428 Jülich, Germany; Department of Neurology, University Hospital Cologne and Faculty of Medicine, University of Cologne, 50937 Cologne, Germany
- Qi Chen
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Wilhelm-Johnen-Str., 52428 Jülich, Germany; Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China; Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou 510631, China
42
Latini F, Trevisi G, Fahlström M, Jemstedt M, Alberius Munkhammar Å, Zetterling M, Hesselager G, Ryttlefors M. New Insights Into the Anatomy, Connectivity and Clinical Implications of the Middle Longitudinal Fasciculus. Front Neuroanat 2021; 14:610324. [PMID: 33584207 PMCID: PMC7878690 DOI: 10.3389/fnana.2020.610324] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 12/30/2020] [Indexed: 12/01/2022] Open
Abstract
The middle longitudinal fascicle (MdLF) is a long associative white matter tract connecting the superior temporal gyrus (STG) with the parietal and occipital lobes. Previous studies report different cortical terminations and a possible segmentation pattern of the tract. In this study, we performed a post-mortem white matter dissection of 12 human hemispheres and in vivo deterministic fiber tracking in 24 subjects from the Human Connectome Project to establish whether a constant organization of fibers exists among the MdLF subcomponents and to acquire anatomical information on each subcomponent. Moreover, two clinical cases of brain tumors impinging on MdLF territories are reported to further discuss the anatomical results in light of previously published data on the functional involvement of this bundle. The main finding is that the MdLF is consistently organized into two layers: an antero-ventral segment (aMdLF) connecting the anterior STG (including the temporal pole and planum polare) with the extrastriate lateral occipital cortex, and a posterior-dorsal segment (pMdLF) connecting the posterior STG, anterior transverse temporal gyrus and planum temporale with the superior parietal lobule and lateral occipital cortex. The anatomical connectivity pattern and quantitative differences between the MdLF subcomponents, along with the clinical cases reported in this paper, support the role of the MdLF in high-order functions related to acoustic information. We suggest that the pMdLF may contribute to the learning process associated with verbal-auditory stimuli, especially on the left side, while the aMdLF may play a role in processing/retrieving auditory information already consolidated within the temporal lobe.
Affiliation(s)
- Francesco Latini
- Neurosurgical Unit, Department of Surgery, Ospedale Santo Spirito, Pescara, Italy
- Gianluca Trevisi
- Neurosurgical Unit, Department of Surgery, Ospedale Santo Spirito, Pescara, Italy
- Markus Fahlström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Malin Jemstedt
- Section of Speech-Language Pathology, Department of Neuroscience, Uppsala University, Uppsala, Sweden
- Maria Zetterling
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
- Göran Hesselager
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
- Mats Ryttlefors
- Section of Neurosurgery, Department of Neuroscience, Uppsala University, Uppsala, Sweden
43
Michaelis K, Miyakoshi M, Norato G, Medvedev AV, Turkeltaub PE. Motor engagement relates to accurate perception of phonemes and audiovisual words, but not auditory words. Commun Biol 2021; 4:108. [PMID: 33495548 PMCID: PMC7835217 DOI: 10.1038/s42003-020-01634-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Accepted: 12/15/2020] [Indexed: 11/12/2022] Open
Abstract
A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus. Michaelis et al. used extra-cranial EEG during a forced-choice identification task to investigate the role of the motor system in speech perception. Their findings suggest that left hemisphere dorsal stream motor areas are dynamically engaged during speech perception based on the properties of the stimulus.
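Sensorimotor μ/beta suppression is commonly quantified as event-related desynchronization (ERD): band power in a task window expressed as percent change from a pre-stimulus baseline. The sketch below shows that computation only; the 8–13 Hz μ band, the sampling rate, and the synthetic signals are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

# ERD sketch: compare mu-band (8-13 Hz) power in a 1 s task window against a
# 1 s baseline window; negative ERD means the rhythm was suppressed.

rng = np.random.default_rng(3)
fs = 250                 # sampling rate in Hz (assumed)
t = np.arange(fs) / fs   # 1-second windows

def band_power(x, fs, lo, hi):
    """Mean power in [lo, hi] Hz from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Baseline: a strong 10 Hz mu rhythm; task: the rhythm is attenuated.
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, fs)
task = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, fs)

erd = 100 * (band_power(task, fs, 8, 13) - band_power(baseline, fs, 8, 13)) \
      / band_power(baseline, fs, 8, 13)
print(f"ERD: {erd:.1f}%")  # negative values indicate mu suppression
```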
Collapse
Affiliation(s)
- Kelly Michaelis
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA
| | - Makoto Miyakoshi
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, San Diego, CA, USA
| | - Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Andrei V Medvedev
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
| | - Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC, USA.
| |
Collapse
|
44
|
Cannon JJ, Patel AD. How Beat Perception Co-opts Motor Neurophysiology. Trends Cogn Sci 2020; 25:137-150. [PMID: 33353800 DOI: 10.1016/j.tics.2020.11.002] [Citation(s) in RCA: 70] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Revised: 11/06/2020] [Accepted: 11/12/2020] [Indexed: 02/08/2023]
Abstract
Beat perception offers cognitive scientists an exciting opportunity to explore how cognition and action are intertwined in the brain even in the absence of movement. Many believe the motor system predicts the timing of beats, yet current models of beat perception do not specify how this is neurally implemented. Drawing on recent insights into the neurocomputational properties of the motor system, we propose that beat anticipation relies on action-like processes consisting of precisely patterned neural time-keeping activity in the supplementary motor area (SMA), orchestrated and sequenced by activity in the dorsal striatum. In addition to synthesizing recent advances in cognitive science and motor neuroscience, our framework provides testable predictions to guide future work.
Collapse
Affiliation(s)
- Jonathan J Cannon
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, MA, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada.
| |
Collapse
|
45
|
Proksch S, Comstock DC, Médé B, Pabst A, Balasubramaniam R. Motor and Predictive Processes in Auditory Beat and Rhythm Perception. Front Hum Neurosci 2020; 14:578546. [PMID: 33061902 PMCID: PMC7518112 DOI: 10.3389/fnhum.2020.578546] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 08/18/2020] [Indexed: 11/30/2022] Open
Abstract
In this article, we review recent advances in research on rhythm and musical beat perception, focusing on the role of predictive processes in auditory motor interactions. We suggest that experimental evidence of the motor system's role in beat perception, including in passive listening, may be explained by the generation and maintenance of internal predictive models, concordant with the Active Inference framework of sensory processing. We highlight two complementary hypotheses for the neural underpinnings of rhythm perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis (Patel and Iversen, 2014) and the Gradual Audiomotor Evolution (GAE) hypothesis (Merchant and Honing, 2014), and review recent experimental progress supporting each of these hypotheses. While the initial formulations of ASAP and GAE explain different aspects of beat-based timing (the involvement of motor structures in the absence of movement, and physical entrainment to an auditory beat, respectively), we suggest that work under both hypotheses provides converging evidence toward understanding the predictive role of the motor system in the perception of rhythm, and the specific neural mechanisms involved. We discuss future experimental work necessary to further evaluate the causal neural mechanisms underlying beat and rhythm perception.
Collapse
Affiliation(s)
- Shannon Proksch
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
| | - Daniel C Comstock
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
| | - Butovens Médé
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
| | - Alexandria Pabst
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
| | - Ramesh Balasubramaniam
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
| |
Collapse
|
46
|
Stankova EP, Kruchinina OV, Shepovalnikov AN, Galperina EI. Evolution of the Central Mechanisms of Oral Speech. J EVOL BIOCHEM PHYS+ 2020. [DOI: 10.1134/s0022093020030011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
47
|
Fasano MC, Glerean E, Gold BP, Sheng D, Sams M, Vuust P, Rauschecker JP, Brattico E. Inter-subject Similarity of Brain Activity in Expert Musicians After Multimodal Learning: A Behavioral and Neuroimaging Study on Learning to Play a Piano Sonata. Neuroscience 2020; 441:102-116. [PMID: 32569807 DOI: 10.1016/j.neuroscience.2020.06.015] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 06/11/2020] [Accepted: 06/14/2020] [Indexed: 11/26/2022]
Abstract
Human behavior is inherently multimodal and relies on sensorimotor integration. This is evident when pianists exhibit activity in motor and premotor cortices, as part of a dorsal pathway, while listening to a familiar piece of music, or when naïve participants learn to play simple patterns on the piano. Here we investigated the interaction between multimodal learning and dorsal-stream activity over the course of four weeks in ten skilled pianists by adopting a naturalistic data-driven analysis approach. We presented the pianists with audio-only, video-only and audiovisual recordings of a piano sonata during functional magnetic resonance imaging (fMRI) before and after they had learned to play the sonata by heart for a total of four weeks. We followed the learning process and its outcome with questionnaires administered to the pianists, one piano instructor following their training, and seven external expert judges. The similarity of the pianists' brain activity during stimulus presentations was examined before and after learning by means of inter-subject correlation (ISC) analysis. After learning, an increased ISC was found in the pianists while watching the audiovisual performance, particularly in motor and premotor regions of the dorsal stream. While these brain structures have previously been associated with learning simple audio-motor sequences, our findings are the first to suggest their involvement in learning a complex and demanding audiovisual-motor task. Moreover, the most motivated learners and the best performers of the sonata showed ISC in the dorsal stream and in the reward brain network.
Collapse
Affiliation(s)
- Maria C Fasano
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
| | - Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
| | - Benjamin P Gold
- Montreal Neurological Institute, McGill University, Montréal, Canada
| | - Dana Sheng
- Department of Neuroscience, Georgetown University Medical Center, Washington, USA
| | - Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Computer Science, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto University School of Science, Espoo, Finland
| | - Peter Vuust
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
| | - Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, USA; Institute for Advanced Study, TUM, Munich, Germany
| | - Elvira Brattico
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy.
| |
Collapse
|
48
|
Why do we move to the beat? A multi-scale approach, from physical principles to brain dynamics. Neurosci Biobehav Rev 2020; 112:553-584. [DOI: 10.1016/j.neubiorev.2019.12.024] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 10/20/2019] [Accepted: 12/13/2019] [Indexed: 01/08/2023]
|
49
|
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310 PMCID: PMC7267943 DOI: 10.1002/hbm.24893] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 11/18/2019] [Accepted: 11/29/2019] [Indexed: 01/09/2023] Open
Abstract
Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of and decisions on other individuals' emotions.
HIGHLIGHTS:
- Emotion classification involves heterogeneous perception and decision-making tasks
- Decision-making processes on emotions are rarely covered by existing emotion theories
- We propose an evidence-based neurocognitive model of decision-making on emotions
- Bilateral brain processes for nonverbal decisions, left brain processes for verbal decisions
- Left amygdala involved in any kind of decision on emotions
Collapse
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
| | - Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
| |
Collapse
|
50
|
Dias JW, McClaskey CM, Eckert MA, Jensen JH, Harris KC. Intra- and interhemispheric white matter tract associations with auditory spatial processing: Distinct normative and aging effects. Neuroimage 2020; 215:116792. [PMID: 32278895 PMCID: PMC7292771 DOI: 10.1016/j.neuroimage.2020.116792] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 02/20/2020] [Accepted: 03/20/2020] [Indexed: 12/18/2022] Open
Abstract
Declining auditory spatial processing is hypothesized to contribute to the difficulty older adults have detecting, locating, and selecting a talker from among others in noisy listening environments. Though auditory spatial processing has been associated with several cortical structures, little is known regarding the underlying white matter architecture or how age-related changes in white matter microstructure may affect it. The arcuate fasciculus is a target for understanding age-related differences in auditory spatial attention based on normative spatial attention findings in humans. Similarly, animal and human clinical studies suggest that the corpus callosum plays a role in the cross-hemispheric integration of auditory spatial information important for spatial localization and attention. The current investigation used diffusion imaging to examine the extent to which age-group differences in the identification of spatially cued speech were accounted for by individual differences in the white matter microstructure of the right arcuate fasciculus and the corpus callosum. Higher right arcuate and callosal fractional anisotropy (FA) predicted better segregation and identification of spatially cued speech across younger and older listeners. Further, individual differences in callosal microstructure mediated age-group differences in auditory spatial processing. Follow-up analyses suggested that callosal tracts connecting left and right pre-frontal and posterior parietal cortex are particularly important for auditory spatial processing. The results are consistent with previous work in animals and clinical human samples and provide a cortical mechanism to account for age-related deficits in auditory spatial processing. Further, the results suggest that both intrahemispheric and interhemispheric mechanisms are involved in auditory spatial processing.
Collapse
|