1. Preisig BC, Meyer M. Predictive coding and dimension-selective attention enhance the lateralization of spoken language processing. Neurosci Biobehav Rev 2025;172:106111. PMID: 40118260. DOI: 10.1016/j.neubiorev.2025.106111.
Abstract
Hemispheric lateralization in speech and language processing exemplifies functional brain specialization. Seminal work in patients with left hemisphere damage highlighted the left-hemispheric dominance in language functions. However, speech processing is not confined to the left hemisphere. Hence, some researchers associate lateralization with auditory processing asymmetries: slow temporal and fine spectral acoustic information is preferentially processed in right auditory regions, while faster temporal information is primarily handled by left auditory regions. Other scholars posit that lateralization relates more to linguistic processing, particularly for speech and speech-like stimuli. We argue that these seemingly distinct accounts are interdependent. Linguistic analysis of speech relies on top-down processes, such as predictive coding and dimension-selective auditory attention, which enhance lateralized processing by engaging left-lateralized sensorimotor networks. Our review highlights that lateralization is weaker for simple sounds, stronger for speech-like sounds, and strongest for meaningful speech. Evidence shows that predictive speech processing and selective attention enhance lateralization. We illustrate that these top-down processes rely on left-lateralized sensorimotor networks and provide insights into the role of these networks in speech processing.
Affiliation(s)
- Basil C Preisig: The Institute for the Interdisciplinary Study of Language Evolution, Evolutionary Neuroscience of Language, University of Zurich, Switzerland; Zurich Center for Linguistics, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Switzerland
- Martin Meyer: The Institute for the Interdisciplinary Study of Language Evolution, Evolutionary Neuroscience of Language, University of Zurich, Switzerland; Zurich Center for Linguistics, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Switzerland
2. Oderbolz C, Poeppel D, Meyer M. Asymmetric Sampling in Time: Evidence and perspectives. Neurosci Biobehav Rev 2025;171:106082. PMID: 40010659. DOI: 10.1016/j.neubiorev.2025.106082.
Abstract
Auditory and speech signals are undisputedly processed in both left and right hemispheres, but this bilateral allocation is likely unequal. The Asymmetric Sampling in Time (AST) hypothesis proposed a division of labor that has its neuroanatomical basis in the distribution of neuronal ensembles with differing temporal integration constants: left auditory areas house a larger proportion of ensembles with shorter temporal integration windows (tens of milliseconds), suited to process rapidly changing signals; right auditory areas host a larger proportion with longer time constants (∼150-300 ms), ideal for slowly changing signals. Here we evaluate the large body of findings that clarifies this relationship between auditory temporal structure and functional lateralization. In this reappraisal, we unpack whether this relationship is influenced by stimulus type (speech/nonspeech), stimulus temporal extent (long/short), task engagement (high/low), or (imaging) modality (hemodynamic/electrophysiology/behavior). We find that the right hemisphere displays a clear preference for slowly changing signals whereas the left-hemispheric preference for rapidly changing signals is highly dependent on the experimental design. We consider neuroanatomical properties potentially linked to functional lateralization, contextualize the results in an evolutionary perspective, and highlight future directions.
Affiliation(s)
- Chantal Oderbolz: Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland; Department of Neuroscience, Georgetown University Medical Center, Washington D.C., USA
- David Poeppel: Department of Psychology, New York University, New York, NY, USA
- Martin Meyer: Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
3. Bakhtiar M, Johari K. The application of non-invasive neuromodulation in stuttering: Current status and future directions. J Fluency Disord 2025;83:106100. PMID: 39879702. DOI: 10.1016/j.jfludis.2025.106100.
Abstract
Non-invasive neuromodulation methods, such as transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS), have been extensively used to enhance treatment efficacy for various neurogenic communication disorders. Recently, these methods have gained attention both for their potential to reveal more about the underlying nature of stuttering and as adjunct approaches for stuttering intervention. In this review, we present the existing research and discuss critical factors that might influence the efficacy of these interventions, such as the location, polarity, intensity, and duration of stimulation, as well as the impact of combined behavioral training. We also explore implications for future studies, including the application of different neuromodulation methods to address various aspects of stuttering, such as speech fluency and the associated psychological and cognitive aspects in people who stutter.
Affiliation(s)
- Mehdi Bakhtiar: Speech and Neuromodulation Laboratory, Unit of Human Communication, Learning and Development, Faculty of Education, The University of Hong Kong, Hong Kong
- Karim Johari: Human Neurophysiology and Neuromodulation Laboratory, Department of Communication Science and Disorders, Louisiana State University, Baton Rouge, LA, USA
4. Zhang M, Riecke L, Bonte M. Cortical tracking of language structures: Modality-dependent and independent responses. Clin Neurophysiol 2024;166:56-65. PMID: 39111244. DOI: 10.1016/j.clinph.2024.07.012.
Abstract
OBJECTIVES The mental parsing of linguistic hierarchy is crucial for language comprehension, and while there is growing interest in the cortical tracking of auditory speech, the neurophysiological substrates for tracking written language are still unclear. METHODS We recorded electroencephalographic (EEG) responses from participants exposed to auditory and visual streams of either random syllables or tri-syllabic real words. Using a frequency-tagging approach, we analyzed the neural representations of physically presented (i.e., syllables) and mentally constructed (i.e., words) linguistic units and compared them between the two sensory modalities. RESULTS We found that tracking syllables is partially modality dependent, with anterior and posterior scalp regions more involved in the tracking of spoken and written syllables, respectively. The cortical tracking of spoken and written words instead was found to involve a shared anterior region to a similar degree, suggesting a modality-independent process for word tracking. CONCLUSION Our study suggests that basic linguistic features are represented in a sensory modality-specific manner, while more abstract ones are modality-unspecific during the online processing of continuous language input. SIGNIFICANCE The current methodology may be utilized in future research to examine the development of reading skills, especially the deficiencies in fluent reading among those with dyslexia.
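The frequency-tagging logic used in this study can be illustrated with a toy simulation (this is a generic sketch, not the authors' pipeline; the sampling rate, rates, and amplitudes below are invented): when syllables arrive at a fixed rate and tri-syllabic words therefore recur at one third of that rate, a neural signal that mentally groups syllables into words shows an extra spectral peak at the word rate, whereas a signal that only tracks syllables peaks at the syllable rate alone.

```python
import numpy as np

def tagged_power(signal, fs, freq):
    """Spectral amplitude of `signal` at `freq` (Hz), read off the FFT bin."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

fs = 200.0                            # sampling rate (Hz), invented
t = np.arange(0, 60, 1 / fs)          # 60 s of simulated recording
syl_rate = 4.0                        # syllables presented at 4 Hz
word_rate = syl_rate / 3.0            # tri-syllabic words recur at ~1.33 Hz

rng = np.random.default_rng(0)
noise = rng.normal(0, 1.0, t.size)

# A response that only tracks syllables: energy at the syllable rate alone.
syllable_only = np.sin(2 * np.pi * syl_rate * t) + noise
# A response that also groups syllables into words: extra energy at the word rate.
word_tracker = syllable_only + 0.8 * np.sin(2 * np.pi * word_rate * t)

for name, eeg in [("syllable-only", syllable_only), ("word tracker", word_tracker)]:
    print(name,
          "syllable peak: %.3f" % tagged_power(eeg, fs, syl_rate),
          "word peak: %.3f" % tagged_power(eeg, fs, word_rate))
```

The word-rate peak exists only for the signal that represents the mentally constructed unit, which is the signature the frequency-tagging approach looks for.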
Affiliation(s)
- Manli Zhang: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lars Riecke: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
5. Truppa V, Gamba M, Togliatto R, Caselli M, Zanoli A, Palagi E, Norscia I. Manual preference, performance, and dexterity for bimanual grass-feeding behavior in wild geladas (Theropithecus gelada). Am J Primatol 2024;86:e23602. PMID: 38299312. DOI: 10.1002/ajp.23602.
Abstract
We assessed whether wild geladas, highly specialized terrestrial grass eaters, are lateralized for bimanual grass-plucking behavior. According to the literature, we expected that complex motor movements in grass feeding would favor the emergence of a population-level hand bias in these primates. In addition, we described geladas' manual behavior based on systematic observations of several individuals. Our study group included 28 individuals belonging to a population of free-ranging geladas frequenting the Kundi plateau, Ethiopia. We filmed monkeys while feeding on grass, and hand preference and performance were coded. Geladas performed more plucking movements per second with their left hand (LH) compared to the right one and preferred their LH both to start and finish collection bouts. Also, the rhythmic movements of each hand had a significant tendency toward isochrony. Finally, geladas used forceful pad-to-pad precision grips, in-hand movements, and compound grips to pluck and collect grass blades, considered the most advanced manual skills in primate species. The LH's leading role suggests an advantage of the right hemisphere in regulating geladas' bimanual grass-feeding behavior. The tactile input from the hands and/or rhythmic hand movements might contribute to explaining this pattern of laterality. Our findings highlighted the importance of adopting multiple laterality measures to investigate manual laterality. Moreover, the need to speed up the execution time of manual foraging might be a further important factor in studying the evolution of manual laterality and dexterity in primates.
Affiliation(s)
- Valentina Truppa: Unit of Cognitive Primatology and Primate Center, Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Marco Gamba: Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
- Roberta Togliatto: Unit of Cognitive Primatology and Primate Center, Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy; Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
- Marta Caselli: Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
- Anna Zanoli: Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
- Elisabetta Palagi: Department of Biology, Unit of Ethology, University of Pisa, Pisa, Italy
- Ivan Norscia: Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
6. Luo Q, Gao L, Yang Z, Chen S, Yang J, Lu S. Integrated sentence-level speech perception evokes strengthened language networks and facilitates early speech development. Neuroimage 2024;289:120544. PMID: 38365164. DOI: 10.1016/j.neuroimage.2024.120544.
Abstract
Natural poetic speech (i.e., proverbs, nursery rhymes, and commercial ads) with strong prosodic regularities is easily memorized by children, and its harmonious acoustic patterns are thought to facilitate integrated sentence processing. Do children have specific neural pathways for perceiving such poetic utterances, and does their speech development benefit from them? We recorded task-induced hemodynamic changes in 94 children aged 2 to 12 years using functional near-infrared spectroscopy (fNIRS) while they listened to poetic and non-poetic natural sentences. Seventy-three adults were recruited as controls to investigate the developmental specificity of the child group. The results indicated that perceiving poetic sentences is a highly integrated process, marked by a lower brain workload in both groups. However, an early-activated large-scale network, coordinated by hubs with diverse connectivity, was induced only in the child group. Additionally, poetic speech evoked activation in phonological encoding regions in the children but not in the adult controls, and this activation decreased with children's age. The neural responses to poetic speech were positively linked to children's speech communication performance, especially its fluency and semantic aspects. These results reveal children's neural sensitivity to integrated speech perception, which may facilitate early speech development by strengthening more sophisticated language networks and the perception-production circuit.
Affiliation(s)
- Qinqin Luo: Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Chinese Language and Literature, The Chinese University of Hong Kong, Shatin, Hong Kong
- Leyan Gao: Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Zhirui Yang: Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong
- Sihui Chen: Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang: Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Shuo Lu: Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
7. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023;24:711-722. PMID: 37783820. DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris: Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven: Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin: Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
8. Schwab S, Mouthon M, Jost LB, Salvadori J, Stefanos-Yakoub I, da Silva EF, Giroud N, Perriard B, Annoni JM. Neural correlates of lexical stress processing in a foreign free-stress language. Brain Behav 2023;13:e2854. PMID: 36573037. PMCID: PMC9847599. DOI: 10.1002/brb3.2854.
Abstract
INTRODUCTION This paper examines the discrimination of lexical stress contrasts in a foreign language from a neural perspective. The aim of the study was to identify the areas associated with word stress processing (in comparison with vowel processing) when listeners of a fixed-stress language have to process stress in a foreign free-stress language. METHODS We asked French-speaking participants to process stress and vowel contrasts in Spanish, a foreign language that the participants did not know. Participants performed a discrimination task on Spanish word pairs differing either in word stress (penultimate or final stressed word) or in the final vowel while functional magnetic resonance imaging data were acquired. RESULTS Behavioral results showed lower accuracy and longer reaction times for discriminating stress contrasts than vowel contrasts. The contrast Stress > Vowel revealed increased bilateral activation of regions shown to be associated with stress processing (i.e., supplementary motor area, insula, middle/superior temporal gyrus), as well as stronger involvement of areas related to more domain-general cognitive control functions (i.e., bilateral inferior frontal gyrus). The contrast Vowel > Stress showed increased activation in regions typically associated with the default mode network (known to decrease its activity during attentionally more demanding tasks). CONCLUSION When processing Spanish stress contrasts as compared to vowel contrasts, native listeners of French showed stronger activation of anterior networks, including regions related to cognitive control, and decreased activity in regions related to the default mode network. Together with the behavioral results, these findings reflect the higher cognitive demand, and therefore the greater difficulty, for French-speaking listeners during stress processing as compared to vowel processing.
Affiliation(s)
- Sandra Schwab: Department of French, University of Fribourg, Fribourg, Switzerland
- Michael Mouthon: Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
- Lea B Jost: Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
- Nathalie Giroud: Computational Neuroscience of Speech & Hearing, Department of Computational Linguistics, University of Zurich, Zürich, Switzerland
- Benoit Perriard: Department of French, University of Fribourg, Fribourg, Switzerland
- Jean-Marie Annoni: Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
9. Theta Band (4-8 Hz) Oscillations Reflect Online Processing of Rhythm in Speech Production. Brain Sci 2022;12:1593. PMID: 36552053. PMCID: PMC9775388. DOI: 10.3390/brainsci12121593.
Abstract
How speech prosody is processed in the brain during language production remains an unresolved issue. The present work used the phrase-recall paradigm to analyze the brain oscillations underpinning rhythmic processing in speech production. Participants were asked to recall target phrases aloud consisting of verb-noun pairings with a common (e.g., [2+2]; the numbers in brackets represent the number of syllables) or uncommon (e.g., [1+3]) rhythmic pattern. Target phrases were preceded by rhythmic musical patterns, either congruent or incongruent, created using pure tones at various temporal intervals. Electroencephalogram signals were recorded throughout the experiment. Behavioral results for 2+2 target phrases showed a rhythmic priming effect when comparing congruent and incongruent conditions. Cerebral-acoustic coherence analysis showed that neural activity synchronized with the rhythmic patterns of the primes. Furthermore, target phrases whose rhythmic patterns were congruent with the prime rhythm were associated with increased theta-band (4-8 Hz) activity in the time window of 400-800 ms in both the 2+2 and 1+3 target conditions. These findings suggest that rhythmic patterns can be processed online: neural activity synchronizes with the rhythmic input, and speakers create an abstract rhythmic pattern before and during articulation in speech production.
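Cerebral-acoustic coherence of the kind reported here can be sketched generically (a Welch-style magnitude-squared coherence estimate on simulated data; the 2 Hz prime rate, segment length, amplitudes, and noise levels are all invented for illustration, and this is not the authors' analysis pipeline): an EEG trace that partly follows a rhythmic prime shows high coherence at the prime's rate and near-floor coherence elsewhere.

```python
import numpy as np

def coherence(x, y, fs, nperseg):
    """Magnitude-squared coherence from Welch-style segment averaging."""
    nseg = min(len(x), len(y)) // nperseg
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for i in range(nseg):
        X = np.fft.rfft(x[i * nperseg:(i + 1) * nperseg])
        Y = np.fft.rfft(y[i * nperseg:(i + 1) * nperseg])
        Sxx = Sxx + np.abs(X) ** 2          # accumulate auto-spectra
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)          # accumulate cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

fs = 250                                    # sampling rate (Hz), invented
t = np.arange(0, 40, 1 / fs)                # 40 s of simulated data
rng = np.random.default_rng(3)

# A 2 Hz rhythmic prime and an EEG trace that partly follows it (with a phase lag).
prime = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.normal(0, 1, t.size)
eeg = 0.6 * np.sin(2 * np.pi * 2.0 * t + 0.5) + rng.normal(0, 1, t.size)

freqs, coh = coherence(prime, eeg, fs, nperseg=2 * fs)
print("coherence at 2 Hz: %.2f" % coh[np.argmin(np.abs(freqs - 2.0))])
```

A constant phase lag between stimulus and response does not reduce coherence, which is why the measure indexes synchronization rather than simple waveform identity.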
10. Chen Y, Luo Q, Liang M, Gao L, Yang J, Feng R, Liu J, Qiu G, Li Y, Zheng Y, Lu S. Children's Neural Sensitivity to Prosodic Features of Natural Speech and Its Significance to Speech Development in Cochlear Implanted Children. Front Neurosci 2022;16:892894. PMID: 35903806. PMCID: PMC9315047. DOI: 10.3389/fnins.2022.892894.
Abstract
Catchy utterances, such as proverbs, verses, and nursery rhymes (e.g., "No pain, no gain" in English), contain strong-prosodic (SP) features and are easy for children to repeat and memorize; yet how those prosodic features are encoded by neural activity, and how they influence speech development in children, is still largely unknown. Using functional near-infrared spectroscopy (fNIRS), this study investigated the cortical responses to the perception of natural speech sentences with strong/weak-prosodic (SP/WP) features and evaluated speech communication ability in 21 pre-lingually deaf children with cochlear implants (CI) and 25 normal-hearing (NH) children. A comprehensive evaluation of speech communication ability was conducted for all participants to explore potential correlations between neural activity and children's speech development. SP information evoked right-lateralized cortical responses across a broad brain network in NH children and facilitated the early integration of linguistic information, highlighting children's neural sensitivity to natural SP sentences. In contrast, children with CI showed significantly weaker cortical activation and characteristic deficits in the perception of speech with SP features, suggesting that hearing loss early in life significantly impairs sensitivity to the prosodic features of sentences. Importantly, the level of neural sensitivity to SP sentences was significantly related to the speech behaviors of all child participants. These findings demonstrate the significance of prosodic features in children's speech development.
Affiliation(s)
- Yuebo Chen: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinqin Luo: Department of Chinese Language and Literature, The Chinese University of Hong Kong, Hong Kong SAR, China; School of Foreign Languages, Shenzhen University, Shenzhen, China
- Maojin Liang: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Leyan Gao: Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang: Department of Neurology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Ruiyan Feng: Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jiahao Liu: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Guoxin Qiu: Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yi Li: School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Shuo Lu: School of Foreign Languages, Shenzhen University, Shenzhen, China; Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
11. Heller Murray ES, Segawa J, Karahanoglu FI, Tocci C, Tourville JA, Nieto-Castanon A, Tager-Flusberg H, Manoach DS, Guenther FH. Increased Intra-Subject Variability of Neural Activity During Speech Production in People with Autism Spectrum Disorder. Res Autism Spectr Disord 2022;94:101955. PMID: 35601992. PMCID: PMC9119427. DOI: 10.1016/j.rasd.2022.101955.
Abstract
Background Communication difficulties are a core deficit in many people with autism spectrum disorder (ASD). The current study evaluated neural activation in participants with ASD and neurotypical (NT) controls during a speech production task. Methods Neural activity of participants with ASD (N = 15, M = 16.7 years; language abilities ranged from low verbal ability to verbally fluent) and NT controls (N = 12, M = 17.1 years) was examined using functional magnetic resonance imaging with a sparse-sampling paradigm. Results There were no differences between the ASD and NT groups in average speech activation or in inter-subject run-to-run variability in speech activation. Intra-subject run-to-run neural variability was greater in the ASD group and was positively correlated with autism severity in cortical areas associated with speech. Conclusions These findings highlight the importance of understanding intra-subject neural variability in participants with ASD.
Affiliation(s)
- Elizabeth S. Heller Murray: Boston University, Department of Speech, Language, & Hearing Sciences, 635 Commonwealth Avenue, Boston, MA 02215
- Jennifer Segawa: Boston University, Department of Speech, Language, & Hearing Sciences, 635 Commonwealth Avenue, Boston, MA 02215
- F. Isik Karahanoglu: Massachusetts General Hospital, Department of Psychiatry, Harvard Medical School, 55 Fruit Street, Boston, MA 02215
- Catherine Tocci: Massachusetts General Hospital, Department of Psychiatry, Harvard Medical School, 55 Fruit Street, Boston, MA 02215
- Jason A. Tourville: Boston University, Department of Speech, Language, & Hearing Sciences, 635 Commonwealth Avenue, Boston, MA 02215
- Alfonso Nieto-Castanon: Boston University, Department of Speech, Language, & Hearing Sciences, 635 Commonwealth Avenue, Boston, MA 02215
- Helen Tager-Flusberg: Boston University, Department of Psychological and Brain Sciences, 64 Cummington Mall, Boston, MA 02115
- Dara S. Manoach: Massachusetts General Hospital, Department of Psychiatry, Harvard Medical School, 55 Fruit Street, Boston, MA 02215; Athinoula A. Martinos Center for Biomedical Imaging, 149 13th Street, Room 2618, Charlestown, MA 02129
- Frank H. Guenther: Boston University, Department of Speech, Language, & Hearing Sciences, 635 Commonwealth Avenue, Boston, MA 02215; Boston University, Department of Biomedical Engineering, 44 Cummington Mall, Boston, MA 02115
12. Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021;18. PMID: 33690177. PMCID: PMC8738965. DOI: 10.1088/1741-2552/abecf0.
Abstract
Objective. Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds). Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses of left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH decoding (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions, including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG), that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, and motor cortex) were necessary to describe the later decision stages (300-800 ms) of categorization, and these areas were highly associated with the strength of listeners' categorical hearing (i.e., the slope of behavioral identification functions). Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
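The time-resolved decoding logic behind such results (when does category information first become readable from the neural response?) can be sketched with a much simpler stand-in for the authors' SVM and stability-selection pipeline: a nearest-centroid classifier applied sample by sample to simulated trials in which the class difference only appears after 120 ms. All quantities below (sampling rate, trial counts, effect size, onset) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                                   # sampling rate (Hz)
times = np.arange(-0.1, 0.8, 1 / fs)       # epoch relative to stimulus onset (s)
n_trials = 80                              # simulated trials per class

# Simulated single-channel evoked responses: "prototypical" and "ambiguous"
# trials only start to differ ~120 ms after stimulus onset.
effect = np.where(times > 0.12, 1.5, 0.0)
proto = rng.normal(0, 1, (n_trials, times.size)) + effect
ambig = rng.normal(0, 1, (n_trials, times.size)) - effect

def decode_at(idx, n_train=60):
    """Nearest-centroid decoding accuracy at one time sample (held-out trials)."""
    mu_p = proto[:n_train, idx].mean()
    mu_a = ambig[:n_train, idx].mean()
    test = np.concatenate([proto[n_train:, idx], ambig[n_train:, idx]])
    labels = np.array([1] * (n_trials - n_train) + [0] * (n_trials - n_train))
    pred = (np.abs(test - mu_p) < np.abs(test - mu_a)).astype(int)
    return float((pred == labels).mean())

acc = np.array([decode_at(i) for i in range(times.size)])
print("mean accuracy before 100 ms: %.2f" % acc[times < 0.1].mean())
print("mean accuracy after 150 ms:  %.2f" % acc[times > 0.15].mean())
```

Accuracy hovers at chance before the simulated effect onset and rises sharply after it, which is the pattern used to date when category representations emerge.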
Affiliation(s)
- Md Sultan Mahmud: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, United States of America
13
Neural entrainment to speech and nonspeech in dyslexia: Conceptual replication and extension of previous investigations. Cortex 2021; 137:160-178. [PMID: 33618156] [DOI: 10.1016/j.cortex.2020.12.024]
Abstract
Whether the phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information is still under debate. Previous findings suggested that dyslexic participants show atypical neural entrainment to slow and/or fast temporal modulations in speech, which might affect prosodic/syllabic and phonemic processing, respectively. However, the large methodological variations across these studies do not allow clear conclusions about the nature of the entrainment deficit in dyslexia. Using magnetoencephalography, we measured neural entrainment to nonspeech and speech stimuli in dyslexic and control participants. We first aimed to conceptually replicate previous studies on auditory entrainment in dyslexia, using the same measurement methods as those studies, and then extended them with new measurement methods (cross-correlation analyses) to better characterize the synchronization between stimulus and brain response. We failed to observe any of the significant group differences previously reported in the delta, theta, and gamma frequency bands, whether using speech or nonspeech stimuli. However, when analyzing amplitude cross-correlations between noise stimuli and brain responses, we found that control participants showed larger responses than dyslexic participants in the delta range in the right hemisphere and in the gamma range in the left hemisphere. Overall, our results are only weakly consistent with the hypothesis that dyslexic individuals show atypical entrainment to temporal modulations. Our replication attempt highlights several weaknesses of this research area, particularly low statistical power due to small sample sizes, and a lack of methodological standards that induces considerable heterogeneity of measurement and analysis methods across studies.
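The amplitude cross-correlation analysis mentioned above can be sketched roughly as follows; the stimulus envelope, sampling rate, and 100 ms response lag are simulated assumptions for illustration, not the study's data.

```python
# Sketch of stimulus-brain amplitude cross-correlation: correlate a stimulus
# amplitude envelope with a (simulated) brain response at a range of lags and
# take the peak. The envelope is smoothed noise, and the brain response is the
# envelope delayed by 100 ms plus noise; all parameters are assumptions.
import numpy as np

fs = 100                                        # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

kernel = np.ones(25) / 25                       # ~250 ms moving average
envelope = np.convolve(rng.normal(size=t.size), kernel, mode="same")
brain = np.roll(envelope, int(0.1 * fs))        # response lags stimulus by 100 ms
brain += rng.normal(0, 0.05, t.size)

def xcorr(x, y, max_lag):
    """Normalized cross-correlation of y relative to x for lags 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(max_lag + 1)
    r = np.array([np.mean(x[: x.size - L] * y[L:]) for L in lags])
    return lags, r

lags, r = xcorr(envelope, brain, max_lag=50)
best = int(lags[np.argmax(r)])
print(f"peak correlation r={r.max():.2f} at lag {best / fs * 1000:.0f} ms")
```

In the study the comparison of interest is the size of such peaks between dyslexic and control groups, per frequency band and hemisphere.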
14
Liao X, Sun J, Jin Z, Wu D, Liu J. Cortical Morphological Changes in Congenital Amusia: Surface-Based Analyses. Front Psychiatry 2021; 12:721720. [PMID: 35095585] [PMCID: PMC8794692] [DOI: 10.3389/fpsyt.2021.721720]
Abstract
Background: Congenital amusia (CA) is a rare disorder characterized by deficits in pitch perception, and many structural and functional magnetic resonance imaging studies have been conducted to better understand its neural bases. However, no structural magnetic resonance imaging analysis has yet used a surface-based morphometry method to identify regions with cortical feature abnormalities at the vertex level. Methods: Fifteen participants with CA and 13 healthy controls underwent structural magnetic resonance imaging. A surface-based morphometry method was used to identify anatomical abnormalities. The mean values of the surface parameters in clusters showing statistically significant between-group differences were then extracted and compared. Finally, Pearson's correlation analysis was used to assess the correlation between Montreal Battery of Evaluation of Amusia (MBEA) scores and the surface parameters. Results: The CA group had significantly lower MBEA scores than the healthy controls (p < 0.001). Compared with healthy controls, the CA group exhibited a significantly higher fractal dimension in the right caudal middle frontal gyrus and a lower sulcal depth in the right pars triangularis gyrus (p < 0.05; false discovery rate-corrected at the cluster level). There were negative correlations between the mean fractal dimension values in the right caudal middle frontal gyrus and the MBEA scores, including the mean MBEA score (r = -0.5398, p = 0.0030), scale score (r = -0.5712, p = 0.0015), contour score (r = -0.4662, p = 0.0124), interval score (r = -0.4564, p = 0.0146), rhythmic score (r = -0.5133, p = 0.0052), meter score (r = -0.3937, p = 0.0382), and memory score (r = -0.3879, p = 0.0414). There were significant positive correlations between the mean sulcal depth in the right pars triangularis gyrus and the MBEA scores, including the mean score (r = 0.5130, p = 0.0052), scale score (r = 0.5328, p = 0.0035), interval score (r = 0.4059, p = 0.0321), rhythmic score (r = 0.5733, p = 0.0014), meter score (r = 0.5061, p = 0.0060), and memory score (r = 0.4001, p = 0.0349). Conclusion: Individuals with CA exhibit cortical morphological changes in the right hemisphere. These findings may indicate that the neural basis of speech perception and memory impairments in individuals with CA is associated with abnormalities in the right pars triangularis gyrus and middle frontal gyrus, and that these cortical abnormalities may be a neural marker of CA.
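The brain-behavior correlation analysis reported above can be sketched as follows, on simulated data and with a permutation p-value standing in for the parametric test; the sample size, score scale, and effect size are illustrative assumptions.

```python
# Sketch of a Pearson correlation between a behavioral score (e.g. an MBEA
# subscore) and a per-participant surface metric (e.g. mean fractal dimension
# in a cluster). Data are simulated with a built-in negative relationship.
import numpy as np

rng = np.random.default_rng(42)
n = 28                                              # e.g. 15 amusics + 13 controls
score = rng.normal(25, 3, n)                        # behavioral scores
metric = -0.05 * score + rng.normal(0, 0.1, n)      # negatively related metric

def pearson_r(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

r_obs = pearson_r(score, metric)

# Two-sided permutation p-value: how often does shuffling break the pairing
# yet still yield a correlation at least as extreme as the observed one?
perm = np.array([pearson_r(score, rng.permutation(metric)) for _ in range(5000)])
p_perm = float(np.mean(np.abs(perm) >= abs(r_obs)))
print(f"r = {r_obs:.3f}, permutation p = {p_perm:.4f}")
```

With many subscores tested, as in the study, the resulting p-values would additionally need multiple-comparison control.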
Affiliation(s)
- Xuan Liao: Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- Junjie Sun: Department of Radiology, The Sir Run Run Shaw Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Zhishuai Jin: Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
- DaXing Wu: Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
- Jun Liu: Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China; Clinical Research Center for Medical Imaging in Hunan Province, Changsha, China; Department of Radiology Quality Control Center, The Second Xiangya Hospital of Central South University, Changsha, China
15
Dietziker J, Staib M, Frühholz S. Neural competition between concurrent speech production and other speech perception. Neuroimage 2020; 228:117710. [PMID: 33385557] [DOI: 10.1016/j.neuroimage.2020.117710]
Abstract
Understanding others' speech while simultaneously producing one's own speech implies neural competition and requires specific resolution mechanisms, given that previous studies proposed opposing signal dynamics for the two processes in the auditory cortex (AC). Here, we used neuroimaging in humans to investigate this competition with lateralized presentations of other-speech samples together with ipsilateral or contralateral feedback of actively produced self-speech utterances (various speech vowels). In experiment 1, we show, first, that classifying others' speech during active self-speech leads to activity in the planum temporale (PTe) when self- and other-speech samples are presented together to only the left or right ear; the contralateral PTe also responded indifferently to single self- and other-speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulation (i.e. self- and other-speech presented to separate ears). Unlike in previous studies, this left anterior STC activity supported self-speech rather than other-speech processing, whereas right mid and anterior STC were more involved in other-speech processing. These results signify specific mechanisms for self- and other-speech processing in the left and right STC, beyond the more general speech processing in PTe. Third, in experiment 2, other-speech recognition while listening to recorded self-speech led to largely symmetric activity in STC and, additionally, in inferior frontal subregions. The latter were previously reported to be generally relevant for other-speech perception and classification, but we found frontal activity only when other-speech classification was challenged by recorded, not by actively produced, self-speech. Altogether, unlike the established brain networks for other-speech perception without competition, active self-speech during other-speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal activations.
Affiliation(s)
- Joris Dietziker: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Matthias Staib: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Switzerland; Department of Psychology, University of Oslo, Norway
16
Shi ER, Zhang Q. A domain-general perspective on the role of the basal ganglia in language and music: Benefits of music therapy for the treatment of aphasia. Brain Lang 2020; 206:104811. [PMID: 32442810] [DOI: 10.1016/j.bandl.2020.104811]
Abstract
In addition to evidence from cortical lesions, mounting evidence on the links between language and subcortical regions suggests that subcortical lesions may also lead to aphasic symptoms. In this paper, by emphasizing the domain-general function of the basal ganglia in both language and music, we highlight that rhythm processing, which subserves temporal prediction as well as motor programming and execution, is an important shared mechanism underlying the treatment of non-fluent aphasia with music therapy. In support of this, we review the literature on music therapy for aphasia. The results show that rhythm processing plays a key role in Melodic Intonation Therapy in the rehabilitation of non-fluent aphasia patients with lesions in the basal ganglia. This paper strengthens the link between basal ganglia lesions and language deficits, and supports using rhythm as a central element of music therapy in clinical studies.
Affiliation(s)
- Edward Ruoyang Shi: Department of Catalan Philology and General Linguistics, University of Barcelona, Gran Via de Les Corts Catalanes, 585, 08007 Barcelona, Spain
- Qing Zhang: Department of Psychology, Sun Yat-Sen University, Waihuan East Road, No. 132, Guangzhou 510006, China
17
Honbolygó F, Kóbor A, Hermann P, Kettinger ÁO, Vidnyánszky Z, Kovács G, Csépe V. Expectations about word stress modulate neural activity in speech-sensitive cortical areas. Neuropsychologia 2020; 143:107467. [PMID: 32305299] [DOI: 10.1016/j.neuropsychologia.2020.107467]
Abstract
A recent dual-stream model of language processing proposed that the postero-dorsal stream performs predictive sequential processing of linguistic information via hierarchically organized internal models. However, it remains unexplored whether the prosodic segmentation of linguistic information involves predictive processes. Here, we addressed this question by investigating the processing of word stress, a major component of speech segmentation, using probabilistic repetition suppression (RS) modulation as a marker of predictive processing. In an event-related acoustic fMRI RS paradigm, we presented pairs of pseudowords having the same (Rep) or different (Alt) stress patterns, in blocks with varying Rep and Alt trial probabilities. We found that the BOLD signal was significantly lower for Rep than for Alt trials, indicating RS, in the posterior and middle superior temporal gyrus (STG) bilaterally and in the anterior STG in the left hemisphere. Importantly, the magnitude of RS was modulated by repetition probability in the posterior and middle STG. These results reveal predictive processing of word stress in STG areas and raise the possibility that word stress processing is related to the dorsal "where" auditory stream.
Affiliation(s)
- Ferenc Honbolygó: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Andrea Kóbor: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Petra Hermann: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Ádám Ottó Kettinger: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Department of Nuclear Techniques, Budapest University of Technology and Economics, Budapest, Hungary
- Zoltán Vidnyánszky: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Gyula Kovács: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Department of Biological Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
- Valéria Csépe: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Faculty of Modern Philology and Social Sciences, University of Pannonia, Veszprém, Hungary
18
Notter MP, Hanke M, Murray MM, Geiser E. Encoding of Auditory Temporal Gestalt in the Human Brain. Cereb Cortex 2020; 29:475-484. [PMID: 29365070] [DOI: 10.1093/cercor/bhx328]
Abstract
The perception of an acoustic rhythm is invariant to the absolute temporal intervals constituting a sound sequence. It is unknown where in the brain temporal Gestalt, the percept emerging from the relative temporal proximity between acoustic events, is encoded. Two different relative temporal patterns, each induced by three experimental conditions with different absolute temporal patterns as sensory basis, were presented to participants. A linear support vector machine classifier was trained to differentiate activation patterns in functional magnetic resonance imaging data to the two different percepts. Across the sensory constituents the classifier decoded which percept was perceived. A searchlight analysis localized activation patterns specific to the temporal Gestalt bilaterally to the temporoparietal junction, including the planum temporale and supramarginal gyrus, and unilaterally to the right inferior frontal gyrus (pars opercularis). We show that auditory areas not only process absolute temporal intervals, but also integrate them into percepts of Gestalt and that encoding of these percepts persists in high-level associative areas. The findings complement existing knowledge regarding the processing of absolute temporal patterns to the processing of relative temporal patterns relevant to the sequential binding of perceptual elements into Gestalt.
Affiliation(s)
- Michael P Notter: Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Michael Hanke: Institute of Psychology, Otto-von-Guericke-University; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Micah M Murray: Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Ophthalmology Department, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Eveline Geiser: Department of Radiology; Neuropsychology and Neurorehabilitation Service; McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
19
Kimball AE, Yiu LK, Watson DG. Word Recall is Affected by Surrounding Metrical Context. Lang Cogn Neurosci 2019; 35:383-392. [PMID: 33015217] [PMCID: PMC7531771] [DOI: 10.1080/23273798.2019.1665190]
Abstract
It has been claimed that English has a metrical structure, or rhythm, in which stressed and unstressed syllables alternate. In previous research, regular, alternating patterns have been shown to facilitate online language comprehension. Extending these findings to downstream processing would lead to the prediction that metrical regularity enhances memory. Research from the memory literature, however, indicates that regular patterns are less salient and therefore less well remembered, and that strings of similar sounds are harder to remember. This work suggests that, like lists of words with similar sounds, lists of words with similar metrical patterns should be less accurately recalled than comparable metrically irregular lists. The present study tests these conflicting predictions by examining the effects of metrical regularity in a recall task. We find that words are better recalled when they do not match their metrical context, suggesting that a regular metrical structure may not be beneficial in all contexts.
Affiliation(s)
- Loretta K Yiu: Department of Human Centered Design and Engineering, University of Washington
- Duane G Watson: Department of Psychology and Human Development, Vanderbilt University
20
Montani V, Chanoine V, Grainger J, Ziegler JC. Frequency-tagged visual evoked responses track syllable effects in visual word recognition. Cortex 2019; 121:60-77. [PMID: 31550616] [DOI: 10.1016/j.cortex.2019.08.014]
Abstract
The processing of syllables in visual word recognition was investigated using a novel paradigm based on steady-state visual evoked potentials (SSVEPs). French words were presented to proficient readers in a delayed naming task. Words were split into two segments, the first of which was flickered at 18.75 Hz and the second at 25 Hz. The first segment either matched (congruent condition) or did not match (incongruent condition) the first syllable. The SSVEP responses in the congruent condition showed increased power compared to the responses in the incongruent condition, providing new evidence that syllables are important sublexical units in visual word recognition and reading aloud. With respect to the neural correlates of the effect, syllables elicited an early activation of a right hemisphere network. This network is typically associated with the programming of complex motor sequences, cognitive control and timing. Subsequently, responses were obtained in left hemisphere areas related to phonological processing.
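The frequency-tagging logic behind this paradigm can be sketched as follows: the response to each flickered segment is read off the EEG spectrum at its tagged frequency. The signal here is simulated and the congruent vs. incongruent contrast is omitted; only the spectral measurement step is shown, with all signal parameters as assumptions.

```python
# Sketch of SSVEP frequency tagging: estimate spectral amplitude at the two
# tagged flicker frequencies (18.75 Hz and 25 Hz) via the FFT. The "EEG" is
# simulated as two sinusoids in noise; amplitudes and SNR are illustrative.
import numpy as np

fs, dur = 600.0, 8.0                        # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)               # 4800 samples -> 0.125 Hz resolution
rng = np.random.default_rng(7)

eeg = (0.8 * np.sin(2 * np.pi * 18.75 * t)  # response tagged to segment 1
       + 0.5 * np.sin(2 * np.pi * 25.0 * t) # response tagged to segment 2
       + rng.normal(0, 1.0, t.size))        # broadband noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)     # both tags fall exactly on FFT bins

def amplitude_at(f):
    return float(spectrum[np.argmin(np.abs(freqs - f))])

p1, p2 = amplitude_at(18.75), amplitude_at(25.0)
snr = p1 / np.median(spectrum)              # crude SNR against the noise floor
print(f"amplitude at 18.75 Hz: {p1:.3f}, at 25 Hz: {p2:.3f}, SNR={snr:.1f}")
```

In the study, this per-frequency measure would be computed per trial and compared between congruent and incongruent segmentations.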
Affiliation(s)
- Veronica Montani: Aix-Marseille University and CNRS, Brain and Language Research Institute, Marseille Cedex 3, France
- Valérie Chanoine: Aix-Marseille University, Institute of Language, Communication and the Brain, Brain and Language Research Institute, Aix-en-Provence, France
21
Giroud N, Keller M, Hirsiger S, Dellwo V, Meyer M. Bridging the brain structure—brain function gap in prosodic speech processing in older adults. Neurobiol Aging 2019; 80:116-126. [DOI: 10.1016/j.neurobiolaging.2019.04.017]
22
McKinney TL, Euler MJ. Neural anticipatory mechanisms predict faster reaction times and higher fluid intelligence. Psychophysiology 2019; 56:e13426. [PMID: 31241187] [DOI: 10.1111/psyp.13426]
Abstract
Higher cognitive ability is reliably linked to better performance on chronometric tasks (i.e., faster reaction times, RT), yet the neural basis of these effects remains unclear. Anticipatory processes represent compelling yet understudied potential mechanisms of these effects, which may facilitate performance through reducing the uncertainty surrounding the temporal onset of stimuli (temporal uncertainty) and/or facilitating motor readiness despite uncertainty about impending target locations (target uncertainty). Specifically, the contingent negative variation (CNV) represents a compelling candidate mechanism of anticipatory motor planning, while the alpha oscillation is thought to be sensitive to temporal contingencies in perceptual systems. The current study undertook a secondary analysis of a large data set (n = 91) containing choice RT, cognitive ability, and EEG measurements to help clarify these issues. Single-trial EEG analysis in conjunction with mixed-effects modeling revealed that higher fluid intelligence corresponded to faster RT on average. When considered together, temporal and target uncertainty moderated the RT-ability relationship, with higher ability being associated with greater resilience to both types of uncertainty. Target uncertainty attenuated the amplitude of the CNV for all participants, but higher ability individuals were more resilient to this effect. Similarly, only higher ability individuals showed increased prestimulus alpha power (at left-lateralized sites) during longer, more easily anticipated interstimulus intervals. Collectively, these findings emphasize top-down anticipatory processes as likely contributors to chronometry-ability correlations.
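The moderation analysis described above can be sketched as follows, with ordinary least squares on simulated single-trial data standing in for the paper's mixed-effects models; every coefficient, noise level, and sample size below is invented for illustration.

```python
# Sketch of an RT-by-ability moderation analysis: regress single-trial RT on
# ability, uncertainty, and their interaction. Simulated so that higher
# ability means faster RT and greater resilience to uncertainty (negative
# interaction). OLS stands in for the study's mixed-effects modeling.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_trials = 91, 100
ability = np.repeat(rng.normal(0, 1, n_subj), n_trials)     # z-scored, per subject
uncertainty = rng.uniform(0, 1, n_subj * n_trials)          # per trial

rt = (450                                                   # baseline RT (ms)
      - 20 * ability                                        # ability speeds RT
      + 60 * uncertainty                                    # uncertainty slows RT
      - 15 * ability * uncertainty                          # resilience effect
      + rng.normal(0, 40, n_subj * n_trials))               # trial noise

X = np.column_stack([np.ones_like(rt), ability, uncertainty,
                     ability * uncertainty])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
for name, b in zip(["intercept", "ability", "uncertainty",
                    "ability x uncertainty"], beta):
    print(f"{name:>22}: {b:8.2f}")
```

A negative interaction coefficient is the signature of the reported moderation: the uncertainty cost on RT shrinks as ability increases. A proper reanalysis would add random intercepts and slopes per subject.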
Affiliation(s)
- Ty L McKinney: Department of Psychology, University of Utah, Salt Lake City, Utah
- Matthew J Euler: Department of Psychology, University of Utah, Salt Lake City, Utah
23
Kellmeyer P, Vry MS, Ball T. A transcallosal fibre system between homotopic inferior frontal regions supports complex linguistic processing. Eur J Neurosci 2019; 50:3544-3556. [PMID: 31209927] [PMCID: PMC6899774] [DOI: 10.1111/ejn.14486]
Abstract
Inferior frontal regions in the left and right hemisphere support different aspects of language processing. In the canonical model, left inferior frontal regions are mostly involved in processing based on phonological, syntactic and semantic features of language, whereas the right inferior frontal regions process paralinguistic aspects like affective prosody. Using diffusion tensor imaging (DTI)‐based probabilistic fibre tracking in 20 healthy volunteers, we identify a callosal fibre system connecting left and right inferior frontal regions that are involved in linguistic processing of varying complexity. Anatomically, we show that the interhemispheric fibres are highly aligned and distributed along a rostral to caudal gradient in the body and genu of the corpus callosum to connect homotopic inferior frontal regions. In the light of converging data, taking previous DTI‐based tracking studies and clinical case studies into account, our findings suggest that the right inferior frontal cortex not only processes paralinguistic aspects of language (such as affective prosody), as purported by the canonical model, but also supports the computation of linguistic aspects of varying complexity in the human brain. Our model may explain patterns of right‐hemispheric contribution to stroke recovery as well as disorders of prosodic processing. Beyond language‐related brain function, we discuss how inter‐species differences in interhemispheric connectivity and fibre density, including the system we described here may also explain differences in transcallosal information transfer and cognitive abilities across different mammalian species.
Affiliation(s)
- Philipp Kellmeyer: Neuromedical Artificial Intelligence Lab, Department of Neurosurgery, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany; Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
- Magnus-Sebastian Vry: Department of Psychiatry and Psychotherapy, Faculty of Medicine, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany
- Tonio Ball: Neuromedical Artificial Intelligence Lab, Department of Neurosurgery, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany; Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
24
Keller M, Neuschwander P, Meyer M. When right becomes less right: Neural dedifferentiation during suprasegmental speech processing in the aging brain. Neuroimage 2019; 189:886-895. [DOI: 10.1016/j.neuroimage.2019.01.050]
25
Bareš M, Apps R, Avanzino L, Breska A, D'Angelo E, Filip P, Gerwig M, Ivry RB, Lawrenson CL, Louis ED, Lusk NA, Manto M, Meck WH, Mitoma H, Petter EA. Consensus paper: Decoding the Contributions of the Cerebellum as a Time Machine. From Neurons to Clinical Applications. Cerebellum 2019; 18:266-286. [PMID: 30259343] [DOI: 10.1007/s12311-018-0979-5]
Abstract
Time perception is an essential element of conscious and subconscious experience, coordinating our perception of and interaction with the surrounding environment. In recent years, major technological advances in the field of neuroscience have helped foster new insights into the processing of temporal information, including extending our knowledge of the role of the cerebellum as one of the key nodes in the brain for this function. This consensus paper provides a state-of-the-art picture from experts in the field of cerebellar research on a variety of crucial issues related to temporal processing, drawing on recent anatomical, neurophysiological, behavioral, and clinical research. The cerebellar granular layer appears especially well suited for the timing operations required to confer millisecond precision on cerebellar computations. This may be most evident in the manner in which the cerebellum controls the duration and timing of the agonist-antagonist EMG bursts associated with fast goal-directed voluntary movements. In concert with adaptive processes, interactions within the cerebellar cortex are sufficient to support sub-second timing. Supra-second timing, however, seems to require cortical and basal ganglia networks, perhaps operating in concert with the cerebellum. Additionally, sensory information such as an unexpected stimulus can be forwarded to the cerebellum via the climbing fiber system, providing a temporally constrained mechanism to adjust ongoing behavior and modify future processing. Patients with cerebellar disorders exhibit impairments on a range of tasks that require precise timing, and recent evidence suggests that the timing problems observed in other neurological conditions such as Parkinson's disease, essential tremor, and dystonia may reflect disrupted interactions between the basal ganglia and the cerebellum. The complex concepts emerging from this consensus paper should provide a foundation for further discussion, helping to identify the basic research questions required to understand how the brain represents and utilizes time, as well as delineating ways in which this knowledge can help improve the lives of those with neurological conditions that disrupt this most elemental sense. The panel of experts agrees that timing control in the brain is a complex function in which cerebellar circuitry is deeply involved, and the concept of the cerebellum as a timing machine has now been extended to clinical disorders.
Collapse
Affiliation(s)
- Martin Bareš
- First Department of Neurology, St. Anne's University Hospital and Faculty of Medicine, Masaryk University, Brno, Czech Republic.
- Department of Neurology, School of Medicine, University of Minnesota, Minneapolis, USA.
| | - Richard Apps
- School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
| | - Laura Avanzino
- Department of Experimental Medicine, Section of Human Physiology and Centro Polifunzionale di Scienze Motorie, University of Genoa, Genoa, Italy
- Centre for Parkinson's Disease and Movement Disorders, Ospedale Policlinico San Martino, Genoa, Italy
| | - Assaf Breska
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, USA
| | - Egidio D'Angelo
- Neurophysiology Unit, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Brain Connectivity Center, Fondazione Istituto Neurologico Nazionale Casimiro Mondino (IRCCS), Pavia, Italy
| | - Pavel Filip
- First Department of Neurology, St. Anne's University Hospital and Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Marcus Gerwig
- Department of Neurology, University of Duisburg-Essen, Duisburg, Germany
| | - Richard B Ivry
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, USA
| | - Charlotte L Lawrenson
- School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
| | - Elan D Louis
- Department of Neurology, Yale School of Medicine, Yale University, New Haven, CT, USA
- Department of Chronic Disease Epidemiology, Yale School of Public Health, Yale University, New Haven, CT, USA
| | - Nicholas A Lusk
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
| | - Mario Manto
- Department of Neurology, CHU-Charleroi, Charleroi, Belgium; Service des Neurosciences, UMons, Mons, Belgium
| | - Warren H Meck
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
| | - Hiroshi Mitoma
- Medical Education Promotion Center, Tokyo Medical University, Tokyo, Japan
| | - Elijah A Petter
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
| |
Collapse
|
26
|
Xu XM, Jiao Y, Tang TY, Zhang J, Salvi R, Teng GJ. Inefficient Involvement of Insula in Sensorineural Hearing Loss. Front Neurosci 2019; 13:133. [PMID: 30842724 PMCID: PMC6391342 DOI: 10.3389/fnins.2019.00133] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Accepted: 02/06/2019] [Indexed: 01/22/2023] Open
Abstract
The insular cortex plays an important role in multimodal sensory processing, audio-visual integration and emotion; however, little is known about how the insula is affected by auditory deprivation due to sensorineural hearing loss (SNHL). To address this issue, we used structural and functional magnetic resonance imaging to determine whether neural activity within the insula and its interregional functional connectivity (FC) were disrupted by SNHL and whether these alterations were correlated with clinical measures of emotion and cognition. Thirty-five SNHL subjects and 54 Controls enrolled in our study underwent auditory evaluation, neuropsychological assessments, and functional and structural MRI, respectively. Twenty-five patients and 20 Controls underwent arterial spin labeling scanning. FC of six insula subdivisions was assessed, and the FC results were compared with the neuropsychological tests. Interregional connections were also compared among insula-associated networks, including the salience network (SN), default mode network (DMN), and central executive network (CEN). Compared to Controls, SNHL subjects demonstrated hyperperfusion in the insula and significantly decreased FC between some insula subdivisions and other brain regions, including the thalamus, putamen, precentral gyrus, postcentral gyrus, mid-cingulate cortex, dorsolateral prefrontal cortex, and Rolandic operculum. Anxiety, depression and cognitive impairments were correlated with FC values. Abnormal interactions among the SN, DMN, and CEN were observed in the SNHL group. Our results provide support for the "inefficient high-order control" theory of the insula, in which the auditory deprivation caused by SNHL contributes to impaired sensory integration and central deficits in emotional and cognitive processing.
Collapse
Affiliation(s)
- Xiao-Min Xu
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China
| | - Yun Jiao
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China
| | - Tian-Yu Tang
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China
| | - Jian Zhang
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China
| | - Richard Salvi
- Center for Hearing and Deafness, University at Buffalo, Buffalo, NY, United States
| | - Gao-Jun Teng
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School of Southeast University, Nanjing, China
| |
Collapse
|
27
|
Sammler D, Cunitz K, Gierhan SME, Anwander A, Adermann J, Meixensberger J, Friederici AD. White matter pathways for prosodic structure building: A case study. BRAIN AND LANGUAGE 2018; 183:1-10. [PMID: 29758365 DOI: 10.1016/j.bandl.2018.05.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Revised: 03/14/2018] [Accepted: 05/03/2018] [Indexed: 06/08/2023]
Abstract
The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Collapse
Affiliation(s)
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany.
| | - Katrin Cunitz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital Ulm, Steinhövelstraße 5, 89075 Ulm, Germany
| | - Sarah M E Gierhan
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
| | - Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
| | - Jens Adermann
- University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
| | - Jürgen Meixensberger
- University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
| | - Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
| |
Collapse
|
28
|
Rosemann S, Thiel CM. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. Neuroimage 2018; 175:425-437. [PMID: 29655940 DOI: 10.1016/j.neuroimage.2018.04.023] [Citation(s) in RCA: 61] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Revised: 03/09/2018] [Accepted: 04/09/2018] [Indexed: 11/19/2022] Open
Abstract
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition.
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.
Collapse
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
| | - Christiane M Thiel
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
29
|
Guiraud H, Bedoin N, Krifi-Papoz S, Herbillon V, Caillot-Bascoul A, Gonzalez-Monge S, Boulenger V. Don't speak too fast! Processing of fast rate speech in children with specific language impairment. PLoS One 2018; 13:e0191808. [PMID: 29373610 PMCID: PMC5786310 DOI: 10.1371/journal.pone.0191808] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2017] [Accepted: 01/11/2018] [Indexed: 11/23/2022] Open
Abstract
Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast-rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old) with mainly expressive phonological disorders and preserved comprehension, and 16 age-matched TD children, performed a judgment task on sentences produced (1) at normal rate, (2) at fast rate or (3) time-compressed. The sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall, children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally and artificially accelerated speech. The two groups do not significantly differ in normal-rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception.
Collapse
Affiliation(s)
- Hélène Guiraud
- Laboratoire Dynamique Du Langage, CNRS/Université de Lyon UMR5596, Lyon, France
- * E-mail: (HG); (VB)
| | - Nathalie Bedoin
- Laboratoire Dynamique Du Langage, CNRS/Université de Lyon UMR5596, Lyon, France
| | - Sonia Krifi-Papoz
- Service de Neurologie Pédiatrique, Hôpital Femme Mère Enfant, Bron, France
| | - Vania Herbillon
- Service Épilepsie, Sommeil et Explorations Fonctionnelles Neuropédiatriques, Hôpital Femme Mère Enfant, Bron, France
- Centre de Recherche en Neurosciences de Lyon, DYCOG, INSERM U1028 / CNRS UMR5292, Bron, France
| | - Aurélia Caillot-Bascoul
- Service ORL chirurgie cervico-faciale, Centre Hospitalier Universitaire Gabriel Montpied, Clermont-Ferrand, France
| | - Sibylle Gonzalez-Monge
- Centre de Référence Troubles des Apprentissages, Service de Rééducation pédiatrique, Hôpital Femme Mère Enfant, Bron, France
| | - Véronique Boulenger
- Laboratoire Dynamique Du Langage, CNRS/Université de Lyon UMR5596, Lyon, France
- * E-mail: (HG); (VB)
| |
Collapse
|
30
|
Aggelopoulos NC, Deike S, Selezneva E, Scheich H, Brechmann A, Brosch M. Predictive cues for auditory stream formation in humans and monkeys. Eur J Neurosci 2017; 51:1254-1264. [PMID: 29250854 DOI: 10.1111/ejn.13808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 12/12/2017] [Accepted: 12/12/2017] [Indexed: 11/27/2022]
Abstract
Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. We therefore varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences, as well as the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex, while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses.
Collapse
Affiliation(s)
- Nikolaos C Aggelopoulos
- Special Lab of Primate Neurobiology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
| | - Susann Deike
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Elena Selezneva
- Special Lab of Primate Neurobiology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
| | - Henning Scheich
- Emeritus Group Lifelong Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
| | - André Brechmann
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
| | - Michael Brosch
- Special Lab of Primate Neurobiology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
| |
Collapse
|
31
|
Flexible, rapid and automatic neocortical word form acquisition mechanism in children as revealed by neuromagnetic brain response dynamics. Neuroimage 2017; 155:450-459. [DOI: 10.1016/j.neuroimage.2017.03.066] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2016] [Revised: 03/03/2017] [Accepted: 03/31/2017] [Indexed: 11/15/2022] Open
|
32
|
An oscillopathic approach to developmental dyslexia: From genes to speech processing. Behav Brain Res 2017; 329:84-95. [DOI: 10.1016/j.bbr.2017.03.048] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2017] [Revised: 03/14/2017] [Accepted: 03/18/2017] [Indexed: 12/27/2022]
|
33
|
Katlowitz KA, Oya H, Howard MA, Greenlee JDW, Long MA. Paradoxical vocal changes in a trained singer by focally cooling the right superior temporal gyrus. Cortex 2017; 89:111-119. [PMID: 28282570 PMCID: PMC5421518 DOI: 10.1016/j.cortex.2017.01.024] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2016] [Revised: 11/26/2016] [Accepted: 01/30/2017] [Indexed: 11/24/2022]
Abstract
The production and perception of music is preferentially mediated by cortical areas within the right hemisphere, but little is known about how these brain regions individually contribute to this process. In an experienced singer undergoing awake craniotomy, we demonstrated that direct electrical stimulation to a portion of the right posterior superior temporal gyrus (pSTG) selectively interrupted singing but not speaking. We then focally cooled this region to modulate its activity during vocalization. In contrast to similar manipulations in left hemisphere speech production regions, pSTG cooling did not elicit any changes in vocal timing or quality. However, this manipulation led to an increase in the pitch of speaking with no such change in singing. Further analysis revealed that all vocalizations exhibited a cooling-induced increase in the frequency of the first formant, raising the possibility that potential pitch offsets may have been actively avoided during singing. Our results suggest that the right pSTG plays a key role in vocal sensorimotor processing whose impact is dependent on the type of vocalization produced.
Collapse
Affiliation(s)
- Kalman A Katlowitz
- NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY, USA; Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
| | - Hiroyuki Oya
- Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
| | - Matthew A Howard
- Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
| | - Jeremy D W Greenlee
- Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
| | - Michael A Long
- NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY, USA; Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA.
| |
Collapse
|
34
|
Kandylaki KD, Henrich K, Nagels A, Kircher T, Domahs U, Schlesewsky M, Bornkessel-Schlesewsky I, Wiese R. Where Is the Beat? The Neural Correlates of Lexical Stress and Rhythmical Well-formedness in Auditory Story Comprehension. J Cogn Neurosci 2017; 29:1119-1131. [PMID: 28294714 DOI: 10.1162/jocn_a_01122] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
While listening to continuous speech, humans process beat information to correctly identify word boundaries. The beats of language are stress patterns that are created by combining lexical (word-specific) stress patterns and the rhythm of a specific language. Sometimes, the lexical stress pattern needs to be altered to obey the rhythm of the language. This study investigated the interplay of lexical stress patterns and rhythmical well-formedness in natural speech with fMRI. Previous electrophysiological studies on cases in which a regular lexical stress pattern may be altered to obtain rhythmical well-formedness showed that even subtle rhythmic deviations are detected by the brain if attention is directed toward prosody. Here, we present a new approach to this phenomenon by having participants listen to contextually rich stories in the absence of a task targeting the manipulation. For the interaction of lexical stress and rhythmical well-formedness, we found one suprathreshold cluster localized between the cerebellum and the brain stem. For the main effect of lexical stress, we found higher BOLD responses to the retained lexical stress pattern in the bilateral SMA, bilateral postcentral gyrus, bilateral middle frontal gyrus, bilateral inferior and right superior parietal lobule, and right precuneus. These results support the view that lexical stress is processed as part of a sensorimotor network of speech comprehension. Moreover, our results connect beat processing in language to domain-independent timing perception.
Collapse
Affiliation(s)
| | - Karen Henrich
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | | | | | - Ulrike Domahs
- Free University of Bozen-Bolzano, Brixen-Bressanone, Italy
| | | | | | | |
Collapse
|
35
|
Cutini S, Szűcs D, Mead N, Huss M, Goswami U. Atypical right hemisphere response to slow temporal modulations in children with developmental dyslexia. Neuroimage 2016; 143:40-49. [PMID: 27520749 PMCID: PMC5139981 DOI: 10.1016/j.neuroimage.2016.08.012] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Revised: 07/22/2016] [Accepted: 08/08/2016] [Indexed: 01/18/2023] Open
Abstract
Phase entrainment of neuronal oscillations is thought to play a central role in encoding speech. Children with developmental dyslexia show impaired phonological processing of speech, proposed theoretically to be related to atypical phase entrainment to slower temporal modulations in speech (<10Hz). While studies of children with dyslexia have found atypical phase entrainment in the delta band (~2Hz), some studies of adults with developmental dyslexia have shown impaired entrainment in the low gamma band (~35-50Hz). Meanwhile, studies of neurotypical adults suggest asymmetric temporal sensitivity in auditory cortex, with preferential processing of slower modulations by right auditory cortex, and faster modulations processed bilaterally. Here we compared neural entrainment to slow (2Hz) versus faster (40Hz) amplitude-modulated noise using fNIRS to study possible hemispheric asymmetry effects in children with developmental dyslexia. We predicted atypical right hemisphere responding to 2Hz modulations for the children with dyslexia in comparison to control children, but equivalent responding to 40Hz modulations in both hemispheres. Analyses of HbO concentration revealed a right-lateralised region focused on the supra-marginal gyrus that was more active in children with dyslexia than in control children for 2Hz stimulation. We discuss possible links to linguistic prosodic processing, and interpret the data with respect to a neural 'temporal sampling' framework for conceptualizing the phonological deficits that characterise children with developmental dyslexia across languages.
Collapse
Affiliation(s)
- Simone Cutini
- Department of Developmental Psychology, University of Padova, Italy
| | - Dénes Szűcs
- Centre for Neuroscience in Education, Department of Psychology, Downing Street, Cambridge CB2 3EB, UK
| | - Natasha Mead
- Centre for Neuroscience in Education, Department of Psychology, Downing Street, Cambridge CB2 3EB, UK
| | - Martina Huss
- Centre for Neuroscience in Education, Department of Psychology, Downing Street, Cambridge CB2 3EB, UK
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, Downing Street, Cambridge CB2 3EB, UK.
| |
Collapse
|
36
|
Zioga I, Di Bernardi Luft C, Bhattacharya J. Musical training shapes neural responses to melodic and prosodic expectation. Brain Res 2016; 1650:267-282. [PMID: 27622645 PMCID: PMC5069926 DOI: 10.1016/j.brainres.2016.09.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Revised: 09/01/2016] [Accepted: 09/09/2016] [Indexed: 11/15/2022]
Abstract
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact along the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch-tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial effect of expertise could be attributed to its strengthening of general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing along the pitch dimension, and further demonstrate a potential modulation by musical expertise. Highlights: Melodic expectancy influences the processing of prosodic expectancy. Musical expertise modulates pitch processing in music and language. Musicians have a more refined response to pitch. Musicians' neural responses are proportional to their level of musical expertise. Possible association between the P200 neural component and behavioural facilitation.
Collapse
Affiliation(s)
- Ioanna Zioga
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom.
| | - Caroline Di Bernardi Luft
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom; School of Biological and Chemical Sciences, Queen Mary, University of London, Mile End Rd, London E1 4NS, United Kingdom
| | - Joydeep Bhattacharya
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
| |
Collapse
|
37
|
Sameiro-Barbosa CM, Geiser E. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization and Cortico-Striatal Activation. Front Neurosci 2016; 10:361. [PMID: 27559306 PMCID: PMC4978719 DOI: 10.3389/fnins.2016.00361] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 07/20/2016] [Indexed: 12/18/2022] Open
Abstract
The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.
Collapse
Affiliation(s)
- Catia M Sameiro-Barbosa
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
| | - Eveline Geiser
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; The Laboratory for Investigative Neurophysiology, Department of Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
38
|
Distinct developmental trajectories for explicit and implicit timing. J Exp Child Psychol 2016; 150:141-154. [PMID: 27295205 DOI: 10.1016/j.jecp.2016.05.010] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2016] [Revised: 05/19/2016] [Accepted: 05/20/2016] [Indexed: 11/22/2022]
Abstract
Adults and children aged 5 and 8 years were given explicit and implicit timing tasks. These tasks were based on the same temporal representation (the temporal interval between two signals), but in the explicit task participants received overt instructions to judge the duration of the interval, whereas in the implicit task they did not receive any temporal instructions and were asked only to press as quickly as possible after the second signal. In addition, participants' cognitive capacities were assessed with different neuropsychological tests. The results showed that temporal variability (i.e., the spread of performance around the reference interval) decreased as a function of age in the explicit task, being higher in the 5-year-olds than in the 8-year-olds and adults. The higher variability in the youngest children was directly linked to their limited cognitive capacity. By contrast, temporal variability in the implicit timing task remained constant across the different age groups and was unrelated to cognitive capacity. Processing of time, therefore, was independent of age in the implicit task but changed with age in the explicit task, thereby demonstrating distinct developmental trajectories for explicit and implicit timing.
39
Archila-Suerte P, Bunta F, Hernandez AE. Speech sound learning depends on individuals' ability, not just experience. Int J Biling 2016; 20:231-253. [PMID: 30381786 PMCID: PMC6205517 DOI: 10.1177/1367006914552206] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
AIMS The goal of this study was to investigate if phonetic experience with two languages facilitated the learning of novel speech sounds or if general perceptual abilities independent of bilingualism played a role in this learning. METHOD The underlying neural mechanisms involved in novel speech sound learning were observed in groups of English monolinguals (n = 20), early Spanish-English bilinguals (n = 24), and experimentally derived subgroups of individuals with advanced ability to learn novel speech sound contrasts (ALs, n = 28) and individuals with non-advanced ability to learn novel speech sound contrasts (non-ALs, n = 16). Subjects participated in four consecutive sessions of phonetic training in which they listened to novel speech sounds embedded in Hungarian pseudowords. Participants completed two fMRI sessions, one before training and another one after training. While in the scanner, participants passively listened to the speech stimuli presented during training. A repeated measures behavioral analysis and ANOVA for fMRI data were conducted to investigate learning after training. RESULTS AND CONCLUSIONS The results showed that bilinguals did not significantly differ from monolinguals in the learning of novel sounds behaviorally. Instead, the behavioral results revealed that regardless of language group (monolingual or bilingual), ALs were better at discriminating pseudowords throughout the training than non-ALs. Neurally, region of interest (ROI) analysis showed increased activity in the superior temporal gyrus (STG) bilaterally in ALs relative to non-ALs after training. Bilinguals also showed greater STG activity than monolinguals. Extracted values from ROIs entered into a 2×2 MANOVA showed a main effect of performance, demonstrating that individual ability exerts a significant effect on learning novel speech sounds. In fact, advanced ability to learn novel speech sound contrasts appears to play a more significant role in speech sound learning than experience with two phonological systems.
Affiliation(s)
- Ferenc Bunta
- Department of Communication Sciences and Disorders, University of Houston, USA
40
Abstract
Recent models of interval timing have emphasized local, modality-specific processes or a core network centered on a cortico-thalamic-striatal circuit, leaving the role of the cerebellum unclear. We examine this issue, using current taxonomies of timing as a guide to review the association of the cerebellum in motor and perceptual tasks in which timing information is explicit or implicit. Evidence from neuropsychological, neurophysiological, and neuroimaging studies indicates that the involvement of the cerebellum in timing is not restricted to any subdomain of this taxonomy. However, an emerging pattern is that tasks in which timing is done in cyclic continuous contexts do not rely on the cerebellum. In such scenarios, timing may be an emergent property of system dynamics, and especially oscillatory entrainment. The cerebellum may be necessary to time discrete intervals in the absence of continuous cyclic dynamics.
Affiliation(s)
- Assaf Breska
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720-1650
- Richard B Ivry
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720-1650
41
LaCroix AN, Diaz AF, Rogalsky C. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study. Front Psychol 2015; 6:1138. [PMID: 26321976 PMCID: PMC4531212 DOI: 10.3389/fpsyg.2015.01138] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 07/22/2015] [Indexed: 11/30/2022] Open
Abstract
The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.
Affiliation(s)
- Arianna N LaCroix
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Alvaro F Diaz
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
42
Hurschler MA, Liem F, Oechslin M, Stämpfli P, Meyer M. fMRI reveals lateralized pattern of brain activity modulated by the metrics of stimuli during auditory rhyme processing. Brain Lang 2015; 147:41-50. [PMID: 26025759 DOI: 10.1016/j.bandl.2015.05.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2014] [Revised: 03/09/2015] [Accepted: 05/04/2015] [Indexed: 06/04/2023]
Abstract
Our fMRI study investigates auditory rhyme processing in spoken language to further elucidate the topic of functional lateralization of language processing. During scanning, 14 subjects listened to four different types of versed word strings and subsequently performed either a rhyme or a meter detection task. Our results show lateralization to auditory-related temporal regions in the right hemisphere irrespective of task. As for the left hemisphere we report responses in the supramarginal gyrus as well as in the opercular part of the inferior frontal gyrus modulated by the presence of regular meter and rhyme. The interaction of rhyme and meter was associated with increased involvement of the superior temporal sulcus and the putamen of the right hemisphere. Overall, these findings support the notion of right-hemispheric specialization for suprasegmental analyses during processing of spoken sentences and provide neuroimaging evidence for the influence of metrics on auditory rhyme processing.
Affiliation(s)
- Martina A Hurschler
- Univ Zurich, Inst Psychol, Neuroplasticity and Learning in the Healthy Aging Brain (HAB LAB), Zurich, Switzerland.
- Franziskus Liem
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Univ Zurich, International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland
- Mathias Oechslin
- Univ Zurich, International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland
- Philipp Stämpfli
- Univ Zurich, MR-Center of the Psychiatric University Hospital and the Department of Child and Adolescent Psychiatry, Zurich, Switzerland; Univ Zurich, Department of Psychiatry, Psychotherapy and Psychosomatics, Psychiatric Hospital, Zurich, Switzerland
- Martin Meyer
- Univ Zurich, Inst Psychol, Neuroplasticity and Learning in the Healthy Aging Brain (HAB LAB), Zurich, Switzerland; Univ Zurich, International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland; University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Switzerland; Univ Klagenfurt, Inst Psychol, Div Cognitive Neuroscience, Klagenfurt, Austria
43
Kotz SA, Schmidt-Kassow M. Basal ganglia contribution to rule expectancy and temporal predictability in speech. Cortex 2015; 68:48-60. [DOI: 10.1016/j.cortex.2015.02.021] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2014] [Revised: 01/28/2015] [Accepted: 02/25/2015] [Indexed: 10/23/2022]
44
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/03/2015] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS processes also speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, by offspring emitting a low-level distress call for inquiring about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
45
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/21/2017] [Indexed: 12/28/2022] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
47
Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP. Neurobiological roots of language in primate audition: common computational properties. Trends Cogn Sci 2015; 19:142-50. [PMID: 25600585 PMCID: PMC4348204 DOI: 10.1016/j.tics.2014.12.008] [Citation(s) in RCA: 133] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2014] [Revised: 12/06/2014] [Accepted: 12/12/2014] [Indexed: 11/26/2022]
Abstract
Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions.
Affiliation(s)
- Ina Bornkessel-Schlesewsky
- Cognitive Neuroscience Laboratory, School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, SA, Australia; Department of Germanic Linguistics, University of Marburg, Marburg, Germany.
- Matthias Schlesewsky
- Department of English and Linguistics, Johannes Gutenberg-University, Mainz, Germany
- Steven L Small
- Department of Neurology, University of California, Irvine, CA, USA
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington DC, USA; Institute for Advanced Study, Technische Universität München, Garching, Germany
48
Cason N, Astésano C, Schön D. Bridging music and speech rhythm: rhythmic priming and audio-motor training affect speech perception. Acta Psychol (Amst) 2015; 155:43-50. [PMID: 25553343 DOI: 10.1016/j.actpsy.2014.12.002] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2014] [Revised: 12/01/2014] [Accepted: 12/03/2014] [Indexed: 11/16/2022] Open
Abstract
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing - the building blocks of speech - and whether audio-motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data was collected from two groups: one who received audio-motor training, and one who did not. We hypothesised that 1) phonological processing would be enhanced in matching conditions, and 2) audio-motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, thus revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio-motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.
Affiliation(s)
- Nia Cason
- Aix-Marseille Université, Institut de Neurosciences des Systèmes, Marseille, France; INSERM, U1106, Marseille, France.
- Corine Astésano
- UMR 7309, Laboratoire Parole et Langage, CNRS & Aix-Marseille University, 5 avenue Pasteur, 13006 Aix-en-Provence, France; EA 4156, U.R.I. Octogone-Lordat, 5 allées Antonio Machado, 31058 Toulouse Cedex 09, France.
- Daniele Schön
- Aix-Marseille Université, Institut de Neurosciences des Systèmes, Marseille, France; INSERM, U1106, Marseille, France.
49
Archila-Suerte P, Zevin J, Hernandez AE. The effect of age of acquisition, socioeducational status, and proficiency on the neural processing of second language speech sounds. Brain Lang 2015; 141:35-49. [PMID: 25528287 PMCID: PMC5956909 DOI: 10.1016/j.bandl.2014.11.005] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2013] [Revised: 11/06/2014] [Accepted: 11/09/2014] [Indexed: 06/02/2023]
Abstract
This study investigates the role of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency on the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA×SES; AoA×L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds.
Affiliation(s)
- Jason Zevin
- Sackler Institute for Developmental Psychobiology, Weill Medical College of Cornell University, 1300 York Ave., Box 140, NY, NY 10065, United States.
50
Size and synchronization of auditory cortex promotes musical, literacy, and attentional skills in children. J Neurosci 2014; 34:10937-49. [PMID: 25122894 DOI: 10.1523/jneurosci.5315-13.2014] [Citation(s) in RCA: 72] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered.