1. Corsini A, Tomassini A, Pastore A, Delis I, Fadiga L, D'Ausilio A. Speech perception difficulty modulates theta-band encoding of articulatory synergies. J Neurophysiol 2024;131:480-491. PMID: 38323331. DOI: 10.1152/jn.00388.2023.
Abstract
The human brain tracks available speech acoustics and extrapolates missing information such as the speaker's articulatory patterns. However, the extent to which articulatory reconstruction supports speech perception remains unclear. This study explores the relationship between articulatory reconstruction and task difficulty. Participants listened to sentences and performed a speech-rhyming task. Real kinematic data of the speaker's vocal tract were recorded via electromagnetic articulography (EMA) and aligned to corresponding acoustic outputs. We extracted articulatory synergies from the EMA data with principal component analysis (PCA) and employed partial information decomposition (PID) to separate the electroencephalographic (EEG) encoding of acoustic and articulatory features into unique, redundant, and synergistic atoms of information. We median-split sentences into easy (ES) and hard (HS) based on participants' performance and found that greater task difficulty involved greater encoding of unique articulatory information in the theta band. We conclude that fine-grained articulatory reconstruction plays a complementary role in the encoding of speech acoustics, lending further support to the claim that motor processes support speech perception.

NEW & NOTEWORTHY: Top-down processes originating from the motor system contribute to speech perception through the reconstruction of the speaker's articulatory movements. This study investigates the role of such articulatory simulation under variable task difficulty. We show that more challenging listening tasks lead to increased encoding of articulatory kinematics in the theta band and suggest that, in such situations, fine-grained articulatory reconstruction complements acoustic encoding.
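For readers unfamiliar with the synergy-extraction step, the PCA stage described in this abstract can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' pipeline; the channel count, latent signals, and function names are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 12   # EMA time samples x sensor coordinates (assumed)

# two latent articulatory "synergies" linearly mixed into the channels
time = np.linspace(0, 10, n_samples)
latents = np.column_stack([np.sin(2 * np.pi * 4 * time),
                           np.sin(2 * np.pi * 7 * time)])
mixing = rng.normal(size=(2, n_channels))
ema = latents @ mixing + 0.05 * rng.normal(size=(n_samples, n_channels))

def pca(data, k):
    """Return component scores, loadings, and explained-variance ratios."""
    centered = data - data.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    evr = s**2 / (s**2).sum()
    return centered @ vt[:k].T, vt[:k], evr

scores, loadings, evr = pca(ema, k=2)
```

With two latent sources mixed into the channels, the first two principal components recover essentially all of the coordinated variance, which is the sense in which PCA components serve as "synergies."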
Affiliation(s)
- Alessandro Corsini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Aldo Pastore
- Laboratorio NEST, Scuola Normale Superiore, Pisa, Italy
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
- Luciano Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alessandro D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
2. Pulli EP, Nolvi S, Eskola E, Nordenswan E, Holmberg E, Copeland A, Kumpulainen V, Silver E, Merisaari H, Saunavaara J, Parkkola R, Lähdesmäki T, Saukko E, Kataja E, Korja R, Karlsson L, Karlsson H, Tuulari JJ. Structural brain correlates of non-verbal cognitive ability in 5-year-old children: Findings from the FinnBrain birth cohort study. Hum Brain Mapp 2023;44:5582-5601. PMID: 37606608. PMCID: PMC10619410. DOI: 10.1002/hbm.26463.
Abstract
Non-verbal cognitive ability predicts multiple important life outcomes, for example, school and job performance. It has been associated with parieto-frontal cortical anatomy in prior studies in adult and adolescent populations, while young children have received relatively little attention. We explored the associations between cortical anatomy and non-verbal cognitive ability in 165 5-year-old participants (mean scan age 5.40 years, SD 0.13; 90 males) from the FinnBrain Birth Cohort study. T1-weighted brain magnetic resonance images were processed using FreeSurfer. Non-verbal cognitive ability was measured using the Performance Intelligence Quotient (PIQ) estimated from the Block Design and Matrix Reasoning subtests from the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III). In vertex-wise general linear models, PIQ scores associated positively with volumes in the left caudal middle frontal and right pericalcarine regions, as well as surface area in the left caudal middle frontal, left inferior temporal, and right lingual regions. There were no associations between PIQ and cortical thickness. To the best of our knowledge, this is the first study to examine structural correlates of non-verbal cognitive ability in a large sample of typically developing 5-year-olds. The findings are generally in line with prior findings from older age groups, with the important addition of the positive association between volume/surface area in the right medial occipital region and non-verbal cognitive ability. This finding adds to the literature by identifying a brain region that should be considered in future studies exploring the role of cortical structure for cognitive development in young children.
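A vertex-wise general linear model of the kind used here regresses the cortical metric at each vertex on PIQ plus nuisance covariates, yielding one t-statistic per vertex. The following is an illustrative numpy sketch on synthetic data; the vertex count, covariates, and planted effect size are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_vert = 165, 1000              # subjects x vertices (vertex count assumed)

piq = rng.normal(100, 15, n_sub)       # Performance IQ scores
age = rng.normal(5.4, 0.13, n_sub)     # scan age in years
sex = rng.integers(0, 2, n_sub).astype(float)

# design matrix: intercept + PIQ + nuisance covariates
X = np.column_stack([np.ones(n_sub), piq, age, sex])
Y = rng.normal(size=(n_sub, n_vert))   # e.g. surface area, one column per vertex
Y[:, :10] += 0.02 * (piq - piq.mean())[:, None]   # planted PIQ effect at 10 vertices

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)      # fit all vertices at once
resid = Y - X @ beta
dof = n_sub - X.shape[1]
mse = (resid**2).sum(axis=0) / dof
se_piq = np.sqrt(mse * np.linalg.inv(X.T @ X)[1, 1])
t_piq = beta[1] / se_piq               # one t-value per vertex for the PIQ regressor
```

In practice the resulting vertex-wise t-map would then be thresholded with a multiple-comparisons correction, a step omitted from this sketch.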
Affiliation(s)
- Elmo P. Pulli
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Saara Nolvi
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Turku Institute for Advanced Studies, Department of Psychology and Speech‐Language PathologyUniversity of TurkuTurkuFinland
| | - Eeva Eskola
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of PsychologyUniversity of TurkuTurkuFinland
| | - Elisabeth Nordenswan
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Eeva Holmberg
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Anni Copeland
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Venla Kumpulainen
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Eero Silver
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Harri Merisaari
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of RadiologyUniversity of TurkuTurkuFinland
| | - Jani Saunavaara
- Department of Medical PhysicsTurku University Hospital and University of TurkuTurkuFinland
| | - Riitta Parkkola
- Department of RadiologyUniversity of TurkuTurkuFinland
- Department of RadiologyTurku University HospitalTurkuFinland
| | - Tuire Lähdesmäki
- Pediatric Neurology, Department of Pediatrics and Adolescent MedicineTurku University Hospital and University of TurkuTurkuFinland
| | | | - Eeva‐Leena Kataja
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
| | - Riikka Korja
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of PsychologyUniversity of TurkuTurkuFinland
| | - Linnea Karlsson
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of Pediatrics and Adolescent MedicineTurku University Hospital and University of TurkuTurkuFinland
| | - Hasse Karlsson
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of PsychiatryTurku University Hospital and University of TurkuTurkuFinland
| | - Jetro J. Tuulari
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Department of Clinical MedicineUniversity of TurkuTurkuFinland
- Centre for Population Health ResearchTurku University Hospital and University of TurkuTurkuFinland
- Department of PsychiatryTurku University Hospital and University of TurkuTurkuFinland
- Turku Collegium for Science, Medicine and TechnologyUniversity of TurkuTurkuFinland
- Department of PsychiatryUniversity of OxfordOxfordUK
| |
3. Pastore A, Tomassini A, Delis I, Dolfini E, Fadiga L, D'Ausilio A. Speech listening entails neural encoding of invisible articulatory features. Neuroimage 2022;264:119724. PMID: 36328272. DOI: 10.1016/j.neuroimage.2022.119724.
Abstract
Speech processing entails a complex interplay between bottom-up and top-down computations. The former is reflected in the neural entrainment to the quasi-rhythmic properties of speech acoustics, while the latter is supposed to guide the selection of the most relevant input subspace. Top-down signals are believed to originate mainly from motor regions, yet similar activities have been shown to tune attentional cycles also for simpler, non-speech stimuli. Here we examined whether, during speech listening, the brain reconstructs articulatory patterns associated with speech production. We measured electroencephalographic (EEG) data while participants listened to sentences during the production of which articulatory kinematics of the lips, jaw, and tongue were also recorded (via Electro-Magnetic Articulography, EMA). We captured the patterns of articulatory coordination through Principal Component Analysis (PCA) and used Partial Information Decomposition (PID) to identify whether the speech envelope and each of the kinematic components provided unique, synergistic, and/or redundant information regarding the EEG signals. Interestingly, tongue movements carry information, both unique and synergistic with the envelope, that is encoded in the listener's brain activity. This demonstrates that during speech listening the brain retrieves highly specific and unique motor information that is never accessible through vision, thus leveraging audio-motor maps that arise most likely from the acquisition of speech production during development.
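The PID step can be illustrated for discrete variables with the Williams-Beer I_min redundancy measure. Note that the paper applies PID to continuous EEG and kinematic signals with different estimators, so the sketch below is only conceptual; the XOR example is a textbook case where all the information is synergistic:

```python
import numpy as np

def mutual_info(joint):
    """I(T;S) in bits from a 2-D joint probability table p(t, s)."""
    pt = joint.sum(axis=1, keepdims=True)
    ps = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pt @ ps)[nz])).sum())

def specific_info(joint, t):
    """Williams-Beer specific information I(T = t; S)."""
    pt = joint.sum(axis=1)
    ps = joint.sum(axis=0)
    p_s_given_t = joint[t] / pt[t]
    p_t_given_s = np.divide(joint[t], ps, out=np.zeros_like(ps), where=ps > 0)
    nz = p_s_given_t > 0
    return float((p_s_given_t[nz]
                  * (np.log2(1 / pt[t]) - np.log2(1 / p_t_given_s[nz]))).sum())

def pid_atoms(p):
    """Unique, redundant, and synergistic atoms for p(t, s1, s2)."""
    p = p / p.sum()
    j1, j2 = p.sum(axis=2), p.sum(axis=1)   # p(t, s1) and p(t, s2)
    j12 = p.reshape(p.shape[0], -1)         # p(t, (s1, s2))
    pt = p.sum(axis=(1, 2))
    red = sum(pt[t] * min(specific_info(j1, t), specific_info(j2, t))
              for t in range(pt.size) if pt[t] > 0)
    i1, i2, i12 = mutual_info(j1), mutual_info(j2), mutual_info(j12)
    return {"redundant": red, "unique_1": i1 - red,
            "unique_2": i2 - red, "synergistic": i12 - i1 - i2 + red}

# XOR: the target is recoverable only from both sources jointly
p_xor = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p_xor[s1 ^ s2, s1, s2] = 0.25
atoms = pid_atoms(p_xor)
```

For XOR the decomposition assigns the full 1 bit to the synergistic atom, with zero unique and redundant information, mirroring how envelope and kinematics can jointly carry information that neither provides alone.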
Affiliation(s)
- A Pastore
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- A Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- I Delis
- School of Biomedical Sciences, University of Leeds, Leeds, UK
- E Dolfini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- L Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- A D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
4. Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022;176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392.
5. Computational Modelling of Tone Perception Based on Direct Processing of f0 Contours. Brain Sci 2022;12:337. PMID: 35326294. PMCID: PMC8946547. DOI: 10.3390/brainsci12030337.
Abstract
It has been widely assumed that in speech perception it is imperative to first detect a set of distinctive properties or features and then use them to recognize phonetic units like consonants, vowels, and tones. Those features can be auditory cues or articulatory gestures, or a combination of both. There have been no clear demonstrations of how exactly such a two-phase process would work in the perception of continuous speech, however. Here we used computational modelling to explore whether it is possible to recognize phonetic categories from syllable-sized continuous acoustic signals of connected speech without intermediate featural representations. We used a Support Vector Machine (SVM) and a Self-Organizing Map (SOM) to simulate tone perception in Mandarin, by either directly processing f0 trajectories or extracting various tonal features. The results show that direct tone recognition not only yields better performance than any of the feature extraction schemes but also requires less computational power. These results suggest that prior extraction of features is unlikely to be the operational mechanism of speech perception.
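The direct-processing route can be caricatured with a linear SVM trained on raw f0 samples. This toy uses two synthetic contour classes (rising vs. falling) rather than the four Mandarin tones, and a Pegasos-style sub-gradient trainer stands in for an SVM library; all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)        # 20 f0 samples per syllable (assumed)

def make_contours(n):
    """n rising and n falling f0 trajectories (Hz) with additive noise."""
    rising = 100 + 50 * t + rng.normal(0, 3, (n, t.size))
    falling = 150 - 50 * t + rng.normal(0, 3, (n, t.size))
    X = np.vstack([rising, falling])
    y = np.hstack([np.ones(n), -np.ones(n)])
    return X, y

def train_linear_svm(X, y, lam=1e-3, epochs=100):
    # Pegasos-style stochastic sub-gradient descent on the hinge loss.
    # No bias term: after z-scoring, the two classes are roughly symmetric about 0.
    w = np.zeros(X.shape[1])
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            step += 1
            lr = 1.0 / (lam * step)
            if y[i] * (X[i] @ w) < 1:
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
            else:
                w = (1 - lr * lam) * w
    return w

X, y = make_contours(40)
Xz = (X - X.mean(0)) / X.std(0)      # z-score each time point
w = train_linear_svm(Xz, y)
accuracy = float(np.mean(np.sign(Xz @ w) == y))
```

The point of the exercise: the classifier operates on the f0 trajectory itself, with no intermediate feature extraction (slope, turning point, register) ever computed.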
6. Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021;222:105009. PMID: 34425411. DOI: 10.1016/j.bandl.2021.105009.
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv compared to the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
7. Tang DL, McDaniel A, Watkins KE. Disruption of speech motor adaptation with repetitive transcranial magnetic stimulation of the articulatory representation in primary motor cortex. Cortex 2021;145:115-130. PMID: 34717269. PMCID: PMC8650828. DOI: 10.1016/j.cortex.2021.09.008.
Abstract
When auditory feedback perturbation is introduced in a predictable way over a number of utterances, speakers learn to compensate by adjusting their own productions, a process known as sensorimotor adaptation. Despite multiple lines of evidence indicating the role of primary motor cortex (M1) in motor learning and memory, whether M1 causally contributes to sensorimotor adaptation in the speech domain remains unclear. Here, we aimed to assay whether temporary disruption of the articulatory representation in left M1 by repetitive transcranial magnetic stimulation (rTMS) impairs speech adaptation. To induce sensorimotor adaptation, the frequencies of first formants (F1) were shifted up and played back to participants when they produced "head", "bed", and "dead" repeatedly (the learning phase). A low-frequency rTMS train (0.6 Hz, subthreshold, 12 min) over either the tongue or the hand representation of M1 (between-subjects design) was applied before participants experienced altered auditory feedback in the learning phase. We found that the group who received rTMS over the hand representation showed the expected compensatory response for the upwards shift in F1 by significantly reducing F1 and increasing the second formant (F2) frequencies in their productions. In contrast, these expected compensatory changes in both F1 and F2 did not occur in the group that received rTMS over the tongue representation. Critically, rTMS (subthreshold) over the tongue representation did not affect vowel production, which was unchanged from baseline. These results provide direct evidence that the articulatory representation in left M1 causally contributes to sensorimotor learning in speech. Furthermore, these results also suggest that M1 is critical to the network supporting a more global adaptation that aims to move the altered speech production closer to a learnt pattern of speech production used to produce another vowel.
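The size of a compensatory response in this paradigm is typically quantified as the production change relative to the imposed shift. A minimal sketch with made-up numbers; the +100 Hz shift magnitude and F1 values below are purely illustrative, not the study's parameters:

```python
import numpy as np

# Baseline vs. end-of-learning F1 values (Hz) for one speaker -- made-up numbers.
f1_baseline = np.array([552.0, 548.0, 550.0])
f1_hold = np.array([521.0, 519.0, 520.0])
shift_hz = 100.0                     # hypothetical upward F1 perturbation

# Compensation opposes the shift, so the sign is flipped:
# lowering F1 by 30 Hz against a +100 Hz shift = 30% compensation.
compensation_pct = -(f1_hold.mean() - f1_baseline.mean()) / shift_hz * 100
```

Partial compensation (well below 100%) is the typical finding in formant-perturbation studies, which is why a group-level absence of this change, as after tongue-area rTMS, is informative.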
Affiliation(s)
- Ding-Lan Tang
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
- Alexander McDaniel
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
- Kate E Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
8. Jenson D, Saltuklaroglu T. Sensorimotor contributions to working memory differ between the discrimination of Same and Different syllable pairs. Neuropsychologia 2021;159:107947. PMID: 34216594. DOI: 10.1016/j.neuropsychologia.2021.107947.
Abstract
Sensorimotor activity during speech perception is both pervasive and highly variable, changing as a function of the cognitive demands imposed by the task. The purpose of the current study was to evaluate whether the discrimination of Same (matched) and Different (unmatched) syllable pairs elicit different patterns of sensorimotor activity as stimuli are processed in working memory. Raw EEG data recorded from 42 participants were decomposed with independent component analysis to identify bilateral sensorimotor mu rhythms from 36 subjects. Time frequency decomposition of mu rhythms revealed concurrent event related desynchronization (ERD) in alpha and beta frequency bands across the peri- and post-stimulus time periods, which were interpreted as evidence of sensorimotor contributions to working memory encoding and maintenance. Left hemisphere alpha/beta ERD was stronger in Different trials than Same trials during the post-stimulus period, while right hemisphere alpha/beta ERD was stronger in Same trials than Different trials. A between-hemispheres contrast revealed no differences during Same trials, while post-stimulus alpha/beta ERD was stronger in the left hemisphere than the right during Different trials. Results were interpreted to suggest that predictive coding mechanisms lead to repetition suppression effects in Same trials. Mismatches arising from predictive coding mechanisms in Different trials shift subsequent working memory processing to the speech-dominant left hemisphere. Findings clarify how sensorimotor activity differentially supports working memory encoding and maintenance stages during speech discrimination tasks and have potential to inform sensorimotor models of speech perception and working memory.
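Event-related desynchronization of the kind analyzed above is computed as a percent power change from a pre-stimulus baseline in a given frequency band. Below is a numpy-only sketch on a synthetic single-channel signal whose alpha amplitude halves after a stimulus at 1 s; all frequencies, windows, and amplitudes are assumptions, not the study's parameters:

```python
import numpy as np

fs = 250                                   # Hz (assumed sampling rate)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
amp = np.where(t < 1.0, 1.0, 0.5)          # alpha amplitude halves after t = 1 s
eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.05 * rng.normal(size=t.size)

def bandpass(x, fs, lo, hi):
    """Ideal FFT band-pass filter (zero-phase, for illustration only)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, d=1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

def envelope(x):
    """Analytic-signal amplitude via a frequency-domain Hilbert transform."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

alpha = bandpass(eeg, fs, 8, 13)
power = envelope(alpha) ** 2
base = power[(t > 0.2) & (t < 0.8)].mean()   # baseline window
post = power[(t > 1.5) & (t < 3.5)].mean()   # post-stimulus window
erd = 100 * (post - base) / base             # negative = desynchronization
```

Halving the oscillation amplitude quarters its power, so the sketch yields an ERD of roughly -75%; in real EEG this is computed per trial and averaged, often after ICA-based source separation as in the study.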
Affiliation(s)
- David Jenson
- Washington State University, Elson S. Floyd College of Medicine, Department of Speech and Hearing Sciences, Spokane, WA, USA
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, College of Health Professions, Department of Audiology and Speech-Pathology, Knoxville, TN, USA
9. Perron M, Theaud G, Descoteaux M, Tremblay P. The frontotemporal organization of the arcuate fasciculus and its relationship with speech perception in young and older amateur singers and non-singers. Hum Brain Mapp 2021;42:3058-3076. PMID: 33835629. PMCID: PMC8193549. DOI: 10.1002/hbm.25416.
Abstract
The ability to perceive speech in noise (SPiN) declines with age. Although the etiology of SPiN decline is not well understood, accumulating evidence suggests a role for the dorsal speech stream. While age-related decline within the dorsal speech stream would negatively affect SPiN performance, experience-induced neuroplastic changes within the dorsal speech stream could positively affect SPiN performance. Here, we investigated the relationship between SPiN performance and the structure of the arcuate fasciculus (AF), which forms the white matter scaffolding of the dorsal speech stream, in aging singers and non-singers. Forty-three non-singers and 41 singers aged 20 to 87 years old completed a hearing evaluation and a magnetic resonance imaging session that included High Angular Resolution Diffusion Imaging. The groups were matched for sex, age, education, handedness, cognitive level, and musical instrument experience. A subgroup of participants completed a syllable discrimination-in-noise task. The AF was divided into 10 segments to explore potential local specializations for SPiN. The results show that, in carefully matched groups of singers and non-singers: (a) myelin and/or axonal membrane deterioration within the bilateral frontotemporal AF segments are associated with SPiN difficulties in aging singers and non-singers; (b) the structure of the AF is different in singers and non-singers; (c) these differences are not associated with a benefit on SPiN performance for singers. This study clarifies the etiology of SPiN difficulties by supporting the hypothesis for the role of aging of the dorsal speech stream.
Affiliation(s)
- Maxime Perron
- CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
- Guillaume Theaud
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
10. Walker GM, Rollo PS, Tandon N, Hickok G. Effect of Bilateral Opercular Syndrome on Speech Perception. Neurobiol Lang 2021;2:335-353. PMID: 37213256. PMCID: PMC10158595. DOI: 10.1162/nol_a_00037.
Abstract
Speech perception ability and structural neuroimaging were investigated in two cases of bilateral opercular syndrome. Due to bilateral ablation of the motor control center for the lower face and surrounds, these rare cases provide an opportunity to evaluate the necessity of cortical motor representations for speech perception, a cornerstone of some neurocomputational theories of language processing. Speech perception, including audiovisual integration (i.e., the McGurk effect), was mostly unaffected in these cases, although verbal short-term memory impairment hindered performance on several tasks that are traditionally used to evaluate speech perception. The results suggest that the role of the cortical motor system in speech perception is context-dependent and supplementary, not inherent or necessary.
Affiliation(s)
- Grant M. Walker
- Department of Cognitive Sciences, University of California, Irvine
- Nitin Tandon
- Department of Neurosurgery, University of Texas Medical School at Houston
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine
- Department of Language Science, University of California, Irvine
11. Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020;227:117675. PMID: 33359849. DOI: 10.1016/j.neuroimage.2020.117675.
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. While several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, though no consensus exists regarding underlying dysfunctions. To address this issue, we conducted two experiments in which we investigated age differences in speech perception when background noise and talker variability are manipulated, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception significantly declined with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex and medial frontal cortex). Further, our results show that speech perception performance was associated with reduced brain response in the right superior temporal cortex in older compared to younger adults, and with an increase in response to noise in older adults in the left anterior temporal cortex. Talker variability was not associated with different activation patterns in older compared to younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech perception in noise difficulties in older adults.
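A simple mediation model of the kind mentioned in this abstract combines two regressions: an a-path from predictor to mediator, and a b-path from mediator to outcome while controlling for the predictor; the indirect effect is their product. The sketch below uses synthetic data and hypothetical effect sizes (age, cortical thickness, and SPiN here are stand-in variables, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
age = rng.normal(size=n)                             # standardized predictor
thickness = 0.5 * age + 0.1 * rng.normal(size=n)     # mediator (a-path = 0.5)
spin = 0.4 * thickness + 0.1 * age + 0.1 * rng.normal(size=n)   # outcome

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(age, thickness)[1]                  # age -> thickness
coefs = ols(np.column_stack([thickness, age]), spin)
b, c_prime = coefs[1], coefs[2]             # thickness -> SPiN, and direct age effect
indirect = a * b                            # mediated effect of age through thickness
```

With the planted paths (a = 0.5, b = 0.4) the indirect effect estimate lands near 0.2; in practice its significance would be assessed with bootstrapping rather than point estimation alone.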
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
- Valérie Brisson
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
12. Berezutskaya J, Baratin C, Freudenburg ZV, Ramsey NF. High-density intracranial recordings reveal a distinct site in anterior dorsal precentral cortex that tracks perceived speech. Hum Brain Mapp 2020;41:4587-4609. PMID: 32744403. PMCID: PMC7555065. DOI: 10.1002/hbm.25144.
Abstract
Various brain regions are implicated in speech processing, and the specific function of some of them is better understood than others. In particular, involvement of the dorsal precentral cortex (dPCC) in speech perception remains debated, and attribution of the function of this region is more or less restricted to motor processing. In this study, we investigated high-density intracranial responses to speech fragments of a feature film, aiming to determine whether dPCC is engaged in perception of continuous speech. Our findings show that dPCC exhibited preference to speech over other tested sounds. Moreover, the identified area was involved in tracking of speech auditory properties including the speech spectral envelope, its rhythmic phrasal pattern and pitch contour. The dPCC also showed the ability to filter out noise from the perceived speech. Comparing these results to data from motor experiments showed that the identified region had a distinct location in dPCC, anterior to the hand motor area and superior to the mouth articulator region. The present findings, uncovered with high-density intracranial recordings, help elucidate the functional specialization of PCC and demonstrate the unique role of its anterior dorsal region in continuous speech perception.
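Tracking of an auditory property such as the spectral envelope is often quantified with lagged correlation between the stimulus feature and the neural response. An illustrative numpy sketch with a simulated 100 ms response delay; the sampling rate, delay, and noise level are all assumptions, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100                                   # Hz (assumed feature sampling rate)
n = 5000

# A slow, envelope-like stimulus feature: smoothed noise.
envelope = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")

# Simulated neural response: the envelope delayed by 100 ms plus noise.
delay = 10                                 # samples = 100 ms at fs = 100 Hz
neural = np.concatenate([np.zeros(delay), envelope[:-delay]])
neural += 0.05 * rng.normal(size=n)

def lagged_r(stim, resp, lag):
    """Pearson correlation between stim[t] and resp[t + lag]."""
    return np.corrcoef(stim[:stim.size - lag], resp[lag:])[0, 1]

lags = np.arange(0, 21)
r = np.array([lagged_r(envelope, neural, int(lag)) for lag in lags])
best_lag = int(lags[np.argmax(r)])         # estimated neural response latency
```

The correlation peaks at the true response latency; richer approaches (e.g. temporal response functions) generalize this to multiple features and lags simultaneously.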
Affiliation(s)
- Julia Berezutskaya
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Clarissa Baratin
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Université Grenoble Alpes, Grenoble Institut des Neurosciences, Grenoble, France
- Zachary V. Freudenburg
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nicolas F. Ramsey
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
13
Saltzman DI, Myers EB. Neural Representation of Articulable and Inarticulable Novel Sound Contrasts: The Role of the Dorsal Stream. Neurobiol Lang 2020; 1:339-364. PMID: 35784619; PMCID: PMC9248853; DOI: 10.1162/nol_a_00016.
Abstract
The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After reaching comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as participants passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds than for inarticulable (sine-wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation for novel sound learning.
14
Al Dahhan NZ, Kirby JR, Chen Y, Brien DC, Munoz DP. Examining the neural and cognitive processes that underlie reading through naming speed tasks. Eur J Neurosci 2020; 51:2277-2298. PMID: 31912932; DOI: 10.1111/ejn.14673.
Abstract
We combined fMRI with eye tracking and speech recording to examine the neural and cognitive mechanisms that underlie reading. To simplify the study of the complex processes involved during reading, we used naming speed (NS) tasks (also known as rapid automatized naming or RAN) as a focus for this study, in which average-reading, right-handed adults named sets of stimuli (letters or objects) as quickly and accurately as possible. Because spoken output during fMRI can create motion artifacts, we employed both an overt session and a covert session. When comparing the two sessions, there were no significant differences in behavioral performance, sensorimotor activation (except for regions involved in the motor aspects of speech production) or activation in regions within the left-hemisphere-dominant neural reading network. This established that differences found between the tasks within the reading network were not attributable to speech production motion artifacts or sensorimotor processes. Both behavioral and neuroimaging measures showed that letter naming was a more automatic and efficient task than object naming. Furthermore, specific manipulations to the NS tasks to make the stimuli more visually and/or phonologically similar differentially activated the reading network in the left hemisphere associated with phonological, orthographic and orthographic-to-phonological processing, but not articulatory/motor processing related to speech production. These findings further our understanding of the underlying neural processes that support reading by examining how activation within the reading network differs with both task performance and task characteristics.
Affiliation(s)
- Noor Z Al Dahhan
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- John R Kirby
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Faculty of Education, Queen's University, Kingston, ON, Canada
- Ying Chen
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Donald C Brien
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Douglas P Munoz
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
15
Rachman L, Dubal S, Aucouturier JJ. Happy you, happy me: expressive changes on a stranger's voice recruit faster implicit processes than self-produced expressions. Soc Cogn Affect Neurosci 2020; 14:559-568. PMID: 31044241; PMCID: PMC6545538; DOI: 10.1093/scan/nsz030.
Abstract
In social interactions, people have to pay attention to both the ‘what’ and the ‘who’. In particular, expressive changes heard on speech signals have to be integrated with speaker identity, differentiating e.g. self- and other-produced signals. While previous research has shown that self-related visual information processing is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger’s voice are highly prioritized in auditory processing compared to identical changes on the self-voice. Other-voice deviants generate earlier MMN onset responses and involve stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
Affiliation(s)
- Laura Rachman
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France; Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
- Stéphanie Dubal
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Jean-Julien Aucouturier
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
16
Grabski K, Sato M. Adaptive phonemic coding in the listening and speaking brain. Neuropsychologia 2020; 136:107267. DOI: 10.1016/j.neuropsychologia.2019.107267.
17
Longcamp M, Hupé JM, Ruiz M, Vayssière N, Sato M. Shared premotor activity in spoken and written communication. Brain Lang 2019; 199:104694. PMID: 31586790; DOI: 10.1016/j.bandl.2019.104694.
Abstract
The aim of the present study was to uncover a possible common neural organizing principle in spoken and written communication, through the coupling of perceptual and motor representations. To identify possible shared neural substrates for processing the basic units of spoken and written language, a sparse sampling fMRI acquisition protocol was performed on the same subjects in two experimental sessions, with similar sets of letters being read and written and of phonemes being heard and orally produced. We found evidence of common premotor regions activated in spoken and written language, both in perception and in production. The location of those brain regions was confined to the left lateral and medial frontal cortices, at locations corresponding to the premotor cortex, inferior frontal cortex and supplementary motor area. Interestingly, the speaking and writing tasks also appeared to be controlled by largely overlapping networks, possibly indicating some domain-general cognitive processing. Finally, the spatial distribution of individual activation peaks further showed more dorsal and more left-lateralized premotor activations in written than in spoken language.
Affiliation(s)
- Jean-Michel Hupé
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
- Mathieu Ruiz
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
- Nathalie Vayssière
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France; Toulouse Mind and Brain Institute, France
- Marc Sato
- CNRS, Aix-Marseille Univ, LPL, Aix-en-Provence, France
18
Jenson D, Thornton D, Harkrider AW, Saltuklaroglu T. Influences of cognitive load on sensorimotor contributions to working memory: An EEG investigation of mu rhythm activity during speech discrimination. Neurobiol Learn Mem 2019; 166:107098. DOI: 10.1016/j.nlm.2019.107098.
19
Harvey JS, Smithson HE, Siviour CR, Gasper GEM, Sønnesyn SO, McLeish TCB, Howard DM. A thirteenth-century theory of speech. J Acoust Soc Am 2019; 146:937. PMID: 31472541; PMCID: PMC7051007; DOI: 10.1121/1.5119126.
Abstract
This historical paper examines a pioneering theory of speech production and perception from the thirteenth century. Robert Grosseteste (c. 1175–1253) was a celebrated medieval thinker, who developed an impressive corpus of treatises on the natural world. This paper looks at his treatise on sound and phonetics, De generatione sonorum [On the Generation of Sounds]. Through interdisciplinary analysis of the text, this paper finds a theory of vowel production and perception that is notably mathematical, with a formulation of vowel space rooted in combinatorics. Specifically, Grosseteste constructs a categorical space comprising three fundamental types of movements pertaining to the vocal apparatus: linear, circular, and dilational-constrictional; these correspond to the similarity transformations of translation, rotation, and uniform scaling, respectively. That Grosseteste's space is categorical, and low-dimensional, is remarkable vis-à-vis current theories of phoneme perception. As well as his description of vowel space, Grosseteste also sets out a hypothetical framework of multisensory integration, uniting the production, perception, and representation in writing of vowels with a set of geometric figures associated with “mental images.” This has clear resonances with contemporary studies of motor facilitation during speech perception and audiovisual speech. This paper additionally provides an experimental foray, illustrating the coherence of mathematical and scientific thinking underpinning this early theory.
Affiliation(s)
- J S Harvey
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, OX2 6GG, United Kingdom
- H E Smithson
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, OX2 6GG, United Kingdom
- C R Siviour
- Department of Engineering Science, University of Oxford, Oxford e-Research Centre, 7 Keble Road, OX1 3QG, Oxford, United Kingdom
- G E M Gasper
- Department of History, Durham University, 43 North Bailey, Durham, DH1 3EX, United Kingdom
- S O Sønnesyn
- Department of History, Durham University, 43 North Bailey, Durham, DH1 3EX, United Kingdom
- T C B McLeish
- Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
- D M Howard
- Department of Electronic Engineering, Royal Holloway, University of London, Egham Hill, Egham, TW20 0EX, United Kingdom
20
Abstract
Recent evidence suggests that the motor system may have a facilitatory role in speech perception during noisy listening conditions. Studies clearly show an association between activity in auditory and motor speech systems, but also hint at a causal role for the motor system in noisy speech perception. However, in the most compelling "causal" studies, performance was measured at only a single signal-to-noise ratio (SNR). If listening conditions must be noisy to invoke causal motor involvement, then effects will be contingent on the SNR at which they are tested. We used articulatory suppression to disrupt motor-speech areas while measuring phonemic identification across a range of SNRs. As controls, we also measured phoneme identification during passive listening, mandible gesturing, and foot-tapping conditions. Two-parameter (threshold, slope) psychometric functions were fit to the data in each condition. Our findings indicate: (1) no effect of experimental task on psychometric function slopes; (2) a small effect of articulatory suppression, in particular, on psychometric function thresholds. The size of the latter effect was 1 dB (~5% correct) on average, suggesting, at best, a minor modulatory role of the speech motor system in perception.
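The two-parameter psychometric fit described in this abstract can be sketched as follows. This is an illustrative example, not the authors' analysis code: the SNR levels, accuracy data, logistic functional form, and grid-search fitting procedure are all assumptions for demonstration.

```python
# Illustrative sketch, not the authors' analysis code: fit a two-parameter
# (threshold, slope) logistic psychometric function to hypothetical
# percent-correct data from a phoneme-identification task.
import numpy as np

def logistic(snr, threshold, slope):
    """P(correct) as a function of SNR in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Invented SNR levels (dB) and observed proportions correct.
snrs = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
p_obs = np.array([0.12, 0.25, 0.48, 0.71, 0.88, 0.96])

# Least-squares fit by brute-force grid search over the two parameters.
thresholds = np.linspace(-10.0, 0.0, 201)
slopes = np.linspace(0.1, 2.0, 191)
T, S = np.meshgrid(thresholds, slopes)
pred = logistic(snrs[None, None, :], T[..., None], S[..., None])
sse = ((pred - p_obs[None, None, :]) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
best_threshold, best_slope = T[i, j], S[i, j]
# A ~1 dB shift in best_threshold between conditions is the size of the
# articulatory-suppression effect the abstract reports (~5% correct).
```

Comparing `best_threshold` and `best_slope` across conditions is what distinguishes the threshold effect the abstract reports from the absent slope effect.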
Affiliation(s)
- Ryan C Stokes
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California, Irvine, Irvine, CA 92697-5100, USA
- Jonathan H Venezia
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California, Irvine, Irvine, CA 92697-5100, USA
- Gregory Hickok
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California, Irvine, Irvine, CA 92697-5100, USA
21
Schmitz J, Bartoli E, Maffongelli L, Fadiga L, Sebastian-Galles N, D'Ausilio A. Motor cortex compensates for lack of sensory and motor experience during auditory speech perception. Neuropsychologia 2019; 128:290-296. DOI: 10.1016/j.neuropsychologia.2018.01.006.
22
Barnaud ML, Schwartz JL, Bessière P, Diard J. Computer simulations of coupled idiosyncrasies in speech perception and speech production with COSMO, a perceptuo-motor Bayesian model of speech communication. PLoS One 2019; 14:e0210302. PMID: 30633745; PMCID: PMC6329510; DOI: 10.1371/journal.pone.0210302.
Abstract
The existence of a functional relationship between speech perception and production systems is now widely accepted, but the exact nature and role of this relationship remains quite unclear. The existence of idiosyncrasies in production and in perception sheds interesting light on the nature of the link. Indeed, a number of studies explore inter-individual variability in auditory and motor prototypes within a given language, and provide evidence for a link between both sets. In this paper, we attempt to simulate one study on coupled idiosyncrasies in the perception and production of French oral vowels, within COSMO, a Bayesian computational model of speech communication. First, we show that if the learning process in COSMO includes a communicative mechanism between a Learning Agent and a Master Agent, vowel production does display idiosyncrasies. Second, we implement within COSMO three models for speech perception that are, respectively, auditory, motor and perceptuo-motor. We show that no idiosyncrasy in perception can be obtained in the auditory model, since it is optimally tuned to the learning environment, which does not include the motor variability of the Learning Agent. On the contrary, motor and perceptuo-motor models provide perception idiosyncrasies correlated with idiosyncrasies in production. We draw conclusions about the role and importance of motor processes in speech perception, and propose a perceptuo-motor model in which auditory processing would enable optimal processing of learned sounds and motor processing would be helpful in unlearned adverse conditions.
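The perceptuo-motor fusion idea in this abstract can be illustrated with a minimal sketch. This is not the COSMO implementation: the phoneme set, the likelihood values, and the simple product-rule fusion are invented to demonstrate the general Bayesian-fusion principle (auditory processing dominates for learned sounds; the motor branch carries more weight in adverse conditions).

```python
# Minimal sketch of the Bayesian fusion principle, not the COSMO
# implementation: phoneme labels and likelihood values are invented.
import numpy as np

phonemes = ["/ba/", "/da/", "/ga/"]

def fuse(p_auditory, p_motor):
    """Combine auditory and motor decoder likelihoods by pointwise
    product followed by renormalization (naive Bayes fusion)."""
    joint = np.asarray(p_auditory) * np.asarray(p_motor)
    return joint / joint.sum()

# Clear listening: the auditory decoder is sharp and dominates.
clear = fuse([0.90, 0.05, 0.05], [0.50, 0.30, 0.20])
# Adverse listening: the auditory decoder flattens, so the (coarser but
# robust) motor decoder carries more of the decision.
noisy = fuse([0.40, 0.35, 0.25], [0.70, 0.20, 0.10])
```

In both cases the fused posterior still favors /ba/, but in the noisy case most of the evidence comes from the motor branch, mirroring the paper's proposed division of labor.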
Affiliation(s)
- Marie-Lou Barnaud
- Univ. Grenoble Alpes, Gipsa-lab, Grenoble, France; CNRS, Gipsa-lab, Grenoble, France; Univ. Grenoble Alpes, LPNC, Grenoble, France; CNRS, LPNC, Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, Gipsa-lab, Grenoble, France; CNRS, Gipsa-lab, Grenoble, France
- Julien Diard
- Univ. Grenoble Alpes, LPNC, Grenoble, France; CNRS, LPNC, Grenoble, France
23
Tremblay P, Perron M, Deschamps I, Kennedy-Higgins D, Houde J, Dick AS, Descoteaux M. The role of the arcuate and middle longitudinal fasciculi in speech perception in noise in adulthood. Hum Brain Mapp 2019; 40:226-241. PMID: 30277622; PMCID: PMC6865648; DOI: 10.1002/hbm.24367.
Abstract
In this article, we used High Angular Resolution Diffusion Imaging (HARDI) with advanced anatomically constrained particle filtering tractography to investigate the role of the arcuate fasciculus (AF) and the middle longitudinal fasciculus (MdLF) in speech perception in noise in younger and older adults. Fourteen young and fifteen elderly adults completed a syllable discrimination task in the presence of broadband masking noise. Mediation analyses revealed few effects of age on white matter (WM) in these fascicles but broad effects of WM on speech perception, independently of age, especially in terms of sensitivity and criterion (response bias), after controlling for individual differences in hearing sensitivity and head size. Indirect (mediated) effects of age on speech perception through WM microstructure were also found, after controlling for individual differences in hearing sensitivity and head size, with AF microstructure related to sensitivity, response bias and phonological priming, and MdLF microstructure more strongly related to response bias. These findings suggest that pathways of the perisylvian region contribute to speech processing abilities, with relatively distinct contributions for the AF (sensitivity) and MdLF (response bias), indicative of a complex contribution of both phonological and cognitive processes to age-related speech perception decline. These results provide new and important insights into the roles of these pathways as well as the factors that may contribute to elderly speech perception deficits. They also highlight the need for a greater focus to be placed on studying the role of WM microstructure to understand cognitive aging.
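The mediation logic in this abstract (age affecting perception indirectly through WM microstructure) can be sketched with a simple product-of-coefficients analysis on synthetic data. This is an assumption-laden illustration, not the study's statistical pipeline; the variable names, effect sizes, and sample size are invented.

```python
# Assumption-laden illustration (synthetic data, invented effect sizes),
# not the study's pipeline: a product-of-coefficients mediation test of
# whether age affects speech perception indirectly via WM microstructure.
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.normal(50.0, 15.0, n)
wm = -0.03 * age + rng.normal(0.0, 0.5, n)                 # path a: age -> WM
speech = 2.0 * wm - 0.005 * age + rng.normal(0.0, 0.5, n)  # path b + direct c'

def ols(X, y):
    """Least-squares coefficients for y ~ X (X includes an intercept)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(np.column_stack([np.ones(n), age]), wm)[1]          # age -> mediator
b = ols(np.column_stack([np.ones(n), age, wm]), speech)[2]  # mediator -> outcome | age
indirect = a * b  # the mediated (indirect) effect of age on perception
```

A negative `indirect` here corresponds to the study's finding that part of the age-related decline in perception is carried through WM microstructure rather than being a direct effect of age.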
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City, Canada
- Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada
- Isabelle Deschamps
- CERVO Brain Research Center, Quebec City, Canada
- Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada
- Dan Kennedy-Higgins
- CERVO Brain Research Center, Quebec City, Canada
- Department of Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
- Jean-Christophe Houde
- Département d'informatique, Faculté des Sciences, Sherbrooke Connectivity Imaging Lab, Université de Sherbrooke, Sherbrooke, Canada
- Maxime Descoteaux
- Département d'informatique, Faculté des Sciences, Sherbrooke Connectivity Imaging Lab, Université de Sherbrooke, Sherbrooke, Canada
24
Liebenthal E, Möttönen R. An interactive model of auditory-motor speech perception. Brain Lang 2018; 187:33-40. PMID: 29268943; PMCID: PMC6005717; DOI: 10.1016/j.bandl.2017.12.004.
Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Affiliation(s)
- Einat Liebenthal
- Department of Psychiatry, Brigham & Women's Hospital, Harvard Medical School, Boston, USA
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Nottingham, Nottingham, UK
25
Barnaud ML, Bessière P, Diard J, Schwartz JL. Reanalyzing neurocognitive data on the role of the motor system in speech perception within COSMO, a Bayesian perceptuo-motor model of speech communication. Brain Lang 2018; 187:19-32. PMID: 29241588; PMCID: PMC6286382; DOI: 10.1016/j.bandl.2017.12.003.
Abstract
While neurocognitive data provide clear evidence for the involvement of the motor system in speech perception, its precise role and the way motor information is involved in perceptual decision remain unclear. In this paper, we discuss some recent experimental results in light of COSMO, a Bayesian perceptuo-motor model of speech communication. COSMO enables us to model both speech perception and speech production with probability distributions relating phonological units with sensory and motor variables. Speech perception is conceived as a sensory-motor architecture combining an auditory and a motor decoder thanks to a Bayesian fusion process. We propose the sketch of a neuroanatomical architecture for COSMO, and we capitalize on properties of the auditory vs. motor decoders to address three neurocognitive studies of the literature. Altogether, this computational study reinforces functional arguments supporting the role of a motor decoding branch in the speech perception process.
Affiliation(s)
- Marie-Lou Barnaud
- Univ. Grenoble Alpes, Gipsa-lab, F-38000 Grenoble, France; CNRS, Gipsa-lab, F-38000 Grenoble, France; Univ. Grenoble Alpes, LPNC, F-38000 Grenoble, France; CNRS, LPNC, F-38000 Grenoble, France
- Julien Diard
- Univ. Grenoble Alpes, LPNC, F-38000 Grenoble, France; CNRS, LPNC, F-38000 Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, Gipsa-lab, F-38000 Grenoble, France; CNRS, Gipsa-lab, F-38000 Grenoble, France
26
Thornton D, Harkrider AW, Jenson D, Saltuklaroglu T. Sensorimotor activity measured via oscillations of EEG mu rhythms in speech and non-speech discrimination tasks with and without segmentation demands. Brain Lang 2018; 187:62-73. PMID: 28431691; DOI: 10.1016/j.bandl.2017.03.011.
Abstract
Better understanding of the role of sensorimotor processing in speech and non-speech segmentation can be achieved with more temporally precise measures. Twenty adults made same/different discriminations of speech and non-speech stimulus pairs, with and without segmentation demands. Independent component analysis of 64-channel EEG data revealed clear sensorimotor mu components, with characteristic alpha and beta peaks, localized to premotor regions in 70% of participants. Time-frequency analyses of mu components from accurate trials showed that (1) segmentation tasks elicited greater event-related synchronization immediately following offset of the first stimulus, suggestive of inhibitory activity; (2) strong late event-related desynchronization in all conditions, suggesting that working memory/covert replay contributed substantially to sensorimotor activity in all conditions; (3) stronger beta desynchronization in speech versus non-speech stimuli during stimulus presentation, suggesting stronger auditory-motor transforms for speech versus non-speech stimuli. Findings support the continued use of oscillatory approaches for helping understand segmentation and other cognitive tasks.
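Event-related (de)synchronization of the kind analyzed in this abstract is commonly quantified as percent band-power change relative to a pre-stimulus baseline. A minimal sketch on a synthetic mu-band signal (all parameters invented; this is not the study's analysis code):

```python
# Minimal sketch on a synthetic mu-band signal (parameters invented, not
# the study's code): event-related desynchronization (ERD) as percent
# band-power change relative to a pre-stimulus baseline.
import numpy as np

fs = 250                                # sampling rate, Hz
t = np.arange(-1.0, 2.0, 1.0 / fs)      # 1 s baseline, 2 s post-stimulus
rng = np.random.default_rng(1)

# 10 Hz "mu" oscillation whose amplitude halves after stimulus onset.
amp = np.where(t < 0, 1.0, 0.5)
signal = amp * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)

# Instantaneous band power ~ smoothed squared signal (200 ms window).
win = np.ones(int(0.2 * fs)) / int(0.2 * fs)
power = np.convolve(signal ** 2, win, mode="same")

baseline = power[(t > -0.8) & (t < -0.2)].mean()
post = power[(t > 0.5) & (t < 1.5)].mean()
erd_percent = 100.0 * (post - baseline) / baseline  # negative => ERD
```

A positive value would instead indicate event-related synchronization (ERS), the inhibitory signature the abstract describes after stimulus offset.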
Affiliation(s)
- David Thornton
- University of Tennessee Health Science Center, United States
- David Jenson
- University of Tennessee Health Science Center, United States
27
Nuttall HE, Kennedy-Higgins D, Devlin JT, Adank P. Modulation of intra- and inter-hemispheric connectivity between primary and premotor cortex during speech perception. Brain Lang 2018; 187:74-82. PMID: 29397191; DOI: 10.1016/j.bandl.2017.12.002.
Abstract
Primary motor (M1) areas for speech production activate during speech perception. It has been suggested that such activation may be dependent upon modulatory inputs from premotor cortex (PMv). If and how PMv differentially modulates M1 activity during perception of speech that is easy or challenging to understand, however, is unclear. This study aimed to test the link between PMv and M1 during challenging speech perception in two experiments. The first experiment investigated intra-hemispheric connectivity between left hemisphere PMv and left M1 lip area during comprehension of speech under clear and distorted listening conditions. Continuous theta burst stimulation (cTBS) was applied to left PMv in eighteen participants (aged 18-35). Post-cTBS, participants performed a sentence verification task on distorted (imprecisely articulated) and clear speech, whilst also undergoing stimulation of the lip representation in the left M1 to elicit motor evoked potentials (MEPs). In a second, separate experiment, we investigated the role of inter-hemispheric connectivity between right hemisphere PMv and left hemisphere M1 lip area. Dual-coil transcranial magnetic stimulation was applied to right PMv and left M1 lip in fifteen participants (aged 18-35). Results indicated that disruption of PMv during speech perception affects comprehension of distorted speech specifically. Furthermore, our data suggest that listening to distorted speech modulates the balance of intra- and inter-hemispheric interactions, with a larger sensorimotor network implicated during comprehension of distorted speech than when speech perception is optimal. The present results further our understanding of PMv-M1 interactions during auditory-motor integration.
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Fylde College, Lancaster University, Lancaster LA1 4YF, UK; Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
- Dan Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
28
Glanz Iljina O, Derix J, Kaur R, Schulze-Bonhage A, Auer P, Aertsen A, Ball T. Real-life speech production and perception have a shared premotor-cortical substrate. Sci Rep 2018; 8:8898. PMID: 29891885; PMCID: PMC5995900; DOI: 10.1038/s41598-018-26801-x.
Abstract
Motor-cognitive accounts assume that the articulatory cortex is involved in language comprehension, but previous studies may have observed such an involvement as an artefact of experimental procedures. Here, we employed electrocorticography (ECoG) during natural, non-experimental behavior combined with electrocortical stimulation mapping to study the neural basis of real-life human verbal communication. We took advantage of ECoG’s ability to capture high-gamma activity (70–350 Hz) as a spatially and temporally precise index of cortical activation during unconstrained, naturalistic speech production and perception conditions. Our findings show that an electrostimulation-defined mouth motor region located in the superior ventral premotor cortex is consistently activated during both conditions. This region became active early relative to the onset of speech production and was recruited during speech perception regardless of acoustic background noise. Our study thus pinpoints a shared ventral premotor substrate for real-life speech production and perception and characterizes its basic properties.
Affiliation(s)
- Olga Glanz Iljina
- GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany; Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany
- Johanna Derix
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany
- Rajbir Kaur
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Faculty of Medicine, University of Cologne, Cologne, Germany
- Andreas Schulze-Bonhage
- BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Epilepsy Center, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Peter Auer
- GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany
- Ad Aertsen
- Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Tonio Ball
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
29
Assaneo MF, Poeppel D. The coupling between auditory and motor cortices is rate-restricted: Evidence for an intrinsic speech-motor rhythm. Sci Adv 2018; 4:eaao3842. [PMID: 29441362 PMCID: PMC5810610 DOI: 10.1126/sciadv.aao3842] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2017] [Accepted: 01/09/2018] [Indexed: 05/19/2023]
Abstract
The relation between perception and action remains a fundamental question for neuroscience. In the context of speech, existing data suggest an interaction between auditory and speech-motor cortices, but the underlying mechanisms remain incompletely characterized. We fill a basic gap in our understanding of the sensorimotor processing of speech by examining the synchronization between auditory and speech-motor regions over different speech rates, a fundamental parameter delimiting successful perception. First, using magnetoencephalography, we measure synchronization between auditory and speech-motor regions while participants listen to syllables at various rates. We show, surprisingly, that auditory-motor synchrony is significant only over a restricted range and is enhanced at ~4.5 Hz, a value compatible with the mean syllable rate across languages. Second, neural modeling reveals that this modulated coupling plausibly emerges as a consequence of the underlying neural architecture. The findings suggest that the temporal patterns of speech emerge as a consequence of the intrinsic rhythms of cortical areas.
Affiliation(s)
- M. Florencia Assaneo
- Department of Psychology, New York University, New York, NY 10003, USA
- Corresponding author.
- David Poeppel
- Department of Psychology, New York University, New York, NY 10003, USA
- Neuroscience Department, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
30
Treille A, Vilain C, Schwartz JL, Hueber T, Sato M. Electrophysiological evidence for Audio-visuo-lingual speech integration. Neuropsychologia 2018; 109:126-133. [DOI: 10.1016/j.neuropsychologia.2017.12.024] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Revised: 11/21/2017] [Accepted: 12/13/2017] [Indexed: 01/25/2023]
31
Facial expressions as a model to test the role of the sensorimotor system in the visual perception of the actions. Exp Brain Res 2017; 235:3771-3783. [PMID: 28975379 DOI: 10.1007/s00221-017-5097-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2017] [Accepted: 09/21/2017] [Indexed: 10/18/2022]
Abstract
A long-standing debate concerns whether the sensorimotor coding carried out during the observation of transitive actions reflects low-level movement implementation details or movement goals. Phonemes and emotional facial expressions, by contrast, are intransitive actions that fall outside this debate. The investigation of phoneme discrimination has proven to be a good model for demonstrating that the sensorimotor system plays a role in understanding acoustically presented actions. In the present study, we adapted the experimental paradigms already used for phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We subjected participants to a lower or an upper face posture manipulation during a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in their specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model for testing the role of the sensorimotor system in the perception of visually presented actions.
32
Saltuklaroglu T, Harkrider AW, Thornton D, Jenson D, Kittilstved T. EEG Mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults. Neuroimage 2017; 153:232-245. [PMID: 28400266 PMCID: PMC5569894 DOI: 10.1016/j.neuroimage.2017.04.022] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Revised: 01/24/2017] [Accepted: 04/08/2017] [Indexed: 10/19/2022] Open
Abstract
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting evidence of heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together these findings indicate spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that also may be influenced by basal ganglia deficits.
Affiliation(s)
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- Ashley W Harkrider
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- David Thornton
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- David Jenson
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- Tiffani Kittilstved
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
33
Lopopolo A, Frank SL, van den Bosch A, Willems RM. Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One 2017; 12:e0177794. [PMID: 28542396 PMCID: PMC5436813 DOI: 10.1371/journal.pone.0177794] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2017] [Accepted: 05/03/2017] [Indexed: 11/19/2022] Open
Abstract
Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.
Affiliation(s)
- Alessandro Lopopolo
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands
- Stefan L. Frank
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands
- Antal van den Bosch
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands
- Meertens Institute, Royal Netherlands Academy of Science and Arts, Amsterdam, the Netherlands
- Roel M. Willems
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, the Netherlands
- Donders Institute, Radboud University Nijmegen, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
34
Wu C, Zheng Y, Li J, Wu H, She S, Liu S, Ning Y, Li L. Brain substrates underlying auditory speech priming in healthy listeners and listeners with schizophrenia. Psychol Med 2017; 47:837-852. [PMID: 27894376 DOI: 10.1017/s0033291716002816] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
BACKGROUND: Under 'cocktail party' listening conditions, healthy listeners and listeners with schizophrenia can use temporally pre-presented auditory speech-priming (ASP) stimuli to improve target-speech recognition, even though listeners with schizophrenia are more vulnerable to informational speech masking.
METHOD: Using functional magnetic resonance imaging, this study searched for both brain substrates underlying the unmasking effect of ASP in 16 healthy controls and 22 patients with schizophrenia, and brain substrates underlying schizophrenia-related speech-recognition deficits under speech-masking conditions.
RESULTS: In both controls and patients, introducing the ASP condition (against the auditory non-speech-priming condition) not only activated the left superior temporal gyrus (STG) and left posterior middle temporal gyrus (pMTG), but also enhanced functional connectivity of the left STG/pMTG with the left caudate. It also enhanced functional connectivity of the left STG/pMTG with the left pars triangularis of the inferior frontal gyrus (TriIFG) in controls and that with the left Rolandic operculum in patients. The strength of functional connectivity between the left STG and left TriIFG was correlated with target-speech recognition under the speech-masking condition in both controls and patients, but reduced in patients.
CONCLUSIONS: The left STG/pMTG and their ASP-related functional connectivity with both the left caudate and some frontal regions (the left TriIFG in healthy listeners and the left Rolandic operculum in listeners with schizophrenia) are involved in the unmasking effect of ASP, possibly through facilitating the following processes: masker-signal inhibition, target-speech encoding, and speech production. The schizophrenia-related reduction of functional connectivity between the left STG and left TriIFG augments the vulnerability of speech recognition to speech masking.
Affiliation(s)
- C Wu
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
- Y Zheng
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- J Li
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- H Wu
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- S She
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- S Liu
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- Y Ning
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, People's Republic of China
- L Li
- School of Psychological and Cognitive Sciences, and Beijing Key Laboratory of Behavior and Mental Health, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
35
Treille A, Vilain C, Hueber T, Lamalle L, Sato M. Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions. J Cogn Neurosci 2017; 29:448-466. [DOI: 10.1162/jocn_a_01057] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because tongue movements of our interlocutor are accessible via their impact on speech acoustics but not visible because of the tongue's position inside the vocal tract, whereas lip movements are both “audible” and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with RTs for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.
Affiliation(s)
- Laurent Lamalle
- Université Grenoble-Alpes & CHU de Grenoble
- CNRS UMS 3552, Grenoble, France
- Marc Sato
- CNRS UMR 7309 & Aix-Marseille Université
36
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004] [Citation(s) in RCA: 117] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Accepted: 10/24/2016] [Indexed: 06/06/2023]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
- Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
37
Nuttall HE, Kennedy-Higgins D, Devlin JT, Adank P. The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex. Neuropsychologia 2016; 94:13-22. [PMID: 27884757 DOI: 10.1016/j.neuropsychologia.2016.11.016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Revised: 11/18/2016] [Accepted: 11/20/2016] [Indexed: 11/15/2022]
Abstract
Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent or dependent on the form of distortion in the speech signal. It is also unknown if speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test if lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and if lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, which were significantly less severe noise levels than used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, which indicated that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not quality, of degradation in the speech signal. Data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role for the motor cortex in compensating for differences in hearing ability.
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster LA1 4YW, UK; Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK.
- Daniel Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
38
Goranskaya D, Kreitewolf J, Mueller JL, Friederici AD, Hartwigsen G. Fronto-Parietal Contributions to Phonological Processes in Successful Artificial Grammar Learning. Front Hum Neurosci 2016; 10:551. [PMID: 27877120 PMCID: PMC5100555 DOI: 10.3389/fnhum.2016.00551] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2016] [Accepted: 10/17/2016] [Indexed: 11/13/2022] Open
Abstract
Sensitivity to regularities plays a crucial role in the acquisition of various linguistic features from spoken language input. Artificial grammar learning paradigms explore pattern recognition abilities in a set of structured sequences (e.g., of syllables or letters). In the present study, we investigated the functional underpinnings of learning phonological regularities in auditorily presented syllable sequences. While previous neuroimaging studies either focused on functional differences between the processing of correct vs. incorrect sequences or between different levels of sequence complexity, here the focus is on the neural foundation of the actual learning success. During functional magnetic resonance imaging (fMRI), participants were exposed to a set of syllable sequences with an underlying phonological rule system, known to ensure performance differences between participants. We expected that successful learning and rule application would require phonological segmentation and phoneme comparison. As an outcome of four alternating learning and test fMRI sessions, participants split into successful learners and non-learners. Relative to non-learners, successful learners showed increased task-related activity in a fronto-parietal network of brain areas encompassing the left lateral premotor cortex as well as bilateral superior and inferior parietal cortices during both learning and rule application. These areas were previously associated with phonological segmentation, phoneme comparison, and verbal working memory. Based on these activity patterns and the phonological strategies for rule acquisition and application, we argue that successful learning and processing of complex phonological rules in our paradigm is mediated via a fronto-parietal network for phonological processes.
Affiliation(s)
- Dariya Goranskaya
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jens Kreitewolf
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, QC, Canada
- Jutta L Mueller
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
39
Rosenblum LD, Dorsi J, Dias JW. The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception. Ecol Psychol 2016. [DOI: 10.1080/10407413.2016.1230373] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
40
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. [PMID: 27708566 PMCID: PMC5030253 DOI: 10.3389/fnhum.2016.00435] [Citation(s) in RCA: 74] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2016] [Accepted: 08/15/2016] [Indexed: 11/21/2022] Open
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. On first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex on speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
41
Tremblay P, Deschamps I, Baroni M, Hasson U. Neural sensitivity to syllable frequency and mutual information in speech perception and production. Neuroimage 2016; 136:106-21. [PMID: 27184201 DOI: 10.1016/j.neuroimage.2016.05.018] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Revised: 03/31/2016] [Accepted: 05/06/2016] [Indexed: 11/29/2022] Open
Abstract
Many factors affect our ability to decode the speech signal, including its quality, the complexity of the elements that compose it, as well as their frequency of occurrence and co-occurrence in a language. Syllable frequency effects have been described in the behavioral literature, including facilitatory effects during speech production and inhibitory effects during word recognition, but the neural mechanisms underlying these effects remain largely unknown. The objective of this study was to examine, using functional neuroimaging, the neurobiological correlates of three different distributional statistics in simple 2-syllable nonwords: the frequency of the first and second syllables, and the mutual information between the syllables. We examined these statistics during nonword perception and production using a powerful single-trial analytical approach. We found that repetition accuracy was higher for nonwords in which the frequency of the first syllable was high. In addition, brain responses to distributional statistics were widespread and almost exclusively cortical. Importantly, brain activity was modulated in a distinct manner for each statistic, with the strongest facilitatory effects associated with the frequency of the first syllable and mutual information. These findings show that distributional statistics modulate nonword perception and production. We discuss the common and unique impact of each distributional statistic on brain activity, as well as task differences.
Affiliation(s)
- Pascale Tremblay
- Université Laval, Département de Réadaptation, Québec City, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec (CRIUSMQ), Québec City, QC, Canada.
| | - Isabelle Deschamps
- Université Laval, Département de Réadaptation, Québec City, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec (CRIUSMQ), Québec City, QC, Canada
| | - Marco Baroni
- Center for Mind and Brain Sciences (CIMeC), Università Degli Studi di Trento, Via delle Regole, 101, I-38060 Mattarello, TN, Italy
| | - Uri Hasson
- Center for Mind and Brain Sciences (CIMeC), Università Degli Studi di Trento, Via delle Regole, 101, I-38060 Mattarello, TN, Italy
|
42
|
Nuttall HE, Kennedy-Higgins D, Hogan J, Devlin JT, Adank P. The effect of speech distortion on the excitability of articulatory motor cortex. Neuroimage 2016; 128:218-226. [DOI: 10.1016/j.neuroimage.2015.12.038] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2015] [Revised: 10/30/2015] [Accepted: 12/21/2015] [Indexed: 11/30/2022] Open
|
43
|
Alho J, Green BM, May PJC, Sams M, Tiitinen H, Rauschecker JP, Jääskeläinen IP. Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage 2016; 129:214-223. [PMID: 26774614 DOI: 10.1016/j.neuroimage.2016.01.016] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Revised: 12/17/2015] [Accepted: 01/06/2016] [Indexed: 11/30/2022] Open
Abstract
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
Affiliation(s)
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076, AALTO, Espoo, Finland.
| | - Brannon M Green
- Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA
| | - Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, D-39118 Magdeburg, Germany
| | - Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076, AALTO, Espoo, Finland
| | - Hannu Tiitinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076, AALTO, Espoo, Finland
| | - Josef P Rauschecker
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076, AALTO, Espoo, Finland; Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
| | - Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076, AALTO, Espoo, Finland; MEG Core, Aalto NeuroImaging, Aalto University, 00076, AALTO, Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, 00076, AALTO, Espoo, Finland.
|
44
|
Smalle EHM, Rogers J, Möttönen R. Dissociating Contributions of the Motor Cortex to Speech Perception and Response Bias by Using Transcranial Magnetic Stimulation. Cereb Cortex 2015; 25:3690-8. [PMID: 25274987 PMCID: PMC4585509 DOI: 10.1093/cercor/bhu218] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Recent studies using repetitive transcranial magnetic stimulation (TMS) have demonstrated that disruptions of the articulatory motor cortex impair performance in demanding speech perception tasks. These findings have been interpreted as support for the idea that the motor cortex is critically involved in speech perception. However, the validity of this interpretation has been called into question, because it is unknown whether the TMS-induced disruptions in the motor cortex affect speech perception or rather response bias. In the present TMS study, we addressed this question by using signal detection theory to calculate sensitivity (i.e., d') and response bias (i.e., criterion c). We used repetitive TMS to temporarily disrupt the lip or hand representation in the left motor cortex. Participants discriminated pairs of sounds from a "ba"-"da" continuum before TMS, immediately after TMS (i.e., during the period of motor disruption), and after a 30-min break. We found that the sensitivity for between-category pairs was reduced during the disruption of the lip representation. In contrast, disruption of the hand representation temporarily reduced response bias. This double dissociation indicates that the hand motor cortex contributes to response bias during demanding discrimination tasks, whereas the articulatory motor cortex contributes to perception of speech sounds.
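The sensitivity and response-bias measures referenced in this abstract are standard signal detection theory quantities. As a minimal illustration of how they are computed (the log-linear correction and the function name are assumptions for the sketch, not the authors' implementation):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' and response bias (criterion c) from trial counts.

    A log-linear correction (add 0.5 to each cell) guards against infinite
    z-scores when a hit or false-alarm rate would be exactly 0 or 1.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)                               # hit rate
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # false-alarm rate
    z = NormalDist().inv_cdf                                               # inverse standard-normal CDF
    d_prime = z(h) - z(fa)                                                 # sensitivity
    c = -0.5 * (z(h) + z(fa))                                              # criterion (response bias)
    return d_prime, c
```

Under this convention, a reduction in d′ with unchanged c indicates a genuine perceptual effect, whereas a shift in c with unchanged d′ indicates a change in response bias — the double dissociation the study reports.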
Affiliation(s)
- Eleonore H. M. Smalle
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
- Psychological Sciences Research Institute, Institute of Neuroscience, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
| | - Jack Rogers
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
| | - Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
|
45
|
Stasenko A, Bonn C, Teghipco A, Garcea FE, Sweet C, Dombovy M, McDonough J, Mahon BZ. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception. Cogn Neuropsychol 2015; 32:38-57. [PMID: 25951749 DOI: 10.1080/02643294.2015.1035702] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.
Affiliation(s)
- Alena Stasenko
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY, USA
|
46
|
Gallese V, Gernsbacher MA, Heyes C, Hickok G, Iacoboni M. Mirror Neuron Forum. Perspect Psychol Sci 2015; 6:369-407. [PMID: 25520744 DOI: 10.1177/1745691611413392] [Citation(s) in RCA: 106] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Vittorio Gallese
- Department of Neuroscience, University of Parma, and Italian Institute of Technology Brain Center for Social and Motor Cognition, Parma, Italy
| | | | - Cecilia Heyes
- All Souls College and Department of Experimental Psychology, University of Oxford, United Kingdom
| | - Gregory Hickok
- Center for Cognitive Neuroscience, Department of Cognitive Sciences, University of California, Irvine
| | - Marco Iacoboni
- Ahmanson-Lovelace Brain Mapping Center, Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Social Behavior, Brain Research Institute, David Geffen School of Medicine, University of California, Los Angeles
|
47
|
Evans S, Davis MH. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis. Cereb Cortex 2015; 25:4772-88. [PMID: 26157026 PMCID: PMC4635918 DOI: 10.1093/cercor/bhv136] [Citation(s) in RCA: 80] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds.
Affiliation(s)
- Samuel Evans
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK; Institute of Cognitive Neuroscience, University College London, WC1 3AR, UK
| | - Matthew H Davis
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
|
48
|
Excitability of the motor system: A transcranial magnetic stimulation study on singing and speaking. Neuropsychologia 2015; 75:525-32. [PMID: 26116909 DOI: 10.1016/j.neuropsychologia.2015.06.030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2014] [Revised: 06/01/2015] [Accepted: 06/22/2015] [Indexed: 11/21/2022]
Abstract
The perception of movements is associated with increased activity in the human motor cortex, which in turn may underlie our ability to understand actions, as it may be implicated in the recognition, understanding and imitation of actions. Here, we investigated the involvement and lateralization of the primary motor cortex (M1) in the perception of singing and speech. Transcranial magnetic stimulation (TMS) was applied independently over the mouth representation of the motor cortex in each hemisphere of healthy participants while they watched 4-s audiovisual excerpts of singers producing a 2-note ascending interval (singing condition) or 4-s audiovisual excerpts of a person explaining a proverb (speech condition). Subjects were instructed to determine whether a sung interval/written proverb matched a written interval/proverb. During both tasks, motor evoked potentials (MEPs) were recorded from the mouth muscle (orbicularis oris) contralateral to the stimulated motor cortex and compared to those from a control task. Moreover, to investigate the time course of motor activation, TMS pulses were randomly delivered at 7 different time points (ranging from 500 to 3500 ms after stimulus onset). Results show that stimulation of the right hemisphere had a similar effect on the MEPs for both the singing and speech perception tasks, whereas the effect of left-hemisphere stimulation differed significantly between the speech and singing perception tasks. Furthermore, analysis of the MEPs in the singing task revealed that they decreased for small musical intervals, but increased for large musical intervals, regardless of which hemisphere was stimulated. Overall, these results suggest a dissociation between the lateralization of M1 activity for speech perception and for singing perception, and that in the latter case its activity can be modulated by musical parameters such as the size of a musical interval.
|
49
|
Distinct effects of memory retrieval and articulatory preparation when learning and accessing new word forms. PLoS One 2015; 10:e0126652. [PMID: 25961571 PMCID: PMC4427175 DOI: 10.1371/journal.pone.0126652] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2014] [Accepted: 04/04/2015] [Indexed: 12/03/2022] Open
Abstract
Temporal and frontal activations have been implicated in learning of novel word forms, but their specific roles remain poorly understood. The present magnetoencephalography (MEG) study examines the roles of these areas in processing newly-established word form representations. The cortical effects related to acquiring new phonological word forms during incidental learning were localized. Participants listened to and repeated back new word form stimuli that adhered to native phonology (Finnish pseudowords) or were foreign (Korean words), with a subset of the stimuli recurring four times. Subsequently, a modified 1-back task and a recognition task addressed whether the activations modulated by learning were related to planning for overt articulation, while parametrically added noise probed reliance on developing memory representations during effortful perception. Learning resulted in decreased left superior temporal and increased bilateral frontal premotor activation for familiar compared to new items. The left temporal learning effect persisted in all tasks and was strongest when stimuli were embedded in intermediate noise. In the noisy conditions, native phonotactics evoked overall enhanced left temporal activation. In contrast, the frontal learning effects were present only in conditions requiring overt repetition and were more pronounced for the foreign language. The results indicate a functional dissociation between temporal and frontal activations in learning new phonological word forms: the left superior temporal responses reflect activation of newly-established word-form representations, also during degraded sensory input, whereas the frontal premotor effects are related to planning for articulation and are not preserved in noise.
|
50
|
Tanji K, Sakurada K, Funiu H, Matsuda K, Kayama T, Ito S, Suzuki K. Functional significance of the electrocorticographic auditory responses in the premotor cortex. Front Neurosci 2015; 9:78. [PMID: 25852457 PMCID: PMC4360713 DOI: 10.3389/fnins.2015.00078] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2014] [Accepted: 02/22/2015] [Indexed: 11/13/2022] Open
Abstract
Other than well-known motor activities in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus with electrocorticographic recordings while she performed the verb generation task during awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrated that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it was proposed that sensory representations are used to guide motor-articulatory processes.
Affiliation(s)
- Kazuyo Tanji
- Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Kaori Sakurada
- Department of Neurosurgery, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Hayato Funiu
- Department of Neurosurgery, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Kenichiro Matsuda
- Department of Neurosurgery, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Takamasa Kayama
- Department of Neurosurgery, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Sayuri Ito
- Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine Yamagata, Japan
| | - Kyoko Suzuki
- Department of Clinical Neuroscience, Yamagata University Graduate School of Medicine Yamagata, Japan
|