1. Xing F, Zhuo J, Stone M, Liu X, Reese TG, Wedeen VJ, Prince JL, Woo J. Quantifying articulatory variations across phonological environments: An atlas-based approach using dynamic magnetic resonance imaging. The Journal of the Acoustical Society of America 2024; 156:4000-4009. [PMID: 39670769] [PMCID: PMC11646136] [DOI: 10.1121/10.0034639] [Received: 07/30/2024] [Revised: 11/04/2024] [Accepted: 12/02/2024]
Abstract
Identification and quantification of speech variations in velar production across phonological environments have long been of interest in speech motor control studies. Dynamic magnetic resonance imaging has become a favorable tool for visualizing articulatory deformations and providing quantitative insights into speech activities over time. Based on this modality, a workflow of image analysis techniques is proposed to uncover potential deformation variations in the human tongue caused by changes in phonological environment, achieved by altering the placement of velar consonants in utterances. The speech deformations of four human subjects in three different consonant positions were estimated from magnetic resonance images using a spatiotemporal tracking method and then warped via image registration into a common space (a dynamic atlas space constructed using four-dimensional alignment) for normalized quantitative comparisons. Statistical tests and principal component analyses were conducted on the magnitude of deformations, consonant-specific deformations, and internal muscle strains. The results revealed an overall decrease in deformation intensity following the initial consonant production, indicating potential muscle adaptation behaviors at a later temporal position within a speech utterance.
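The analysis in this abstract reduces many atlas-normalized deformation-magnitude maps to a few dominant modes via principal component analysis. A minimal sketch of that reduction step, assuming the maps have already been warped into a common space and flattened to vectors (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def pca_modes(deformation_maps, n_modes=2):
    """Principal modes of variation across deformation-magnitude maps.

    deformation_maps : (N, V) array -- N observations (e.g., subject/
    consonant-position combinations) by V atlas voxels, already warped
    into a common space and flattened.
    """
    X = deformation_maps - deformation_maps.mean(axis=0)   # center each voxel
    U, S, Vt = np.linalg.svd(X, full_matrices=False)       # PCA via SVD
    var_ratio = (S ** 2) / (S ** 2).sum()                  # variance explained
    return Vt[:n_modes], (U * S)[:, :n_modes], var_ratio[:n_modes]

# Toy data: 12 observations dominated by a single spatial mode.
rng = np.random.default_rng(2)
mode = rng.standard_normal(100)
maps = rng.standard_normal(12)[:, None] * mode + 0.05 * rng.standard_normal((12, 100))
components, scores, ratio = pca_modes(maps)
print(ratio[0])  # the first mode should carry most of the variance
```

The per-observation scores from such a decomposition are what group statistics (e.g., comparisons across consonant positions) would then be run on.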
Affiliation(s)
- Fangxu Xing
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts 02114, USA
- Jiachen Zhuo
- Department of Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Maryland 21201, USA
- Maureen Stone
- Department of Neural and Pain Sciences, University of Maryland School of Dentistry, Baltimore, Maryland 21210, USA
- Xiaofeng Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06510, USA
- Timothy G Reese
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts 02114, USA
- Van J Wedeen
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts 02114, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA
- Jonghye Woo
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts 02114, USA
2. Volfart A, McMahon KL, de Zubicaray GI. A Comparison of Denoising Approaches for Spoken Word Production Related Artefacts in Continuous Multiband fMRI Data. Neurobiology of Language (Cambridge, Mass.) 2024; 5:901-921. [PMID: 39301209] [PMCID: PMC11410355] [DOI: 10.1162/nol_a_00151] [Received: 02/08/2024] [Accepted: 06/10/2024]
Abstract
It is well-established from fMRI experiments employing gradient echo echo-planar imaging (EPI) sequences that overt speech production introduces signal artefacts compromising accurate detection of task-related responses. Both design and post-processing (denoising) techniques have been proposed and implemented over the years to mitigate the various noise sources. Recently, fMRI studies of speech production have begun to adopt multiband EPI sequences that offer better signal-to-noise ratio (SNR) and temporal resolution, allowing adequate sampling of physiological noise sources (e.g., respiration, cardiovascular effects) and reduced scanner acoustic noise. However, these new sequences may also introduce additional noise sources. In this study, we demonstrate the impact of applying several noise-estimation and removal approaches to continuous multiband fMRI data acquired during a naming-to-definition task, including rigid body motion regression and outlier censoring, principal component analysis for removal of cerebrospinal fluid (CSF)/edge-related noise components, and global fMRI signal regression (using two different approaches), compared to a baseline of realignment and unwarping alone. Our results show that the strongest and most spatially extensive sources of physiological noise are the global signal fluctuations arising from respiration and muscle action and CSF/edge-related noise components, with residual rigid body motion contributing relatively little variance. Interestingly, denoising approaches tended to reduce task-related BOLD signal increases and enhance task-related decreases. Global signal regression using a voxel-wise linear model of the global signal estimated from unmasked data resulted in dramatic improvements in temporal SNR. Overall, these findings show the benefits of combining continuous multiband EPI sequences and denoising approaches to investigate the neurobiology of speech production.
Affiliation(s)
- Angelique Volfart
- Faculty of Health, School of Psychology and Counselling, Queensland University of Technology, Brisbane, Australia
- Katie L McMahon
- Faculty of Health, School of Clinical Sciences, Queensland University of Technology, Brisbane, Australia
- Herston Imaging Research Facility, Royal Brisbane & Women's Hospital, Brisbane, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, Australia
- Greig I de Zubicaray
- Faculty of Health, School of Psychology and Counselling, Queensland University of Technology, Brisbane, Australia
3. Kenyon KH, Boonstra F, Noffs G, Morgan AT, Vogel AP, Kolbe S, Van Der Walt A. The characteristics and reproducibility of motor speech functional neuroimaging in healthy controls. Front Hum Neurosci 2024; 18:1382102. [PMID: 39171097] [PMCID: PMC11335534] [DOI: 10.3389/fnhum.2024.1382102] [Received: 02/05/2024] [Accepted: 07/22/2024]
Abstract
Introduction: Functional magnetic resonance imaging (fMRI) can improve our understanding of the neural processes subserving motor speech function, yet its reproducibility remains unclear. This study aimed to evaluate the reproducibility of fMRI using a word repetition task across two time points. Methods: Imaging data from 14 healthy controls were analysed using a multi-level general linear model. Results: Significant activation was observed during the task in right cerebellar lobules IV-V, the right putamen, and bilateral sensorimotor cortices. Task activation was moderately reproducible across time points in the cerebellum but not in other brain regions. Discussion: These preliminary findings highlight the involvement of the cerebellum and connected cerebral regions during a motor speech task. More work is needed to determine the degree of reproducibility of speech fMRI before it can be used as a reliable marker of changes in brain activity.
Affiliation(s)
- Katherine H. Kenyon
- Department of Neuroscience, School of Translational Medicine, Melbourne, VIC, Australia
- Frederique Boonstra
- Department of Neuroscience, School of Translational Medicine, Melbourne, VIC, Australia
- Gustavo Noffs
- Department of Neuroscience, School of Translational Medicine, Melbourne, VIC, Australia
- Redenlab Inc., Melbourne, VIC, Australia
- Angela T. Morgan
- Murdoch Childrens Research Institute, Royal Children's Hospital, Melbourne, VIC, Australia
- Department of Audiology and Speech Pathology, Faculty of Medicine, Dentistry and Health Sciences, Melbourne School of Health Sciences, University of Melbourne, Carlton, VIC, Australia
- Adam P. Vogel
- Redenlab Inc., Melbourne, VIC, Australia
- Department of Audiology and Speech Pathology, Parkville, VIC, Australia
- Scott Kolbe
- Department of Neuroscience, School of Translational Medicine, Melbourne, VIC, Australia
- Anneke Van Der Walt
- Department of Neuroscience, School of Translational Medicine, Melbourne, VIC, Australia
- Department of Neurology, Royal Melbourne Hospital, Melbourne, VIC, Australia
4. Lei VLC, Leong TI, Leong CT, Liu L, Choi CU, Sereno MI, Li D, Huang R. Phase-encoded fMRI tracks down brainstorms of natural language processing with subsecond precision. Hum Brain Mapp 2024; 45:e26617. [PMID: 38339788] [PMCID: PMC10858339] [DOI: 10.1002/hbm.26617] [Received: 07/14/2023] [Revised: 12/04/2023] [Accepted: 01/21/2024]
Abstract
Natural language processing unfolds information over time as spatially separated, multimodal, and interconnected neural processes. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. Here we developed rapid phase-encoded designs to fully exploit the temporal information latent in functional magnetic resonance imaging data and to overcome scanner noise and head-motion challenges during overt language tasks. We captured real-time information flows as coherent hemodynamic waves traveling over the cortical surface during listening, reading aloud, reciting, and oral cross-language interpreting tasks. We were able to observe the timing, location, direction, and surge of traveling waves in all language tasks, which were visualized as "brainstorms" on brain "weather" maps. The paths of hemodynamic traveling waves provide direct evidence for dual-stream models of the visual and auditory systems, as well as logistics models for crossmodal and cross-language processing. Specifically, we tracked the step-by-step processing of written or spoken sentences: first received and processed by the visual or auditory streams, then carried across language and domain-general cognitive regions, and finally delivered as overt speech monitored through the auditory cortex, giving a complete picture of information flows across the brain during natural language functioning.
Practitioner points:
- Phase-encoded fMRI enables simultaneous imaging at high spatial and temporal resolution, capturing continuous spatiotemporal dynamics of the entire brain during real-time overt natural language tasks.
- Spatiotemporal traveling wave patterns provide direct evidence for constructing comprehensive and explicit models of human information processing.
- This study unlocks the potential of applying rapid phase-encoded fMRI to indirectly track the underlying neural information flows of sequential sensory, motor, and high-order cognitive processes.
Affiliation(s)
- Victoria Lai Cheng Lei
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Teng Ieng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Cheok Teng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Lili Liu
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Chi Un Choi
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, California, USA
- Defeng Li
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
5. Wu YJ, Hou X, Peng C, Yu W, Oppenheim GM, Thierry G, Zhang D. Rapid learning of a phonemic discrimination in the first hours of life. Nat Hum Behav 2022; 6:1169-1179. [PMID: 35654965] [PMCID: PMC9391223] [DOI: 10.1038/s41562-022-01355-1] [Received: 07/24/2021] [Accepted: 04/20/2022]
Abstract
Human neonates can discriminate phonemes, but the neural mechanism underlying this ability is poorly understood. Here we show that the neonatal brain can learn to discriminate natural vowels from backward vowels, a contrast unlikely to have been learnt in the womb. Using functional near-infrared spectroscopy, we examined the neuroplastic changes caused by 5 h of postnatal exposure to random sequences of natural and reversed (backward) vowels (T1), and again 2 h later (T2). Neonates in the experimental group were trained with the same stimuli as those used at T1 and T2. Compared with controls, infants in the experimental group showed shorter haemodynamic response latencies for forward vs backward vowels at T1, maximally over the inferior frontal region. At T2, neural activity differentially increased, maximally over superior temporal regions and the left inferior parietal region. Neonates thus exhibit ultra-fast tuning to natural phonemes in the first hours after birth.
Affiliation(s)
- Yan Jing Wu
- Faculty of Foreign Languages, Ningbo University, Ningbo, China
- Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Cheng Peng
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Wenwen Yu
- School of Psychology, Shenzhen University, Shenzhen, China
- Guillaume Thierry
- School of Psychology, Bangor University, Bangor, Wales, UK
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
- Dandan Zhang
- School of Psychology, Shenzhen University, Shenzhen, China
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Shenzhen-Hong Kong Institute of Brain Science, Shenzhen, China
6. Medial temporal lobe contributions to resting-state networks. Brain Struct Funct 2022; 227:995-1012. [PMID: 35041057] [PMCID: PMC8930967] [DOI: 10.1007/s00429-021-02442-1] [Received: 07/02/2021] [Accepted: 12/13/2021]
Abstract
The medial temporal lobe (MTL) is a set of interconnected brain regions that have been shown to play a central role in behavior as well as in neurological disease. Recent studies using resting-state functional magnetic resonance imaging (rsfMRI) have attempted to understand the MTL in terms of its functional connectivity with the rest of the brain. However, the exact characterization of the whole-brain networks that co-activate with the MTL, as well as how the various sub-regions of the MTL are associated with these networks, remains poorly understood. Here, we addressed these issues by exploiting the high spatial resolution 7T rsfMRI dataset from the Human Connectome Project with a data-driven analysis approach that relied on independent component analysis (ICA) restricted to the MTL. We found that four different well-known resting-state networks co-activated with a unique configuration of MTL subcomponents. Specifically, we found that different sections of the parahippocampal cortex were involved in the default mode, visual, and dorsal attention networks; sections of the hippocampus in the somatomotor and default mode networks; and the lateral entorhinal cortex in the dorsal attention network. We replicated this set of results in a validation sample. These results provide new insight into how the MTL and its subcomponents contribute to known resting-state networks. The participation of the MTL in an expanded range of resting-state networks is in line with recent proposals on MTL function.
7.
Abstract
Purpose of review: Subcortical structures have long been thought to play a role in language processing. Debates on language studies, arising as early as the nineteenth century, have grown remarkably sophisticated over the years. In the context of non-thalamic aphasia, a few theoretical frameworks have been laid out. The disconnection hypothesis postulates that basal ganglia insults result in aphasia due to a rupture of connectivity between Broca's and Wernicke's areas. A second viewpoint conjectures that the basal ganglia partake more directly in language processing, and a third stream proclaims that aphasia stems from cortical deafferentation. Thalamic aphasia, on the other hand, is more predominantly attributed to diaschisis. This article reviews these topics in light of recent findings on deep brain stimulation, neurophysiology, and aphasiology. Recent findings: The more recent approach conceptualizes non-thalamic aphasias as the offspring of unpredictable cortical hypoperfusion. Regarding the thalamus, mounting evidence now points to leading contributions of the pulvinar/lateral posterior nucleus and the anterior/ventral anterior thalamus to language disturbances. While the former appears to relate to lexical-semantic indiscrimination, the latter seems to bring about a severe breakdown in word selection and/or spontaneous top-down lexical-semantic operations. The characterization of subcortical aphasias and the role of the basal ganglia and thalamus in language processing continue to pose a challenge. Neuroimaging studies have pointed to a path forward, and we believe that more recent methods such as tractography and connectivity studies will significantly expand our knowledge in this particular area of aphasiology.
8. Frankford SA, Heller Murray ES, Masapollo M, Cai S, Tourville JA, Nieto-Castañón A, Guenther FH. The Neural Circuitry Underlying the "Rhythm Effect" in Stuttering. Journal of Speech, Language, and Hearing Research 2021; 64:2325-2346. [PMID: 33887150] [PMCID: PMC8740675] [DOI: 10.1044/2021_jslhr-20-00328] [Received: 06/08/2020] [Revised: 12/23/2020] [Accepted: 01/12/2021]
Abstract
Purpose: Stuttering is characterized by intermittent speech disfluencies, which are dramatically reduced when speakers synchronize their speech with a steady beat. The goal of this study was to characterize the neural underpinnings of this phenomenon using functional magnetic resonance imaging. Method: Data were collected from 16 adults who stutter and 17 adults who do not stutter while they read sentences aloud either in a normal, self-paced fashion or paced by the beat of a series of isochronous tones ("rhythmic"). Task activation and task-based functional connectivity analyses were carried out to compare neural responses between speaking conditions and groups after controlling for speaking rate. Results: Adults who stutter produced fewer disfluent trials in the rhythmic condition than in the normal condition. Adults who stutter did not have any significant changes in activation between the rhythmic condition and the normal condition, but when groups were collapsed, participants had greater activation in the rhythmic condition in regions associated with speech sequencing, sensory feedback control, and timing perception. Adults who stutter also demonstrated increased functional connectivity among cerebellar regions during rhythmic speech as compared to normal speech and decreased connectivity between the left inferior cerebellum and the left prefrontal cortex. Conclusions: Modulation of connectivity in the cerebellum and prefrontal cortex during rhythmic speech suggests that this fluency-inducing technique activates a compensatory timing system in the cerebellum and potentially modulates top-down motor control and attentional systems. These findings corroborate previous work associating the cerebellum with fluency in adults who stutter and indicate that the cerebellum may be targeted to enhance future therapeutic interventions. Supplemental Material: https://doi.org/10.23641/asha.14417681
Affiliation(s)
- Saul A. Frankford
- Department of Speech, Language & Hearing Sciences, Boston University, MA
- Matthew Masapollo
- Department of Speech, Language & Hearing Sciences, Boston University, MA
- Shanqing Cai
- Department of Speech, Language & Hearing Sciences, Boston University, MA
- Jason A. Tourville
- Department of Speech, Language & Hearing Sciences, Boston University, MA
- Frank H. Guenther
- Department of Speech, Language & Hearing Sciences, Boston University, MA
- Department of Biomedical Engineering, Boston University, MA
- Department of Radiology, Massachusetts General Hospital, Boston
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge
9. Frankford SA, Nieto-Castañón A, Tourville JA, Guenther FH. Reliability of single-subject neural activation patterns in speech production tasks. Brain and Language 2021; 212:104881. [PMID: 33278802] [PMCID: PMC7781091] [DOI: 10.1016/j.bandl.2020.104881] [Received: 03/09/2020] [Revised: 09/25/2020] [Accepted: 11/06/2020]
Abstract
Speech neuroimaging research targeting individual speakers could help elucidate differences that may be crucial to understanding speech disorders. However, this research necessitates reliable brain activation across multiple speech production sessions. In the present study, we evaluated the reliability of speech-related brain activity measured by functional magnetic resonance imaging data from twenty neuro-typical subjects who participated in two experiments involving reading aloud simple speech stimuli. Using traditional methods like the Dice and intraclass correlation coefficients, we found that most individuals displayed moderate to high reliability. We also found that a novel machine-learning subject classifier could identify these individuals by their speech activation patterns with 97% accuracy from among a dataset of seventy-five subjects. These results suggest that single-subject speech research would yield valid results and that investigations into the reliability of speech activation in people with speech disorders are warranted.
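The Dice coefficient named in this abstract measures session-to-session overlap of suprathreshold activation. A generic sketch of that computation on thresholded statistical maps (the threshold and toy values are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(map1, map2, threshold=0.0):
    """Dice overlap between two activation maps after thresholding."""
    a = np.asarray(map1) > threshold
    b = np.asarray(map2) > threshold
    denom = a.sum() + b.sum()
    if denom == 0:
        return float("nan")  # neither map has suprathreshold voxels
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy session-to-session comparison of five voxels' statistic values.
session1 = np.array([0.0, 2.1, 3.5, 0.2, 1.8])
session2 = np.array([0.1, 1.9, 0.3, 0.4, 2.2])
print(dice_coefficient(session1, session2, threshold=1.0))  # 2*2/(3+2) = 0.8
```

A value of 1 indicates identical suprathreshold maps and 0 indicates no overlap; the choice of threshold strongly affects the result, which is why such studies typically report Dice across a range of thresholds.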
Affiliation(s)
- Saul A Frankford
- Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Alfonso Nieto-Castañón
- Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Jason A Tourville
- Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Frank H Guenther
- Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
10. Krishnamurthy V, Krishnamurthy LC, Meadows ML, Gale MK, Ji B, Gopinath K, Crosson B. A method to mitigate spatio-temporally varying task-correlated motion artifacts from overt-speech fMRI paradigms in aphasia. Hum Brain Mapp 2020; 42:1116-1129. [PMID: 33210749] [PMCID: PMC7856637] [DOI: 10.1002/hbm.25280] [Received: 06/05/2020] [Revised: 10/23/2020] [Accepted: 10/31/2020]
Abstract
Accurate quantification of functional magnetic resonance imaging (fMRI) activation maps can be hampered by spatio-temporally varying task-correlated motion (TCM) artifacts in certain task paradigms (e.g., overt speech). Such real-world tasks are relevant for characterizing longitudinal brain reorganization poststroke, and removal of TCM artifacts is vital for improved clinical interpretation and translation. In this study, we developed a novel independent component analysis (ICA)-based approach to denoise spatio-temporally varying TCM artifacts in 14 persons with aphasia who participated in an overt language fMRI paradigm. We compared the new methodology with other existing approaches such as "standard" volume registration, nonselective motion correction ICA packages (i.e., AROMA), and combining the novel approach with AROMA. Results show that the proposed methodology outperforms other approaches in removing TCM-related false positive activity (i.e., improved detectability power) with high spatial specificity. The proposed method was also effective in maintaining a balance between removal of TCM-related trial-by-trial variability and signal retention. Finally, we show that the TCM artifact is related to clinical metrics, such as speech fluency and aphasia severity, and we discuss the implications of TCM denoising for this relationship. Overall, our work suggests that routine denoising packages based on bulk head motion cannot effectively account for spatio-temporally varying TCM. Further, the proposed TCM denoising approach requires a one-time front-end effort to hand-label and train the classifiers, which can then be cost-effectively used to denoise large clinical data sets.
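Once artifact components have been hand-labelled (or flagged by a trained classifier), ICA-based denoising of this general kind typically regresses the labelled component timecourses out of the voxel data. A generic sketch of that regression step, not the authors' exact implementation (array shapes and names are assumptions):

```python
import numpy as np

def remove_noise_components(data, mixing, noise_idx):
    """Regress labelled noise-component timecourses out of voxel data.

    data      : (T, V) fMRI timeseries
    mixing    : (T, K) ICA mixing matrix (component timecourses)
    noise_idx : indices of components labelled as motion artifact
    """
    noise_tc = mixing[:, noise_idx]
    X = np.column_stack([np.ones(len(noise_tc)), noise_tc])  # intercept + noise
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X[:, 1:] @ beta[1:]  # subtract the noise fit, keep the mean

# Toy data: two artifact timecourses mixed into 30 voxels.
rng = np.random.default_rng(1)
noise_tc = rng.standard_normal((200, 2))
data = rng.standard_normal((200, 30)) + noise_tc @ rng.uniform(1.0, 2.0, (2, 30))
cleaned = remove_noise_components(data, noise_tc, [0, 1])  # pretend ICA found them
```

This is "non-aggressive" only in the sense of being a plain regression; the study's contribution lies in how the spatio-temporally varying TCM components are identified, which this sketch takes as given.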
Affiliation(s)
- Venkatagiri Krishnamurthy
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Department of Medicine, Division of Geriatrics and Gerontology, Emory University, Atlanta, Georgia, USA
- Department of Neurology, Emory University, Atlanta, Georgia, USA
- Lisa C Krishnamurthy
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Department of Physics & Astronomy, Georgia State University, Atlanta, Georgia, USA
- M Lawson Meadows
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Mary K Gale
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Bing Ji
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Department of Radiology & Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Kaundinya Gopinath
- Department of Radiology & Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Bruce Crosson
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia, USA
- Department of Neurology, Emory University, Atlanta, Georgia, USA
- Department of Psychology, Georgia State University, Atlanta, Georgia, USA
11. Janssen N, Meij MVD, López-Pérez PJ, Barber HA. Exploring the temporal dynamics of speech production with EEG and group ICA. Sci Rep 2020; 10:3667. [PMID: 32111868] [PMCID: PMC7048769] [DOI: 10.1038/s41598-020-60301-1] [Received: 03/07/2019] [Accepted: 02/11/2020]
Abstract
Speech production is a complex skill whose neural implementation relies on a large number of different regions in the brain. How neural activity in these different regions varies as a function of time during the production of speech remains poorly understood. Previous MEG studies on this topic have concluded that activity proceeds from posterior to anterior regions of the brain in a sequential manner. Here we tested this claim using the EEG technique. Specifically, participants performed a picture naming task while their naming latencies and scalp potentials were recorded. We performed group temporal Independent Component Analysis (group tICA) to obtain temporally independent component timecourses and their corresponding topographic maps. We identified fifteen components whose estimated neural sources were located in various areas of the brain. The trial-by-trial component timecourses were predictive of the naming latency, implying their involvement in the task. Crucially, we computed the degree of concurrent activity of each component timecourse to test whether activity was sequential or parallel. Our results revealed that these fifteen distinct neural sources exhibit largely concurrent activity during speech production. These results suggest that speech production relies on neural activity that takes place in parallel networks of distributed neural sources.
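The key test in this abstract asks whether component timecourses overlap in time. The abstract does not spell out the authors' exact concurrency metric, so the following is only one simple, illustrative way to quantify temporal overlap: the fraction of samples where two component envelopes simultaneously exceed a fixed fraction of their own peaks.

```python
import numpy as np

def concurrency(tc1, tc2, frac=0.5):
    """Fraction of samples where both component envelopes exceed
    `frac` of their own peak -- one simple notion of temporal overlap."""
    a = np.abs(tc1) >= frac * np.abs(tc1).max()
    b = np.abs(tc2) >= frac * np.abs(tc2).max()
    return np.logical_and(a, b).mean()

t = np.linspace(0.0, 1.0, 500)
early = np.exp(-((t - 0.3) / 0.1) ** 2)   # component active early in the trial
late = np.exp(-((t - 0.7) / 0.1) ** 2)    # component active late in the trial
print(concurrency(early, late))    # sequential components: no overlap
print(concurrency(early, early))   # identical components: maximal overlap
```

Under a strictly sequential (posterior-to-anterior) account this measure would be near zero for most component pairs; the paper's finding of largely concurrent activity corresponds to substantial overlap values.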
Affiliation(s)
- Niels Janssen
- Departamento de Psicología, Universidad de la Laguna, La Laguna, Spain
- Instituto de Tecnologías Biomedicas, Universidad de la Laguna, La Laguna, Spain
- Instituto de Neurociencias, Universidad de la Laguna, La Laguna, Spain
- Horacio A Barber
- Departamento de Psicología, Universidad de la Laguna, La Laguna, Spain
- Instituto de Tecnologías Biomedicas, Universidad de la Laguna, La Laguna, Spain
- Instituto de Neurociencias, Universidad de la Laguna, La Laguna, Spain
- Basque Center on Cognition, Brain and Language (BCBL), Donostia, Spain