1. Littlejohn KT, Cho CJ, Liu JR, Silva AB, Yu B, Anderson VR, Kurtz-Miott CM, Brosler S, Kashyap AP, Hallinan IP, Shah A, Tu-Chan A, Ganguly K, Moses DA, Chang EF, Anumanchipalli GK. A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Nat Neurosci 2025;28:902-912. PMID: 40164740. DOI: 10.1038/s41593-025-01905-6.
Abstract
Natural spoken communication happens instantaneously. Speech delays longer than a few seconds can disrupt the natural flow of conversation. This makes it difficult for individuals with paralysis to participate in meaningful dialogue, potentially leading to feelings of isolation and frustration. Here we used high-density surface recordings of the speech sensorimotor cortex in a clinical trial participant with severe paralysis and anarthria to drive a continuously streaming naturalistic speech synthesizer. We designed and used deep learning recurrent neural network transducer models to achieve online large-vocabulary intelligible fluent speech synthesis personalized to the participant's preinjury voice with neural decoding in 80-ms increments. Offline, the models demonstrated implicit speech detection capabilities and could continuously decode speech indefinitely, enabling uninterrupted use of the decoder and further increasing speed. Our framework also successfully generalized to other silent-speech interfaces, including single-unit recordings and electromyography. Our findings introduce a speech-neuroprosthetic paradigm to restore naturalistic spoken communication to people with paralysis.
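
The practical point of the streaming design is that decoding latency is bounded by the chunk size rather than the sentence length: neural features are consumed in 80-ms increments and output is emitted as each increment is processed. The sketch below illustrates only that chunked decoding loop; the channel count, state size, and the simple recurrent update are placeholder assumptions, not the RNN-transducer or synthesizer used in the study.

```python
import numpy as np

# Hypothetical stand-in for a trained streaming decoder: it keeps a recurrent
# state so each 80-ms chunk of neural features can be decoded as it arrives.
class StreamingDecoder:
    def __init__(self, n_features=253, n_state=64, n_audio=80):
        rng = np.random.default_rng(0)
        self.W_in = rng.normal(scale=0.1, size=(n_features, n_state))
        self.W_rec = rng.normal(scale=0.1, size=(n_state, n_state))
        self.W_out = rng.normal(scale=0.1, size=(n_state, n_audio))
        self.state = np.zeros(n_state)

    def step(self, chunk_features):
        # One recurrent update per 80-ms chunk; the output is a placeholder
        # for whatever the synthesizer consumes (e.g., vocoder features).
        self.state = np.tanh(chunk_features @ self.W_in + self.state @ self.W_rec)
        return self.state @ self.W_out

# Simulated session: 25 chunks of 80 ms (2 s) of 253-channel neural features.
decoder = StreamingDecoder()
neural_stream = np.random.default_rng(1).normal(size=(25, 253))

audio_params = []
for chunk in neural_stream:                    # arrives every 80 ms in a real system
    audio_params.append(decoder.step(chunk))   # emit output with ~80-ms granularity
audio_params = np.stack(audio_params)
print(audio_params.shape)                      # (25, 80): chunk-by-chunk streaming output
```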

Affiliation(s)
- Kaylo T Littlejohn: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Cheol Jun Cho: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Jessie R Liu: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Alexander B Silva: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Graduate Program in Bioengineering, University of California, Berkeley-University of California, San Francisco, Berkeley, CA, USA
- Bohan Yu: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Vanessa R Anderson: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Cady M Kurtz-Miott: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Samantha Brosler: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Graduate Program in Bioengineering, University of California, Berkeley-University of California, San Francisco, Berkeley, CA, USA
- Anshul P Kashyap: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Irina P Hallinan: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Adit Shah: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Adelyn Tu-Chan: Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly: Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- David A Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Graduate Program in Bioengineering, University of California, Berkeley-University of California, San Francisco, Berkeley, CA, USA
- Gopala K Anumanchipalli: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA

2. Vitória MA, Fernandes FG, van den Boom M, Ramsey N, Raemaekers M. Decoding Single and Paired Phonemes Using 7T Functional MRI. Brain Topogr 2024;37:731-747. PMID: 38261272; PMCID: PMC11393141. DOI: 10.1007/s10548-024-01034-6.
Abstract
Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. This would theoretically allow for brain-computer interfaces capable of decoding continuous speech by training classifiers on the activity in the sensorimotor cortex related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of these phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed in the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes. Furthermore, they support the notion that a speech BCI may be feasible based on machine learning algorithms trained on individual phonemes using intracranial electrode grids.
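
The central trick, reusing classifiers trained on single phonemes to identify phoneme pairs, can be illustrated with a toy example: fit a multi-class SVM on single-phoneme patterns, then read off the two most probable classes for a combined-trial pattern. Everything below (synthetic voxel patterns, the averaging of the two phoneme responses, the top-two decision rule) is an assumption for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
phonemes = ["p", "t", "k"]

def synth_pattern(label, n_voxels=200):
    # Toy "sensorimotor activity": a class-specific mean pattern plus noise.
    centers = {"p": 0.8, "t": 0.0, "k": -0.8}
    return rng.normal(loc=centers[label], scale=1.0, size=n_voxels)

# Train an SVM on single-phoneme trials only.
X_single = np.array([synth_pattern(p) for p in phonemes for _ in range(40)])
y_single = np.array([p for p in phonemes for _ in range(40)])
clf = SVC(kernel="linear", probability=True).fit(X_single, y_single)

# Score paired-phoneme trials with the single-phoneme classifier: take the two
# most probable classes for a trial that mixes two phoneme patterns.
pairs = [("p", "t"), ("t", "k"), ("p", "k")]
correct, n_trials = 0, 60
for _ in range(n_trials):
    a, b = pairs[rng.integers(len(pairs))]
    trial = (synth_pattern(a) + synth_pattern(b)) / 2
    proba = clf.predict_proba(trial[None, :])[0]
    top2 = set(np.array(clf.classes_)[np.argsort(proba)[-2:]])
    correct += top2 == {a, b}
# With three possible pairs, guessing gives the 33% chance level quoted above.
print(f"paired accuracy: {correct / n_trials:.2f} (chance = 1/3)")
```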

Affiliation(s)
- Maria Araújo Vitória: Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Francisco Guerreiro Fernandes: Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Max van den Boom: Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
- Nick Ramsey: Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Mathijs Raemaekers: Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands

3. Silva AB, Liu JR, Metzger SL, Bhaya-Grossman I, Dougherty ME, Seaton MP, Littlejohn KT, Tu-Chan A, Ganguly K, Moses DA, Chang EF. A bilingual speech neuroprosthesis driven by cortical articulatory representations shared between languages. Nat Biomed Eng 2024;8:977-991. PMID: 38769157; PMCID: PMC11554235. DOI: 10.1038/s41551-024-01207-5.
Abstract
Advancements in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language. This was achieved without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
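
Transfer learning is the ingredient that lets data from one language shorten training in the other: the decoder is pretrained on trials from the first language and then fine-tuned, rather than re-initialized, for the second. The sketch below shows that generic pretrain-then-fine-tune pattern on synthetic data in which both "languages" share the same syllable prototypes (mirroring the shared articulatory representations the paper reports); the network, sizes, and data are placeholders, not the published bilingual decoder.

```python
import torch
from torch import nn, optim

torch.manual_seed(0)
N_CHANNELS, N_SYLLABLES = 128, 20                   # placeholder sizes
PROTOTYPES = torch.randn(N_SYLLABLES, N_CHANNELS)   # shared "articulatory" patterns

def make_trials(n_trials, noise=1.0):
    # Both simulated languages draw from the same prototypes, so features
    # learned on one transfer to the other.
    y = torch.randint(0, N_SYLLABLES, (n_trials,))
    x = PROTOTYPES[y] + noise * torch.randn(n_trials, N_CHANNELS)
    return x, y

def train(model, x, y, epochs):
    opt = optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return (model(x).argmax(dim=1) == y).float().mean().item()

decoder = nn.Sequential(nn.Linear(N_CHANNELS, 64), nn.ReLU(),
                        nn.Linear(64, N_SYLLABLES))

# Pretrain on plentiful "English" trials, then fine-tune the same weights on a
# small "Spanish" set instead of training a separate decoder from scratch.
x_en, y_en = make_trials(2000)
x_es, y_es = make_trials(100)
print("English accuracy after pretraining:", round(train(decoder, x_en, y_en, epochs=200), 2))
print("Spanish accuracy after fine-tuning:", round(train(decoder, x_es, y_es, epochs=50), 2))
```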

Affiliation(s)
- Alexander B Silva: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Jessie R Liu: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Sean L Metzger: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Ilina Bhaya-Grossman: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Maximilian E Dougherty: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Margaret P Seaton: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Adelyn Tu-Chan: Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly: Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- David A Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA

4. Metzger SL, Littlejohn KT, Silva AB, Moses DA, Seaton MP, Wang R, Dougherty ME, Liu JR, Wu P, Berger MA, Zhuravleva I, Tu-Chan A, Ganguly K, Anumanchipalli GK, Chang EF. A high-performance neuroprosthesis for speech decoding and avatar control. Nature 2023;620:1037-1046. PMID: 37612505; PMCID: PMC10826467. DOI: 10.1038/s41586-023-06443-4.
Abstract
Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive [1]. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
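
The reported 25% median word error rate is the standard edit-distance measure: the substitutions, insertions, and deletions needed to turn the decoded sentence into the reference, divided by the number of reference words. A plain implementation is below; the example sentences are invented for illustration.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Invented example: one substitution in a five-word sentence gives WER = 0.2.
print(word_error_rate("i would like some water", "i would like some coffee"))
```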

Affiliation(s)
- Sean L Metzger: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley-University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Kaylo T Littlejohn: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Alexander B Silva: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley-University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- David A Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Margaret P Seaton: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Ran Wang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Maximilian E Dougherty: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Jessie R Liu: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley-University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Peter Wu: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Inga Zhuravleva: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Adelyn Tu-Chan: Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly: Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Gopala K Anumanchipalli: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley-University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA

5. Matsunaga Y, Haba T, Kobayashi M, Suzuki S, Asada Y, Chida K. Evaluation of radiation dose for inferior vena cava filter placement during pregnancy: A comparison of dosimetry and dose calculation software. J Appl Clin Med Phys 2023;24:e13884. PMID: 36546565; PMCID: PMC9924124. DOI: 10.1002/acm2.13884.
Abstract
Numerous medical conditions are associated with pregnancy, including pulmonary thromboembolism, which can be fatal. An effective treatment for this condition is the placement of an inferior vena cava filter (IVC-F) under X-ray guidance, but the procedure carries a risk of high radiation exposure to the pregnant woman and fetus. Moreover, there are no published reports comparing fetal doses received during IVC-F placement in pregnant women as determined by dose calculation software versus actual measurements. To address this, we compared the entrance surface dose (ESD) and fetal dose for gestation periods of 6 and 9 months using both software calculations and physical measurements. For the measurements, a pregnant model was constructed from an anthropomorphic phantom combined with two custom-made polyurethane abdomen phantoms of different sizes to simulate the two gestation stages. For the calculations, the software used a set of anatomically realistic pregnant patient phantoms. The ESD estimated by the software was consistent with the measured ESD, but fetal dose estimation was more complicated because of fetal positioning. When evaluating fetal dose with calculation software, the user must carefully consider how much of the fetal length lies in the irradiation field to avoid underestimation or overestimation. Despite these errors, the software can help the user identify when the dose approaches critical limits.

Affiliation(s)
- Yuta Matsunaga: Department of Imaging, Nagoya Kyoritsu Hospital, Nagoya, Aichi, Japan; Department of Radiological Technology, Faculty of Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Tomonobu Haba: School of Health Sciences, Fujita Health University, Toyoake, Aichi, Japan
- Shoichi Suzuki: School of Health Sciences, Fujita Health University, Toyoake, Aichi, Japan
- Yasuki Asada: School of Health Sciences, Fujita Health University, Toyoake, Aichi, Japan
- Koichi Chida: Department of Radiological Technology, Faculty of Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan

6. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Nat Commun 2022;13:6510. PMID: 36347863; PMCID: PMC9643551. DOI: 10.1038/s41467-022-33611-3.
Abstract
Neuroprostheses have the potential to restore communication to people who cannot speak or type due to paralysis. However, it is unclear if silent attempts to speak can be used to control a communication neuroprosthesis. Here, we translated direct cortical signals in a clinical-trial participant (ClinicalTrials.gov; NCT03698149) with severe limb and vocal-tract paralysis into single letters to spell out full sentences in real time. We used deep-learning and language-modeling techniques to decode letter sequences as the participant attempted to silently spell using code words that represented the 26 English letters (e.g. "alpha" for "a"). We leveraged broad electrode coverage beyond speech-motor cortex to include supplemental control signals from hand cortex and complementary information from low- and high-frequency signal components to improve decoding accuracy. We decoded sentences using words from a 1,152-word vocabulary at a median character error rate of 6.13% and speed of 29.4 characters per minute. In offline simulations, we showed that our approach generalized to large vocabularies containing over 9,000 words (median character error rate of 8.23%). These results illustrate the clinical viability of a silently controlled speech neuroprosthesis to generate sentences from a large vocabulary through a spelling-based approach, complementing previous demonstrations of direct full-word decoding.
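
The reported character error rate is the character-level analogue of a word error rate, and the decoder's accuracy relies heavily on constraining letter sequences to a known vocabulary. The toy below shows only that vocabulary constraint: hypothetical per-position letter probabilities (standing in for a classifier applied to the attempted code words) are combined into a score for each candidate word. The probabilities, vocabulary, and scoring rule are invented for illustration and are not the published decoder or its language model.

```python
import math

# Hypothetical per-position letter probabilities from a neural classifier,
# e.g. after decoding the code words ("alpha" -> "a") the participant attempted.
letter_probs = [
    {"h": 0.6, "n": 0.3, "m": 0.1},   # position 1
    {"e": 0.5, "o": 0.4, "i": 0.1},   # position 2
    {"l": 0.7, "t": 0.2, "r": 0.1},   # position 3
    {"p": 0.5, "d": 0.4, "k": 0.1},   # position 4
]
vocabulary = ["help", "hold", "note", "melt"]   # toy stand-in for a 1,152-word list

def word_score(word):
    # Sum of log-probabilities of the word's letters; unseen letters get a floor.
    return sum(math.log(pos.get(ch, 1e-6)) for pos, ch in zip(letter_probs, word))

best = max((w for w in vocabulary if len(w) == len(letter_probs)), key=word_score)
print(best)   # "help": the vocabulary resolves ambiguous letter evidence
```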

Affiliation(s)
- Sean L. Metzger: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA USA
- Jessie R. Liu: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA USA
- David A. Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA
- Maximilian E. Dougherty: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA
- Margaret P. Seaton: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA
- Kaylo T. Littlejohn: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA USA
- Josh Chartier: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA
- Gopala K. Anumanchipalli: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA USA
- Adelyn Tu-Chan: Department of Neurology, University of California, San Francisco, San Francisco, CA USA
- Karunesh Ganguly: Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; Department of Neurology, University of California, San Francisco, San Francisco, CA USA
- Edward F. Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA USA

7. Krishnan S, Cler GJ, Smith HJ, Willis HE, Asaridou SS, Healy MP, Papp D, Watkins KE. Quantitative MRI reveals differences in striatal myelin in children with DLD. eLife 2022;11:e74242. PMID: 36164824; PMCID: PMC9514847. DOI: 10.7554/elife.74242.
Abstract
Developmental language disorder (DLD) is a common neurodevelopmental disorder characterised by receptive or expressive language difficulties or both. While theoretical frameworks and empirical studies support the idea that there may be neural correlates of DLD in frontostriatal loops, findings are inconsistent across studies. Here, we use a novel semiquantitative imaging protocol - multi-parameter mapping (MPM) - to investigate microstructural neural differences in children with DLD. The MPM protocol allows us to reproducibly map specific indices of tissue microstructure. In 56 typically developing children and 33 children with DLD, we derived maps of (1) longitudinal relaxation rate R1 (1/T1), (2) transverse relaxation rate R2* (1/T2*), and (3) Magnetization Transfer saturation (MTsat). R1 and MTsat predominantly index myelin, while R2* is sensitive to iron content. Children with DLD showed reductions in MTsat values in the caudate nucleus bilaterally, as well as in the left ventral sensorimotor cortex and Heschl's gyrus. They also had globally lower R1 values. No group differences were noted in R2* maps. Differences in MTsat and R1 were coincident in the caudate nucleus bilaterally. These findings support our hypothesis of corticostriatal abnormalities in DLD and indicate abnormal levels of myelin in the dorsal striatum in children with DLD.

Affiliation(s)
- Saloni Krishnan: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Department of Psychology, Royal Holloway, University of London, Egham Hill, London, United Kingdom
- Gabriel J Cler: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Department of Speech and Hearing Sciences, University of Washington, Seattle, United States
- Harriet J Smith: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom; MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Hanna E Willis: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
- Salomi S Asaridou: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Máiréad P Healy: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Daniel Papp: NeuroPoly Lab, Biomedical Engineering Department, Polytechnique Montreal, Montreal, Canada; Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
- Kate E Watkins: Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom

8. Wilson BS, Tucci DL, Moses DA, Chang EF, Young NM, Zeng FG, Lesica NA, Bur AM, Kavookjian H, Mussatto C, Penn J, Goodwin S, Kraft S, Wang G, Cohen JM, Ginsburg GS, Dawson G, Francis HW. Harnessing the Power of Artificial Intelligence in Otolaryngology and the Communication Sciences. J Assoc Res Otolaryngol 2022;23:319-349. PMID: 35441936; PMCID: PMC9086071. DOI: 10.1007/s10162-022-00846-2.
Abstract
Use of artificial intelligence (AI) is a burgeoning field in otolaryngology and the communication sciences. A virtual symposium on the topic was convened from Duke University on October 26, 2020, and was attended by more than 170 participants worldwide. This review presents summaries of all but one of the talks presented during the symposium; recordings of all the talks, along with the discussions for the talks, are available at https://www.youtube.com/watch?v=ktfewrXvEFg and https://www.youtube.com/watch?v=-gQ5qX2v3rg. Each summary is about 2500 words in length and includes two figures. This level of detail far exceeds the brief summaries found in traditional reviews and thus provides a more informed glimpse into the power and diversity of current AI applications in otolaryngology and the communication sciences, and into how to harness that power for future applications.

Affiliation(s)
- Blake S. Wilson: Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA; Duke Hearing Center, Duke University School of Medicine, Durham, NC 27710 USA; Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708 USA; Department of Biomedical Engineering, Duke University, Durham, NC 27708 USA; Department of Otolaryngology – Head & Neck Surgery, University of North Carolina, Chapel Hill, Chapel Hill, NC 27599 USA
- Debara L. Tucci: Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA; National Institute On Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892 USA
- David A. Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143 USA; UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117 USA
- Edward F. Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143 USA; UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117 USA
- Nancy M. Young: Division of Otolaryngology, Ann and Robert H. Lurie Childrens Hospital of Chicago, Chicago, IL 60611 USA; Department of Otolaryngology - Head and Neck Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL 60611 USA; Department of Communication, Knowles Hearing Center, Northwestern University, Evanston, IL 60208 USA
- Fan-Gang Zeng: Center for Hearing Research, University of California, Irvine, Irvine, CA 92697 USA; Department of Anatomy and Neurobiology, University of California, Irvine, Irvine, CA 92697 USA; Department of Biomedical Engineering, University of California, Irvine, Irvine, CA 92697 USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697 USA; Department of Otolaryngology – Head and Neck Surgery, University of California, Irvine, CA 92697 USA
- Andrés M. Bur: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Hannah Kavookjian: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Caroline Mussatto: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Joseph Penn: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Sara Goodwin: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Shannon Kraft: Department of Otolaryngology - Head and Neck Surgery, Medical Center, University of Kansas, Kansas City, KS 66160 USA
- Guanghui Wang: Department of Computer Science, Ryerson University, Toronto, ON M5B 2K3 Canada
- Jonathan M. Cohen: Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA; ENT Department, Kaplan Medical Center, 7661041 Rehovot, Israel
- Geoffrey S. Ginsburg: Department of Biomedical Engineering, Duke University, Durham, NC 27708 USA; MEDx (Medicine & Engineering at Duke), Duke University, Durham, NC 27708 USA; Center for Applied Genomics & Precision Medicine, Duke University School of Medicine, Durham, NC 27710 USA; Department of Medicine, Duke University School of Medicine, Durham, NC 27710 USA; Department of Pathology, Duke University School of Medicine, Durham, NC 27710 USA; Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27710 USA
- Geraldine Dawson: Duke Institute for Brain Sciences, Duke University, Durham, NC 27710 USA; Duke Center for Autism and Brain Development, Duke University School of Medicine and the Duke Institute for Brain Sciences, NIH Autism Center of Excellence, Durham, NC 27705 USA; Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC 27701 USA
- Howard W. Francis: Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710 USA

9. Sereno MI, Sood MR, Huang RS. Topological Maps and Brain Computations From Low to High. Front Syst Neurosci 2022;16:787737. PMID: 35747394; PMCID: PMC9210993. DOI: 10.3389/fnsys.2022.787737.
Abstract
We first briefly summarize data from microelectrode studies on visual maps in non-human primates and other mammals, and characterize differences among the features of the approximately topological maps in the three main sensory modalities. We then explore the almost 50% of human neocortex that contains straightforward topological visual, auditory, and somatomotor maps by presenting a new parcellation as well as a movie atlas of cortical area maps on the FreeSurfer average surface, fsaverage. Third, we review data on moveable map phenomena as well as a recent study showing that cortical activity during sensorimotor actions may involve spatially locally coherent traveling wave and bump activity. Finally, by analogy with remapping phenomena and sensorimotor activity, we speculate briefly on the testable possibility that coherent localized spatial activity patterns might be able to ‘escape’ from topologically mapped cortex during ‘serial assembly of content’ operations such as scene and language comprehension, to form composite ‘molecular’ patterns that can move across some cortical areas and possibly return to topologically mapped cortex to generate motor output there.

Affiliation(s)
- Martin I. Sereno: Department of Psychology, San Diego State University, San Diego, CA, United States; Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Mariam Reeny Sood: Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Ruey-Song Huang: Centre for Cognitive and Brain Sciences, University of Macau, Macau, Macao SAR, China

10. Aye N, Lehmann N, Kaufmann J, Heinze HJ, Düzel E, Taubert M, Ziegler G. Test-retest reliability of multi-parametric maps (MPM) of brain microstructure. Neuroimage 2022;256:119249. PMID: 35487455. DOI: 10.1016/j.neuroimage.2022.119249.
Abstract
Multiparameter mapping (MPM) is a quantitative MRI protocol that is promising for studying microstructural brain changes in vivo with high specificity. Reliability estimates provide important prior knowledge for efficient study design and for facilitating replicable findings in development, aging and neuroplasticity research. To explore the longitudinal reliability of MPM, we acquired the protocol in 31 healthy young subjects twice over a rescan interval of 4 weeks. We assessed the within-subject coefficient of variation (WCV), the between-subject coefficient of variation (BCV), and the intraclass correlation coefficient (ICC). Using these metrics, we investigated the reliability of (semi-)quantitative magnetization transfer saturation (MTsat), proton density (PD), transverse relaxation (R2*) and longitudinal relaxation (R1). To increase relevance for explorative studies of development and training-induced plasticity, we assessed reliability at both the local voxel level and the ROI level. Finally, we disentangled the contributions and interplay of within- and between-subject variability to the ICC and assessed the optimal degree of spatial smoothing applied to the data. Voxelwise ICC reliability of the MPMs was moderate to good, with median values in cortex (subcortical GM) of MT: 0.789 (0.447), PD: 0.553 (0.264), R1: 0.555 (0.369), and R2*: 0.624 (0.477). A Gaussian smoothing kernel of 2 to 4 mm FWHM resulted in optimal reproducibility. We discuss these findings in the context of longitudinal intervention studies and their application to research designs in the neuroimaging field.
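
For a scan-rescan design like this one, the reported metrics have standard textbook forms: the within-subject coefficient of variation relates session-to-session scatter to the subject mean, and the ICC compares between-subject variance with measurement error. The sketch below uses one common convention, ICC(2,1) computed from two-way ANOVA mean squares, on simulated data; the paper's exact estimators and preprocessing may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sessions = 31, 2

# Simulated ROI values: stable subject differences plus small session noise.
subject_means = rng.normal(1.0, 0.10, size=(n_subjects, 1))
data = subject_means + rng.normal(0.0, 0.03, size=(n_subjects, n_sessions))

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # sessions
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def within_subject_cv(y):
    """WCV: within-subject SD over within-subject mean, averaged across subjects."""
    return (y.std(axis=1, ddof=1) / y.mean(axis=1)).mean()

print(f"ICC(2,1) = {icc_2_1(data):.2f}, WCV = {100 * within_subject_cv(data):.1f}%")
```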

Affiliation(s)
- Norman Aye: Faculty of Human Sciences, Institute III, Department of Sport Science, Otto von Guericke University, Zschokkestraße 32, 39104 Magdeburg, Germany
- Nico Lehmann: Faculty of Human Sciences, Institute III, Department of Sport Science, Otto von Guericke University, Zschokkestraße 32, 39104 Magdeburg, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Jörn Kaufmann: Department of Neurology, Otto von Guericke University, Leipziger Straße 44, 39120 Magdeburg, Germany
- Hans-Jochen Heinze: Department of Neurology, Otto von Guericke University, Leipziger Straße 44, 39120 Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Leipziger Straße 44, 39120 Magdeburg, Germany; Center for Behavioral and Brain Science (CBBS), Otto von Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany; Leibniz-Institute for Neurobiology (LIN), Brenneckestraße 6, 39118 Magdeburg, Germany
- Emrah Düzel: German Center for Neurodegenerative Diseases (DZNE), Leipziger Straße 44, 39120 Magdeburg, Germany; Center for Behavioral and Brain Science (CBBS), Otto von Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany; Institute of Cognitive Neuroscience, University College London, Alexandra House, 17-19 Queen Square, Bloomsbury, London, WC1N 3AZ United Kingdom
- Marco Taubert: Faculty of Human Sciences, Institute III, Department of Sport Science, Otto von Guericke University, Zschokkestraße 32, 39104 Magdeburg, Germany; Center for Behavioral and Brain Science (CBBS), Otto von Guericke University, Universitätsplatz 2, 39106 Magdeburg, Germany
- Gabriel Ziegler: German Center for Neurodegenerative Diseases (DZNE), Leipziger Straße 44, 39120 Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany

11. Waters S, Kanber E, Lavan N, Belyk M, Carey D, Cartei V, Lally C, Miquel M, McGettigan C. Singers show enhanced performance and neural representation of vocal imitation. Philos Trans R Soc Lond B Biol Sci 2021;376:20200399. PMID: 34719245; PMCID: PMC8558773. DOI: 10.1098/rstb.2020.0399.
Abstract
Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of the right somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.

Affiliation(s)
- Sheena Waters: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Wolfson Institute of Preventive Medicine, Barts and The London School of Medicine and Dentistry, Charterhouse Square, London EC1M 6BQ, UK
- Elise Kanber: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Nadine Lavan: Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK; Department of Biological and Experimental Psychology, Queen Mary University of London, Mile End Road, Bethnal Green, London E1 4NS, UK
- Michel Belyk: Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Daniel Carey: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Data & AI, Novartis Pharmaceuticals, Novartis Global Service Center, 203 Merrion Road, Dublin 4 D04 NN12, Ireland
- Valentina Cartei: Equipe de Neuro-Ethologie Sensorielle (ENES), Centre de Recherche en Neurosciences de Lyon, Université de Lyon/Saint-Etienne, 21 rue du Docteur Paul Michelon, 42100 Saint-Etienne, France; Department of Psychology, Institute of Education, Health and Social Sciences, University of Chichester, College Lane, Chichester, West Sussex PO19 6PE, UK
- Clare Lally: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Marc Miquel: Department of Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, UK; William Harvey Research Institute, Queen Mary University of London, London EC1M 6BQ, UK
- Carolyn McGettigan: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK

12. Ruthven M, Miquel ME, King AP. Deep-learning-based segmentation of the vocal tract and articulators in real-time magnetic resonance images of speech. Comput Methods Programs Biomed 2021;198:105814. PMID: 33197740; PMCID: PMC7732702. DOI: 10.1016/j.cmpb.2020.105814.
Abstract
BACKGROUND AND OBJECTIVE: Magnetic resonance (MR) imaging is increasingly used in studies of speech as it enables non-invasive visualisation of the vocal tract and articulators, thus providing information about their shape, size, motion and position. Extraction of this information for quantitative analysis is achieved using segmentation. Methods have been developed to segment the vocal tract, however, none of these also fully segment any articulators. The objective of this work was to develop a method to fully segment multiple groups of articulators as well as the vocal tract in two-dimensional MR images of speech, thus overcoming the limitations of existing methods.
METHODS: Five speech MR image sets (392 MR images in total), each of a different healthy adult volunteer, were used in this work. A fully convolutional network with an architecture similar to the original U-Net was developed to segment the following six regions in the image sets: the head, soft palate, jaw, tongue, vocal tract and tooth space. A five-fold cross-validation was performed to investigate the segmentation accuracy and generalisability of the network. The segmentation accuracy was assessed using standard overlap-based metrics (Dice coefficient and general Hausdorff distance) and a novel clinically relevant metric based on velopharyngeal closure.
RESULTS: The segmentations created by the method had a median Dice coefficient of 0.92 and a median general Hausdorff distance of 5 mm. The method segmented the head most accurately (median Dice coefficient of 0.99), and the soft palate and tooth space least accurately (median Dice coefficients of 0.92 and 0.93 respectively). The segmentations created by the method correctly showed 90% (27 out of 30) of the velopharyngeal closures in the MR image sets.
CONCLUSIONS: An automatic method to fully segment multiple groups of articulators as well as the vocal tract in two-dimensional MR images of speech was successfully developed. The method is intended for use in clinical and non-clinical speech studies which involve quantitative analysis of the shape, size, motion and position of the vocal tract and articulators. In addition, a novel clinically relevant metric for assessing the accuracy of vocal tract and articulator segmentation methods was developed.
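
Of the overlap metrics used here, the Dice coefficient is the simplest to state: twice the intersection of two binary masks divided by the sum of their sizes, ranging from 0 (no overlap) to 1 (identical masks). The snippet below computes it for a toy pair of masks and is unrelated to the paper's data or network.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two boolean masks of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a predicted articulator mask shifted by one pixel against the
# reference still overlaps substantially, giving a Dice close to 1.
reference = np.zeros((64, 64), dtype=bool)
reference[20:40, 20:40] = True
prediction = np.zeros_like(reference)
prediction[21:41, 20:40] = True
print(f"Dice = {dice_coefficient(reference, prediction):.3f}")   # 0.950
```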

Affiliation(s)
- Matthieu Ruthven: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom
- Marc E Miquel: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Centre for Advanced Cardiovascular Imaging, NIHR Barts Biomedical Research Centre, William Harvey Institute, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P King: School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom

13. Eichert N, Watkins KE, Mars RB, Petrides M. Morphological and functional variability in central and subcentral motor cortex of the human brain. Brain Struct Funct 2020;226:263-279. PMID: 33355695; PMCID: PMC7817568. DOI: 10.1007/s00429-020-02180-w.
Abstract
There is a long-established link between anatomy and function in the somatomotor system in the mammalian cerebral cortex. The morphology of the central sulcus is predictive of the location of functional activation peaks relating to movement of different effectors in individuals. By contrast, morphological variation in the subcentral region and its relationship to function is, as yet, unknown. Investigating the subcentral region is particularly important in the context of speech, since control of the larynx during human speech production is related to activity in this region. Here, we examined the relationship between morphology in the central and subcentral region and the location of functional activity during movement of the hand, lips, tongue, and larynx at the individual participant level. We provide a systematic description of the sulcal patterns of the subcentral and adjacent opercular cortex, including the inter-individual variability in sulcal morphology. We show that, in the majority of participants, the anterior subcentral sulcus is not continuous, but consists of two distinct segments. A robust relationship between morphology of the central and subcentral sulcal segments and movement of different effectors is demonstrated. Inter-individual variability of underlying anatomy might thus explain previous inconsistent findings, in particular regarding the ventral larynx area in subcentral cortex. A surface registration based on sulcal labels indicated that such anatomical information can improve the alignment of functional data for group studies.

Affiliation(s)
- Nicole Eichert: Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Kate E Watkins: Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
- Rogier B Mars: Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ, Nijmegen, The Netherlands
- Michael Petrides: Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, QC, H3A 2B4, Canada; Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, QC, H3A 1B1, Canada

14. Eichert N, Papp D, Mars RB, Watkins KE. Mapping Human Laryngeal Motor Cortex during Vocalization. Cereb Cortex 2020;30:6254-6269. PMID: 32728706; PMCID: PMC7610685. DOI: 10.1093/cercor/bhaa182.
Abstract
The representations of the articulators involved in human speech production are organized somatotopically in primary motor cortex. The neural representation of the larynx, however, remains debated. Both a dorsal and a ventral larynx representation have been previously described. It is unknown, however, whether both representations are located in primary motor cortex. Here, we mapped the motor representations of the human larynx using functional magnetic resonance imaging and characterized the cortical microstructure underlying the activated regions. We isolated brain activity related to laryngeal activity during vocalization while controlling for breathing. We also mapped the articulators (the lips and tongue) and the hand area. We found two separate activations during vocalization: a dorsal and a ventral larynx representation. Structural and quantitative neuroimaging revealed that myelin content and cortical thickness underlying the dorsal, but not the ventral larynx representation, are similar to those of other primary motor representations. This finding confirms that the dorsal larynx representation is located in primary motor cortex and that the ventral one is not. We further speculate that the location of the ventral larynx representation is in premotor cortex, as seen in other primates. It remains unclear, however, whether and how these two representations differentially contribute to laryngeal motor control.

Affiliation(s)
- Nicole Eichert: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Daniel Papp: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Rogier B. Mars: Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Kate E. Watkins: Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
| |
Collapse
|
15
|
Braga RM, DiNicola LM, Becker HC, Buckner RL. Situating the left-lateralized language network in the broader organization of multiple specialized large-scale distributed networks. J Neurophysiol 2020; 124:1415-1448. [PMID: 32965153 PMCID: PMC8356783 DOI: 10.1152/jn.00753.2019] [Citation(s) in RCA: 133] [Impact Index Per Article: 26.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Using procedures optimized to explore network organization within the individual, the topography of a candidate language network was characterized and situated within the broader context of adjacent networks. The candidate network was first identified using functional connectivity and replicated across individuals, acquisition tasks, and analytical methods. In addition to classical language regions near the perisylvian cortex and temporal pole, regions were also observed in dorsal posterior cingulate, midcingulate, and anterior superior frontal and inferior temporal cortex. The candidate network was selectively activated when processing meaningful (as contrasted with nonword) sentences, whereas spatially adjacent networks showed minimal or even decreased activity. Results were replicated and triplicated across two prospectively acquired cohorts. Examined in relation to adjacent networks, the topography of the language network was found to parallel the motif of other association networks, including the transmodal association networks linked to theory of mind and episodic remembering (often collectively called the default network). The several networks contained juxtaposed regions in multiple association zones. Outside of these juxtaposed higher-order networks, we further noted a distinct frontotemporal network situated between language regions and a frontal orofacial motor region and a temporal auditory region. A possibility is that these functionally related sensorimotor regions might anchor specialization of neighboring association regions that develop into a language network. What is most striking is that the canonical language network appears to be just one of multiple similarly organized, differentially specialized distributed networks that populate the evolutionarily expanded zones of human association cortex. NEW & NOTEWORTHY This research shows that a language network can be identified within individuals using functional connectivity. Organizational details reveal that the language network shares a common spatial motif with other association networks, including default and frontoparietal control networks. The language network is activated by language task demands, whereas closely juxtaposed networks are not, suggesting that similarly organized but differentially specialized distributed networks populate association cortex.
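For readers unfamiliar with the method, the sketch below shows seed-based functional connectivity in its simplest form: correlate one parcel's time series with every other parcel and threshold the result. It is a generic toy example on random data, not the individual-specific pipeline used in this study; the parcel count, threshold, and seed index are arbitrary choices for illustration.
```python
import numpy as np

def seed_connectivity(ts, seed_idx):
    """Correlate one seed parcel's time series with every parcel.

    ts : (n_timepoints, n_parcels) array of preprocessed BOLD time series.
    Returns an (n_parcels,) vector of Pearson correlations with the seed.
    """
    ts = (ts - ts.mean(0)) / ts.std(0)          # z-score each parcel's time series
    seed = ts[:, seed_idx]
    return ts.T @ seed / len(seed)              # Pearson r against the seed

# toy example: 200 time points, 400 parcels, seed in parcel 10
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 400))
r = seed_connectivity(bold, seed_idx=10)
network = np.where(r > 0.3)[0]                  # crude threshold for "network" membership
print(len(network), "parcels exceed r > 0.3")
```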
Collapse
Affiliation(s)
- Rodrigo M Braga
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts; Department of Neurology and Neurological Sciences, Stanford University, Stanford, California; The Computational, Cognitive, and Clinical Neuroimaging Laboratory, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts
| | - Lauren M DiNicola
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts
| | - Hannah C Becker
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts
| | - Randy L Buckner
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Harvard Medical School, Boston, Massachusetts; Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts
| |
Collapse
|
16
|
Correia JM, Caballero-Gaudes C, Guediche S, Carreiras M. Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses. Sci Rep 2020; 10:4529. [PMID: 32161310 PMCID: PMC7066132 DOI: 10.1038/s41598-020-61435-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 02/24/2020] [Indexed: 11/25/2022] Open
Abstract
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking, including cortical and sub-cortical regions. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, cerebellum and basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control, which holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders.
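A minimal illustration of the MVPA idea described above is cross-validated classification of condition labels from ROI voxel patterns. The sketch below uses scikit-learn on synthetic data; the ROI, trial counts, and labels are invented for the example and it is not the authors' analysis code.
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy data: 80 trials x 150 voxels from one ROI, labels 0 = alveolar, 1 = bilabial onset
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 150))
y = rng.integers(0, 2, size=80)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=8)          # 8-fold cross-validated decoding accuracy
print(f"mean decoding accuracy: {acc.mean():.2f} (chance ~0.50)")
```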
Collapse
Affiliation(s)
- Joao M Correia
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Centre for Biomedical Research (CBMR)/Department of Psychology, University of Algarve, Faro, Portugal.
| | | | - Sara Guediche
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain
| | - Manuel Carreiras
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain
| |
Collapse
|
17
|
Ambarlı H. Analysis of wolf–human conflicts: implications for damage mitigation measures. EUR J WILDLIFE RES 2019. [DOI: 10.1007/s10344-019-1320-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
18
|
Tam WK, Wu T, Zhao Q, Keefer E, Yang Z. Human motor decoding from neural signals: a review. BMC Biomed Eng 2019; 1:22. [PMID: 32903354 PMCID: PMC7422484 DOI: 10.1186/s42490-019-0022-z] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 07/21/2019] [Indexed: 01/24/2023] Open
Abstract
Many people suffer from movement disability due to amputation or neurological disease. Fortunately, modern neurotechnology now makes it possible to intercept motor control signals at various points along the neural transduction pathway and use them to drive external devices for communication or control. Here we review the latest developments in human motor decoding, covering the various strategies for decoding motor intention from humans and their respective advantages and challenges. Neural control signals can be intercepted at various points in the signal transduction pathway, including the brain (electroencephalography, electrocorticography, intracortical recordings), the nerves (peripheral nerve recordings) and the muscles (electromyography). We systematically discuss the sites of signal acquisition, the available neural features, and the signal processing techniques and decoding algorithms used at each of these interception points, and we review example applications and current state-of-the-art performance. Although great strides have been made in human motor decoding, we are still far from achieving the naturalistic and dexterous control of our native limbs. Concerted efforts from materials scientists, electrical engineers and healthcare professionals are needed to advance the field further and make the technology widely available for clinical use.
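Many of the decoding algorithms surveyed in reviews like this one reduce to a regularized linear mapping from neural features to intended movement. The sketch below shows that baseline approach (ridge regression) on synthetic data; it is not tied to any specific system covered in the review, and the feature and output dimensions are arbitrary.
```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# toy data: 1000 time bins x 96 channels of neural features (e.g., spike counts
# or band power), decoded into a 2-D cursor velocity
rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 96))
W_true = rng.standard_normal((96, 2))
Y = X @ W_true + 0.5 * rng.standard_normal((1000, 2))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_tr, Y_tr)      # regularized linear decoder
print("held-out R^2:", decoder.score(X_te, Y_te))
```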
Collapse
Affiliation(s)
- Wing-kin Tam
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA
| | - Tong Wu
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA
| | - Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota Twin Cities, 4-192 Keller Hall, 200 Union Street SE, Minnesota, 55455 USA
| | - Edward Keefer
- Nerves Incorporated, P.O. Box 141295, Dallas, TX, USA
| | - Zhi Yang
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA
| |
Collapse
|
19
|
Moses DA, Leonard MK, Makin JG, Chang EF. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat Commun 2019; 10:3096. [PMID: 31363096 PMCID: PMC6667454 DOI: 10.1038/s41467-019-10994-4] [Citation(s) in RCA: 108] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2018] [Accepted: 06/06/2019] [Indexed: 01/15/2023] Open
Abstract
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
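As a rough sketch of the contextual integration described in this abstract (not the authors' decoder, and with made-up probabilities), decoded question likelihoods can be turned into a context-dependent prior over answers and combined with the answer likelihoods via Bayes' rule:
```python
import numpy as np

# Hypothetical toy setup: 2 questions, 4 possible answers.
# p_answer_given_question[q, a]: how plausible each answer is as a reply to question q.
p_answer_given_question = np.array([
    [0.45, 0.45, 0.05, 0.05],   # question 0 is usually answered with answer 0 or 1
    [0.05, 0.05, 0.45, 0.45],   # question 1 with answer 2 or 3
])

def context_integrated_posterior(question_likelihoods, answer_likelihoods):
    """Combine decoded question and answer likelihoods into an answer posterior."""
    q_post = question_likelihoods / question_likelihoods.sum()
    prior = q_post @ p_answer_given_question        # context-dependent answer prior
    post = prior * answer_likelihoods               # Bayes' rule (unnormalized)
    return post / post.sum()

q_like = np.array([0.8, 0.2])                       # question decoder favors question 0
a_like = np.array([0.30, 0.25, 0.25, 0.20])         # answer decoder alone is ambiguous
print(context_integrated_posterior(q_like, a_like)) # context sharpens the answer estimate
```
With the context applied, the ambiguous answer likelihoods collapse onto the answers that are plausible replies to the decoded question, which is the effect the study exploits.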
Collapse
Affiliation(s)
- David A Moses
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
| | - Matthew K Leonard
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
| | - Joseph G Makin
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
| | - Edward F Chang
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA.
| |
Collapse
|
20
|
Tabelow K, Balteau E, Ashburner J, Callaghan MF, Draganski B, Helms G, Kherif F, Leutritz T, Lutti A, Phillips C, Reimer E, Ruthotto L, Seif M, Weiskopf N, Ziegler G, Mohammadi S. hMRI - A toolbox for quantitative MRI in neuroscience and clinical research. Neuroimage 2019; 194:191-210. [PMID: 30677501 PMCID: PMC6547054 DOI: 10.1016/j.neuroimage.2019.01.029] [Citation(s) in RCA: 140] [Impact Index Per Article: 23.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 12/21/2018] [Accepted: 01/10/2019] [Indexed: 12/20/2022] Open
Abstract
Neuroscience and clinical researchers are increasingly interested in quantitative magnetic resonance imaging (qMRI) due to its sensitivity to micro-structural properties of brain tissue such as axon, myelin, iron and water concentration. We introduce the hMRI-toolbox, an open-source, easy-to-use tool available on GitHub, for qMRI data handling and processing, presented together with a tutorial and example dataset. This toolbox allows the estimation of high-quality multi-parameter qMRI maps (longitudinal and effective transverse relaxation rates R1 and R2⋆, proton density PD and magnetisation transfer MT saturation) that can be used for quantitative parameter analysis and accurate delineation of subcortical brain structures. The qMRI maps generated by the toolbox are key input parameters for biophysical models designed to estimate tissue microstructure properties such as the MR g-ratio and to derive standard and novel MRI biomarkers. Thus, the current version of the toolbox is a first step towards in vivo histology using MRI (hMRI) and is being extended further in this direction. Embedded in the Statistical Parametric Mapping (SPM) framework, it benefits from the extensive range of established SPM tools for high-accuracy spatial registration and statistical inferences and can be readily combined with existing SPM toolboxes for estimating diffusion MRI parameter maps. From a user's perspective, the hMRI-toolbox is an efficient, robust and simple framework for investigating qMRI data in neuroscience and clinical research.
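To give a flavor of the kind of map the toolbox estimates (without reproducing its SPM batch interface, which is documented on GitHub), the sketch below fits R2* to multi-echo gradient-echo magnitudes with a simple log-linear fit on one synthetic voxel; a full pipeline additionally handles noise floors, motion, and transmit/receive field effects.
```python
import numpy as np

def fit_r2star(signal, echo_times_ms):
    """Voxel-wise R2* from multi-echo gradient-echo magnitudes via a log-linear fit.

    signal : (..., n_echoes) magnitude data; echo_times_ms : (n_echoes,) echo times in ms.
    Returns R2* in 1/s. Simplified: ignores the noise floor, unlike a full toolbox.
    """
    te = np.asarray(echo_times_ms) / 1000.0                 # seconds
    logs = np.log(np.clip(signal, 1e-6, None))
    te_c = te - te.mean()
    slope = (logs * te_c).sum(-1) / (te_c ** 2).sum()       # slope of log S = log S0 - R2* * TE
    return -slope

# toy voxel with true R2* = 40 1/s sampled at 6 echoes
te = np.array([2.3, 4.6, 6.9, 9.2, 11.5, 13.8])             # ms
s = 1000 * np.exp(-40 * te / 1000.0)
print(fit_r2star(s, te))                                    # ~40.0
```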
Collapse
Affiliation(s)
| | | | | | | | - Bogdan Draganski
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Switzerland; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Gunther Helms
- Medical Radiation Physics, Department of Clinical Sciences Lund, Lund University, Lund, Sweden
| | - Ferath Kherif
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Switzerland
| | - Tobias Leutritz
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Antoine Lutti
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Switzerland
| | | | - Enrico Reimer
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | | | | | - Nikolaus Weiskopf
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Gabriel Ziegler
- Institute for Cognitive Neurology and Dementia Research, University of Magdeburg, Germany
| | | |
Collapse
|
21
|
Chrabaszcz A, Neumann WJ, Stretcu O, Lipski WJ, Bush A, Dastolfo-Hromack CA, Wang D, Crammond DJ, Shaiman S, Dickey MW, Holt LL, Turner RS, Fiez JA, Richardson RM. Subthalamic Nucleus and Sensorimotor Cortex Activity During Speech Production. J Neurosci 2019; 39:2698-2708. [PMID: 30700532 PMCID: PMC6445998 DOI: 10.1523/jneurosci.2842-18.2019] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Revised: 01/11/2019] [Accepted: 01/18/2019] [Indexed: 11/21/2022] Open
Abstract
The sensorimotor cortex is somatotopically organized to represent the vocal tract articulators such as lips, tongue, larynx, and jaw. How speech and articulatory features are encoded at the subcortical level, however, remains largely unknown. We analyzed local field potential (LFP) recordings from the subthalamic nucleus (STN) and simultaneous electrocorticography recordings from the sensorimotor cortex of 11 human subjects (1 female) with Parkinson's disease during implantation of deep-brain stimulation (DBS) electrodes while they read aloud three-phoneme words. The initial phonemes involved either articulation primarily with the tongue (coronal consonants) or the lips (labial consonants). We observed significant increases in high-gamma (60-150 Hz) power in both the STN and the sensorimotor cortex that began before speech onset and persisted for the duration of speech articulation. As expected from previous reports, in the sensorimotor cortex, the primary articulators involved in the production of the initial consonants were topographically represented by high-gamma activity. We found that STN high-gamma activity also demonstrated specificity for the primary articulator, although no clear topography was observed. In general, subthalamic high-gamma activity varied along the ventral-dorsal trajectory of the electrodes, with greater high-gamma power recorded in the dorsal locations of the STN. Interestingly, the majority of significant articulator-discriminative activity in the STN occurred before that in sensorimotor cortex. These results demonstrate that articulator-specific speech information is contained within high-gamma activity of the STN, but with different spatial and temporal organization compared with similar information encoded in the sensorimotor cortex. SIGNIFICANCE STATEMENT: Clinical and electrophysiological evidence suggest that the subthalamic nucleus (STN) is involved in speech; however, this important basal ganglia node is ignored in current models of speech production. We previously showed that STN neurons differentially encode early and late aspects of speech production, but no previous studies have examined subthalamic functional organization for speech articulators. Using simultaneous LFP recordings from the sensorimotor cortex and the STN in patients with Parkinson's disease undergoing deep-brain stimulation surgery, we discovered that STN high-gamma activity tracks speech production at the level of vocal tract articulators before the onset of vocalization and often before related cortical encoding.
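High-gamma (60-150 Hz) power of the kind analyzed here is commonly estimated by band-pass filtering and taking the analytic amplitude. The sketch below shows that generic step on a synthetic signal; it is not the authors' processing code, and the filter order and toy burst are arbitrary.
```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(x, fs, band=(60.0, 150.0)):
    """Band-limited (60-150 Hz) analytic amplitude envelope of one channel."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)              # zero-phase band-pass
    return np.abs(hilbert(filtered))          # instantaneous amplitude

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# toy signal: 100 Hz burst between 0.8 and 1.2 s on top of white noise
x = np.random.randn(t.size) + 3 * np.sin(2 * np.pi * 100 * t) * ((t > 0.8) & (t < 1.2))
env = high_gamma_power(x, fs)
print("burst/baseline envelope ratio:", env[(t > 0.8) & (t < 1.2)].mean() / env[t < 0.5].mean())
```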
Collapse
Affiliation(s)
- Anna Chrabaszcz
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Wolf-Julian Neumann
- Movement Disorder and Neuromodulation Unit, Department of Neurology, Campus Mitte, Charité, Universitätsmedizin Berlin, Berlin, Germany 10117
| | - Otilia Stretcu
- Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
| | - Witold J Lipski
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
| | - Alan Bush
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
- Department of Physics, FCEN, University of Buenos Aires and IFIBA-CONICET, Buenos Aires, Argentina 1428
| | - Christina A Dastolfo-Hromack
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
| | - Dengyu Wang
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
- School of Medicine, Tsinghua University, Beijing, China 100084
| | - Donald J Crammond
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
| | - Susan Shaiman
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Michael W Dickey
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
| | - Lori L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
| | - Robert S Turner
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
- University of Pittsburgh Brain Institute, Pittsburgh, Pennsylvania 15213
| | - Julie A Fiez
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- University of Pittsburgh Brain Institute, Pittsburgh, Pennsylvania 15213
| | - R Mark Richardson
- Brain Modulation Laboratory, Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213,
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania 15213
- University of Pittsburgh Brain Institute, Pittsburgh, Pennsylvania 15213
| |
Collapse
|
22
|
Chen CF, Kreutz-Delgado K, Sereno MI, Huang RS. Unraveling the spatiotemporal brain dynamics during a simulated reach-to-eat task. Neuroimage 2019; 185:58-71. [PMID: 30315910 PMCID: PMC6325169 DOI: 10.1016/j.neuroimage.2018.10.028] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 09/11/2018] [Accepted: 10/09/2018] [Indexed: 01/17/2023] Open
Abstract
The reach-to-eat task involves a sequence of action components including looking, reaching, grasping, and feeding. While cortical representations of individual action components have been mapped in human functional magnetic resonance imaging (fMRI) studies, little is known about the continuous spatiotemporal dynamics among these representations during the reach-to-eat task. In a periodic event-related fMRI experiment, subjects were scanned while they reached toward a food image, grasped the virtual food, and brought it to their mouth within each 16-s cycle. Fourier-based analysis of fMRI time series revealed periodic signals and noise distributed across the brain. Independent component analysis was used to remove periodic or aperiodic motion artifacts. Time-frequency analysis was used to analyze the temporal characteristics of periodic signals in each voxel. Circular statistics was then used to estimate mean phase angles of periodic signals and select voxels based on the distribution of phase angles. By sorting mean phase angles across regions, we were able to show the real-time spatiotemporal brain dynamics as continuous traveling waves over the cortical surface. The activation sequence consisted of approximately the following stages: (1) stimulus related activations in occipital and temporal cortices; (2) movement planning related activations in dorsal premotor and superior parietal cortices; (3) reaching related activations in primary sensorimotor cortex and supplementary motor area; (4) grasping related activations in postcentral gyrus and sulcus; (5) feeding related activations in orofacial areas. These results suggest that phase-encoded design and analysis can be used to unravel sequential activations among brain regions during a simulated reach-to-eat task.
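The phase-encoded analysis described above boils down to estimating each voxel's response phase at the task frequency (one cycle per 16 s) and summarizing phases across voxels with circular statistics. The toy sketch below illustrates both steps; the TR, run length, and response delay are invented for the example.
```python
import numpy as np

def task_phase(ts, tr_s=2.0, cycle_s=16.0):
    """Phase (radians) of a voxel time series at the task frequency (1 cycle per 16 s)."""
    n = ts.shape[-1]
    freqs = np.fft.rfftfreq(n, d=tr_s)
    k = np.argmin(np.abs(freqs - 1.0 / cycle_s))     # frequency bin closest to the task frequency
    return np.angle(np.fft.rfft(ts, axis=-1)[..., k])

def circular_mean(phases):
    """Mean phase angle of a set of voxels (circular statistics)."""
    return np.angle(np.mean(np.exp(1j * phases)))

# toy voxel responding 4 s after cycle onset: 128 volumes at TR = 2 s
tr, n = 2.0, 128
t = np.arange(n) * tr
ts = np.cos(2 * np.pi * (t - 4.0) / 16.0) + 0.3 * np.random.randn(n)
print(task_phase(ts, tr), circular_mean(np.array([0.1, 0.2, -0.05])))
```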
Collapse
Affiliation(s)
- Ching-Fu Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, 92093, USA
| | - Kenneth Kreutz-Delgado
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, 92093, USA; Institute for Neural Computation, University of California, San Diego, La Jolla, CA, 92093, USA
| | - Martin I Sereno
- Department of Psychology and Neuroimaging Center, San Diego State University, San Diego, CA, 92182, USA; Experimental Psychology, University College London, London, WC1H 0AP, UK
| | - Ruey-Song Huang
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, 92093, USA.
| |
Collapse
|
23
|
Abstract
PubMed contains more than 27 million documents, and this number is growing at an estimated 4% per year. Even within specialized topics, it is no longer possible for a researcher to read any field in its entirety, and thus nobody has a complete picture of the scientific knowledge in any given field at any time. Text mining provides a means to automatically read this corpus and to extract the relations found therein as structured information. Having data in a structured format is a huge boon for computational efforts to access, cross reference, and mine the data stored therein. This is increasingly useful as biological research is becoming more focused on systems and multi-omics integration. This chapter provides an overview of the steps that are required for text mining: tokenization, named entity recognition, normalization, event extraction, and benchmarking. It discusses a variety of approaches to these tasks and then goes into detail on how to prepare data for use specifically with the JensenLab tagger. This software uses a dictionary-based approach and provides the text mining evidence for STRING and several other databases.
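The dictionary-based approach described here can be illustrated in a few lines of Python: tokenize, greedily match the longest dictionary phrase, and normalize it to an identifier. The dictionary entries and "ANAT:" identifiers below are placeholders invented for the example, and the sketch omits the orthographic variant expansion, stop-word handling, and speed optimizations of a real tagger such as the JensenLab tagger.
```python
import re

# Hypothetical mini-dictionary mapping surface forms to normalized identifiers
DICTIONARY = {
    "larynx": "ANAT:0001",
    "motor cortex": "ANAT:0002",
    "primary motor cortex": "ANAT:0002",
}
MAX_NGRAM = 3

def tokenize(text):
    return re.findall(r"[A-Za-z0-9']+", text.lower())

def tag(text):
    """Longest-match dictionary tagging: returns (surface form, identifier) pairs."""
    tokens = tokenize(text)
    hits, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_NGRAM, len(tokens) - i), 0, -1):   # prefer longer matches
            phrase = " ".join(tokens[i:i + n])
            if phrase in DICTIONARY:
                hits.append((phrase, DICTIONARY[phrase]))
                i += n
                break
        else:
            i += 1
    return hits

print(tag("The primary motor cortex controls the larynx."))
# [('primary motor cortex', 'ANAT:0002'), ('larynx', 'ANAT:0001')]
```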
Collapse
Affiliation(s)
- Helen V Cook
- School of Clinical Medicine, University of Cambridge, Cambridge, UK; Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Lars Juhl Jensen
- Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
| |
Collapse
|
24
|
Anatomical and microstructural determinants of hippocampal subfield functional connectome embedding. Proc Natl Acad Sci U S A 2018; 115:10154-10159. [PMID: 30249658 PMCID: PMC6176604 DOI: 10.1073/pnas.1803667115] [Citation(s) in RCA: 157] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Significance: Despite the progress made by postmortem cytoarchitectonic analyses and animal electrophysiology in studying the structure and function of the hippocampal circuitry, complex anatomical challenges have prevented a detailed understanding of its substructural organization in living humans. By integrating high-resolution structural and resting-state functional neuroimaging, we demonstrate two main axes of substructural organization in the human hippocampus: one that respects its long axis and a second that follows patterns of hippocampal infolding and significantly correlates with intracortical microstructure. Given the importance of the hippocampus for cognition, affect, and disease, our results provide an integrated hippocampal coordinate system that is relevant to cognitive neuroscience, clinical neuroimaging, and network neuroscience.
The hippocampus plays key roles in cognition and affect and serves as a model system for structure/function studies in animals. So far, its complex anatomy has challenged investigations targeting its substructural organization in humans. State-of-the-art MRI offers the resolution and versatility to identify hippocampal subfields, assess its microstructure, and study topographical principles of its connectivity in vivo. We developed an approach to unfold the human hippocampus and examine spatial variations of intrinsic functional connectivity in a large cohort of healthy adults. In addition to mapping common and unique connections across subfields, we identified two main axes of subregional connectivity transitions. An anterior/posterior gradient followed long-axis landmarks and metaanalytical findings from task-based functional MRI, while a medial/lateral gradient followed hippocampal infolding and correlated with proxies of cortical myelin. Findings were consistent in an independent sample and highly stable across resting-state scans. Our results provide robust evidence for long-axis specialization in the resting human hippocampus and suggest an intriguing interplay between connectivity and microstructure.
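Connectivity "gradients" of the kind reported here are typically obtained by embedding a region-by-region connectivity matrix. The generic sketch below uses a plain eigendecomposition of profile similarity on random data, purely to show the shape of the computation; the study itself relies on a dedicated nonlinear embedding, and the matrix size here is arbitrary.
```python
import numpy as np

def connectivity_gradients(conn, n_gradients=2):
    """Leading axes of variation ('gradients') across regions' connectivity profiles.

    Generic sketch: take the similarity between rows of the connectivity matrix,
    double-center it, and keep the top eigenvectors.
    """
    sim = np.corrcoef(conn)                               # similarity of connectivity profiles
    centered = sim - sim.mean(0) - sim.mean(1)[:, None] + sim.mean()
    vals, vecs = np.linalg.eigh(centered)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_gradients]]                   # each column orders regions along one axis

rng = np.random.default_rng(3)
conn = np.abs(rng.standard_normal((100, 100)))
conn = (conn + conn.T) / 2                                # toy symmetric connectivity matrix
print(connectivity_gradients(conn).shape)                 # (100, 2)
```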
Collapse
|
25
|
Chartier J, Anumanchipalli GK, Johnson K, Chang EF. Encoding of Articulatory Kinematic Trajectories in Human Speech Sensorimotor Cortex. Neuron 2018; 98:1042-1054.e4. [PMID: 29779940 PMCID: PMC5992088 DOI: 10.1016/j.neuron.2018.04.031] [Citation(s) in RCA: 109] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Revised: 12/16/2017] [Accepted: 04/22/2018] [Indexed: 11/19/2022]
Abstract
When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural sentences that included sounds spanning the entire English phonetic inventory. We used deep neural networks to infer speakers' articulator movements from produced speech acoustics. Individual electrodes encoded a diversity of articulatory kinematic trajectories (AKTs), each revealing coordinated articulator movements toward specific vocal tract shapes. AKTs captured a wide range of movement types, yet they could be differentiated by the place of vocal tract constriction. Additionally, AKTs manifested out-and-back trajectories with harmonic oscillator dynamics. While AKTs were functionally stereotyped across different sentences, context-dependent encoding of preceding and following movements during production of the same phoneme demonstrated the cortical representation of coarticulation. Articulatory movements encoded in sensorimotor cortex give rise to the complex kinematics underlying continuous speech production. VIDEO ABSTRACT.
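Encoding analyses of this kind often fit a time-lagged linear model from articulator kinematics to each electrode's high-gamma activity, so that the fitted weights approximate that electrode's preferred kinematic trajectory. The sketch below does this with ridge regression on synthetic data; the lag count, feature count, and regularization strength are arbitrary choices for illustration, not the authors' settings.
```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(kinematics, n_lags):
    """Stack time-lagged copies of articulator features (time x features)."""
    T, F = kinematics.shape
    X = np.zeros((T, F * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = kinematics[:T - lag]
    return X

# toy data: 2000 samples of 12 articulator features and one electrode's high-gamma signal
rng = np.random.default_rng(4)
kin = rng.standard_normal((2000, 12))
hg = lagged_design(kin, 5) @ rng.standard_normal(60) * 0.1 + rng.standard_normal(2000)

X = lagged_design(kin, n_lags=5)
model = Ridge(alpha=100.0).fit(X, hg)
akt = model.coef_.reshape(5, 12)     # lag x articulator weights: one electrode's kinematic filter
print(akt.shape)
```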
Collapse
Affiliation(s)
- Josh Chartier
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Joint Program in Bioengineering, University of California, Berkeley and University of California, San Francisco, Berkeley, CA 94720, USA
| | - Gopala K Anumanchipalli
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA
| | - Keith Johnson
- Department of Linguistics, University of California, Berkeley, Berkeley, CA 94720, USA
| | - Edward F Chang
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA.
| |
Collapse
|
26
|
Belyk M, Johnson JF, Kotz SA. Poor neuro-motor tuning of the human larynx: a comparison of sung and whistled pitch imitation. ROYAL SOCIETY OPEN SCIENCE 2018; 5:171544. [PMID: 29765635 PMCID: PMC5936900 DOI: 10.1098/rsos.171544] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 03/13/2018] [Indexed: 06/08/2023]
Abstract
Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual's habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sung more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. The laryngeal muscles that control voice production are under less precise control than the oral muscles that are involved in whistling. This imprecision may be due to the relatively recent evolution of volitional laryngeal-motor control in humans, which may be tuned just well enough for the coarse modulation of vocal-pitch in speech.
Collapse
Affiliation(s)
- Michel Belyk
- Bloorview Research Institute, 150 Kilgour Road, Toronto, Canada M4G 1R8
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
| | - Joseph F. Johnson
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
| | - Sonja A. Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
27
|
Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Vocal Tract Images Reveal Neural Representations of Sensorimotor Transformation During Speech Imitation. Cereb Cortex 2018; 27:3064-3079. [PMID: 28334401 PMCID: PMC5939209 DOI: 10.1093/cercor/bhx056] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2016] [Indexed: 12/23/2022] Open
Abstract
Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using test representational similarity analysis (RSA) models constructed from participants’ vocal tract images and from stimulus formant distances, we found that RSA searchlight analyses of fMRI data showed either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters, during prearticulatory ST.
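Representational similarity analysis, as used in the searchlight analyses above, compares a model dissimilarity matrix (here it would come from vocal tract measurements or formant distances) with a neural dissimilarity matrix. The toy example below uses random data and arbitrary condition and feature counts, purely to show the computation.
```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# toy data: 4 vowel conditions
rng = np.random.default_rng(5)
vocal_tract = rng.standard_normal((4, 20))   # e.g., features derived from vocal tract images
neural = rng.standard_normal((4, 300))       # voxel pattern per condition in one searchlight

model_rdm = pdist(vocal_tract, metric="euclidean")     # condition-by-condition dissimilarities
neural_rdm = pdist(neural, metric="correlation")

rho, _ = spearmanr(model_rdm, neural_rdm)    # second-order similarity between the two RDMs
print(f"model-neural RSA correlation: {rho:.2f}")
```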
Collapse
Affiliation(s)
- Daniel Carey
- Department of Psychology, Royal Holloway, University of London, London TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, London TW20 0EX, UK; The Irish Longitudinal Study on Ageing (TILDA), Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
| | - Marc E Miquel
- William Harvey Research Institute, Queen Mary, University of London, London EC1M 6BQ, UK; Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, UK
| | - Bronwen G Evans
- Department of Speech, Hearing & Phonetic Sciences, University College London, London WC1E 6BT, UK
| | - Patti Adank
- Department of Speech, Hearing & Phonetic Sciences, University College London, London WC1E 6BT, UK
| | - Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, London TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, London TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, UK
| |
Collapse
|
28
|
Carey D, Caprini F, Allen M, Lutti A, Weiskopf N, Rees G, Callaghan MF, Dick F. Quantitative MRI provides markers of intra-, inter-regional, and age-related differences in young adult cortical microstructure. Neuroimage 2017; 182:429-440. [PMID: 29203455 PMCID: PMC6189523 DOI: 10.1016/j.neuroimage.2017.11.066] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Revised: 10/19/2017] [Accepted: 11/29/2017] [Indexed: 12/17/2022] Open
Abstract
Measuring the structural composition of the cortex is critical to understanding typical development, yet few investigations in humans have charted markers in vivo that are sensitive to tissue microstructural attributes. Here, we used a well-validated quantitative MR protocol to measure four parameters (R1, MT, R2*, PD*) that differ in their sensitivity to facets of the tissue microstructural environment (R1, MT: myelin, macromolecular content; R2*: myelin, paramagnetic ions, i.e., iron; PD*: free water content). Mapping these parameters across cortical regions in a young adult cohort (18–39 years, N = 93) revealed expected patterns of increased macromolecular content as well as reduced tissue water content in primary and primary adjacent cortical regions. Mapping across cortical depth within regions showed decreased expression of myelin and related processes – but increased tissue water content – when progressing from the grey/white to the grey/pial boundary, in all regions. Charting developmental change in cortical microstructure cross-sectionally, we found that parameters with sensitivity to tissue myelin (R1 & MT) showed linear increases with age across frontal and parietal cortex (change 0.5–1.0% per year). Overlap of robust age effects for both parameters emerged in left inferior frontal, right parietal and bilateral pre-central regions. Our findings afford an improved understanding of ontogeny in early adulthood and offer normative quantitative MR data for inter- and intra-cortical composition, which may be used as benchmarks in further studies.
Highlights: We mapped multi-parameter maps (MPMs) across and within cortical regions. We charted age effects (ages 18–39) on myelin and related processes. MPMs sensitive to myelin (R1, MT) showed elevated values in primary areas over most cortical depths. R2* map foci tended to overlap MPMs sensitive to myelin (R1, MT). R1 and MT increased with age (0.5–1.0% per year) at mid-depth in frontal and parietal cortex.
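The age effects quoted above (roughly 0.5-1.0% per year) correspond to a simple computation: the linear slope of a parameter against age, expressed relative to the cohort mean. A toy example with simulated R1 values (the numbers are invented for illustration):
```python
import numpy as np

def percent_change_per_year(age, param):
    """Slope of a qMRI parameter vs. age, expressed as % of the cohort mean per year."""
    slope, intercept = np.polyfit(age, param, 1)
    return 100.0 * slope / param.mean()

rng = np.random.default_rng(6)
age = rng.uniform(18, 39, size=93)
r1 = 0.60 + 0.004 * (age - age.mean()) + 0.01 * rng.standard_normal(93)   # toy R1 values (1/s)
print(f"{percent_change_per_year(age, r1):.2f}% per year")                # roughly 0.6-0.7
```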
Collapse
Affiliation(s)
- Daniel Carey
- The Irish Longitudinal Study on Aging (TILDA), Trinity College Dublin, Dublin 2, Ireland; Centre for Brain and Cognitive Development (CBCD), Birkbeck College, University of London, UK.
| | - Francesco Caprini
- Centre for Brain and Cognitive Development (CBCD), Birkbeck College, University of London, UK
| | - Micah Allen
- Institute of Cognitive Neuroscience, University College London, Queen Square, London, UK; Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, UK
| | - Antoine Lutti
- Institute of Cognitive Neuroscience, University College London, Queen Square, London, UK; Laboratoire de Recherche en Neuroimagerie - LREN, Departement des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | - Nikolaus Weiskopf
- Institute of Cognitive Neuroscience, University College London, Queen Square, London, UK; Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Geraint Rees
- Institute of Cognitive Neuroscience, University College London, Queen Square, London, UK; Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, UK
| | - Martina F Callaghan
- Institute of Cognitive Neuroscience, University College London, Queen Square, London, UK
| | - Frederic Dick
- Centre for Brain and Cognitive Development (CBCD), Birkbeck College, University of London, UK; Birkbeck/UCL Centre for Neuroimaging (BUCNI), 26 Bedford Way, London, UK
| |
Collapse
|
29
|
Crops that feed the world: Production and improvement of cassava for food, feed, and industrial uses. Food Secur 2017. [DOI: 10.1007/s12571-017-0717-8] [Citation(s) in RCA: 62] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
30
|
Coakeley S, Strafella AP. Imaging tau pathology in Parkinsonisms. NPJ Parkinsons Dis 2017; 3:22. [PMID: 28685158 PMCID: PMC5491530 DOI: 10.1038/s41531-017-0023-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 06/01/2017] [Accepted: 06/02/2017] [Indexed: 12/23/2022] Open
Abstract
The recent development of positron emission tomography radiotracers targeting pathological tau in vivo has led to numerous human trials. While investigations have primarily focused on the most common tauopathy, Alzheimer's disease, it is imperative that testing also be performed in parkinsonian tauopathies, such as progressive supranuclear palsy, corticobasal degeneration, and frontotemporal dementia and parkinsonism linked to chromosome 17. Tau aggregates differ in isoforms and conformations across disorders, and as a result one radiotracer may not be appropriate for all tauopathies. In this review, we evaluate the preclinical and clinical reports of current tau radiotracers in parkinsonian disorders. These radiotracers include [18F]FDDNP, [11C]PBB3, [18F]THK-5317, [18F]THK-5351, and [18F]AV-1451 ([18F]T807). There are concerns of off-target binding with [18F]FDDNP and [11C]PBB3, which may decrease the signal-to-noise ratio and thereby the efficacy of these radiotracers. Testing of [18F]THK-5317, [18F]THK-5351, and [18F]AV-1451 has been performed in progressive supranuclear palsy, while [18F]THK-5317 and [18F]AV-1451 have also been tested in corticobasal degeneration patients. [18F]THK-5317 and [18F]THK-5351 have demonstrated binding in brain regions known to be afflicted with pathological tau; however, due to small sample sizes these studies should be replicated before concluding their appropriateness in parkinsonian tauopathies. [18F]AV-1451 has demonstrated mixed results in progressive supranuclear palsy patients, and post-mortem analysis shows minimal to no binding in non-Alzheimer's disease tauopathy brain slices.
Collapse
Affiliation(s)
- Sarah Coakeley
- Research Imaging Centre, Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto, ON Canada
- Division of Brain, Imaging and Behaviour—Systems Neuroscience, Krembil Research Institute, UHN, University of Toronto, Toronto, ON Canada
| | - Antonio P. Strafella
- Research Imaging Centre, Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto, ON Canada
- Division of Brain, Imaging and Behaviour—Systems Neuroscience, Krembil Research Institute, UHN, University of Toronto, Toronto, ON Canada
- Morton and Gloria Shulman Movement Disorder Unit and E.J. Safra Parkinson Disease Program, Neurology Division, Dept. of Medicine, Toronto Western Hospital, UHN, University of Toronto, Toronto, ON Canada
| |
Collapse
|