1
Tabari F, Patron C, Cryer H, Johari K. HD-tDCS over left supplementary motor area differentially modulated neural correlates of motor planning for speech vs. limb movement. Int J Psychophysiol 2024; 201:112357. PMID: 38701898. DOI: 10.1016/j.ijpsycho.2024.112357.
Abstract
The supplementary motor area (SMA) is implicated in the planning, execution, and control of speech production and limb movement. The SMA is among the putative generators of pre-movement EEG activity, which is thought to be a neural marker of motor planning. In neurological conditions such as Parkinson's disease, abnormal pre-movement neural activity within the SMA has been reported during speech production and limb movement. This region is therefore a potential target for non-invasive brain stimulation for both speech and limb movement. The present study took an initial step in examining the application of high-definition transcranial direct current stimulation (HD-tDCS) over the left SMA in 24 neurologically intact adults. Event-related potentials (ERPs) were then recorded while participants performed speech and limb movement tasks. Participants' data were collected in three counterbalanced sessions: anodal, cathodal, and sham HD-tDCS. Relative to sham stimulation, anodal, but not cathodal, HD-tDCS significantly attenuated ERPs prior to the onset of speech production. In contrast, neither anodal nor cathodal HD-tDCS significantly modulated ERPs prior to the onset of limb movement compared to sham stimulation. These findings show that neural correlates of motor planning can be modulated using HD-tDCS over the left SMA in neurotypical adults, with translational implications for neurological conditions that impair speech production. The absence of a stimulation effect on ERPs prior to limb movement onset was unexpected, and future studies are warranted to explore this effect further.
Affiliation(s)
- Fatemeh Tabari
- Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Celeste Patron
- Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Hope Cryer
- Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Karim Johari
- Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
2
Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. J Speech Lang Hear Res 2024; 67:1424-1460. PMID: 38593006. DOI: 10.1044/2024_jslhr-23-00575.
Abstract
PURPOSE: The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD: The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used, covering the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS: The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
3
Upton E, Doogan C, Fleming V, Leyton PQ, Barbera D, Zeidman P, Hope T, Latham W, Coley-Fisher H, Price C, Crinion J, Leff A. Efficacy of a gamified digital therapy for speech production in people with chronic aphasia (iTalkBetter): behavioural and imaging outcomes of a phase II item-randomised clinical trial. EClinicalMedicine 2024; 70:102483. PMID: 38685927. PMCID: PMC11056404. DOI: 10.1016/j.eclinm.2024.102483.
Abstract
Background: Aphasia is among the most debilitating of symptoms affecting stroke survivors. Speech and language therapy (SLT) is effective, but many hours of practice are required to make clinically meaningful gains. One solution to this 'dosage' problem is to automate therapeutic approaches via self-supporting apps so people with aphasia (PWA) can amass practice as it suits them. However, response to therapy is variable and no clinical trial has yet identified the key brain regions required to engage with word-retrieval therapy. Methods: Between Sep 7, 2020 and Mar 1, 2022 at University College London in the UK, we carried out a phase II, item-randomised clinical trial in 27 PWA using a novel, self-led app, 'iTalkBetter', which utilises confrontation naming therapy. Unlike previously reported apps, it has a real-time utterance verification system that drives its adaptive therapy algorithm. Therapy items were individually randomised to provide balanced lists of 'trained' and 'untrained' items matched on key psycholinguistic variables and baseline performance. PWA practised with iTalkBetter over a 6-week therapy block. Structural and functional MRI data were collected to identify therapy-related changes in brain states. A repeated-measures design was employed. The trial was registered at ClinicalTrials.gov (NCT04566081). Findings: iTalkBetter significantly improved naming ability by 13% for trained items compared with no change for untrained items, an average increase of 29 words (SD = 26) per person; beneficial effects persisted at three months. PWA's propositional speech also significantly improved. iTalkBetter use was associated with brain volume increases in right auditory and left anterior prefrontal cortices. Task-based fMRI identified dose-related activity in the right temporoparietal junction. Interpretation: Our findings suggest that iTalkBetter significantly improves PWA's naming ability on trained items. The effect size is similar to that of a previous RCT of computerised therapy, but this is the first study to show transfer to a naturalistic speaking task. iTalkBetter usage and dose produced observable changes in brain structure and function in key parts of the surviving language perception, production and control networks. iTalkBetter is being rolled out as an app for all PWA with anomia (https://www.ucl.ac.uk/icn/research/research-groups/neurotherapeutics/projects/digital-interventions-neuro-rehabilitation-0) so that they can increase their dosage of practice-based SLT. Funding: National Institute for Health and Care Research, Wellcome Centre for Human Neuroimaging.
Affiliation(s)
- Emily Upton
- UCL Queen Square Institute of Neurology, University College London, UK
- Institute of Cognitive Neuroscience, University College London, UK
- Department of Psychology and Language Sciences, University College London, UK
- Catherine Doogan
- UCL Queen Square Institute of Neurology, University College London, UK
- Institute of Cognitive Neuroscience, University College London, UK
- St George’s, University of London, UK
- Victoria Fleming
- Department of Psychology and Language Sciences, University College London, UK
- David Barbera
- Institute of Cognitive Neuroscience, University College London, UK
- Peter Zeidman
- Wellcome Centre for Human Neuroimaging, University College London, UK
- Tom Hope
- Wellcome Centre for Human Neuroimaging, University College London, UK
- Department of Psychology and Social Science, John Cabot University, Rome, Italy
- William Latham
- Department of Computing, Goldsmiths, University of London, UK
- Cathy Price
- Wellcome Centre for Human Neuroimaging, University College London, UK
- Jennifer Crinion
- Institute of Cognitive Neuroscience, University College London, UK
- Department of Psychology and Language Sciences, University College London, UK
- Alex Leff
- UCL Queen Square Institute of Neurology, University College London, UK
- Institute of Cognitive Neuroscience, University College London, UK
- University College London Hospitals NHS Trust, UK
4
Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024; 34:bhae087. PMID: 38494418. PMCID: PMC10944697. DOI: 10.1093/cercor/bhae087.
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. "The little girl was excited to lose her first tooth" - "Tha fittle girmn wam expited du roos har derst cooth"). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Royal Brisbane & Women’s Hospital, Building 71/918, Royal Brisbane & Women’s Hospital, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
5
Liuzzi AG, Meersmans K, Peeters R, De Deyne S, Dupont P, Vandenberghe R. Semantic representations in inferior frontal and lateral temporal cortex during picture naming, reading, and repetition. Hum Brain Mapp 2024; 45:e26603. PMID: 38339900. PMCID: PMC10836176. DOI: 10.1002/hbm.26603.
Abstract
Reading, naming, and repetition are classical neuropsychological tasks widely used in the clinic and in psycholinguistic research. While reading and repetition can be accomplished by following a direct or an indirect route, pictures can be named only by means of semantic mediation. Using fMRI multivariate pattern analysis, we evaluated whether this well-established fundamental difference at the cognitive level is associated at the brain level with a difference in the degree to which semantic representations are activated during these tasks. Semantic similarity between words was estimated based on a word association model. Twenty subjects participated in an event-related fMRI study in which the three tasks were presented in pseudo-random order. Linear discriminant analysis of fMRI patterns identified a set of regions that discriminate between words at a high level of word-specificity across tasks. Representational similarity analysis was used to determine whether semantic similarity was represented in these regions and whether this depended on the task performed. The similarity between neural patterns of the left Brodmann area 45 (BA45) and of the superior portion of the left supramarginal gyrus correlated with the similarity in meaning between entities during picture naming. In both regions, no significant effects were seen for repetition or reading. The semantic similarity effect during picture naming was significantly larger than the similarity effect during the two other tasks. In contrast, several regions, including the left anterior superior temporal gyrus and the left ventral BA44/frontal operculum, coded for semantic similarity in a task-independent manner. These findings provide new evidence for the dynamic, task-dependent nature of semantic representations in the left BA45 and a more task-independent nature of the representational activation in the lateral temporal cortex and ventral BA44/frontal operculum.
Affiliation(s)
- Antonietta Gabriella Liuzzi
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Karen Meersmans
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Ronald Peeters
- Radiology Department, University Hospitals Leuven, Leuven, Belgium
- Simon De Deyne
- School of Psychological Sciences, University of Melbourne, Melbourne, Australia
- Patrick Dupont
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Rik Vandenberghe
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Neurology Department, University Hospitals Leuven, Leuven, Belgium
6
Liu D, Chang Y, Dai G, Guo Z, Jones JA, Li T, Chen X, Chen M, Li J, Wu X, Liu P, Liu H. Right, but not left, posterior superior temporal gyrus is causally involved in vocal feedback control. Neuroimage 2023; 278:120282. PMID: 37468021. DOI: 10.1016/j.neuroimage.2023.120282.
Abstract
The posterior superior temporal gyrus (pSTG) has been implicated in the integration of auditory feedback with the motor system for controlling vocal production. However, whether and how the pSTG is causally involved in vocal feedback control remains unclear. To address this question, the present study selectively stimulated the left or right pSTG with continuous theta burst stimulation (c-TBS) in healthy participants, then used event-related potentials to investigate neurobehavioral changes in response to altered auditory feedback during vocal pitch regulation. The results showed that, compared to control (vertex) stimulation, c-TBS over the right pSTG led to smaller vocal compensations for pitch perturbations accompanied by smaller cortical N1 and larger P2 responses. Enhanced P2 responses received contributions from right-lateralized temporal and parietal regions as well as the insula, and were significantly correlated with suppressed vocal compensations. Surprisingly, these effects were not found when comparing c-TBS over the left pSTG with control stimulation. Our findings provide the first evidence for a causal relationship between the right, but not left, pSTG and auditory-motor integration for vocal pitch regulation. This supports a right-lateralized contribution of the pSTG not only to the bottom-up detection of vocal feedback errors but also to the top-down generation of motor commands for error correction.
Affiliation(s)
- Dongxu Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Yichen Chang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangyan Dai
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Zhiqiang Guo
- School of Computer, Zhuhai College of Science and Technology, Zhuhai, China
- Jeffery A Jones
- Department of Psychology and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario N2L 3C5, Canada
- Tingni Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Centre for Eye and Vision Research, 17W Science Park, Hong Kong SAR, China
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Mingyun Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jingting Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xiuqin Wu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
7
Avcu E, Newman O, Ahlfors SP, Gow DW. Neural evidence suggests phonological acceptability judgments reflect similarity, not constraint evaluation. Cognition 2023; 230:105322. PMID: 36370613. PMCID: PMC9712273. DOI: 10.1016/j.cognition.2022.105322.
Abstract
Acceptability judgments are a primary source of evidence in formal linguistic research. Within the generative linguistic tradition, these judgments are attributed to evaluation of novel forms based on implicit knowledge of rules or constraints governing well-formedness. In the domain of phonological acceptability judgments, other factors including ease of articulation and similarity to known forms have been hypothesized to influence evaluation. We used data-driven neural techniques to identify the relative contributions of these factors. Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data revealed patterns of interaction between brain regions that support explicit judgments of the phonological acceptability of spoken nonwords. Comparisons of data obtained with nonwords that varied in terms of onset consonant cluster attestation and acceptability revealed different cortical regions and effective connectivity patterns associated with phonological acceptability judgments. Attested forms produced stronger influences of brain regions implicated in lexical representation and sensorimotor simulation on acoustic-phonetic regions, whereas unattested forms produced stronger influence of phonological control mechanisms on acoustic-phonetic processing. Unacceptable forms produced widespread patterns of interaction consistent with attempted search or repair. Together, these results suggest that speakers' phonological acceptability judgments reflect lexical and sensorimotor factors.
Affiliation(s)
- Enes Avcu
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America.
- Olivia Newman
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States of America; Department of Radiology, Harvard Medical School, Boston, MA, United States of America
- David W Gow
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States of America; Department of Psychology, Salem State University, Salem, MA, United States of America; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA 02139, United States of America
8
Li Z, Hong B, Wang D, Nolte G, Engel AK, Zhang D. Speaker-listener neural coupling reveals a right-lateralized mechanism for non-native speech-in-noise comprehension. Cereb Cortex 2022; 33:3701-3714. PMID: 35975617. DOI: 10.1093/cercor/bhac302.
Abstract
While the increasingly globalized world has brought growing demand for non-native language communication, the prevalence of background noise in everyday life poses a great challenge to non-native speech comprehension. The present study employed an interbrain approach based on functional near-infrared spectroscopy (fNIRS) to explore how people adapt to comprehend non-native speech information in noise. A group of Korean participants who acquired Chinese as their non-native language was invited to listen to Chinese narratives at 4 noise levels (no noise, 2 dB, -6 dB, and -9 dB). These narratives were real-life stories spoken by native Chinese speakers. Processing of the non-native speech was associated with significant fNIRS-based listener-speaker neural couplings, mainly over the right hemisphere, at both the listener's and the speaker's sides. More importantly, the neural couplings from the listener's right superior temporal gyrus, right middle temporal gyrus, and right postcentral gyrus were found to be positively correlated with individual comprehension performance at the strongest noise level (-9 dB). These results provide interbrain evidence in support of a right-lateralized mechanism for non-native speech processing and suggest that both an auditory-based and a sensorimotor-based mechanism contributed to non-native speech-in-noise comprehension.
Affiliation(s)
- Zhuoran Li
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Bo Hong
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Daifa Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Guido Nolte
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
9
Stefaniak JD, Geranmayeh F, Lambon Ralph MA. The multidimensional nature of aphasia recovery post-stroke. Brain 2022; 145:1354-1367. PMID: 35265968. PMCID: PMC9128817. DOI: 10.1093/brain/awab377.
Abstract
Language is not a single function, but instead results from interactions between neural representations and computations that can be damaged independently of each other. Although there is now clear evidence that the language profile in post-stroke aphasia reflects graded variations along multiple underlying dimensions ('components'), it is still entirely unknown if these distinct language components have different recovery trajectories and rely on the same, or different, neural regions during aphasia recovery. Accordingly, this study examined whether language components in the subacute stage: (i) mirror those observed in the chronic stage; (ii) recover together in a homogeneous manner; and (iii) have recovery trajectories that relate to changing activation in distinct or overlapping underlying brain regions. We analysed longitudinal data from 26 individuals with mild-moderate aphasia following left hemispheric infarct who underwent functional MRI and behavioural testing at ∼2 weeks and ∼4 months post-stroke. The language profiles in early post-stroke aphasia reflected three orthogonal principal components consisting of fluency, semantic/executive function and phonology. These components did not recover in a singular, homogeneous manner; rather, their longitudinal trajectories were uncorrelated, suggesting that aphasia recovery is heterogeneous and multidimensional. Mean regional brain activation during overt speech production in unlesioned areas was compared with patient scores on the three principal components of language at both the early and late time points. In addition, the change in brain activation over time was compared with the change on each of the principal component scores, both before and after controlling for baseline scores. We found that different language components were associated with changing activation in multiple, non-overlapping bilateral brain regions during aphasia recovery. 
Specifically, fluency recovery was associated with increasing activation in bilateral middle frontal gyri and right temporo-occipital middle temporal gyrus; semantic/executive recovery was associated with reducing activation in bilateral anterior temporal lobes; while phonology recovery was associated with reducing activation in bilateral precentral gyri, dorso-medial frontal poles and the precuneus. Overlapping clusters in the ventromedial prefrontal cortex were positively associated with fluency recovery but negatively associated with semantic/executive and phonology recovery. This combination of detailed behavioural and functional MRI data provides novel insights into the neural basis of aphasia recovery. Because different aspects of language seem to rely on different neural regions for recovery, treatment strategies that target the same neural region in all stroke survivors with aphasia might be entirely ineffective or even impair recovery, depending on the specific language profile of each individual patient.
Affiliation(s)
- James D Stefaniak
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
- Department of Psychiatry, University of Cambridge, Cambridge CB2 0SZ, UK
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester Academic Health Science Centre, Manchester M13 9GB, UK
- Fatemeh Geranmayeh
- Computational Cognitive and Clinical Neuroimaging Laboratory, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London W12 0NN, UK
10
Yamamoto AK, Sanjuán A, Pope R, Parker Jones O, Hope TMH, Prejawa S, Oberhuber M, Mancini L, Ekert JO, Garjardo-Vidal A, Creasey M, Yousry TA, Green DW, Price CJ. The Effect of Right Temporal Lobe Gliomas on Left and Right Hemisphere Neural Processing During Speech Perception and Production Tasks. Front Hum Neurosci 2022; 16:803163. PMID: 35652007. PMCID: PMC9148966. DOI: 10.3389/fnhum.2022.803163.
Abstract
Using fMRI, we investigated how right temporal lobe gliomas affecting the posterior superior temporal sulcus alter neural processing observed during speech perception and production tasks. Behavioural language testing showed that three pre-operative neurosurgical patients with grade 2, grade 3 or grade 4 tumours had the same pattern of mild language impairment in the domains of object naming and written word comprehension. When matching heard words for semantic relatedness (a speech perception task), these patients showed under-activation in the tumour-infiltrated right superior temporal lobe compared to 61 neurotypical participants and 16 patients with tumours that preserved the right postero-superior temporal lobe, with enhanced activation within the (tumour-free) contralateral left superior temporal lobe. In contrast, when correctly naming objects (a speech production task), the patients with right postero-superior temporal lobe tumours showed higher activation than both control groups in the same right postero-superior temporal lobe region that was under-activated during auditory semantic matching. The task-dependent pattern of under-activation during the auditory speech task and over-activation during object naming was also observed in eight stroke patients with right hemisphere infarcts that affected the right postero-superior temporal lobe compared to eight stroke patients with right hemisphere infarcts that spared it. These task-specific and site-specific cross-pathology effects highlight the importance of the right temporal lobe for language processing and motivate further study of how right temporal lobe tumours affect language performance and neural reorganisation. These findings may have important implications for the surgical management of these patients, as knowledge of the regions showing functional reorganisation may help to avoid their inadvertent damage during neurosurgery.
Affiliation(s)
- Adam Kenji Yamamoto
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- *Correspondence: Adam Kenji Yamamoto
- Ana Sanjuán
- Neuropsychology and Functional Imaging Group, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castellón de La Plana, Spain
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Rebecca Pope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Oiwi Parker Jones
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- FMRIB Centre and Jesus College, University of Oxford, Oxford, United Kingdom
- Thomas M. H. Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Susan Prejawa
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Faculty of Medicine, Collaborative Research Centre 1052 “Obesity Mechanisms”, University Leipzig, Leipzig, Germany
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marion Oberhuber
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Laura Mancini
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Justyna O. Ekert
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Andrea Gajardo-Vidal
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Megan Creasey
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Tarek A. Yousry
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- David W. Green
- Experimental Psychology, University College London, London, United Kingdom
- Cathy J. Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom

11
Ekert JO, Gajardo-Vidal A, Lorca-Puls DL, Hope TMH, Dick F, Crinion JT, Green DW, Price CJ. Dissociating the functions of three left posterior superior temporal regions that contribute to speech perception and production. Neuroimage 2021; 245:118764. [PMID: 34848301 PMCID: PMC9125162 DOI: 10.1016/j.neuroimage.2021.118764] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 11/15/2021] [Accepted: 11/24/2021] [Indexed: 11/28/2022] Open
Abstract
Prior studies have shown that the left posterior superior temporal sulcus (pSTS) and left temporo-parietal junction (TPJ) both contribute to phonological short-term memory, speech perception and speech production. Here, by conducting a within-subjects multi-factorial fMRI study, we dissociate the response profiles of these regions and a third region – the anterior ascending terminal branch of the left superior temporal sulcus (atSTS), which lies dorsal to pSTS and ventral to TPJ. First, we show that each region was more activated by (i) 1-back matching on visually presented verbal stimuli (words or pseudowords) compared to 1-back matching on visually presented non-verbal stimuli (pictures of objects or non-objects), and (ii) overt speech production than 1-back matching, across 8 types of stimuli (visually presented words, pseudowords, objects and non-objects, and aurally presented words, pseudowords, object sounds and meaningless hums). The response properties of the three regions dissociated within the auditory modality. In left TPJ, activation was higher for auditory stimuli that were non-verbal (sounds of objects or meaningless hums) compared to verbal (words and pseudowords), irrespective of task (speech production or 1-back matching). In left pSTS, activation was higher for non-semantic stimuli (pseudowords and hums) than semantic stimuli (words and object sounds) on the dorsal pSTS surface (dpSTS), irrespective of task. In left atSTS, activation was not sensitive to either semantic or verbal content. The contrasting response properties of left TPJ, dpSTS and atSTS were cross-validated in an independent sample of 59 participants, using region-by-condition interactions. We also show that each region participates in non-overlapping networks of frontal, parietal and cerebellar regions. Our results challenge previous claims about functional specialisation in the left posterior superior temporal lobe and motivate future studies to determine the timing and directionality of information flow in the brain networks involved in speech perception and production.
Affiliation(s)
- Justyna O Ekert
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom.
- Andrea Gajardo-Vidal
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Diego L Lorca-Puls
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Thomas M H Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Fred Dick
- Department of Experimental Psychology, University College London, London, United Kingdom; Department of Psychological Sciences, Birkbeck University of London, London, United Kingdom
- Jennifer T Crinion
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- David W Green
- Department of Experimental Psychology, University College London, London, United Kingdom
- Cathy J Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom

12
LaCroix AN, James E, Rogalsky C. Neural Resources Supporting Language Production vs. Comprehension in Chronic Post-stroke Aphasia: A Meta-Analysis Using Activation Likelihood Estimates. Front Hum Neurosci 2021; 15:680933. [PMID: 34759804 PMCID: PMC8572938 DOI: 10.3389/fnhum.2021.680933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 09/22/2021] [Indexed: 02/04/2023] Open
Abstract
In post-stroke aphasia, language tasks recruit a combination of residual regions within the canonical language network, as well as regions outside of it in the left and right hemispheres. However, there is a lack of consensus as to how the neural resources engaged by language production and comprehension following a left hemisphere stroke differ from one another and from controls. The present meta-analysis used activation likelihood estimates to aggregate across 44 published fMRI and PET studies to characterize the functional reorganization patterns for expressive and receptive language processes in persons with chronic post-stroke aphasia (PWA). Our results in part replicate previous meta-analyses: we find that PWA activate residual regions within the left lateralized language network, regardless of task. Our results extend this work to show differential recruitment of the left and right hemispheres during language production and comprehension in PWA. First, we find that PWA engage left perilesional regions during language comprehension, and that the extent of this activation is likely driven by stimulus type and domain-general cognitive resources needed for task completion. In contrast to comprehension, language production was associated with activation of the right frontal and temporal cortices. Further analyses linked right hemisphere regions involved in motor speech planning for language production with successful naming in PWA, while unsuccessful naming was associated with the engagement of the right inferior frontal gyrus, a region often implicated in domain-general cognitive processes. While the within-group findings indicate that the engagement of the right hemisphere during language tasks in post-stroke aphasia differs for expressive vs. receptive tasks, the overall lack of major between-group differences between PWA and controls implies that PWA rely on similar cognitive-linguistic resources for language as controls. However, more studies are needed that report coordinates for PWA and controls completing the same tasks in order for future meta-analyses to characterize how aphasia affects the neural resources engaged during language, particularly for specific tasks and as a function of behavioral performance.
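The activation likelihood estimation (ALE) approach used in this meta-analysis pools reported peak coordinates by modelling each focus as a Gaussian probability kernel, taking each study's "modeled activation" map as the voxelwise maximum over its kernels, and combining studies as a voxelwise union. A minimal toy sketch of that core computation, on a small synthetic grid with a fixed kernel width and an illustrative `ale_map` helper (not the study's actual pipeline, which uses sample-size-dependent kernels and permutation-based thresholding):

```python
import numpy as np

def ale_map(foci_per_study, shape=(20, 20, 20), sigma=2.0):
    """Toy activation likelihood estimate over a small voxel grid.

    foci_per_study: list of studies, each a list of (x, y, z) voxel foci.
    Each study contributes a modeled-activation (MA) map: the maximum of
    Gaussian kernels centred on its foci. The ALE value at a voxel is the
    union across studies, 1 - prod(1 - MA).
    """
    grid = np.indices(shape).reshape(3, -1).T          # (n_voxels, 3) coords
    one_minus = np.ones(grid.shape[0])                 # running prod(1 - MA)
    for foci in foci_per_study:
        ma = np.zeros(grid.shape[0])
        for focus in foci:
            d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
        one_minus *= 1.0 - ma
    return (1.0 - one_minus).reshape(shape)
```

Two studies reporting the same focus reinforce the ALE value at that voxel, while isolated foci contribute only locally; real ALE implementations then test these values against a null distribution of randomly relocated foci.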
Affiliation(s)
- Arianna N LaCroix
- College of Health Sciences, Midwestern University, Glendale, AZ, United States
- Eltonnelle James
- College of Health Sciences, Midwestern University, Glendale, AZ, United States
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, AZ, United States

13
Narayana S, Parsons MB, Zhang W, Franklin C, Schiller K, Choudhri AF, Fox PT, LeDoux MS, Cannito M. Mapping typical and hypokinetic dysarthric speech production network using a connected speech paradigm in functional MRI. Neuroimage Clin 2020; 27:102285. [PMID: 32521476 PMCID: PMC7284131 DOI: 10.1016/j.nicl.2020.102285] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2020] [Revised: 05/13/2020] [Accepted: 05/17/2020] [Indexed: 12/18/2022]
Abstract
We developed a task paradigm whereby subjects spoke aloud while minimizing head motion during functional MRI (fMRI) in order to better understand the neural circuitry involved in motor speech disorders due to dysfunction of the central nervous system. To validate our overt continuous speech paradigm, we mapped the speech production network (SPN) in typical speakers (n = 19, 10 females) and speakers with hypokinetic dysarthria as a manifestation of Parkinson disease (HKD; n = 21, 8 females) in fMRI. We then compared it with the SPN derived during overt speech production by 15O-water PET in the same group of typical speakers and another HKD cohort (n = 10, 2 females). The fMRI overt connected speech paradigm did not result in excessive motion artifacts and successfully identified the same brain areas demonstrated in the PET studies in the two cohorts. The SPN derived in fMRI demonstrated significant spatial overlap with the corresponding PET derived maps (typical speakers: r = 0.52; speakers with HKD: r = 0.43) and identified the components of the neural circuit of speech production belonging to the feedforward and feedback subsystems. The fMRI study in speakers with HKD identified significantly decreased activity in critical feedforward (bilateral dorsal premotor and motor cortices) and feedback (auditory and somatosensory areas) subsystems replicating previous PET study findings in this cohort. These results demonstrate that the overt connected speech paradigm is feasible during fMRI and can accurately localize the neural substrates of typical and disordered speech production. Our fMRI paradigm should prove useful for study of motor speech and voice disorders, including stuttering, apraxia of speech, dysarthria, and spasmodic dysphonia.
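The spatial overlap values reported above (e.g. r = 0.52 between the fMRI- and PET-derived maps) quantify voxelwise Pearson correlation between two statistical maps. A minimal sketch with an illustrative `spatial_overlap` helper, assuming the maps are already co-registered arrays (the study's actual computation may differ in masking and preprocessing details):

```python
import numpy as np

def spatial_overlap(map_a, map_b, mask=None):
    """Pearson correlation between two statistical maps over in-mask voxels."""
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    if mask is not None:
        keep = np.asarray(mask).ravel().astype(bool)  # restrict to brain voxels
        a, b = a[keep], b[keep]
    return float(np.corrcoef(a, b)[0, 1])
```

Because Pearson r is invariant to linear rescaling, two maps with identical spatial pattern but different global signal amplitude (as can happen across fMRI and PET) still correlate at r = 1.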
Affiliation(s)
- Shalini Narayana
- Department of Pediatrics, Division of Pediatric Neurology, University of Tennessee Health Science Center, Memphis, TN 38103, USA; Neuroscience Institute, Le Bonheur Children's Hospital, Memphis, TN 38103, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN 38103, USA.
- Megan B Parsons
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Wei Zhang
- Research Imaging Institute, University of Texas Health San Antonio, San Antonio, TX 78229, USA
- Crystal Franklin
- Research Imaging Institute, University of Texas Health San Antonio, San Antonio, TX 78229, USA
- Katherine Schiller
- Department of Pediatrics, Division of Pediatric Neurology, University of Tennessee Health Science Center, Memphis, TN 38103, USA
- Asim F Choudhri
- Neuroscience Institute, Le Bonheur Children's Hospital, Memphis, TN 38103, USA; Department of Radiology, Division of Neuroradiology, University of Tennessee Health Science Center, Memphis, TN 38103, USA
- Peter T Fox
- Research Imaging Institute, University of Texas Health San Antonio, San Antonio, TX 78229, USA
- Mark S LeDoux
- Veracity Neuroscience LLC, Memphis, TN 38157, USA; Department of Psychology and School of Health Studies, University of Memphis, Memphis, TN 38152, USA
- Michael Cannito
- Department of Communicative Disorders, University of Louisiana at Lafayette, USA