1. Castellucci GA, Kovach CK, Tabasi F, Christianson D, Greenlee JD, Long MA. A frontal cortical network is critical for language planning during spoken interaction. bioRxiv 2023:2023.08.26.554639. [PMID: 37693383; PMCID: PMC10491113; DOI: 10.1101/2023.08.26.554639]
Abstract
Many brain areas exhibit activity correlated with language planning, but the impact of these dynamics on spoken interaction remains unclear. Here we use direct electrical stimulation to transiently perturb cortical function in neurosurgical patient-volunteers performing a question-answer task. Stimulating structures involved in speech motor function evoked diverse articulatory deficits, while perturbations of caudal inferior and middle frontal gyri - which exhibit preparatory activity during conversational turn-taking - led to response errors. Perturbation of the same planning-related frontal regions slowed inter-speaker timing, while faster responses could result from stimulation of sites located in other areas. Taken together, these findings further indicate that caudal inferior and middle frontal gyri constitute a critical planning network essential for interactive language use.
2. Meyer-Ortmanns H. Heteroclinic networks for brain dynamics. Frontiers in Network Physiology 2023; 3:1276401. [PMID: 38020242; PMCID: PMC10663269; DOI: 10.3389/fnetp.2023.1276401]
Abstract
Heteroclinic networks are a mathematical concept from dynamical systems theory that is well suited to describing metastable states and switching events in brain dynamics. The framework is sensitive to external input and, at the same time, reproducible and robust against perturbations. Solutions of the corresponding differential equations are spatiotemporal patterns that are thought to encode information in both space and time. We focus on the concept of winnerless competition as realized in generalized Lotka-Volterra equations and report results on binding and chunking dynamics, synchronization on spatial grids, and entrainment to heteroclinic motion. We summarize proposals for how to design heteroclinic networks so as to reproduce experimental observations from neuronal networks, and we discuss the subtle role of noise. The review remains on a phenomenological level, with possible applications to brain dynamics; we refer to the literature for a rigorous mathematical treatment. We conclude with promising perspectives for future research.
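The winnerless-competition regime described in this abstract can be sketched with a small generalized Lotka-Volterra (May-Leonard) simulation. The parameter values below are illustrative textbook choices, not taken from the review:

```python
import numpy as np

def winnerless_competition(T=6000, dt=0.01, noise=1e-4, seed=0):
    """Generalized Lotka-Volterra (May-Leonard) system with asymmetric
    competition: activity cycles through metastable saddle states, the
    'winnerless competition' regime. Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    # rho[i, j]: how strongly species j suppresses species i.
    # 0 < alpha < 1 < beta with alpha + beta > 2 yields a heteroclinic cycle.
    alpha, beta = 0.8, 1.3
    rho = np.array([[1.0, alpha, beta],
                    [beta, 1.0, alpha],
                    [alpha, beta, 1.0]])
    a = np.array([0.9, 0.05, 0.05])  # start near the first saddle
    traj = np.empty((T, 3))
    for k in range(T):
        # Euler step of da_i/dt = a_i * (1 - (rho @ a)_i), plus a noise floor
        a = a + dt * a * (1.0 - rho @ a) + noise * rng.random(3)
        a = np.clip(a, 0.0, None)
        traj[k] = a
    return traj

traj = winnerless_competition()
print("dominant species visited:", sorted(set(traj.argmax(axis=1))))
```

The small noise term keeps trajectories off the exact heteroclinic orbit, so dwell times near each saddle stay finite and the sequence of dominant states repeats, which is the reproducibility-plus-sensitivity property the abstract highlights.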
Affiliation(s)
- Hildegard Meyer-Ortmanns
- School of Science, Constructor University, Bremen, Germany
- Complexity Science Hub Vienna, Vienna, Austria
3. Dynamic auditory contributions to error detection revealed in the discrimination of Same and Different syllable pairs. Neuropsychologia 2022; 176:108388. [PMID: 36183800; DOI: 10.1016/j.neuropsychologia.2022.108388]
Abstract
During speech production, auditory regions operate in concert with the anterior dorsal stream to facilitate online error detection. As the dorsal stream is also known to be active during speech perception, the purpose of the current study was to probe the role of auditory regions in error detection during auditory discrimination tasks as stimuli are encoded and maintained in working memory. The a priori assumption is that sensory mismatch (i.e., error) occurs during the discrimination of Different (mismatched) but not Same (matched) syllable pairs. Independent component analysis was applied to raw EEG data recorded from 42 participants to identify bilateral auditory alpha rhythms, which were decomposed across time and frequency to reveal robust patterns of event-related synchronization (ERS; inhibition) and desynchronization (ERD; processing) over the time course of discrimination events. Results were characterized by bilateral peri-stimulus alpha ERD transitioning to alpha ERS in the late trial epoch, with ERD interpreted as evidence of working memory encoding via Analysis by Synthesis and ERS considered evidence of speech-induced suppression arising during covert articulatory rehearsal to facilitate working memory maintenance. The transition from ERD to ERS occurred later in the left hemisphere in Different trials than in Same trials, with ERD and ERS temporally overlapping during the early post-stimulus window. These findings suggest that the sensory mismatch (i.e., error) arising from the comparison of the first and second syllables elicits further processing in the left hemisphere to support working memory encoding and maintenance. Results are consistent with auditory contributions to error detection during both the encoding and maintenance stages of working memory, with encoding-stage error detection associated with stimulus concordance and maintenance-stage error detection associated with task-specific retention demands.
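The ERD/ERS measure used in this line of work is conventionally expressed as percent band-power change from a pre-stimulus baseline (negative = ERD, positive = ERS). A minimal sketch on a synthetic signal; the helper names and parameters below are hypothetical, not from the study:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Power of signal x within [lo, hi] Hz via the FFT (hypothetical helper)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def erd_ers_percent(epoch, baseline, fs, band=(8, 13)):
    """ERD/ERS as percent power change from baseline in a frequency band.
    Negative values = desynchronization (ERD, 'processing');
    positive values = synchronization (ERS, 'inhibition')."""
    p_base = band_power(baseline, fs, *band)
    p_epoch = band_power(epoch, fs, *band)
    return 100.0 * (p_epoch - p_base) / p_base

# Toy demo: alpha amplitude halves after 'stimulus onset', so alpha-band
# power drops to one quarter of baseline -> ERD of -75%.
fs = 250
t = np.arange(fs) / fs                      # 1-second windows
baseline = np.sin(2 * np.pi * 10 * t)       # strong 10 Hz alpha
epoch = 0.5 * np.sin(2 * np.pi * 10 * t)    # attenuated alpha
print(round(erd_ers_percent(epoch, baseline, fs), 1))  # → -75.0
```

Because power scales with amplitude squared, halving the alpha amplitude yields exactly a -75% change, which is the sign convention the abstract relies on when interpreting ERD versus ERS.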
4. Rivera-Urbina GN, Martínez-Castañeda MF, Núñez-Gómez AM, Molero-Chamizo A, Nitsche MA, Alameda-Bailén JR. Effects of tDCS applied over the left IFG and pSTG language areas on verb recognition task performance. Psychophysiology 2022; 59:e14134. [PMID: 35780078; DOI: 10.1111/psyp.14134]
Abstract
Knowledge about the relevance of the left inferior frontal gyrus (lIFG) and the left posterior superior temporal gyrus (lpSTG) in visual recognition of word categories is limited at present. tDCS is a non-invasive brain stimulation method that alters cortical activity and excitability, and thus might be a useful tool for delineating the specific impact of both areas on word recognition. The objective of this study was to explore whether the visual recognition process of verb categories is improved by a single tDCS session. lIFG and lpSTG areas were separately modulated by anodal tDCS to evaluate its effects on verbal recognition. Compared to sham stimulation, motor reaction times (RTs) were reduced after anodal tDCS over the lpSTG, and this effect was independent of the performing hand (right/left). These findings suggest that this region is involved in visual word recognition independently from the performing hand.
Affiliation(s)
- Michael A Nitsche
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Department of Neurology, University Medical Hospital Bergmannsheil, Bochum, Germany
5. Riccardi N, Rorden C, Fridriksson J, Desai RH. Canonical Sentence Processing and the Inferior Frontal Cortex: Is There a Connection? Neurobiology of Language 2022; 3:318-344. [PMID: 37215558; PMCID: PMC10158581; DOI: 10.1162/nol_a_00067]
Abstract
The role of left inferior frontal cortex (LIFC) in canonical sentence comprehension is controversial. Many studies have found involvement of LIFC in sentence production or complex sentence comprehension, but negative or mixed results are often found in comprehension of simple or canonical sentences. We used voxel-, region-, and connectivity-based lesion symptom mapping (VLSM, RLSM, CLSM) in left-hemisphere chronic stroke survivors to investigate canonical sentence comprehension while controlling for lexical-semantic, executive, and phonological processes. We investigated how damage and disrupted white matter connectivity of LIFC and two other language-related regions, the left anterior temporal lobe (LATL) and posterior temporal-inferior parietal area (LpT-iP), affected sentence comprehension. VLSM and RLSM revealed that LIFC damage was not associated with canonical sentence comprehension measured by a sensibility judgment task. LIFC damage was associated instead with impairments in a lexical semantic similarity judgment task with high semantic/executive demands. Damage to the LpT-iP, specifically posterior middle temporal gyrus (pMTG), predicted worse sentence comprehension after controlling for visual lexical access, semantic knowledge, and auditory-verbal short-term memory (STM), but not auditory single-word comprehension, suggesting pMTG is vital for auditory language comprehension. CLSM revealed that disruption of left-lateralized white-matter connections from LIFC to LATL and LpT-iP was associated with worse sentence comprehension, controlling for performance in tasks related to lexical access, auditory word comprehension, and auditory-verbal STM. However, the LIFC connections were accounted for by the lexical semantic similarity judgment task, which had high semantic/executive demands. This suggests that LIFC connectivity is relevant to canonical sentence comprehension when task-related semantic/executive demands are high.
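The voxel-based lesion-symptom mapping (VLSM) logic used above can be sketched as a per-voxel group comparison: at each voxel, patients with a lesion there are compared against patients without one on the behavioral score. The code below uses synthetic data and an illustrative minimum group size, not the study's actual pipeline:

```python
import numpy as np

def vlsm_tmap(lesions, scores, min_n=5):
    """VLSM sketch: for each voxel, a Welch t statistic comparing behavioral
    scores of patients with vs without a lesion at that voxel.
    lesions: (n_patients, n_voxels) binary; scores: (n_patients,)."""
    n_vox = lesions.shape[1]
    tmap = np.full(n_vox, np.nan)
    for v in range(n_vox):
        hit = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        if len(hit) < min_n or len(spared) < min_n:
            continue  # too few patients on one side to test this voxel
        se = np.sqrt(hit.var(ddof=1) / len(hit)
                     + spared.var(ddof=1) / len(spared))
        tmap[v] = (spared.mean() - hit.mean()) / se  # > 0: lesion hurts
    return tmap

# Synthetic demo: 40 patients, 50 voxels; damage at voxel 10 lowers scores.
rng = np.random.default_rng(1)
lesions = rng.integers(0, 2, size=(40, 50))
scores = rng.normal(10.0, 1.0, size=40) - 3.0 * lesions[:, 10]
tmap = vlsm_tmap(lesions, scores)
```

In a real analysis the t map would then be thresholded with correction for multiple comparisons and for lesion-volume confounds; this sketch only shows the per-voxel statistic.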
Affiliation(s)
- Nicholas Riccardi
- Department of Psychology, University of South Carolina, Columbia, SC
- Chris Rorden
- Department of Psychology, University of South Carolina, Columbia, SC
- Institute for Mind and Brain, University of South Carolina, Columbia, SC
- Julius Fridriksson
- Institute for Mind and Brain, University of South Carolina, Columbia, SC
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC
- Rutvik H. Desai
- Department of Psychology, University of South Carolina, Columbia, SC
- Institute for Mind and Brain, University of South Carolina, Columbia, SC
6. Castellucci GA, Kovach CK, Howard MA, Greenlee JDW, Long MA. A speech planning network for interactive language use. Nature 2022; 602:117-122. [PMID: 34987226; PMCID: PMC9990513; DOI: 10.1038/s41586-021-04270-z]
Abstract
During conversation, people take turns speaking by rapidly responding to their partners while simultaneously avoiding interruption [1,2]. Such interactions display a remarkable degree of coordination, as gaps between turns are typically about 200 milliseconds [3], approximately the duration of an eyeblink [4]. These latencies are considerably shorter than those observed in simple word-production tasks, which indicates that speakers often plan their responses while listening to their partners [2]. Although a distributed network of brain regions has been implicated in speech planning [5-9], the neural dynamics underlying the specific preparatory processes that enable rapid turn-taking are poorly understood. Here we use intracranial electrocorticography to precisely measure neural activity as participants perform interactive tasks, and we observe a functionally and anatomically distinct class of planning-related cortical dynamics. We localize these responses to a frontotemporal circuit centred on the language-critical caudal inferior frontal cortex [10] (Broca's region) and the caudal middle frontal gyrus, a region not normally implicated in speech planning [11-13]. Using a series of motor tasks, we then show that this planning network is more active when preparing speech as opposed to non-linguistic actions. Finally, we delineate planning-related circuitry during natural conversation that is nearly identical to the network mapped with our interactive tasks, and we find this circuit to be most active before participant speech during unconstrained turn-taking. Therefore, we have identified a speech planning network that is central to natural language generation during social interaction.
Affiliation(s)
- Gregg A Castellucci
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
7. Jenson D, Saltuklaroglu T. Sensorimotor contributions to working memory differ between the discrimination of Same and Different syllable pairs. Neuropsychologia 2021; 159:107947. [PMID: 34216594; DOI: 10.1016/j.neuropsychologia.2021.107947]
Abstract
Sensorimotor activity during speech perception is both pervasive and highly variable, changing as a function of the cognitive demands imposed by the task. The purpose of the current study was to evaluate whether the discrimination of Same (matched) and Different (unmatched) syllable pairs elicit different patterns of sensorimotor activity as stimuli are processed in working memory. Raw EEG data recorded from 42 participants were decomposed with independent component analysis to identify bilateral sensorimotor mu rhythms from 36 subjects. Time frequency decomposition of mu rhythms revealed concurrent event related desynchronization (ERD) in alpha and beta frequency bands across the peri- and post-stimulus time periods, which were interpreted as evidence of sensorimotor contributions to working memory encoding and maintenance. Left hemisphere alpha/beta ERD was stronger in Different trials than Same trials during the post-stimulus period, while right hemisphere alpha/beta ERD was stronger in Same trials than Different trials. A between-hemispheres contrast revealed no differences during Same trials, while post-stimulus alpha/beta ERD was stronger in the left hemisphere than the right during Different trials. Results were interpreted to suggest that predictive coding mechanisms lead to repetition suppression effects in Same trials. Mismatches arising from predictive coding mechanisms in Different trials shift subsequent working memory processing to the speech-dominant left hemisphere. Findings clarify how sensorimotor activity differentially supports working memory encoding and maintenance stages during speech discrimination tasks and have potential to inform sensorimotor models of speech perception and working memory.
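The independent component analysis step referenced in this abstract can be illustrated with a minimal FastICA sketch on synthetic non-Gaussian signals. The study used dedicated EEG toolboxes; this is only a didactic approximation of the blind source separation idea:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA (tanh nonlinearity, deflation) for demonstration.
    X: (n_signals, n_samples) mixed signals. Returns recovered sources,
    up to permutation, sign, and scale."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E / np.sqrt(d)).T @ X
    n = Z.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.normal(size=n)
        for _ in range(n_iter):
            wx = w @ Z
            # Fixed-point update: E{Z g(w.Z)} - E{g'(w.Z)} w, with g = tanh
            w_new = (Z * np.tanh(wx)).mean(axis=1) \
                    - (1 - np.tanh(wx) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation: stay orthogonal
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W @ Z

# Demo: unmix a square wave and a sine from two linear mixtures
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(5 * t)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix
recovered = fastica(A @ S)
```

Each recovered row should correlate almost perfectly (up to sign) with one of the original sources, which is the sense in which ICA "identifies" distinct rhythms from mixed channel recordings.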
Affiliation(s)
- David Jenson
- Washington State University, Elson S. Floyd College of Medicine, Department of Speech and Hearing Sciences, Spokane, WA, USA
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, College of Health Professions, Department of Audiology and Speech-Pathology, Knoxville, TN, USA
8. Jenson D, Thornton D, Harkrider AW, Saltuklaroglu T. Influences of cognitive load on sensorimotor contributions to working memory: An EEG investigation of mu rhythm activity during speech discrimination. Neurobiol Learn Mem 2019; 166:107098. [DOI: 10.1016/j.nlm.2019.107098]
9. Spatial attention underpins social word learning in the right fronto-parietal network. Neuroimage 2019; 195:165-173. [DOI: 10.1016/j.neuroimage.2019.03.071]
10. Sakreida K, Blume-Schnitzler J, Heim S, Willmes K, Clusmann H, Neuloh G. Phonological picture–word interference in language mapping with transcranial magnetic stimulation: an objective approach for functional parcellation of Broca’s region. Brain Struct Funct 2019; 224:2027-2044. [DOI: 10.1007/s00429-019-01891-z]
11. Thornton D, Harkrider AW, Jenson D, Saltuklaroglu T. Sensorimotor activity measured via oscillations of EEG mu rhythms in speech and non-speech discrimination tasks with and without segmentation demands. Brain and Language 2018; 187:62-73. [PMID: 28431691; DOI: 10.1016/j.bandl.2017.03.011]
Abstract
Better understanding of the role of sensorimotor processing in speech and non-speech segmentation can be achieved with more temporally precise measures. Twenty adults made same/different discriminations of speech and non-speech stimulus pairs, with and without segmentation demands. Independent component analysis of 64-channel EEG data revealed clear sensorimotor mu components, with characteristic alpha and beta peaks, localized to premotor regions in 70% of participants. Time-frequency analyses of mu components from accurate trials showed that (1) segmentation tasks elicited greater event-related synchronization immediately following offset of the first stimulus, suggestive of inhibitory activity; (2) strong late event-related desynchronization appeared in all conditions, suggesting that working memory/covert replay contributed substantially to sensorimotor activity in all conditions; and (3) beta desynchronization was stronger for speech than for non-speech stimuli during stimulus presentation, suggesting stronger auditory-motor transforms for speech. The findings support the continued use of oscillatory approaches for understanding segmentation and other cognitive tasks.
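A common way to compute the time-frequency maps this line of work relies on is convolution with complex Morlet wavelets. The sketch below applies it to a synthetic alpha burst; all parameter choices are illustrative, not from the study:

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets,
    the standard decomposition behind ERD/ERS maps (synthetic demo)."""
    tf = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sd = n_cycles / (2 * np.pi * f)          # wavelet width in seconds
        t = np.arange(-3.5 * sd, 3.5 * sd, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sd**2))
        wavelet /= np.abs(wavelet).sum()         # normalize amplitude response
        tf[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return tf

# Demo: a 10 Hz burst confined to the second half of a 2 s signal
fs = 200
t = np.arange(2 * fs) / fs
sig = np.where(t >= 1.0, np.sin(2 * np.pi * 10 * t), 0.0)
tf = morlet_power(sig, fs, freqs=[6.0, 10.0, 14.0])
# Power in the 10 Hz row concentrates in the second half of the trial
```

The `n_cycles` parameter sets the usual time-frequency trade-off: more cycles sharpen frequency resolution at the cost of temporal smearing, which matters when timing ERD/ERS transitions as in the abstracts above.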
Affiliation(s)
- David Thornton
- University of Tennessee Health Science Center, United States
- David Jenson
- University of Tennessee Health Science Center, United States
12. Jenson D, Reilly KJ, Harkrider AW, Thornton D, Saltuklaroglu T. Trait related sensorimotor deficits in people who stutter: An EEG investigation of μ rhythm dynamics during spontaneous fluency. Neuroimage Clin 2018; 19:690-702. [PMID: 29872634; PMCID: PMC5986168; DOI: 10.1016/j.nicl.2018.05.026]
Abstract
Stuttering is associated with compromised sensorimotor control (i.e., internal modeling) across the dorsal stream, and oscillations of EEG mu (μ) rhythms have been proposed as reliable indices of anterior dorsal stream processing. The purpose of this study was to compare μ rhythm oscillatory activity between people who stutter (PWS) and matched typically fluent speakers (TFS) during spontaneously fluent overt and covert speech production tasks. Independent component analysis identified bilateral μ components, localized over premotor cortex, from 24 of 27 PWS and matched TFS. Time-frequency analysis of the left-hemisphere μ clusters demonstrated significantly reduced μ-α and μ-β ERD (cluster-corrected p < 0.05) in PWS across the time course of overt and covert speech production, while no group differences were found in the right hemisphere in any condition. Results were interpreted through the framework of State Feedback Control. They suggest that weak forward modeling and evaluation of sensory feedback across the time course of speech production characterize the trait-related sensorimotor impairment in PWS. This weakness is proposed to represent an underlying sensorimotor instability that may predispose the speech of PWS to breakdown.
Affiliation(s)
- David Jenson
- University of Tennessee Health Science Center, Dept. of Audiology and Speech Pathology, United States
- Kevin J Reilly
- University of Tennessee Health Science Center, Dept. of Audiology and Speech Pathology, United States
- Ashley W Harkrider
- University of Tennessee Health Science Center, Dept. of Audiology and Speech Pathology, United States
- David Thornton
- University of Tennessee Health Science Center, Dept. of Audiology and Speech Pathology, United States
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, Dept. of Audiology and Speech Pathology, United States
13. Verga L, Kotz SA. Help me if I can't: Social interaction effects in adult contextual word learning. Cognition 2017; 168:76-90. [PMID: 28658646; DOI: 10.1016/j.cognition.2017.06.018]
Abstract
A major challenge in second language acquisition is building up new vocabulary. How is it possible to identify the meaning of a new word among several possible referents? Adult learners typically use contextual information, which reduces the number of possible referents a new word can have. Alternatively, a social partner may facilitate word learning by directing the learner's attention toward the correct new word meaning. While much is known about the role of this form of 'joint attention' in first language acquisition, little is known about its efficacy in second language acquisition. Consequently, we introduce and validate a novel visual word learning game to evaluate how joint attention affects the contextual learning of new words in a second language. Adult learners acquired new words in either a constant or a variable sentence context, either by playing the game with a knowledgeable partner or by playing the game alone on a computer. Results clearly show that participants who learned new words in social interaction (i) are faster at identifying a correct new word referent in variable sentence contexts, and (ii) temporally coordinate their behavior with a social partner. Testing the learned words in a post-learning recall or recognition task showed that participants who learned interactively better recognized words originally learned in a variable context. While this result may suggest that interactive learning facilitates the allocation of attention to a target referent, the differences in performance between recognition and recall call for further studies investigating the effect of social interaction on learning performance. In summary, we provide first evidence on the role of joint attention in second language learning. Furthermore, the new interactive learning game lends itself to further testing in complex neuroimaging research, where the lack of appropriate experimental set-ups has so far limited investigation of the neural basis of adult word learning in social interaction.
Affiliation(s)
- Laura Verga
- Max Planck Institute for Human Cognitive and Brain Sciences, Dept. of Neuropsychology, Leipzig, Germany
- Sonja A Kotz
- Max Planck Institute for Human Cognitive and Brain Sciences, Dept. of Neuropsychology, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Dept. of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
14. The neural correlates of lexical processing in disorders of consciousness. Brain Imaging Behav 2016; 11:1526-1537. [DOI: 10.1007/s11682-016-9613-7]
15. Verga L, Bigand E, Kotz SA. Play along: effects of music and social interaction on word learning. Front Psychol 2015; 6:1316. [PMID: 26388818; PMCID: PMC4554937; DOI: 10.3389/fpsyg.2015.01316]
Abstract
Learning new words is an increasingly common necessity in everyday life. External factors, among which music and social interaction are particularly debated, are claimed to facilitate this task. Due to their influence on the learner's temporal behavior, these stimuli are able to drive the learner's attention to the correct referent of new words at the correct point in time. However, do music and social interaction impact learning behavior in the same way? The current study aims to answer this question. Native German speakers (N = 80) were requested to learn new words (pseudo-words) during a contextual learning game. This learning task was performed alone with a computer or with a partner, with or without music. Results showed that music and social interaction had a different impact on the learner's behavior: Participants tended to temporally coordinate their behavior more with a partner than with music, and in both cases more than with a computer. However, when both music and social interaction were present, this temporal coordination was hindered. These results suggest that while music and social interaction do influence participants' learning behavior, they have a different impact. Moreover, impaired behavior when both music and a partner are present suggests that different mechanisms are employed to coordinate with the two types of stimuli. Whether one or the other approach is more efficient for word learning, however, is a question still requiring further investigation, as no differences were observed between conditions in a retrieval phase, which took place immediately after the learning session. This study contributes to the literature on word learning in adults by investigating two possible facilitating factors, and has important implications for situations such as music therapy, in which music and social interaction are present at the same time.
Affiliation(s)
- Laura Verga
- Department of Neuropsychology, Research Group Subcortical Contributions to Comprehension, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Movement to Health Laboratory (M2H), EuroMov, Montpellier 1 University, Montpellier, France
- Emmanuel Bigand
- Laboratoire d’Etude de l’Apprentissage et du Développement, Department of Psychology, University of Burgundy, Dijon, France
- Sonja A. Kotz
- Department of Neuropsychology, Research Group Subcortical Contributions to Comprehension, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- School of Psychological Sciences, The University of Manchester, Manchester, UK
16. Rhetorical features facilitate prosodic processing while handicapping ease of semantic comprehension. Cognition 2015; 143:48-60. [PMID: 26113449; DOI: 10.1016/j.cognition.2015.05.026]
Abstract
Studies on rhetorical features of language have reported both enhancing and adverse effects on ease of processing. We hypothesized that two explanations may account for these inconclusive findings. First, the respective gains and losses in ease of processing may apply to different dimensions of language processing (specifically, prosodic and semantic processing) and different types of fluency (perceptual vs. conceptual) and may well allow for an integration into a more comprehensive framework. Second, the effects of rhetorical features may be sensitive to interactions with other rhetorical features; employing a feature separately or in combination with others may then predict starkly different effects. We designed a series of experiments in which we expected the same rhetorical features of the very same sentences to exert adverse effects on semantic (conceptual) fluency and enhancing effects on prosodic (perceptual) fluency. We focused on proverbs that each employ three rhetorical features: rhyme, meter, and brevitas (i.e., artful shortness). The presence of these target features decreased ease of conceptual fluency (semantic comprehension) while enhancing perceptual fluency as reflected in beauty and succinctness ratings that were mainly driven by prosodic features. The rhetorical features also predicted choices for persuasive purposes, yet only for the sentence versions featuring all three rhetorical features; the presence of only one or two rhetorical features had an adverse effect on the choices made. We suggest that the facilitating effects of a combination of rhyme, meter, and rhetorical brevitas on perceptual (prosodic) fluency overcompensated for their adverse effects on conceptual (semantic) fluency, thus resulting in a total net gain both in processing ease and in choices for persuasive purposes.
17. Instruments, conductors, dancers, and intendants. Phys Life Rev 2015; 13:99-106. [DOI: 10.1016/j.plrev.2015.04.036]
18. Olivier G, Bottineau D. Gestural Dimension of the Perceptuomotor Compatibility Effect in the Speech Domain. Swiss Journal of Psychology 2015. [DOI: 10.1024/1421-0185/a000153]
Abstract
This behavioral study shows for the first time that the auditory perception of vowels influences silent labial responses. During a perceptual decision task, participants were instructed to choose and execute a silent labial response (lip protrusion versus chin lowering) as quickly as possible depending on the vowel they had perceived auditorily. The main result showed that gestural compatibility between the silent labial response and the articulation of the perceived vowel led to better performance (in terms of response times and errors) than an incompatibility between them. By including a somatic compatibility effect in a more dynamic gestural compatibility effect, this new result suggests that the role of motor activity during speech auditory perception lies in mentally simulating an articulation of the perceived phoneme.
Affiliation(s)
- Gérard Olivier
- Laboratoire Interdisciplinaire Récits Cultures et Sociétés, University of Nice, France
- Didier Bottineau
- Laboratoire MoDyCo, CNRS, University of Paris Ouest-Nanterre, France
Collapse
|
19
|
Méndez Orellana CP, van de Sandt-Koenderman ME, Saliasi E, van der Meulen I, Klip S, van der Lugt A, Smits M. Insight into the neurophysiological processes of melodically intoned language with functional MRI. Brain Behav 2014; 4:615-25. [PMID: 25328839 PMCID: PMC4107379 DOI: 10.1002/brb3.245] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Revised: 06/05/2014] [Accepted: 06/09/2014] [Indexed: 11/30/2022] Open
Abstract
BACKGROUND Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. METHODS Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event-related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. RESULTS Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged by the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. DISCUSSION Our results suggest that the facilitatory effect of MIT may - in part - depend on an auditory input that combines melody and meaning. CONCLUSION Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually of the neurophysiological processes underlying MIT.
Affiliation(s)
- Carolina P Méndez Orellana
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Department of Neurology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Mieke E van de Sandt-Koenderman
- Rehabilitation Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Rijndam Rehabilitation Center, Rotterdam, The Netherlands
- Emi Saliasi
- Department of Neurology, University Medical Center Groningen, The Netherlands
- Ineke van der Meulen
- Rehabilitation Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Rijndam Rehabilitation Center, Rotterdam, The Netherlands
- Simone Klip
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Aad van der Lugt
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Marion Smits
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, The Netherlands
|
20
|
Hertrich I, Dietrich S, Ackermann H. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects? Front Psychol 2013; 4:530. [PMID: 23966968 PMCID: PMC3745084 DOI: 10.3389/fpsyg.2013.00530] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2013] [Accepted: 07/26/2013] [Indexed: 11/13/2022] Open
Abstract
In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind subjects are more resistant to backward masking than sighted subjects, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal rate of about 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area, among other brain regions. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is presumably linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims to bind these data together, based on early cross-modal pathways already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
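The speaking rates mentioned above imply a simple relation between syllable rate and playback duration under uniform time compression. As a hypothetical illustration (the function name and the assumption of uniform compression are ours, not the authors'):

```python
def compression_factor(original_rate, target_rate):
    """Fraction of the original duration needed to reach target_rate,
    assuming uniform time compression of the speech signal.
    Rates are in syllables per second."""
    return original_rate / target_rate

# Going from a normal 6 syllables/s to an ultra-fast 16 syllables/s
# compresses the signal to 6/16 = 0.375 of its original duration.
factor = compression_factor(6, 16)
```

In other words, ultra-fast listeners in these studies process speech played back in roughly a third of its natural duration.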
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
|
21
|
Abstract
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.
|
22
|
Horn H, Jann K, Federspiel A, Walther S, Wiest R, Müller T, Strik W. Semantic network disconnection in formal thought disorder. Neuropsychobiology 2012; 66:14-23. [PMID: 22797273 DOI: 10.1159/000337133] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/21/2011] [Accepted: 01/30/2012] [Indexed: 01/08/2023]
Abstract
BACKGROUND Structural and functional findings in schizophrenic patients with formal thought disorder (FTD) show abnormalities within left-side semantic areas. The present study investigated the network function of the involved brain regions as a function of FTD severity. METHODS We examined a group of 16 schizophrenia patients differing in FTD, but not in overall symptom severity, and 18 matched healthy controls. A passive word reading paradigm was applied during functional MRI (fMRI). A concatenated independent component analysis approach separated the fMRI signal into independent components, and spatial similarity was used to estimate individual differences in the spatial configuration of networks. RESULTS The semantic network was identified in both groups, encompassing structures of the left inferior frontal gyrus, the left angular gyrus, and the left middle temporal gyrus. The differences between the semantic networks of patients and controls increased with increasing severity of FTD. This difference was due to a decreasing contribution of the left inferior frontal gyrus (Brodmann areas 45 and 47). CONCLUSION Severity of FTD was correlated with disruption of the left semantic network in schizophrenic patients. We suggest that FTD is a consequence of a frontal-parietal/temporal disconnection due to a complex interaction between structural and functional abnormalities within the left semantic network.
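The concatenated (group) ICA approach described in the methods can be sketched as follows. This is a toy example on simulated data, not the authors' pipeline: the component count, data shapes, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy "fMRI" data: 3 subjects x 200 time points, 50 voxels, generated
# from 5 shared non-Gaussian sources plus noise (all values assumed).
sources = rng.laplace(size=(600, 5))        # concatenated time courses
mixing = rng.standard_normal((5, 50))       # ground-truth spatial maps
concatenated = sources @ mixing + 0.1 * rng.standard_normal((600, 50))

# Concatenated ICA: subjects are stacked along the time axis, and the
# stacked data are decomposed into spatially independent components.
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
time_courses = ica.fit_transform(concatenated)  # shape (600, 5)
spatial_maps = ica.components_                  # shape (5, 50)

# Spatial similarity between maps (e.g., one subject's back-projected
# map vs. the group map) can then index individual differences in
# network configuration, as in the study's severity analysis.
def spatial_similarity(map_a, map_b):
    return np.corrcoef(map_a, map_b)[0, 1]
```

The key idea is that group-level components are estimated once from the temporally concatenated data, and per-subject deviations are quantified afterwards via spatial correlation.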
Affiliation(s)
- Helge Horn
- University Hospital of Psychiatry, Bern, Switzerland
|
23
|
Learning by doing? The effect of gestures on implicit retrieval of newly acquired words. Cortex 2012; 49:2553-68. [PMID: 23357203 DOI: 10.1016/j.cortex.2012.11.016] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2012] [Revised: 11/13/2012] [Accepted: 11/13/2012] [Indexed: 11/19/2022]
Abstract
Meaningful gestures enhance speech comprehensibility. However, their role during novel-word acquisition remains elusive. Here we investigate how meaningful versus meaningless gestures impact novel-word learning and contrast these conditions to purely verbal training. After training, neuronal processing of the novel words was assessed by blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD-fMRI). Over 3 days, participants learned pseudowords for common objects (e.g., /klira/ for cap). During training they repeated the novel word while performing (i) an iconic gesture, (ii) a grooming gesture, or (iii) no gesture; in the two gesture conditions, the gestures were either actively repeated or passively observed. Behaviorally, no substantial differences between the five training conditions were found, while fMRI disclosed differential networks affording implicit retrieval of the learned pseudowords depending on the training procedure. Most notably, training with actively performed iconic gestures yielded larger activation in a semantic network comprising the left inferior frontal (BA47) and inferior temporal gyri. Additionally, hippocampal activation was stronger for all trained pseudowords compared with unknown pseudowords of identical structure. The behavioral results challenge the generality of an 'enactment effect' for single-word learning. The imaging results, however, suggest that actively performed meaningful gestures lead to deeper semantic encoding of novel words. The findings are discussed regarding their implications for theoretical accounts and for empirical approaches to gesture-based strategies in language (re)learning.
|
24
|
Fridriksson J, Hubbard HI, Hudspeth SG, Holland AL, Bonilha L, Fromm D, Rorden C. Speech entrainment enables patients with Broca's aphasia to produce fluent speech. Brain 2012; 135:3815-29. [PMID: 23250889 PMCID: PMC3525061 DOI: 10.1093/brain/aws301] [Citation(s) in RCA: 78] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2012] [Revised: 09/17/2012] [Accepted: 09/24/2012] [Indexed: 12/29/2022] Open
Abstract
A distinguishing feature of Broca's aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect 'speech entrainment' and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca's aphasia. In Experiment 1, 13 patients with Broca's aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca's area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words with and without speech entrainment at 1 and 6 weeks after training. Treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca's aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment allows patients to produce fluent speech by providing an external gating mechanism that yokes a ventral language network that encodes conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca's aphasia, providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment.
Affiliation(s)
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
|
25
|
Cuellar M, Bowers A, Harkrider AW, Wilson M, Saltuklaroglu T. Mu suppression as an index of sensorimotor contributions to speech processing: Evidence from continuous EEG signals. Int J Psychophysiol 2012; 85:242-8. [DOI: 10.1016/j.ijpsycho.2012.04.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2011] [Revised: 03/09/2012] [Accepted: 04/10/2012] [Indexed: 11/30/2022]
|
26
|
Tettamanti M, Moro A. Can syntax appear in a mirror (system)? Cortex 2012; 48:923-35. [DOI: 10.1016/j.cortex.2011.05.020] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2010] [Revised: 08/23/2010] [Accepted: 05/20/2011] [Indexed: 10/18/2022]
|
27
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224 PMCID: PMC3398395 DOI: 10.1016/j.neuroimage.2012.04.062] [Citation(s) in RCA: 1257] [Impact Index Per Article: 104.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2011] [Revised: 04/25/2012] [Accepted: 04/30/2012] [Indexed: 01/17/2023] Open
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated, but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK
|
28
|
Roby-Brami A, Hermsdörfer J, Roy AC, Jacobs S. A neuropsychological perspective on the link between language and praxis in modern humans. Philos Trans R Soc Lond B Biol Sci 2012; 367:144-60. [PMID: 22106433 DOI: 10.1098/rstb.2011.0122] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023] Open
Abstract
Hypotheses about the emergence of human cognitive abilities postulate strong evolutionary links between language and praxis, including the possibility that language was originally gestural. The present review considers functional and neuroanatomical links between language and praxis in brain-damaged patients with aphasia and/or apraxia. The neural systems supporting these functions are predominantly located in the left hemisphere. There are many parallels between action and language for recognition, imitation and gestural communication, suggesting that they rely partially on large, common networks, differentially recruited depending on the nature of the task. However, this relationship is not unequivocal: the production and understanding of gestural communication are context-dependent in apraxic patients and remain to be clarified in aphasic patients. The phonological, semantic and syntactic levels of language seem to share some common cognitive resources with the praxic system. In conclusion, neuropsychological observations allow neither support nor rejection of the hypothesis that gestural communication may have constituted an evolutionary link between tool use and language. Rather, they suggest that the complexity of human behaviour is based on large interconnected networks and on the evolution of specific properties within strategic areas of the left cerebral hemisphere.
Affiliation(s)
- Agnes Roby-Brami
- Laboratory of Neurophysics and Physiology, University Paris Descartes, CNRS UMR 8119, 45 rue des Saints Pères, 75006 Paris, France
|
29
|
Zhao J, Liu J, Li J, Liang J, Feng L, Ai L, Lee K, Tian J. Intrinsically organized network for word processing during the resting state. Neurosci Lett 2010; 487:27-31. [PMID: 20932878 DOI: 10.1016/j.neulet.2010.09.067] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 09/23/2010] [Accepted: 09/23/2010] [Indexed: 11/30/2022]
Abstract
Neural mechanisms underlying word processing have been extensively studied. It has been revealed that when individuals are engaged in active word processing, a complex network of cortical regions is activated. However, it is entirely unknown whether the word-processing regions are intrinsically organized during the resting state, in the absence of any explicit processing task. The present study investigated the intrinsic functional connectivity between word-processing regions during the resting state using fMRI. Correlated low-frequency fluctuations were observed between the left middle fusiform gyrus and a number of cortical regions, including the left angular gyrus, left supramarginal gyrus, bilateral pars opercularis, and left pars triangularis of the inferior frontal gyrus, which have been implicated in phonological and semantic processing. Additionally, correlated fluctuations were observed in the bilateral superior parietal lobule and dorsolateral prefrontal cortex, which have been suggested to provide top-down monitoring of the visual-spatial processing of words. The findings of our study indicate an intrinsically organized network during the resting state that likely prepares the visual system to anticipate highly probable word input for ready and effective processing.
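The connectivity measure this study relies on, correlations between low-frequency BOLD fluctuations in pairs of regions, can be sketched as follows. The sampling rate, band limits, and simulated signals are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 0.5  # assumed sampling rate in Hz (i.e., TR = 2 s)
n = 240   # number of resting-state volumes (assumed)

# Two simulated ROI time series sharing a slow common fluctuation.
t = np.arange(n) / fs
slow = np.sin(2 * np.pi * 0.02 * t)             # 0.02 Hz component
roi_a = slow + 0.5 * rng.standard_normal(n)
roi_b = slow + 0.5 * rng.standard_normal(n)

# Band-pass to the low-frequency range typical of resting-state work.
low, high = 0.01, 0.08  # Hz
b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
fa, fb = filtfilt(b, a, roi_a), filtfilt(b, a, roi_b)

# Functional connectivity as the Pearson correlation of the filtered
# signals; a high value indicates coherent slow fluctuations.
connectivity = np.corrcoef(fa, fb)[0, 1]
```

Because the two toy regions share the same slow component, their filtered signals correlate strongly; in the study, such correlations between the fusiform seed and other regions define the intrinsic word-processing network.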
Affiliation(s)
- Jizheng Zhao
- Life Sciences Research Center, School of Life Sciences and Technology, Xidian University, Xi'an, China
|
30
|
Abstract
In this review of 100 fMRI studies of speech comprehension and production, published in 2009, activation is reported for: prelexical speech perception in bilateral superior temporal gyri; meaningful speech in middle and inferior temporal cortex; semantic retrieval in the left angular gyrus and pars orbitalis; and sentence comprehension in bilateral superior temporal sulci. For incomprehensible sentences, activation increases in four inferior frontal regions, posterior planum temporale, and ventral supramarginal gyrus. These effects are associated with the use of prior knowledge of semantic associations, word sequences, and articulation that predict the content of the sentence. Speech production activates the same set of regions as speech comprehension but in addition, activation is reported for: word retrieval in left middle frontal cortex; articulatory planning in the left anterior insula; the initiation and execution of speech in left putamen, pre-SMA, SMA, and motor cortex; and for suppressing unintended responses in the anterior cingulate and bilateral head of caudate nuclei. Anatomical and functional connectivity studies are now required to identify the processing pathways that integrate these areas to support language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, London, UK
|