1
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent: the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
2
Liu YF, Wilson C, Bedny M. Contribution of the language network to the comprehension of Python programming code. Brain Lang 2024; 251:105392. PMID: 38387220; DOI: 10.1016/j.bandl.2024.105392.
Abstract
Does the perisylvian language network contribute to comprehension of programming languages, like Python? Univariate neuroimaging studies find high responses to code in fronto-parietal executive areas but not in fronto-temporal language areas, suggesting the language network does little. We used multivariate pattern analysis (MVPA) to test whether the language network encodes Python functions. Python programmers read functions while undergoing fMRI. A linear SVM decoded for-loops from if-conditionals based on activity in lateral temporal (LT) language cortex. In a searchlight analysis, decoding accuracy was higher in LT language cortex than anywhere else. Follow-up analysis showed that decoding was not driven by the presence of different words across functions ("for" vs. "if") but by compositional program properties. Finally, univariate responses to code peaked earlier in LT language cortex than in the fronto-parietal network. We propose that the language system forms initial "surface meaning" representations of programs, which serve as input to the reasoning network for processing of algorithms.
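The decoding approach described in the abstract can be illustrated with a minimal sketch: a linear SVM classifying two trial types from multi-voxel activity patterns with cross-validation. The data below are simulated stand-ins for fMRI patterns; the trial counts, ROI size, and signal strength are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: 40 trials (20 for-loops, 20 if-conditionals), each a
# pattern of activity over 100 voxels in a language ROI.
n_trials, n_voxels = 40, 100
labels = np.repeat([0, 1], n_trials // 2)  # 0 = for-loop, 1 = if-conditional

# Simulated patterns: a small class-dependent signal plus Gaussian noise.
signal = np.outer(labels, rng.normal(size=n_voxels)) * 0.5
patterns = signal + rng.normal(size=(n_trials, n_voxels))

# Linear SVM with 5-fold cross-validation, as is typical in MVPA.
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

Accuracy reliably above chance (0.5) on held-out trials is the evidence that the region's activity patterns carry information about the distinction.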
Affiliation(s)
- Yun-Fei Liu
- Department of Psychological and Brain Sciences, Johns Hopkins University, 232 Ames Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Colin Wilson
- Department of Cognitive Science, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, 232 Ames Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
3
Hauptman M, Elli G, Pant R, Bedny M. Neural specialization for 'visual' concepts emerges in the absence of vision. bioRxiv 2024:2023.08.23.552701. PMID: 37662234; PMCID: PMC10473738; DOI: 10.1101/2023.08.23.552701.
Abstract
Vision provides a key source of information about many concepts, including 'living things' (e.g., tiger) and visual events (e.g., sparkle). According to a prominent theoretical framework, neural specialization for different conceptual categories is shaped by sensory features; for example, living things are neurally dissociable from navigable places because concepts of living things depend more on visual features. We tested this framework by comparing the neural basis of 'visual' concepts across sighted (n=22) and congenitally blind (n=21) adults. Participants judged the similarity of words varying in their reliance on vision while undergoing fMRI. We compared neural responses to nouns referring to living things (birds, mammals) and to places (natural, manmade). In addition, we compared visual event verbs (e.g., 'sparkle') to non-visual events (sound emission, hand motion, mouth motion). People born blind exhibited distinctive univariate and multivariate responses to living things in a temporo-parietal semantic network activated by nouns, including the precuneus (PC). To our knowledge, this is the first demonstration that neural selectivity for living things does not require vision. We additionally observed preserved neural signatures of 'visual' light events in the left middle temporal gyrus (LMTG+). Across a wide range of semantic types, neural representations of sensory concepts develop independently of sensory experience.
Affiliation(s)
- Miriam Hauptman
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Giulia Elli
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Rashi Pant
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Department of Biological Psychology & Neuropsychology, Universität Hamburg, Germany
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
4
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. PMID: 38010320; DOI: 10.1162/jocn_a_02092.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. Using a multivoxel pattern classifier, we asked whether the two sizes of space could be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus and the right medial frontal gyrus had modality-integrated representations, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
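The auditory manipulation, the same sound convolved with different reverberations, can be sketched as follows. The exponentially decaying noise model of a room impulse response and all parameters here are illustrative assumptions, not the stimuli actually used in the study.

```python
import numpy as np

# Illustrative sketch of the auditory manipulation: one dry sound convolved
# with impulse responses of different reverberation times (RT60).
rate = 16000                                 # samples per second
t = np.arange(0, 0.5, 1 / rate)
dry = np.sin(2 * np.pi * 440 * t)            # a plain 440 Hz tone

def impulse_response(rt60, rate=16000):
    """Exponentially decaying noise: a crude room impulse response.
    rt60 is the time (s) for the reverberation to decay by 60 dB."""
    n = int(rt60 * rate)
    decay = np.exp(-6.9 * np.arange(n) / n)  # ~60 dB amplitude drop over rt60
    rng = np.random.default_rng(0)
    return rng.normal(size=n) * decay

small_room = np.convolve(dry, impulse_response(0.3))  # short reverb tail
large_room = np.convolve(dry, impulse_response(2.0))  # long reverb tail

# The large-room version rings on much longer after the source stops.
print(len(small_room) / rate, len(large_room) / rate)
```

Longer decay tails are the acoustic cue to a larger enclosed space, which is what the classifier in the study could exploit in the auditory condition.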
5
Liu YF, Rapp B, Bedny M. Reading Braille by Touch Recruits Posterior Parietal Cortex. J Cogn Neurosci 2023; 35:1593-1616. PMID: 37584592; PMCID: PMC10877400; DOI: 10.1162/jocn_a_02041.
Abstract
Blind readers use a tactile reading system consisting of raised dot arrays: braille/⠃⠗⠇. How do human brains implement reading by touch? The current study looked for signatures of reading-specific orthographic processes in braille, separate from low-level somatosensory responses and semantic processes. Of specific interest were responses in posterior parietal cortices (PPCs), because of their role in high-level tactile perception. Congenitally blind, proficient braille readers read real words and pseudowords by touch while undergoing fMRI. We leveraged the system of contractions in English braille, where one braille cell can represent multiple English print letters (e.g., "ing" ⠬, "one" ⠐⠕), making it possible to separate physical and orthographic word length. All words in the study consisted of four braille cells, but their corresponding Roman-letter spellings varied from four to seven letters (e.g., "con-c-er-t" ⠒⠉⠻⠞; contracted: four cells; uncontracted: seven letters). We found that the bilateral supramarginal gyrus in the PPC increased its activity as the uncontracted word length increased. By contrast, in the hand region of primary somatosensory cortex (S1), activity increased as a function of a low-level somatosensory feature: dot number per word. The PPC also showed a greater response to pseudowords than real words and distinguished between real and pseudowords in multivariate pattern analysis. Parieto-occipital, early visual, ventral occipito-temporal, and prefrontal cortices also showed sensitivity to the real-versus-pseudoword distinction. We conclude that PPC is involved in orthographic processing for braille, that is, braille character and word recognition, possibly because of braille's tactile modality.
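The contraction-based length manipulation can be made concrete with a small sketch. The contraction table below is a tiny hypothetical subset, not a full English braille table; it serves only to show how a word's braille-cell count and its uncontracted print-letter count come apart.

```python
# Hypothetical sketch of the contracted-braille manipulation: the same word
# can be short in braille cells but longer in uncontracted print letters.
# CONTRACTIONS maps a letter group to the number of braille cells it takes;
# this is a tiny illustrative subset, not a real braille table.
CONTRACTIONS = {"ing": 1, "one": 2, "er": 1, "con": 1, "ch": 1}

def cell_count(segments):
    """Number of braille cells for a word split into its braille segments.
    Single letters take one cell; contractions take the listed cell count."""
    return sum(CONTRACTIONS.get(seg, len(seg)) for seg in segments)

def letter_count(segments):
    """Length of the word in uncontracted print letters."""
    return sum(len(seg) for seg in segments)

# "concert" written as con-c-er-t: 4 braille cells, 7 print letters.
concert = ["con", "c", "er", "t"]
print(cell_count(concert), letter_count(concert))  # 4 7
```

Holding cell count constant at four while letter count varies from four to seven is what let the study dissociate physical (tactile) length from orthographic length.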
Affiliation(s)
- Yun-Fei Liu
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Brenda Rapp
- Department of Cognitive Science, Johns Hopkins University
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University
6
Lee J, Park S. Multi-modal representation of the size of space in the human brain. bioRxiv 2023:2023.07.24.550343. PMID: 37546991; PMCID: PMC10402083; DOI: 10.1101/2023.07.24.550343.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. Using a multi-voxel pattern classifier, we asked whether the two sizes of space could be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus (AG) and the right IFG pars opercularis had modality-integrated representations, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared to single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Affiliation(s)
- Jaeeun Lee
- Department of Psychology, University of Minnesota, Minneapolis, MN
- Soojin Park
- Department of Psychology, Yonsei University, Seoul, South Korea
7
Verdesoto ESB, Ortiz MYR, Herrera RDJG. A System for Converting and Recovering Texts Managed as Structured Information. Sci Rep 2022; 12:22249. PMID: 36564471; PMCID: PMC9789096; DOI: 10.1038/s41598-022-26304-w.
Abstract
This paper introduces a system that incorporates several strategies based on scientific models of how the brain records and recovers memories. Methodologically, an incremental prototyping approach has been applied to develop a satisfactory architecture that can be adapted to any language. A special case regarding the Spanish language is studied and tested. The applications of this proposal are vast because, in general, information such as free-form text (reports, emails, and web content, among others) is considered unstructured and, hence, repositories based on SQL databases usually do not handle this kind of data correctly and efficiently. The conversion of unstructured textual information into a structured form can be useful in contexts such as Natural Language Generation, Data Mining, and the dynamic generation of theories, among others.
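As a rough illustration of what converting unstructured text into a structured form means in practice (this is not the paper's system, whose architecture is only summarized above), one might extract typed fields from free text so they can be stored as columns in an SQL-style repository:

```python
import re

# Illustrative sketch (not the paper's system): turning a fragment of
# unstructured text into a structured record with typed fields. The text,
# field names, and extraction rules are all hypothetical.
text = "Meeting with Ana on 2022-02-01 about the prototype budget."

record = {
    "persons": re.findall(r"\bwith ([A-Z][a-z]+)", text),
    "dates": re.findall(r"\d{4}-\d{2}-\d{2}", text),
    "keywords": [w for w in ("prototype", "budget", "report") if w in text],
}
print(record)
```

Each field of `record` maps naturally onto a relational column, which is the payoff of structuring: the same content becomes queryable and indexable.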
Affiliation(s)
- Edgardo Samuel Barraza Verdesoto
- Universidad Americana de Europa (UNADE), Cancún, México
- Research Department, Tecnológica Autónoma de Bogotá (FABA), Bogotá, Colombia
- Universidad de Santander (UDES), Bogotá, Colombia
- Marlly Yaneth Rojas Ortiz
- Research Department, Tecnológica Autónoma de Bogotá (FABA), Bogotá, Colombia
8
O’Shea H. Mapping relational links between motor imagery, action observation, action-related language, and action execution. Front Hum Neurosci 2022; 16:984053. DOI: 10.3389/fnhum.2022.984053.
Abstract
Actions can be physically executed, observed, imagined, or simply thought about. Unifying mental processes, such as simulation, emulation, or predictive processing, are thought to underlie different action types, whether they are mental states, as in the case of motor imagery and action observation, or involve physical execution. While overlapping brain activity is typically observed across different actions, indicating commonalities, research interest also lies in investigating the distinct functional components of these action types. Unfortunately, untangling subtleties associated with the neurocognitive bases of different action types is a complex endeavour due to the high-dimensional nature of their neural substrate (e.g., any action process is likely to activate multiple brain regions, leaving multiple dimensions to consider when comparing across them). This has impeded progress in action-related theorising and application. The present study addresses this challenge by using the novel approach of multidimensional modeling to reduce the high-dimensional neural substrate of four action-related behaviours (motor imagery, action observation, action-related language, and action execution), find the smallest number of dimensions that distinguish or relate these action types, and characterise their neurocognitive relational links. Data for the model comprised brain activations for action types from whole-brain analyses reported in 53 published articles. Eighty-two dimensions (i.e., 82 brain regions) for the action types were reduced to a three-dimensional model that mapped action types in ordination space, where the greater the distance between the action types, the more dissimilar they are. A series of one-way ANOVAs and post-hoc comparisons performed on the mean coordinates for each action type in the model showed that, across all action types, action execution and concurrent action observation (AO)-motor imagery (MI) were most neurocognitively similar, while action execution and AO were most dissimilar. Most action types were similar on at least one neurocognitive dimension, the exception being action-related language. The import of the findings is discussed in terms of future research and implications for application.
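The dimensionality-reduction step, from 82 region-wise dimensions down to a three-dimensional ordination space, can be sketched with multidimensional scaling (MDS). The data below are random placeholders, and MDS is one common choice of ordination method rather than necessarily the exact one used in the study.

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative sketch: four hypothetical action types, each described by
# activation values over 82 brain regions (random placeholder data).
rng = np.random.default_rng(1)
action_types = ["motor imagery", "action observation",
                "action-related language", "action execution"]
profiles = rng.random((len(action_types), 82))

# Metric MDS places the four types in a 3-D ordination space where
# inter-point distance approximates dissimilarity between profiles.
mds = MDS(n_components=3, random_state=0)
coords = mds.fit_transform(profiles)

for name, (x, y, z) in zip(action_types, coords):
    print(f"{name}: ({x:.2f}, {y:.2f}, {z:.2f})")
```

The resulting coordinates are what statistics like the ANOVAs above would then be computed on: types that land close together in ordination space are neurocognitively similar.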
9
Wang Z, Xi Q, Zhang H, Song Y, Cao S. Different Neural Activities for Actions and Language within the Shared Brain Regions: Evidence from Action and Verb Generation. Behav Sci (Basel) 2022; 12:243. PMID: 35877314; PMCID: PMC9312291; DOI: 10.3390/bs12070243.
Abstract
The Inferior Frontal Gyrus, Premotor Cortex, and Inferior Parietal Lobe have been suggested to be involved in action and language processing. However, the patterns of neural activity in these shared neural regions are still unclear. This study designed an fMRI experiment to analyze the associations between neural activity for action generation and verb generation for object nouns. Using noun reading as a control task, we compared the differences and similarities of brain regions activated by action and verb generation. The results showed that the action generation task activated the dorsal Premotor Cortex (PMC), parts of the midline of PMC, and the left Inferior Parietal Lobe (IPL) more than the verb generation task. Subregions in the bilateral Supplementary Motor Area (SMA) and the left Inferior Frontal Gyrus (IFG) were found to be shared by action and verb generation. Mean activation level analysis and multi-voxel pattern analysis (MVPA) were then performed in the overlapping activation regions of the two generation tasks. All the shared regions showed different activation patterns across tasks, and the mean activation levels of the shared regions in the bilateral SMA were significantly higher for action generation. Based on the function of these brain regions, it can be inferred that the shared regions in the bilateral SMA and the left IFG process action and language generation in a task-specific and intention-specific manner, respectively.
Affiliation(s)
- Zijian Wang
- School of Computer Science and Technology, Donghua University, Shanghai 200051, China
- Qian Xi
- Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai 200120, China
- Hong Zhang
- Department of Computer Science and Technology, Taiyuan Normal University, Taiyuan 030000, China
- Yalin Song
- School of Software, Henan University, Kaifeng 475000, China
- Shiqi Cao
- Department of Orthopaedics, the Fourth Medical Center, Chinese PLA General Hospital, Beijing 100048, China
- Department of Orthopaedics of TCM Clinical Unit, the Sixth Medical Center, Chinese PLA General Hospital, Beijing 100048, China
10
Hauptman M, Blanco-Elorrieta E, Pylkkänen L. Inflection across Categories: Tracking Abstract Morphological Processing in Language Production with MEG. Cereb Cortex 2021; 32:1721-1736. PMID: 34515304; PMCID: PMC9016284; DOI: 10.1093/cercor/bhab309.
Abstract
Coherent language production requires that speakers adapt words to their grammatical contexts. A fundamental challenge in establishing a functional delineation of this process in the brain is that each linguistic process tends to correlate with numerous others. Our work investigated the neural basis of morphological inflection by measuring magnetoencephalography during the planning of inflected and uninflected utterances that varied across several linguistic dimensions. Results reveal increased activity in the left lateral frontotemporal cortex when inflection is planned, irrespective of phonological specification, syntactic context, or semantic type. Additional findings from univariate and connectivity analyses suggest that the brain distinguishes between different types of inflection. Specifically, planning noun and verb utterances requiring the addition of the suffix -s elicited increased activity in the ventral prefrontal cortex. A broadly distributed effect of syntactic context (verb vs. noun) was also identified. Results from representational similarity analysis indicate that this effect cannot be explained in terms of word meaning. Together, these results 1) offer evidence for a neural representation of abstract inflection that separates from other stimulus properties and 2) challenge theories that emphasize semantic content as a source of verb/noun processing differences.
Affiliation(s)
- Miriam Hauptman
- Department of Psychology, New York University, New York, NY 10003, USA
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, P.O. Box 129188, UAE
- Esti Blanco-Elorrieta
- Department of Psychology, New York University, New York, NY 10003, USA
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY 10003, USA
- NYUAD Institute, New York University Abu Dhabi, Abu Dhabi, P.O. Box 129188, UAE
- Department of Linguistics, New York University, New York, NY 10003, USA
11
Ivanova AA, Mineroff Z, Zimmerer V, Kanwisher N, Varley R, Fedorenko E. The Language Network Is Recruited but Not Required for Nonverbal Event Semantics. Neurobiol Lang 2021; 2:176-201. PMID: 37216147; PMCID: PMC10158592; DOI: 10.1162/nol_a_00030.
Abstract
The ability to combine individual concepts of objects, properties, and actions into complex representations of the world is often associated with language. Yet combinatorial event-level representations can also be constructed from nonverbal input, such as visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task versus a difficult perceptual control task on sentences and line drawings that describe/depict simple agent-patient interactions. We found that the language network responded robustly during the semantic task performed on both sentences and pictures (although its response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on the task of matching sentences to pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent-patient interactions. Overall, our results indicate that the left frontotemporal language network is recruited but not necessary for semantic processing of nonverbally presented events.
Affiliation(s)
- Anna A. Ivanova
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zachary Mineroff
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vitor Zimmerer
- Division of Psychology and Language Sciences, University College London, London, UK
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosemary Varley
- Division of Psychology and Language Sciences, University College London, London, UK
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
12
Pant R, Kanjlia S, Bedny M. A sensitive period in the neural phenotype of language in blind individuals. Dev Cogn Neurosci 2020; 41:100744. PMID: 31999565; PMCID: PMC6994632; DOI: 10.1016/j.dcn.2019.100744.
Abstract
Congenital blindness modifies the neural basis of language: "visual" cortices respond to linguistic information, and fronto-temporal language networks are less left-lateralized. We tested the hypothesis that this plasticity follows a sensitive period by comparing the neural basis of sentence processing between adult-onset blind (AB, n = 16), congenitally blind (CB, n = 22) and blindfolded sighted adults (n = 18). In Experiment 1, participants made semantic judgments for spoken sentences and, in a control condition, solved math equations. In Experiment 2, participants answered "who did what to whom" yes/no questions for grammatically complex (with syntactic movement) and simpler sentences. In a control condition, participants performed a memory task with non-words. In both experiments, visual cortices of CB and AB but not sighted participants responded more to sentences than to control conditions, but the effect was much larger in the CB group. Only the "visual" cortex of CB participants responded to grammatical complexity. Unlike the CB group, the AB group showed no reduction in the left-lateralization of the fronto-temporal language network relative to the sighted. These results suggest that congenital blindness modifies the neural basis of language differently from adult-onset blindness, consistent with a developmental sensitive-period hypothesis.
Affiliation(s)
- Rashi Pant
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA