1. Elwér Å, Andin J. Geometry in the brain optimized for sign language - A unique role of the anterior superior parietal lobule in deaf signers. Brain and Language 2024;253:105416. PMID: 38703524. DOI: 10.1016/j.bandl.2024.105416.
Abstract
Geometry has been identified as a cognitive domain where deaf individuals exhibit relative strength, yet the neural mechanisms underlying geometry processing in this population remain poorly understood. This fMRI study aimed to investigate the neural correlates of geometry processing in deaf and hearing individuals. Twenty-two adult deaf signers and 25 hearing non-signers completed a geometry decision task. We found no group differences in performance, while there were some differences in parietal activation. As expected, the posterior superior parietal lobule (SPL) was recruited for both groups. The anterior SPL was significantly more activated in the deaf group, and the inferior parietal lobule was significantly more deactivated in the hearing group. In conclusion, despite similar performance across groups, there were differences in the recruitment of parietal regions. These differences may reflect inherent differences in brain organization due to different early sensory and linguistic experiences.
Affiliation(s)
- Åsa Elwér: Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Josefine Andin: Department of Behavioural Sciences and Learning, Linköping University, Sweden

2. Stamp R, Cohn D, Hel-Or H, Sandler W. Kinect-ing the Dots: Using Motion-Capture Technology to Distinguish Sign Language Linguistic From Gestural Expressions. Language and Speech 2024;67:255-276. PMID: 37313985. DOI: 10.1177/00238309231169502.
Abstract
Just as vocalization proceeds in a continuous stream in speech, so too do movements of the hands, face, and body in sign languages. Here, we use motion-capture technology to distinguish lexical signs in sign language from other common types of expression in the signing stream. One type of expression is constructed action, the enactment of (aspects of) referents and events by (parts of) the body. Another is classifier constructions, the manual representation of analogue and gradient motions and locations simultaneously with specified referent morphemes. The term signing is commonly used for all of these, but we show that not all visual signals in sign languages are of the same type. In this study of Israeli Sign Language, we use motion capture to show that the motion of lexical signs differs significantly along several kinematic parameters from that of the two other modes of expression: constructed action and the classifier forms. In so doing, we show how motion-capture technology can help to define the universal linguistic category "word," and to distinguish it from the expressive gestural elements that are commonly found across sign languages.
Affiliation(s)
- Rose Stamp: Department of English Literature and Linguistics, Bar-Ilan University, Israel
- Hagit Hel-Or: Department of Computer Science, University of Haifa, Israel
- Wendy Sandler: Sign Language Research Lab, University of Haifa, Israel
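
Entry 2 turns on extracting kinematic parameters from motion-capture recordings and testing whether they separate lexical signs from constructed action and classifier constructions. As a minimal sketch of how such parameters can be computed, the Python snippet below derives a few generic kinematic features from a 3D joint trajectory; the specific features, sampling rate, and function names are illustrative assumptions, not the parameters analyzed by Stamp et al.

```python
import numpy as np

def kinematic_features(positions, fps=120.0):
    """Generic kinematic features from a (T, 3) joint trajectory,
    e.g., a wrist marker from a Kinect-style capture.

    The feature set (peak/mean speed, path length, acceleration
    variance) is an illustrative assumption, not the authors' parameters.
    """
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)        # (T, 3)
    speed = np.linalg.norm(velocity, axis=1)             # scalar speed per frame
    acceleration = np.gradient(velocity, dt, axis=0)
    acc_magnitude = np.linalg.norm(acceleration, axis=1)
    step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return {
        "peak_speed": speed.max(),
        "mean_speed": speed.mean(),
        "path_length": step_lengths.sum(),
        "acceleration_variance": acc_magnitude.var(),
    }

# Toy usage: random-walk "clips" standing in for captured trajectories.
rng = np.random.default_rng(0)
for label in ("lexical_sign", "constructed_action", "classifier"):
    trajectory = rng.normal(scale=0.01, size=(240, 3)).cumsum(axis=0)
    print(label, kinematic_features(trajectory))
```

Features like these, computed per clip, would then be compared across expression types with standard group-level statistics.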

3. Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022;29:62-70. PMID: 35875424. PMCID: PMC9305909. DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this often merely investigates frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. However, recent evidence has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages by accounting for confounds that previously invalidated processing comparisons and by delving into the specific conditions in which they arise. However, these processing differences are often shallowly dismissed as unspecific to language.
Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities and the arguments against these differences' importance. Key distinctions exist in the topography of the left anterior negativity (LAN) and with modulations of event-related potential (ERP) components like the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information to internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, contrary to spoken languages' auditory and visual feedback mechanisms. The only study to find ERP differences post-production revealed earlier lexical access in sign than spoken languages. Themes of temporality, the validity of an analogous anatomical mechanisms viewpoint, and the comprehensiveness of current language models were also discussed to suggest improvements for future research.
Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral in developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell: Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia

4. Emmorey K, Brozdowski C, McCullough S. The neural correlates for spatial language: Perspective-dependent and -independent relationships in American Sign Language and spoken English. Brain and Language 2021;223:105044. PMID: 34741986. PMCID: PMC9578291. DOI: 10.1016/j.bandl.2021.105044.
Abstract
In American Sign Language (ASL) spatial relationships are conveyed by the location of the hands in space, whereas English employs prepositional phrases. Using event-related fMRI, we examined comprehension of perspective-dependent (PD) (left, right) and perspective-independent (PI) (in, on) sentences in ASL and audiovisual English (sentence-picture matching task). In contrast to non-spatial control sentences, PD sentences engaged the superior parietal lobule (SPL) bilaterally for ASL and English, consistent with a previous study with written English. The ASL-English conjunction analysis revealed bilateral SPL activation for PD sentences, but left-lateralized activation for PI sentences. The direct contrast between PD and PI expressions revealed greater SPL activation for PD expressions only for ASL. Increased SPL activation for ASL PD expressions may reflect the mental transformation required to interpret locations in signing space from the signer's viewpoint. Overall, the results suggest both overlapping and distinct neural regions support spatial language comprehension in ASL and English.

5.
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and at the end of the 20th century, based on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.

6. Manhardt F, Brouwer S, Özyürek A. A Tale of Two Modalities: Sign and Speech Influence Each Other in Bimodal Bilinguals. Psychol Sci 2021;32:424-436. PMID: 33621474. DOI: 10.1177/0956797620968789.
Abstract
Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals' expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals' speech and signs are shaped by two languages from different modalities.
Affiliation(s)
- Aslı Özyürek: Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Center for Cognition, Radboud University

7. Shum J, Fanda L, Dugan P, Doyle WK, Devinsky O, Flinker A. Neural correlates of sign language production revealed by electrocorticography. Neurology 2020;95:e2880-e2889. PMID: 32788249. PMCID: PMC7734739. DOI: 10.1212/wnl.0000000000010639.
Abstract
Objective: The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics in comparison with speech production, we used intracranial electrocorticography during a battery of language tasks.
Methods: We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production.
Results: We identified two spatially distinct cortical networks: ventral for speech production and dorsal for sign production. Sign production recruited perirolandic, parietal, and posterior temporal regions, while speech production recruited frontal, perisylvian, and perirolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting its role in planning and producing sign language.
Conclusions: Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.
Affiliation(s)
- Jennifer Shum, Lora Fanda, Patricia Dugan, Werner K. Doyle, Orrin Devinsky, Adeen Flinker: Department of Neurology, Comprehensive Epilepsy Center, and Department of Neurosurgery (W.K.D.), New York University School of Medicine, New York, NY

8. Krebs J, Malaia E, Wilbur RB, Roehm D. Psycholinguistic mechanisms of classifier processing in sign language. J Exp Psychol Learn Mem Cogn 2020;47:998-1011. PMID: 33211523. DOI: 10.1037/xlm0000958.
Abstract
Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion direction within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: Are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the sentence word order of the stimuli (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting the two conditions, which, according to different processing hypotheses, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning.
Affiliation(s)
- Julia Krebs: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
- Evie Malaia: Department of Communicative Disorders, University of Alabama
- Dietmar Roehm: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
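
The N400 effect in entry 8 is conventionally quantified as a condition difference in mean ERP amplitude within a post-stimulus window. The sketch below shows that generic computation on pre-epoched data; the 300-500 ms window and the array layout are textbook assumptions, not the authors' exact pipeline.

```python
import numpy as np

def n400_effect(epochs_osv, epochs_sov, times, window=(0.3, 0.5)):
    """Mean-amplitude N400 quantification (a generic sketch).

    epochs_*: (n_trials, n_channels, n_samples) baseline-corrected EEG
    epochs time-locked to the critical sign; times: (n_samples,) epoch
    times in seconds. Returns the OSV-minus-SOV difference in mean
    amplitude within the window, per channel (more negative values at
    centro-parietal sites indicate a larger N400 for OSV).
    """
    mask = (times >= window[0]) & (times <= window[1])
    erp_osv = epochs_osv.mean(axis=0)   # condition-average ERP
    erp_sov = epochs_sov.mean(axis=0)
    return erp_osv[:, mask].mean(axis=1) - erp_sov[:, mask].mean(axis=1)
```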

9. Sümer B, Özyürek A. No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language 2020;47:1101-1131. PMID: 32178753. DOI: 10.1017/s0305000919000928.
Abstract
Linguistic expressions of locative spatial relations in sign languages are mostly visually motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support, and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (age 3;5-9;11). Unlike previous reports suggesting a boosting effect of iconicity, and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development guiding the acquisition of locative expressions by speaking and signing children.
Affiliation(s)
- Beyza Sümer: Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Department of Linguistics, University of Amsterdam
- Aslı Özyürek: Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

10. Trettenbrein PC, Papitto G, Friederici AD, Zaccarella E. Functional neuroanatomy of language without speech: An ALE meta-analysis of sign language. Hum Brain Mapp 2020;42:699-712. PMID: 33118302. PMCID: PMC7814757. DOI: 10.1002/hbm.25254.
Abstract
Sign language (SL) conveys linguistic information using gestures instead of sounds. Here, we apply a meta‐analytic estimation approach to neuroimaging studies (N = 23; subjects = 316) and ask whether SL comprehension in deaf signers relies on the same primarily left‐hemispheric cortical network implicated in spoken and written language (SWL) comprehension in hearing speakers. We show that: (a) SL recruits bilateral fronto‐temporo‐occipital regions with strong left‐lateralization in the posterior inferior frontal gyrus known as Broca's area, mirroring functional asymmetries observed for SWL. (b) Within this SL network, Broca's area constitutes a hub which attributes abstract linguistic information to gestures. (c) SL‐specific voxels in Broca's area are also crucially involved in SWL, as confirmed by meta‐analytic connectivity modeling using an independent large‐scale neuroimaging database. This strongly suggests that the human brain evolved a lateralized language network with a supramodal hub in Broca's area which computes linguistic information independent of speech.
Affiliation(s)
- Patrick C. Trettenbrein: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
- Giorgio Papitto: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
- Angela D. Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Emiliano Zaccarella: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
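
Activation likelihood estimation (ALE), the method behind entry 10, models each reported activation focus as a 3D Gaussian, combines the foci within an experiment into a modeled-activation (MA) map, and pools MA maps across experiments as a union of probabilities. The toy implementation below captures that core computation under simplifying assumptions (a fixed, peak-normalized kernel, no sample-size weighting, no permutation-based thresholding), so it is a sketch of the idea rather than the published algorithm.

```python
import numpy as np

def ale_map(experiments, shape, sigma_mm=10.0, voxel_mm=2.0):
    """Toy ALE over a voxel grid.

    experiments: list of (n_foci, 3) arrays of voxel coordinates.
    Each focus becomes a peak-normalized isotropic Gaussian; foci
    within one experiment combine into an MA map by voxel-wise max,
    and MA maps combine across experiments as ALE = 1 - prod(1 - MA).
    """
    sigma = sigma_mm / voxel_mm  # kernel width in voxels
    axes = [np.arange(s) for s in shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (X, Y, Z, 3)
    one_minus = np.ones(shape)
    for foci in experiments:
        ma = np.zeros(shape)
        for focus in foci:
            d2 = np.sum((grid - focus) ** 2, axis=-1)
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma**2)))
        one_minus *= 1.0 - ma
    return 1.0 - one_minus

# Tiny example: two "experiments" reporting nearby foci.
experiments = [np.array([[20, 20, 20]]),
               np.array([[21, 20, 20], [5, 30, 12]])]
ale = ale_map(experiments, shape=(40, 40, 40))
```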

11. Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neurosci Biobehav Rev 2020;113:227-237. DOI: 10.1016/j.neubiorev.2020.03.019.

12. de Quadros RM, Davidson K, Lillo-Martin D, Emmorey K. Code-blending with depicting signs. Linguistic Approaches to Bilingualism 2020;10:290-308. PMID: 32922568. PMCID: PMC7486006. DOI: 10.1075/lab.17043.qua.
Abstract
Bimodal bilinguals sometimes use code-blending, the simultaneous production of (parts of) an utterance in both speech and sign. We ask what spoken language material is blended with entity and handling depicting signs (DS), representations of action that combine discrete components with iconic depictions of aspects of a referenced event in a gradient, analog manner. We test a semantic approach on which DS may involve a demonstration, a predicate that obligatorily includes a modificational demonstrational component, and we adopt a syntactic analysis that crucially distinguishes between entity and handling DS. Given the model of bilingualism we use, we expect that both types of DS can be produced with speech that occurs in the verbal structure, along with vocal gestures, but that speech including a subject will be blended only with handling DS, not entity DS. The data we report from three Codas (native bimodal bilinguals), two from the United States and one from Brazil, conform to this prediction.

13. Stroh AL, Rösler F, Dormal G, Salden U, Skotara N, Hänel-Faulhaber B, Röder B. Neural correlates of semantic and syntactic processing in German Sign Language. Neuroimage 2019;200:231-241. PMID: 31220577. DOI: 10.1016/j.neuroimage.2019.06.025.
Abstract
The study of deaf and hearing native users of signed languages can offer unique insights into how biological constraints and environmental input interact to shape the neural bases of language processing. Here, we use functional magnetic resonance imaging (fMRI) to address two questions: (1) Do semantic and syntactic processing in a signed language rely on anatomically and functionally distinct neural substrates as it has been shown for spoken languages? and (2) Does hearing status affect the neural correlates of these two types of linguistic processing? Deaf and hearing native signers performed a sentence judgement task on German Sign Language (Deutsche Gebärdensprache: DGS) sentences which were correct or contained either syntactic or semantic violations. We hypothesized that processing of semantic and syntactic violations in DGS relies on distinct neural substrates as it has been shown for spoken languages. Moreover, we hypothesized that effects of hearing status are observed within auditory regions, as deaf native signers have been shown to activate auditory areas to a greater extent than hearing native signers when processing a signed language. Semantic processing activated low-level visual areas and the left inferior frontal gyrus (IFG), suggesting both modality-dependent and independent processing mechanisms. Syntactic processing elicited increased activation in the right supramarginal gyrus (SMG). Moreover, psychophysiological interaction (PPI) analyses revealed a cluster in left middle occipital regions showing increased functional coupling with the right SMG during syntactic relative to semantic processing, possibly indicating spatial processing mechanisms that are specific to signed syntax. Effects of hearing status were observed in the right superior temporal cortex (STC): deaf but not hearing native signers showed greater activation for semantic violations than for syntactic violations in this region. Taken together, the present findings suggest that the neural correlates of language processing are partly determined by biological constraints, but that they may additionally be influenced by the unique processing demands of the language modality and different sensory experiences.
Affiliation(s)
- Anna-Lena Stroh, Frank Rösler, Giulia Dormal, Uta Salden, Nils Skotara, Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Germany
- Barbara Hänel-Faulhaber: Biological Psychology and Neuropsychology, University of Hamburg, Germany; Special Education, University of Hamburg, Germany
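
The psychophysiological interaction (PPI) analysis in entry 13 tests whether coupling between a seed region (here, the right SMG) and other voxels changes with the linguistic condition. A PPI regressor is the product of the seed time course and the psychological contrast; the sketch below forms it directly on mean-centered signals, a simplification that skips the hemodynamic deconvolution step of a full PPI and is a common approximation for block designs.

```python
import numpy as np

def ppi_regressor(seed_ts, condition):
    """Simplified PPI interaction term (a sketch, not the authors' pipeline).

    seed_ts: (n_scans,) BOLD time course extracted from the seed region.
    condition: (n_scans,) psychological regressor, e.g. +1 for syntactic
    and -1 for semantic scans. The returned term enters the GLM together
    with the seed and condition main effects; voxels where it loads
    significantly show condition-dependent coupling with the seed.
    """
    seed = seed_ts - seed_ts.mean()
    psych = condition - condition.mean()
    return seed * psych
```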

14. Malaia E, Wilbur RB. Visual and linguistic components of short-term memory: Generalized Neural Model (GNM) for spoken and sign languages. Cortex 2019;112:69-79. DOI: 10.1016/j.cortex.2018.05.020.

15. fMRI Evidence of Magnitude Manipulation during Numerical Order Processing in Congenitally Deaf Signers. Neural Plast 2019;2018:2576047. PMID: 30662455. PMCID: PMC6312599. DOI: 10.1155/2018/2576047.
Abstract
Congenital deafness is often compensated for by early sign language use, leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals, and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing nonsigners performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region of interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.

16. Emmorey K. Experimental approaches to studying visible meaning. Theoretical Linguistics 2018;44:259-263. PMID: 31244504. PMCID: PMC6594711. DOI: 10.1515/tl-2018-0018.
Affiliation(s)
- Karen Emmorey (corresponding author): School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA 92120, USA

17. Moreno A, Limousin F, Dehaene S, Pallier C. Brain correlates of constituent structure in sign language comprehension. Neuroimage 2018;167:151-161. PMID: 29175202. PMCID: PMC6044420. DOI: 10.1016/j.neuroimage.2017.11.040.
Abstract
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality.
Affiliation(s)
- Antonio Moreno, Fanny Limousin, Christophe Pallier: Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Stanislas Dehaene: Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
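
Entry 17 searches for regions whose activity grows with constituent size, which is standardly implemented as a parametric modulator in the fMRI general linear model. The sketch below builds a two-column design (mean sequence effect plus mean-centered size modulation) under stated assumptions: onsets rounded to whole scans and a user-supplied HRF sampled at the TR. It is a generic illustration, not the authors' exact model.

```python
import numpy as np

def constituent_size_design(onsets, sizes, n_scans, tr, hrf):
    """Design matrix with a parametric constituent-size modulator.

    onsets: sequence onset times (s), assumed to fall within the run;
    sizes: constituent size per sequence (e.g., number of signs per
    constituent); hrf: HRF samples at the TR. Returns an (n_scans, 2)
    matrix: [mean effect, size modulation], both HRF-convolved.
    """
    main = np.zeros(n_scans)
    modulation = np.zeros(n_scans)
    centered = np.asarray(sizes, dtype=float)
    centered -= centered.mean()
    for onset, size in zip(onsets, centered):
        i = int(round(onset / tr))   # onset rounded to the nearest scan
        main[i] += 1.0
        modulation[i] += size
    convolve = lambda x: np.convolve(x, hrf)[:n_scans]
    return np.column_stack([convolve(main), convolve(modulation)])

# Example with a crude HRF sampled at TR = 2 s (illustrative values).
hrf = np.array([0.0, 0.6, 1.0, 0.8, 0.4, 0.15, 0.05])
X = constituent_size_design(onsets=[0, 20, 40], sizes=[1, 3, 6],
                            n_scans=60, tr=2.0, hrf=hrf)
```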

18. Conder J, Fridriksson J, Baylis GC, Smith CM, Boiteau TW, Almor A. Bilateral parietal contributions to spatial language. Brain and Language 2017;164:16-24. PMID: 27690125. PMCID: PMC5179296. DOI: 10.1016/j.bandl.2016.09.007.
Abstract
It is commonly held that language is largely lateralized to the left hemisphere in most individuals, whereas spatial processing is associated with right hemisphere regions. In recent years, a number of neuroimaging studies have yielded conflicting results regarding the role of language and spatial processing areas in processing language about space (e.g., Carpenter, Just, Keller, Eddy, & Thulborn, 1999; Damasio et al., 2001). In the present study, we used sparse scanning event-related functional magnetic resonance imaging (fMRI) to investigate the neural correlates of spatial language, that is, language used to communicate the spatial relationship of one object to another. During scanning, participants listened to sentences about object relationships that were either spatial or non-spatial in nature (color or size relationships). Sentences describing spatial relationships elicited more activation in the superior parietal lobule (SPL) and precuneus bilaterally in comparison to sentences describing size or color relationships. Activation of the precuneus suggests that spatial sentences elicit spatial mental imagery, while activation of the SPL suggests that sentences containing spatial language involve the integration of two distinct sets of information: linguistic and spatial.
Affiliation(s)
- Julie Conder: McMaster University, Department of Psychology, Neuroscience & Behaviour, Canada
- Julius Fridriksson: University of South Carolina, Department of Communication Sciences and Disorders, United States
- Gordon C. Baylis: Western Kentucky University, Department of Psychological Sciences, United States
- Cameron M. Smith, Timothy W. Boiteau: Department of Psychology, University of South Carolina, United States; Institute for Mind and Brain, University of South Carolina, United States
- Amit Almor: Department of Psychology, University of South Carolina, United States; Institute for Mind and Brain, University of South Carolina, United States; Linguistics Program, University of South Carolina, United States

19. Goldin-Meadow S, Brentari D. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behav Brain Sci 2017;40:e46. PMID: 26434499. PMCID: PMC4821822. DOI: 10.1017/s0140525x15001247.
Abstract
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Affiliation(s)
- Susan Goldin-Meadow: Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL. http://goldin-meadow-lab.uchicago.edu
- Diane Brentari: Department of Linguistics, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL. http://signlanguagelab.uchicago.edu

20. Emmorey K, McCullough S, Weisberg J. The neural underpinnings of reading skill in deaf adults. Brain and Language 2016;160:11-20. PMID: 27448530. DOI: 10.1016/j.bandl.2016.06.007.
Abstract
We investigated word-level reading circuits in skilled deaf readers (N = 14; mean reading age = 19.5 years) and less skilled deaf readers (N = 14; mean reading age = 12 years) who were all highly proficient users of American Sign Language. During fMRI scanning, participants performed a semantic decision (concrete concept?), a phonological decision (two syllables?), and a false-font control task (string underlined?). No significant group differences were observed with the full participant set. However, an analysis with the 10 most and 10 least skilled readers revealed that for the semantic task (vs. control task), proficient deaf readers exhibited greater activation in left inferior frontal and middle temporal gyri than less proficient readers. No group differences were observed for the phonological task. Whole-brain correlation analyses (all participants) revealed that for the semantic task, reading ability correlated positively with neural activity in the right inferior frontal gyrus and in a region associated with the orthography-semantics interface, located anterior to the visual word form area. Reading ability did not correlate with neural activity during the phonological task. Accuracy on the semantic task correlated positively with neural activity in left anterior temporal lobe (a region linked to conceptual processing), while accuracy on the phonological task correlated positively with neural activity in left posterior inferior frontal gyrus (a region linked to syllabification processes during speech production). Finally, reading comprehension scores correlated positively with vocabulary and print exposure measures, but not with phonological awareness scores.
Affiliation(s)
- Karen Emmorey, Stephen McCullough, Jill Weisberg: San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, Suite 200, San Diego, CA 92012, USA

21. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture. Proc Natl Acad Sci U S A 2015;112:11684-9. PMID: 26283352. DOI: 10.1073/pnas.1510527112.
Abstract
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system, gesture, further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages, supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network, demonstrating an influence of experience on the perception of nonlinguistic stimuli.

22. Jednoróg K, Bola Ł, Mostowski P, Szwed M, Boguszewski PM, Marchewka A, Rutkowski P. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language. Neuropsychologia 2015;71:191-200. DOI: 10.1016/j.neuropsychologia.2015.03.031.

23. Kemmerer D. Word classes in the brain: Implications of linguistic typology for cognitive neuroscience. Cortex 2014;58:27-51. DOI: 10.1016/j.cortex.2014.05.004.

24. Emmorey K, McCullough S, Mehta S, Grabowski TJ. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front Psychol 2014;5:484. PMID: 24904497. PMCID: PMC4033845. DOI: 10.3389/fpsyg.2014.00484.
Abstract
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Affiliation(s)
- Karen Emmorey, Stephen McCullough: Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Sonya Mehta: Department of Psychology, University of Washington, Seattle, WA, USA; Department of Radiology, University of Washington, Seattle, WA, USA
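
The conjunction analysis in entry 24 asks which regions both speech and sign comprehension engage. A common implementation is the minimum-statistic conjunction sketched below; whether this matches the authors' exact procedure is an assumption.

```python
import numpy as np

def conjunction_minimum_t(t_map_a, t_map_b, t_crit):
    """Minimum-statistic conjunction over two t-maps.

    A voxel counts as jointly active (e.g., for both speech and sign
    comprehension) only if both t-values exceed the threshold,
    i.e., min(tA, tB) > t_crit, testing the conjunction null.
    """
    return np.minimum(t_map_a, t_map_b) > t_crit
```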