1. Malaia EA, Borneman SC, Borneman JD, Krebs J, Wilbur RB. Prediction underlying comprehension of human motion: an analysis of Deaf signer and non-signer EEG in response to visual stimuli. Front Neurosci 2023;17:1218510. PMID: 37901437; PMCID: PMC10602904; DOI: 10.3389/fnins.2023.1218510.
Abstract
Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognition are not well understood.
Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess the entropy-based relevance of specific frequencies and regions of interest to brain-state classification accuracy.
Results: EEG features highly relevant for classification were distributed across language-processing regions in Deaf signers (frontal cortex and left hemisphere), whereas in non-signers such features were concentrated in visual and spatial processing regions.
Discussion: The results highlight the functional significance of predictive-processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
Affiliation(s)
- Evie A. Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Sean C. Borneman: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Joshua D. Borneman: Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Julia Krebs: Linguistics Department, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Ronnie B. Wilbur: Department of Linguistics, Purdue University, West Lafayette, IN, United States; Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
2. Blasi DE, Henrich J, Adamou E, Kemmerer D, Majid A. Over-reliance on English hinders cognitive science. Trends Cogn Sci 2022;26:1153-1170. PMID: 36253221; DOI: 10.1016/j.tics.2022.09.015.
Abstract
English is the dominant language in the study of human cognition and behavior: the individuals studied by cognitive scientists, as well as most of the scientists themselves, are frequently English speakers. However, English differs from other languages in ways that have consequences for the whole of the cognitive sciences, reaching far beyond the study of language itself. Here, we review an emerging body of evidence that highlights how the particular characteristics of English and the linguistic habits of English speakers bias the field by both warping research programs (e.g., overemphasizing features and mechanisms present in English over others) and overgeneralizing observations from English speakers' behaviors, brains, and cognition to our entire species. We propose mitigating strategies that could help avoid some of these pitfalls.
Affiliation(s)
- Damián E Blasi: Department of Human Evolutionary Biology, Harvard University, 11 Divinity Street, Cambridge, MA 02138, USA; Department of Linguistic and Cultural Evolution, Max Planck Institute for Evolutionary Anthropology, Deutscher Pl. 6, 04103 Leipzig, Germany; Human Relations Area Files, 755 Prospect Street, New Haven, CT 06511-1225, USA
- Joseph Henrich: Department of Human Evolutionary Biology, Harvard University, 11 Divinity Street, Cambridge, MA 02138, USA
- Evangelia Adamou: Languages and Cultures of Oral Tradition lab, National Center for Scientific Research (CNRS), 7 Rue Guy Môquet, 94801 Villejuif, France
- David Kemmerer: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907, USA; Department of Psychological Sciences, Purdue University, 703 3rd Street, West Lafayette, IN 47907, USA
- Asifa Majid: Department of Experimental Psychology, University of Oxford, Woodstock Road, Oxford OX2 6GG, UK
3. Heled E, Ohayon M. Working memory for faces among individuals with congenital deafness. J Am Acad Audiol 2022;33:342-348. PMID: 36446592; DOI: 10.1055/s-0042-1754369.
Abstract
BACKGROUND: Studies examining face processing among individuals with congenital deafness show inconsistent results that are often accounted for by sign language skill. However, working memory for faces, as an aspect of face processing, has not yet been examined in congenital deafness.
PURPOSE: To explore working memory for faces among individuals with congenital deafness who are skilled in sign language.
RESEARCH DESIGN: A quasi-experimental study of individuals with congenital deafness and a control group.
STUDY SAMPLE: Sixteen individuals with congenital deafness who are skilled in sign language and 18 participants with intact hearing, matched for age and education.
INTERVENTION: Participants performed two conditions of the N-back test in ascending difficulty (1-back and 2-back).
DATA COLLECTION AND ANALYSIS: Levene's and Shapiro-Wilk tests were used to assess group homoscedasticity and normality, respectively. A two-way repeated-measures analysis of variance compared the groups on response time and accuracy in the N-back test, and Pearson correlations related response time and accuracy to duration of sign language use.
RESULTS: The congenital deafness group responded faster than controls, with no group difference in accuracy; an interaction effect showed that this response-time advantage was significant in the 1-back but not the 2-back condition. Further, a marginal effect in response time and a significant effect in accuracy indicated worse performance on the 2-back than on the 1-back condition. Neither response time nor accuracy correlated significantly with duration of sign language use.
CONCLUSION: The face-processing advantage associated with congenital deafness depends on cognitive load, but sign language duration does not affect this trend. In addition, response time and accuracy are not equally sensitive to performance differences in the N-back test.
Affiliation(s)
- Eyal Heled: Department of Psychology, Ariel University, Ariel, Israel; Department of Neurological Rehabilitation, Sheba Medical Center, Ramat-Gan, Israel
- Maayon Ohayon: Department of Psychology, Ariel University, Ariel, Israel
4. Radošević T, Malaia EA, Milković M. Predictive processing in sign languages: a systematic review. Front Psychol 2022;13:805792. PMID: 35496220; PMCID: PMC9047358; DOI: 10.3389/fpsyg.2022.805792.
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest), as well as the reference lists and forward citations of records selected for the review, to identify all relevant publications. Records were screened against five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, with expertise in sign language processing and a variety of research methods, reviewed the results; disagreements were resolved through extensive discussion. Seven records were included in the final review: five published articles and two dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed the motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli, and most focused on semantic prediction. The question of the mechanism for the interaction between one's sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains open, primarily due to the scarcity of participants with varying degrees of language dominance. Evidence for PP in sign languages remains sparse, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Affiliation(s)
- Tomislav Radošević: Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia: Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, United States
- Marina Milković: Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
5. Heled E, Ohayon M, Oshri O. Working memory in intact modalities among individuals with sensory deprivation. Heliyon 2022;8:e09558. PMID: 35706957; PMCID: PMC9189883; DOI: 10.1016/j.heliyon.2022.e09558.
6. Caldwell HB. Sign and spoken language processing differences in the brain: a brief review of recent research. Ann Neurosci 2022;29:62-70. PMID: 35875424; PMCID: PMC9305909; DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this often merely investigates frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. However, recent evidence has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages, by accounting for confounds that previously invalidated processing comparisons and by delving into the specific conditions in which the differences arise. These processing differences are nevertheless often shallowly dismissed as unspecific to language.
Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities, and the arguments against these differences' importance. Key distinctions exist in the topography of the left anterior negativity (LAN) and in modulations of event-related potential (ERP) components such as the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information onto internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, in contrast to the auditory and visual feedback mechanisms of spoken languages. The only study to find ERP differences post-production revealed earlier lexical access in sign than in spoken languages. Themes of temporality, the validity of an analogous-anatomical-mechanisms viewpoint, and the comprehensiveness of current language models are also discussed, with suggested improvements for future research.
Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities, extending beyond simple differences between languages. Consideration and further exploration of these differences will be integral to developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell: Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
7. Krebs J, Roehm D, Wilbur RB, Malaia EA. Age of sign language acquisition has lifelong effect on syntactic preferences in sign language users. Int J Behav Dev 2021;45:397-408. PMID: 34690387; DOI: 10.1177/0165025420958193.
Abstract
Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue, because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn it later, a situation drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range: 28-58 years) with early (0-3 years) or later (4-7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject-object-verb vs. object-subject-verb) were examined in (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, which are uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
Affiliation(s)
- Julia Krebs: Research group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria
- Dietmar Roehm: Research group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria
- Ronnie B Wilbur: Linguistics Program and Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907, USA
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL 35404, USA
8. Heled E, Ohayon M. Visuospatial and tactile working memory in individuals with congenital deafness. J Deaf Stud Deaf Educ 2021;26:314-321. PMID: 34007997; DOI: 10.1093/deafed/enab005.
Abstract
Studies examining visuospatial working memory (WM) in individuals with congenital deafness have yielded inconsistent results, and tactile WM has rarely been examined. The current study administered WM span tasks in the two modalities to 20 individuals with congenital deafness and 20 participants with typical hearing. The congenital deafness group had longer forward and backward spans than typical-hearing participants on a computerized Corsi block-tapping test (Visuospatial Span), whereas no such difference was found on the Tactual Span (tactile WM). In the congenital deafness group, age of sign language acquisition was not correlated with either condition of the visuospatial task, and Tactual and Visuospatial Span scores were correlated in the backward but not the forward condition. The typical-hearing group showed no correlation between the tasks. The findings suggest that early deafness leads to visuospatial but not tactile superiority in WM, specifically with respect to the storage component. More broadly, it appears that deafness-related compensation mechanisms in WM do not affect the other modalities in a uniform manner.
Affiliation(s)
- Eyal Heled: Department of Psychology, Ariel University, Israel; Department of Neurological Rehabilitation, Sheba Medical Center, Israel
9. Malaia EA, Krebs J, Roehm D, Wilbur RB. Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain Lang 2020;200:104708. PMID: 31698097; PMCID: PMC6934356; DOI: 10.1016/j.bandl.2019.104708.
Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition correlated negatively with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
Affiliation(s)
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Speech and Hearing Clinic, 700 Johnny Stallings Drive, Tuscaloosa, AL 35401, USA
- Julia Krebs: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Dietmar Roehm: Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Ronnie B Wilbur: Department of Linguistics, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA; Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA
10. Malaia EA, Wilbur RB. Syllable as a unit of information transfer in linguistic communication: the entropy syllable parsing model. Wiley Interdiscip Rev Cogn Sci 2019;11:e1518. PMID: 31505710; DOI: 10.1002/wcs.1518.
Abstract
To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4-5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language.
Affiliation(s)
- Evie A Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama
- Ronnie B Wilbur: Department of Speech, Language, and Hearing Sciences, College of Health and Human Sciences, Purdue University, West Lafayette, Indiana; Linguistics, School of Interdisciplinary Studies, College of Liberal Arts, Purdue University, West Lafayette, Indiana
11. Introduction to impairments of short-term memory buffers: do they exist? Cortex 2019;112:1-4. DOI: 10.1016/j.cortex.2019.02.002.
12. Blumenthal-Dramé A, Malaia E. Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdiscip Rev Cogn Sci 2018;10:e1484. PMID: 30417551; DOI: 10.1002/wcs.1484.
Abstract
This review compares how humans process action and language sequences produced by other humans. On the one hand, we identify commonalities between action and language processing in terms of cognitive mechanisms (e.g., perceptual segmentation, predictive processing, integration across multiple temporal scales), neural resources (e.g., the left inferior frontal cortex), and processing algorithms (e.g., comprehension based on changes in signal entropy). On the other hand, drawing on sign language with its particularly strong motor component, we also highlight what differentiates (both oral and signed) linguistic communication from nonlinguistic action sequences. We propose the multiscale information transfer framework (MSIT) as a way of integrating these insights and highlight directions into which future empirical research inspired by the MSIT framework might fruitfully evolve. This article is categorized under: Psychology > Language; Linguistics > Language in Mind and Brain; Psychology > Motor Skill and Performance; Psychology > Prediction.
Affiliation(s)
- Alice Blumenthal-Dramé: Department of English, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany; Freiburg Institute for Advanced Studies, Freiburg, Germany
- Evie Malaia: Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama; Freiburg Institute for Advanced Studies, Freiburg, Germany