1
Loos C, German A, Meier RP. Simultaneous structures in sign languages: Acquisition and emergence. Front Psychol 2022;13:992589. PMID: 36619119; PMCID: PMC9815181; DOI: 10.3389/fpsyg.2022.992589.
Abstract
The visual-gestural modality affords its users simultaneous movement of several independent articulators and thus lends itself to simultaneous encoding of information. Much research has focused on the fact that sign languages coordinate two manual articulators in addition to a range of non-manual articulators to present different types of linguistic information simultaneously, from phonological contrasts to inflection, spatial relations, and information structure. Children and adults acquiring a signed language thus arguably need to comprehend and produce simultaneous structures to a greater extent than individuals acquiring a spoken language. In this paper, we discuss the simultaneous encoding that is found in emerging and established sign languages; we also discuss places where sign languages are unexpectedly sequential. We explore potential constraints on simultaneity in cognition and motor coordination that might impact the acquisition and use of simultaneous structures.
Affiliation(s)
- Cornelia Loos
- Institute of German Sign Language and Communication of the Deaf, Universität Hamburg, Hamburg, Germany
- Austin German
- Department of Linguistics, University of Texas at Austin, Austin, TX, United States
- Richard P. Meier
- Department of Linguistics, University of Texas at Austin, Austin, TX, United States
2
Tomaszewski P, Krzysztofiak P, Morford JP, Eźlakowski W. Effects of Age-of-Acquisition on Proficiency in Polish Sign Language: Insights to the Critical Period Hypothesis. Front Psychol 2022;13:896339. PMID: 35693522; PMCID: PMC9174753; DOI: 10.3389/fpsyg.2022.896339.
Abstract
This study focuses on the relationship between the age of acquisition of Polish Sign Language (PJM) by deaf individuals and their receptive language skills at the phonological, morphological, and syntactic levels. Sixty Deaf signers of PJM were recruited into three equal groups (n = 20): (1) a group exposed to PJM from birth by their deaf parents; (2) a group of childhood learners of PJM, who reported learning PJM between the ages of 4 and 8; and (3) a group of adolescent learners of PJM, who reported learning PJM between the ages of 9 and 13. The PJM Perception and Comprehension Test was used to assess three aspects of language processing: phonological, morphological, and syntactic. Participants were asked to decide whether a series of signs and sentences were acceptable in PJM. Results show that the age of PJM acquisition has a significant impact on performance on this task: the earlier deaf people had acquired PJM, the more likely they were to distinguish signs and sentences considered permissible and impermissible in PJM by native signers. Native signers had significantly greater accuracy on the phonological, morphological, and syntactic items than either the Childhood or the Adolescent signers. Further, the Childhood signers had significantly greater accuracy than the Adolescent signers on all three parts of the test. Comparing performance on specific structures targeted within each part of the test revealed that multi-channel signs and negative suffixes posed the greatest challenge for Adolescent signers relative to the Native signers. These results provide evidence from a less commonly studied signed language that the age of onset of first language acquisition affects ultimate outcomes in language acquisition across all levels of grammatical structure. In addition, this research corroborates prior studies demonstrating that the critical period is independent of language modality. Contrary to a common public health assumption that early exposure to language is less vital to signed than to spoken language development, the results of this study demonstrate that early exposure to a signed language promotes sensitivity to phonological, morphological, and syntactic patterns in language.
Affiliation(s)
- Piotr Krzysztofiak
- Faculty of Psychology, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Jill P. Morford
- Department of Linguistics, University of New Mexico, Albuquerque, NM, United States
3
Abner N, Namboodiripad S, Spaepen E, Goldin-Meadow S. Emergent Morphology in Child Homesign: Evidence from Number Language. Language Learning and Development 2021;18:16-40. PMID: 35603228; PMCID: PMC9122328; DOI: 10.1080/15475441.2021.1922281.
Abstract
Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, where complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in a communication system developed without input from a conventional language, homesign. We study the communication systems of four deaf child homesigners (mean age 8;02). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners not only incorporate number devices into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.
Affiliation(s)
- Natasha Abner
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Savithry Namboodiripad
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
4
Goldin-Meadow S. Discovering the Biases Children Bring to Language Learning. Child Development Perspectives 2020. DOI: 10.1111/cdep.12379.
5
Evolving artificial sign languages in the lab: From improvised gesture to systematic sign. Cognition 2019;192:103964. PMID: 31302362; DOI: 10.1016/j.cognition.2019.05.001.
Abstract
Recent work on emerging sign languages provides evidence for how key properties of linguistic systems are created. Here we use laboratory experiments to investigate the contribution of two specific mechanisms, interaction and transmission, to the emergence of a manual communication system in silent gesturers. We show that the combined effects of these mechanisms, rather than either alone, maintain communicative efficiency and lead to a gradual increase of regularity and systematic structure. The gestures initially produced by participants are unsystematic and resemble pantomime, but come to develop key language-like properties similar to those documented in newly emerging sign systems.
6
Abstract
What role does language play in our thoughts? A longstanding proposal that has gained traction among supporters of embodied or grounded cognition suggests that it serves as a cognitive scaffold. This idea turns on the fact that language, with its ability to capture statistical regularities, leverage culturally acquired information, and engage grounded metaphors, is an effective and readily available support for our thinking. In this essay, I argue that language should be viewed as more than this; it should be viewed as a neuroenhancement. The neurologically realized language system is an important subcomponent of a flexible, multimodal, and multilevel conceptual system. It is not merely a source for information about the world but also a computational add-on that extends our conceptual reach. This approach provides a compelling explanation of the course of development, our facility with abstract concepts, and even the scope of language-specific influences on cognition.
Affiliation(s)
- Guy Dove
- Department of Philosophy, University of Louisville, Louisville, KY, USA
7
Abstract
The commentaries have led us to entertain expansions of our paradigm to include new theoretical questions, new criteria for what counts as a gesture, and new data and populations to study. The expansions further reinforce the approach we took in the target article: namely, that linguistic and gestural components are two distinct yet integral sides of communication, which need to be studied together.
8
Abstract
Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of format: a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) the manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in our analysis of language in two contexts, when it takes on the primary function of communication (sign language) and when it takes on a complementary communicative function (gesture), we gain new perspectives on the origins and continuing development of language.
9
Cartmill EA, Rissman L, Novack M, Goldin-Meadow S. The development of iconicity in children's co-speech gesture and homesign. Language, Interaction and Acquisition 2017;8:42-68. PMID: 29034011; DOI: 10.1075/lia.8.1.03car.
Abstract
Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children's use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture - the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1-5-year-olds, and how it relates to the semantic content of children's communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children's ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.
10
Rissman L, Goldin-Meadow S. The Development of Causal Structure without a Language Model. Language Learning and Development 2017;13:286-299. PMID: 28983210; PMCID: PMC5624539; DOI: 10.1080/15475441.2016.1254633.
Abstract
Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago
- Center for Gesture, Sign, and Language, University of Chicago
11
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, Illinois 60637
12
Goldin-Meadow S, Brentari D. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behav Brain Sci 2017;40:e46. PMID: 26434499; PMCID: PMC4821822; DOI: 10.1017/s0140525x15001247.
Abstract
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Affiliation(s)
- Susan Goldin-Meadow
- Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL (goldin-meadow-lab.uchicago.edu)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL (signlanguagelab.uchicago.edu)
13
Roberts G, Lewandowski J, Galantucci B. How communication changes when we cannot mime the world: Experimental evidence for the effect of iconicity on combinatoriality. Cognition 2015;141:52-66. PMID: 25919085; DOI: 10.1016/j.cognition.2015.04.001.
Abstract
Communication systems are exposed to two different pressures: a pressure for transmission efficiency, such that messages are simple to produce and perceive, and a pressure for referential efficiency, such that messages are easy to understand with their intended meaning. A solution to the first pressure is combinatoriality, the recombination of a few basic meaningless forms to express an infinite number of meanings. A solution to the second is iconicity, the use of forms that resemble what they refer to. These two solutions appear to be incompatible with each other, as iconic forms are ill-suited for use as meaningless combinatorial units. Furthermore, in the early stages of a communication system, when basic referential forms are in the process of being established, the pressure for referential efficiency is likely to be particularly strong, which may lead it to trump the pressure for transmission efficiency. This means that, where iconicity is available as a strategy, it is likely to impede the emergence of combinatoriality. Although this hypothesis seems consistent with some observations of natural language, it was unclear until recently how it could be soundly tested. This has changed thanks to the development of a line of research, known as Experimental Semiotics, in which participants construct novel communication systems in the laboratory using an unfamiliar medium. We conducted an Experimental Semiotic study in which we manipulated the opportunity for iconicity by varying the kind of referents to be communicated, while keeping the communication medium constant. We then measured the combinatoriality and transmission efficiency of the resulting communication systems. We found that, where iconicity was available, it provided scaffolding for the construction of communication systems and was overwhelmingly adopted. Where it was not available, however, the resulting communication systems were more combinatorial and their forms more efficient to produce. This study deepens our understanding of the fundamental design principles of human communication and contributes tools for investigating them further.
Affiliation(s)
- Gareth Roberts
- Department of Psychology, Yeshiva University, New York, NY, USA; Department of Linguistics, University of Pennsylvania, Philadelphia, PA, USA
- Bruno Galantucci
- Department of Psychology, Yeshiva University, New York, NY, USA; Haskins Laboratories, New Haven, CT, USA
14
Goldin-Meadow S. Studying the mechanisms of language learning by varying the learning environment and the learner. Language, Cognition and Neuroscience 2015;30:899-911. PMID: 26668813; PMCID: PMC4676577; DOI: 10.1080/23273798.2015.1016978.
Abstract
Language learning is a resilient process, and many linguistic properties can develop across a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model (the resilient properties of language) and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner; that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA
15
Goldin-Meadow S, Brentari D, Coppola M, Horton L, Senghas A. Watching language grow in the manual modality: nominals, predicates, and handshapes. Cognition 2015;136:381-95. PMID: 25546342; PMCID: PMC4308574; DOI: 10.1016/j.cognition.2014.11.029.
Abstract
All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition: terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language, Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates: they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object. This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.
Affiliation(s)
- M Coppola
- University of Connecticut, United States
- L Horton
- University of Chicago, United States
16
Ozyürek A, Furman R, Goldin-Meadow S. On the way to language: event segmentation in homesign and gesture. J Child Lang 2015;42:64-94. PMID: 24650738; PMCID: PMC4169751; DOI: 10.1017/s0305000913000512.
Abstract
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. The mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
Affiliation(s)
- Asli Ozyürek
- Radboud University Nijmegen and Max Planck Institute for Psycholinguistics, the Netherlands
17
Goldin-Meadow S. The impact of time on predicate forms in the manual modality: signers, homesigners, and silent gesturers. Top Cogn Sci 2015;7:169-84. PMID: 25329421; PMCID: PMC4310783; DOI: 10.1111/tops.12119.
Abstract
It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans: by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers. It finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.
18
Morgan G. On language acquisition in speech and sign: development of combinatorial structure in both modalities. Front Psychol 2014;5:1217. PMID: 25426085; PMCID: PMC4227467; DOI: 10.3389/fpsyg.2014.01217.
Abstract
Languages are composed of a conventionalized system of parts which allows speakers and signers to generate an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus be expected to be a different process. Yet in one sense both groups are faced with a similar problem: “how do I make a language with combinatorial structure”? In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and productivity in using verb morphology by children in sign and speech.
Affiliation(s)
- Gary Morgan
- Language and Communication Science, City University London, London, UK
19
Goldin-Meadow S, Namboodiripad S, Mylander C, Özyürek A, Sancar B. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. J Cogn Dev 2014;16:55-80. PMID: 25663828; DOI: 10.1080/15248372.2013.803970.
Abstract
Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language, the so-called resilient properties of language. We explored the resilience of structure built around the predicate, in particular how manner and path are mapped onto the verb, in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
Collapse
Affiliation(s)
| | | | | | - Aslı Özyürek
- Max Planck Institute for Psycholinguistics, Nijmegen; Radboud University, Nijmegen
| | | |
Collapse
|
20
|
Coppola M, Brentari D. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Front Psychol 2014; 5:830. [PMID: 25191283 PMCID: PMC4139701 DOI: 10.3389/fpsyg.2014.00830] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Accepted: 07/11/2014] [Indexed: 11/25/2022] Open
Abstract
Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent ("hand-as-hand" iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent ("hand-as-object" iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, "Julio," age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. 
Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.
Collapse
Affiliation(s)
- Marie Coppola
- Departments of Psychology and Linguistics, Language Creation Laboratory, University of Connecticut, Storrs, CT, USA
| | - Diane Brentari
- Department of Linguistics, Sign Language Laboratory, University of Chicago, Chicago, IL, USA
| |
Collapse
|
21
|
Caselli NK, Cohen-Goldberg AM. Lexical access in sign language: a computational model. Front Psychol 2014; 5:428. [PMID: 24860539 PMCID: PMC4030144 DOI: 10.3389/fpsyg.2014.00428] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2014] [Accepted: 04/22/2014] [Indexed: 11/13/2022] Open
Abstract
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
Collapse
|
22
|
Applebaum L, Coppola M, Goldin-Meadow S. Prosody in a communication system developed without a language model. SIGN LANGUAGE AND LINGUISTICS 2014; 17:181-212. [PMID: 25574153 PMCID: PMC4285364 DOI: 10.1075/sll.17.2.02app] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Prosody, the "music" of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, features known to mark phrase and utterance boundaries in established sign languages were used to consistently mark the ends of utterances, but not to mark phrase or utterance internal boundaries. A single child can thus develop the seeds of a prosodic system, but full elaboration may require more time, more users, or even more generations to blossom.
Collapse
|
23
|
|
24
|
Coppola M, Spaepen E, Goldin-Meadow S. Communicating about quantity without a language model: number devices in homesign grammar. Cogn Psychol 2013; 67:1-25. [PMID: 23872365 PMCID: PMC3870334 DOI: 10.1016/j.cogpsych.2013.05.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2012] [Revised: 12/05/2012] [Accepted: 05/03/2013] [Indexed: 11/16/2022]
Abstract
All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input.
Collapse
Affiliation(s)
- Marie Coppola
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| | - Elizabet Spaepen
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| | - Susan Goldin-Meadow
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| |
Collapse
|
25
|
Hunsicker D, Goldin-Meadow S. How handshape type can distinguish between nouns and verbs in homesign. GESTURE (AMSTERDAM, NETHERLANDS) 2013; 13:354-376. [PMID: 25435844 PMCID: PMC4245027 DOI: 10.1075/gest.13.3.05hun] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.
Collapse
|
26
|
So WC, Coppola M, Licciardello V, Goldin-Meadow S. The seeds of spatial grammar in the manual modality. Cogn Sci 2005; 29:1029-43. [PMID: 21702801 DOI: 10.1207/s15516709cog0000_38] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities, that is, to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under 2 conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.
Collapse
|
27
|
Brentari D, Coppola M, Mazzoni L, Goldin-Meadow S. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. NATURAL LANGUAGE & LINGUISTIC THEORY 2012; 30:1-31. [PMID: 23723534 PMCID: PMC3665423 DOI: 10.1007/s11049-011-9145-1] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. 
These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.
Collapse
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, 1010 East 59th Street, Chicago, IL 60637-1512, USA
| | - Marie Coppola
- Departments of Psychology and Linguistics, University of Connecticut, Storrs, CT, USA
| | - Laura Mazzoni
- Linguistics Department, University of Pisa, Pisa, Italy
| | - Susan Goldin-Meadow
- Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL, USA
| |
Collapse
|
28
|
Sandler W, Aronoff M, Meir I, Padden C. The gradual emergence of phonological form in a new language. NATURAL LANGUAGE & LINGUISTIC THEORY 2011; 29:503-543. [PMID: 22223927 PMCID: PMC3250231 DOI: 10.1007/s11049-011-9128-2] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
The division of linguistic structure into a meaningless (phonological) level and a meaningful level of morphemes and words is considered a basic design feature of human language. Although established sign languages, like spoken languages, have been shown to be characterized by this bifurcation, no information has been available about the way in which such structure arises. We report here on a newly emerging sign language, Al-Sayyid Bedouin Sign Language, which functions as a full language but in which a phonological level of structure has not yet emerged. Early indications of formal regularities provide clues to the way in which phonological structure may develop over time.
Collapse
Affiliation(s)
- Wendy Sandler
- Department of English Language and Literature, University of Haifa, 31905 Haifa, Israel
| | - Mark Aronoff
- Department of Linguistics, SUNY Stony Brook, Stony Brook, NY 11794-4376, USA
| | - Irit Meir
- Department of Hebrew Language, Department of Communication Disorders, University of Haifa, 31905 Haifa, Israel
| | - Carol Padden
- Department of Communication, University of California San Diego, La Jolla, CA 92093-0503, USA
| |
Collapse
|
29
|
Franklin A, Giannakidou A, Goldin-Meadow S. Negation, questions, and structure building in a homesign system. Cognition 2010; 118:398-416. [PMID: 23630971 DOI: 10.1016/j.cognition.2010.08.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2009] [Revised: 08/09/2010] [Accepted: 08/27/2010] [Indexed: 10/18/2022]
Abstract
Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, ϕ) and yield another proposition that is more complex (¬ϕ for negation; ?ϕ for question). Combining ϕ with ¬ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child's gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process (Goldin-Meadow, 2003), and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.
Collapse
Affiliation(s)
- Amy Franklin
- University of Texas Health Science Center Houston, School of Biomedical Informatics, Center for Cognitive Informatics and Decision Making, Houston, Texas, United States.
| | | | | |
Collapse
|
30
|
Abstract
Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.
Collapse
|
31
|
Goldin-Meadow S. Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua: Commentary on Senghas. Hum Dev 2010; 53:303-311. [PMID: 22476199 DOI: 10.1159/000321294] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
32
|
Morgan G, Herman R, Barriere I, Woll B. The onset and mastery of spatial language in children acquiring British Sign Language. COGNITIVE DEVELOPMENT 2008. [DOI: 10.1016/j.cogdev.2007.09.003] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
33
|
Slobin DI. Putting the pieces together: Commentary on “The onset and mastery of spatial language in children acquiring British sign language” by G. Morgan, R. Herman, I. Barriere, and B. Woll. COGNITIVE DEVELOPMENT 2008. [DOI: 10.1016/j.cogdev.2007.09.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
34
|
Baker SA, Golinkoff RM, Petitto LA. New Insights Into Old Puzzles From Infants' Categorical Discrimination of Soundless Phonetic Units. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2006; 2:147-162. [PMID: 19823599 PMCID: PMC2759762 DOI: 10.1207/s15473341lld0203_1] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
For 4 decades, serious scientific debate has persisted as to whether infants' remarkable capacity to detect and categorize phonetic units is derived from language-specific mechanisms or whether this capacity develops out of general perceptual mechanisms. The heart of this controversy has revolved around whether the young human brain is specialized to detect the underlying contrasting patterns in language or whether it simply processes general auditory perceptual features of sound that, over time, become utilized for language learning. This article takes a novel look at this question by using soundless phonetic units from a natural signed language as a new research tool. Research finds that 4-month-old hearing infants categorize soundless phonetic units on the basis of linguistic category membership, whereas 14-month-old infants fail to do so, thereby exhibiting the identical initial capacity and classic developmental shift in infant categorical discrimination of native and nonnative (foreign language) phonetic units in speech. These results suggest a novel testable hypothesis: Infants may begin life with the capacity to detect specific patterned units with alternating contrasts unique to natural language organization and to categorize them on the basis of linguistic category membership.
Collapse
Affiliation(s)
- Stephanie A Baker
- Departments of Psychological & Brain Sciences and Education, Dartmouth College, Hanover
| | | | | |
Collapse
|
35
|
Sheridan SR. A theory of marks and mind: the effect of notational systems on hominid brain evolution and child development with an emphasis on exchanges between mothers and children. Med Hypotheses 2004; 64:417-27. [PMID: 15607580 DOI: 10.1016/j.mehy.2004.09.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2004] [Accepted: 09/06/2004] [Indexed: 11/19/2022]
Abstract
A model of human language requires a theory of meaningful marks. Humans are the only species who use marks to think. A theory of marks identifies children's scribbles as significant behavior, while hypothesizing the importance of notational systems to hominid brain evolution. By recognizing the importance of children's scribbles and drawings in developmental terms as well as in evolutionary terms, a marks-based rather than a predominantly speech-based theory of the human brain, language, and consciousness emerges. Combined research in anthropology, primatology, art history, neurology, child development (including research with deaf and blind children), gender studies and literacy suggests the importance of notational systems to human language, revealing the importance of mother/child interactions around marks and sounds to the development of an expressive, communicative, symbolic human brain. An understanding of human language is enriched by identifying marks carved on bone 1.9 million years ago as observational lunar calendar-keeping, pushing proto-literacy back dramatically. Neurologically, children recapitulate the meaningful marks of early hominins when they scribble and draw, reminding us that literacy belongs to humankind's earliest history. Even more than speech, such meaningful marks played - and continue to play - decisive roles in human brain evolution. The hominid brain required a model for integrative, transformative neural transfer. The research strongly suggests that humankind's multiple literacies (art, literature, scientific writing, mathematics and music) depended upon dyadic exchanges between hominid mothers and children, and that this exchange and sharing of visuo-spatial information drove the elaboration of human speech in terms of syntax, grammar and vocabulary. The human brain was spatial before it was linguistic. The child scribbles and draws before it speaks or writes. Children babble and scribble within the first two years of life. 
Hands and mouths are proximal on the sensory-motor cortex. Gestures accompany speech. Illiterate brains mis-pronounce nonsense sounds. Literate brains do not. Written language (work of the hands) enhances spoken language (work of the mouth). Until brain scans map the neurological links between human gesture, speech, and marks in the context of mother/caregiver/child interactions, and until research with literate and illiterate brains documents even more precisely the long-term differences between these brains, the evolutionary pressure that marks exerted 1.9 million years ago on especially flexible maternal and infant brain tissue, radically changing primate brain capabilities, requires an integrated theory of marks and mind.
Collapse
|
36
|
|
37
|
Zheng M, Goldin-Meadow S. Thought before language: how deaf and hearing children express motion events across cultures. Cognition 2002; 85:145-75. [PMID: 12127697 DOI: 10.1016/s0010-0277(02)00105-1] [Citation(s) in RCA: 101] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Do children come to the language-learning situation with a predetermined set of ideas about motion events that they want to communicate? If so, is the expression of these ideas modified by exposure to a language model within a particular cultural context? We explored these questions by comparing the gestures produced by Chinese and American deaf children who had not been exposed to a usable conventional language model with the speech of hearing children learning Mandarin or English. We found that, even in the absence of any conventional language model, deaf children conveyed the central elements of a motion event in their communications. More surprisingly, deaf children growing up in an American culture used their gestures to express motion events in precisely the same ways as deaf children growing up in a Chinese culture. In contrast, hearing children in the two cultures expressed motion events differently, in accordance with the languages they were learning. The American children obeyed the patterns of English and rarely omitted words for figures or agents. The Chinese children had more flexibility as Mandarin permits (but does not demand) deletion. Interestingly, the Chinese hearing children's descriptions of motion events resembled the deaf children's descriptions more closely than did the American hearing children's. The thoughts that deaf children convey in their gestures thus may serve as the starting point and perhaps a default for all children as they begin the process of grammaticization: thoughts that have not yet been filtered through a language model.
Collapse
Affiliation(s)
- Mingyu Zheng
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA.
| | | |
Collapse
|
38
|
Abstract
Do language abilities develop in isolation? Are they mediated by a unique neural substrate, a "mental organ" devoted exclusively to language? Or is language built upon more general abilities, shared with other cognitive domains, and mediated by common neural systems? Here, we review results suggesting that language and gesture are "close family", then turn to evidence that raises questions about how real those "family resemblances" are, summarizing dissociations from our developmental studies of several different child populations. We then examine both these veins of evidence in light of some new findings from the adult neuroimaging literature and suggest a possible reinterpretation of these dissociations as well as new directions for research with both children and adults.
Affiliation(s)
- Elizabeth Bates
- Center for Research in Language and Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093-0526, USA
|
39
|
Abstract
People move their hands as they talk - they gesture. Gesturing is a robust phenomenon, found across cultures, ages, and tasks. Gesture is even found in individuals blind from birth. But what purpose, if any, does gesture serve? In this review, I begin by examining gesture when it stands on its own, substituting for speech and clearly serving a communicative function. When called upon to carry the full burden of communication, gesture assumes a language-like form, with structure at word and sentence levels. However, when produced along with speech, gesture assumes a different form - it becomes imagistic and analog. Despite its form, the gesture that accompanies speech also communicates. Trained coders can glean substantive information from gesture - information that is not always identical to that gleaned from speech. Gesture can thus serve as a research tool, shedding light on speakers' unspoken thoughts. The controversial question is whether gesture conveys information to listeners who are not trained to read it. Do spontaneous gestures communicate to ordinary listeners? Or might they be produced only for speakers themselves? I suggest these are not mutually exclusive functions: gesture serves both as a tool for communication for listeners and as a tool for thinking for speakers.
Affiliation(s)
- S Goldin-Meadow
- Department of Psychology, University of Chicago, 5730 South Woodlawn Avenue, Chicago, IL 60637, USA
|
40
|
Abstract
There may be no greater testament to the resilience of language in humans than the observation that, when deprived of a language entirely, children will invent one nonetheless. Deaf children whose access to usable conventional linguistic input, signed or spoken, is severely limited develop gesture systems to communicate with the hearing individuals around them. The children's gestures resemble natural language in that they are structured at both sentence and word levels. Although the inclination to use gesture to communicate may be traceable to the fact that the deaf children's hearing parents (like all speakers) gesture as they talk, the deaf children themselves appear to be responsible for introducing language-like structure into their gestures. In particular, the structural properties found in the deaf children's gesture systems cannot be traced to the gestures that their hearing parents use with them, nor can they be traced to the way in which the parents respond to the children's gestures.
|
41
|
Goldin-Meadow S, Mylander C. Spontaneous sign systems created by deaf children in two cultures. Nature 1998; 391:279-81. [PMID: 9440690 DOI: 10.1038/34646] [Citation(s) in RCA: 126] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Deaf children whose access to usable conventional linguistic input, signed or spoken, is severely limited nevertheless use gesture to communicate. These gestures resemble natural language in that they are structured at the level both of sentence and of word. Although the inclination to use gesture may be traceable to the fact that the deaf children's hearing parents, like all speakers, gesture as they talk, the children themselves are responsible for introducing language-like structure into their gestures. We have explored the robustness of this phenomenon by observing deaf children of hearing parents in two cultures, an American and a Chinese culture, that differ in their child-rearing practices and in the way gesture is used in relation to speech. The spontaneous sign systems developed in these cultures shared a number of structural similarities: patterned production and deletion of semantic elements in the surface structure of a sentence; patterned ordering of those elements within the sentence; and concatenation of propositions within a sentence. These striking similarities offer critical empirical input towards resolving the ongoing debate about the 'innateness' of language in human infants.
Affiliation(s)
- S Goldin-Meadow
- University of Chicago, Department of Psychology, Illinois 60637, USA.
|
42
|
Morford JP, Goldin-Meadow S. From Here and Now to There and Then: The Development of Displaced Reference in Homesign and English. Child Dev 1997. [DOI: 10.1111/j.1467-8624.1997.tb01949.x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
43
|
McDonald JL. Language acquisition: the acquisition of linguistic structure in normal and special populations. Annu Rev Psychol 1997; 48:215-41. [PMID: 9046560 DOI: 10.1146/annurev.psych.48.1.215] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
This review examines how language learners master the formal structure of their language. Three possible routes to the acquisition and mastery of linguistic structure are investigated: (a) the use of prosodic and phonological information, which is imperfectly correlated with syntactic units and linguistic classes; (b) the use of function words to syntactically classify co-occurring words and phrases, and the effect of location of function-word processing on structural mastery; and (c) the use of morphology internal to lexical items to determine language structure, and the productive recombination of these subunits in new items. Evidence supporting these three routes comes from normal language acquirers and from several special populations, including learners given impoverished input, learners with Williams syndrome, specific language-impaired learners, learners with Down syndrome, and late learners of first and second languages. Further evidence for the three routes comes from artificial language acquisition experiments and computer simulations.
Affiliation(s)
- J L McDonald
- Department of Psychology, Louisiana State University, Baton Rouge 70803, USA
|