1
Rissman L, Horton L, Goldin-Meadow S. Universal Constraints on Linguistic Event Categories: A Cross-Cultural Study of Child Homesign. Psychol Sci 2023; 34:298-312. PMID: 36608154. DOI: 10.1177/09567976221140328.
Abstract
Languages carve up conceptual space in varying ways: for example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups: events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest that some ways of conceptualizing tool events are so prominent as to constitute a universal constraint on how events are categorized in language.
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin-Madison
- Laura Horton
- Language Sciences Program, University of Wisconsin-Madison
- Susan Goldin-Meadow
- Department of Psychology, The University of Chicago; Center for Gesture, Sign, and Language, The University of Chicago
2
Pyers JE, Emmorey K. The iconic motivation for the morphophonological distinction between noun-verb pairs in American Sign Language does not reflect common human construals of objects and actions. Language and Cognition 2022; 14:622-644. PMID: 36426211. PMCID: PMC9681175. DOI: 10.1017/langcog.2022.20.
Abstract
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Affiliation(s)
- Jennie E. Pyers
- Wellesley College, Psychology Department, Wellesley, MA, USA
- Karen Emmorey
- San Diego State University, School of Speech, Language and Hearing Sciences, San Diego, CA, USA
3
The Seeds of the Noun–Verb Distinction in the Manual Modality: Improvisation and Interaction in the Emergence of Grammatical Categories. Languages 2022. DOI: 10.3390/languages7020095.
Abstract
The noun–verb distinction has long been considered a fundamental property of human language, and has been found in some form even in the earliest stages of language emergence, including homesign and the early generations of emerging sign languages. We present two experimental studies that use silent gesture to investigate how noun–verb distinctions develop in the manual modality through two key processes: (i) the improvisation of novel signals by individuals, and (ii) the use of those signals in interaction between communicators. We operationalise communicative interaction in two ways: a setting in which members of the dyad were in separate booths and were given a comprehension test after each stimulus vs. a more naturalistic face-to-face conversation without comprehension checks. There were few differences between the two conditions, highlighting the robustness of the paradigm. Our findings from both experiments reflect patterns found in naturally emerging sign languages. Some formal distinctions arise in the earliest stages of improvisation and do not require interaction to develop. However, the full range of formal distinctions between nouns and verbs found in naturally emerging language did not appear with either improvisation or interaction, suggesting that transmitting the language to a new generation of learners might be necessary for these properties to emerge.
4
How and When to Sign “Hey!” Socialization into Grammar in Z, a 1st Generation Family Sign Language from Mexico. Languages 2022. DOI: 10.3390/languages7020080.
Abstract
“Z” is a young sign language developing in a family whose hearing members speak Tzotzil (Mayan). Three deaf siblings, together with an intervening hearing sister and a hearing niece, formed the original cohort of signing adults. A hearing son of the original signer became the first native signer of a second generation. Z provides evidence for a classic grammaticalization chain linking a sign requesting attention (HEY1) to a pragmatic turn-initiating particle (HEY2), which signals a new utterance or change of topic. Such an emergent grammatical particle linked to the pragmatic exigencies of communication is a primordial example of emergent grammar. The chapter presents the stages in the son’s language socialization and acquisition of HEY1 and HEY2, starting at 11 months, through his subsequent bilingual development in both Z and Tzotzil, jointly deploying other communicative modalities such as gaze and touch. It proposes a series of stages leading, by 4 years of age, to his understanding of the complex sequential structure that using the sign involves. Acquiring pragmatic signs such as HEY in Z demonstrates how the grammar of a language, including an emergent sign language, is built upon the practices of a language community and the basic expected parameters of local social life.
5
Abner N, Namboodiripad S, Spaepen E, Goldin-Meadow S. Emergent Morphology in Child Homesign: Evidence from Number Language. Language Learning and Development 2021; 18:16-40. PMID: 35603228. PMCID: PMC9122328. DOI: 10.1080/15475441.2021.1922281.
Abstract
Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, where complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in a communication system developed without input from a conventional language: homesign. We study the communication systems of four deaf child homesigners (mean age 8;02). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners not only incorporate number devices into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.
Affiliation(s)
- Natasha Abner
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Savithry Namboodiripad
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
6
Flaherty M, Hunsicker D, Goldin-Meadow S. Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign. Cognition 2021; 211:104608. PMID: 33581667. DOI: 10.1016/j.cognition.2021.104608.
Abstract
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, suggesting that children bring to language-learning biases that lead them to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large numbers of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign. The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.
Affiliation(s)
- Molly Flaherty
- Davidson College, Psychology Department, Davidson, NC 28036, United States of America.
- Dea Hunsicker
- The University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, United States of America
- Susan Goldin-Meadow
- The University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, United States of America
7
Rissman L, Horton L, Flaherty M, Senghas A, Coppola M, Brentari D, Goldin-Meadow S. The communicative importance of agent-backgrounding: Evidence from homesign and Nicaraguan Sign Language. Cognition 2020; 203:104332. PMID: 32559513. DOI: 10.1016/j.cognition.2020.104332.
Abstract
Some concepts are more essential for human communication than others. In this paper, we investigate whether the concept of agent-backgrounding is sufficiently important for communication that linguistic structures for encoding this concept are present in young sign languages. Agent-backgrounding constructions serve to reduce the prominence of the agent; the English passive sentence a book was knocked over is an example. Although these constructions are widely attested cross-linguistically, there is little prior research on the emergence of such devices in new languages. Here we studied how agent-backgrounding constructions emerge in Nicaraguan Sign Language (NSL) and adult homesign systems. We found that NSL signers have innovated both lexical and morphological devices for expressing agent-backgrounding, indicating that conveying a flexible perspective on events has deep communicative value. At the same time, agent-backgrounding devices did not emerge at the same time as agentive devices. This result suggests that agent-backgrounding does not have the same core cognitive status as agency. The emergence of agent-backgrounding morphology appears to depend on receiving as input a linguistic system in which devices for expressing agency are already well-established.
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin - Madison, 1202 W. Johnson St., Madison, WI 53706, United States of America.
- Laura Horton
- Department of Linguistics, University of Texas at Austin, 305 E. 23rd Street, Austin, TX 78712, United States of America.
- Molly Flaherty
- Department of Psychology, Swarthmore College, 500 College Avenue, Swarthmore, PA 19081, United States of America.
- Ann Senghas
- Department of Psychology, Barnard College, 3009 Broadway, New York, NY 10027, United States of America.
- Marie Coppola
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT 06269, United States of America; Department of Linguistics, University of Connecticut, 365 Fairfield Way, Unit 1145, Storrs, CT 06269, United States of America.
- Diane Brentari
- Center for Gesture, Sign, and Language, University of Chicago, Chicago, IL 60637, United States of America; Department of Linguistics, University of Chicago, 1115 E. 58th Street, Chicago, IL 60637, United States of America.
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 S. University Ave., Chicago, IL 60637, United States of America; Center for Gesture, Sign, and Language, University of Chicago, Chicago, IL 60637, United States of America.
8
Novack MA, Waxman S. Becoming human: human infants link language and cognition, but what about the other great apes? Philos Trans R Soc Lond B Biol Sci 2019; 375:20180408. PMID: 31735145. DOI: 10.1098/rstb.2018.0408.
Abstract
Human language has no parallel elsewhere in the animal kingdom. It is unique not only for its structural complexity but also for its inextricable interface with core cognitive capacities such as object representation, object categorization and abstract rule learning. Here, we (i) review recent evidence documenting how (and how early) language interacts with these core cognitive capacities in the mind of the human infant, and (ii) consider whether this link exists in non-human great apes, our closest genealogical cousins. Research with human infants demonstrates that well before they begin to speak, infants have already forged a link between language and core cognitive capacities. Evident by just three months of age, this language-cognition link unfolds in a rich developmental cascade, with each advance providing the foundation for subsequent, more precise and more powerful links. This link supports our species' capacity to represent and convey abstract concepts and to communicate beyond the immediate here and now. By contrast, although the communication systems of great apes are sophisticated in their own right, there is no conclusive evidence that apes establish reference, convey information declaratively or pass down communicative devices via cultural transmission. Thus, the evidence currently available reinforces the uniqueness of human language and the power of its interface to cognition. This article is part of the theme issue 'What can animal communication teach us about human language?'
Affiliation(s)
- Miriam A Novack
- Department of Psychology, Northwestern University, Evanston, IL 60208, USA
- Sandra Waxman
- Department of Psychology, Northwestern University, Evanston, IL 60208, USA
9
Evolving artificial sign languages in the lab: From improvised gesture to systematic sign. Cognition 2019; 192:103964. PMID: 31302362. DOI: 10.1016/j.cognition.2019.05.001.
Abstract
Recent work on emerging sign languages provides evidence for how key properties of linguistic systems are created. Here we use laboratory experiments to investigate the contribution of two specific mechanisms, interaction and transmission, to the emergence of a manual communication system in silent gesturers. We show that the combined effects of these mechanisms, rather than either alone, maintain communicative efficiency and lead to a gradual increase of regularity and systematic structure. The gestures initially produced by participants are unsystematic and resemble pantomime, but come to develop key language-like properties similar to those documented in newly emerging sign systems.
10
Bruce SM, Mann A, Jones C, Gavin M. Gestures Expressed by Children who are Congenitally Deaf-Blind: Topography, Rate, and Function. Journal of Visual Impairment & Blindness 2019. DOI: 10.1177/0145482x0710101010.
Abstract
This descriptive study examined the topography, rate, and function of gestures expressed by seven children who are congenitally deaf-blind. Participants expressed a total of 44 conventional and idiosyncratic gestures. They expressed 6–13 communicative functions through gestures and 7 functions through a single type of gesture. They also expressed idiosyncratic gestures and used specific gestures for functions other than those that are typically associated with those gestures.
Affiliation(s)
- Susan M. Bruce
- Lynch School of Education, Boston College, 120 Campion Hall, 140 Commonwealth Avenue, Chestnut Hill, MA 02467-3813
- Mary Gavin
- Lynch School of Education, Boston College
11
Abstract
Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats: a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in our analysis of language in two contexts, when it takes on the primary function of communication (sign language) and when it takes on a complementary communicative function (gesture), we gain new perspectives on the origins and continuing development of language.
12
Cartmill EA, Rissman L, Novack M, Goldin-Meadow S. The development of iconicity in children's co-speech gesture and homesign. Language, Interaction and Acquisition 2017; 8:42-68. PMID: 29034011. DOI: 10.1075/lia.8.1.03car.
Abstract
Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children's use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture: the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1- to 5-year-olds, and how it relates to the semantic content of children's communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children's ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.
13
Rissman L, Goldin-Meadow S. The Development of Causal Structure without a Language Model. Language Learning and Development 2017; 13:286-299. PMID: 28983210. PMCID: PMC5624539. DOI: 10.1080/15475441.2016.1254633.
Abstract
Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago; Center for Gesture, Sign, and Language, University of Chicago
14
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this modality offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, Illinois 60637
15
Statistical evidence that a child can create a combinatorial linguistic system without external linguistic input: Implications for language evolution. Neurosci Biobehav Rev 2016; 81:150-157. PMID: 28041786. DOI: 10.1016/j.neubiorev.2016.12.016.
Abstract
Can a child who is not exposed to a model for language nevertheless construct a communication system characterized by combinatorial structure? We know that deaf children whose hearing losses prevent them from acquiring spoken language, and whose hearing parents have not exposed them to sign language, use gestures, called homesigns, to communicate. In this study, we call upon a new formal analysis that characterizes the statistical profile of grammatical rules and, when applied to child language data, finds that young children's language is consistent with a productive grammar rather than rote memorization of specific word combinations in caregiver speech. We apply this formal analysis to homesign, and find that homesign can also be characterized as having productive grammar. Our findings thus provide evidence that a child can create a combinatorial linguistic system without external linguistic input, and offer unique insight into how the capacity of language evolved as part of human biology.
16
Kocab A, Senghas A, Snedeker J. The emergence of temporal language in Nicaraguan Sign Language. Cognition 2016; 156:147-163. PMID: 27591549. PMCID: PMC5027136. DOI: 10.1016/j.cognition.2016.08.005.
Abstract
Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third cohorts successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users.
Affiliation(s)
- Annemarie Kocab
- Harvard University, Department of Psychology, Cambridge, MA, USA.
- Ann Senghas
- Barnard College, Department of Psychology, New York, NY, USA
- Jesse Snedeker
- Harvard University, Department of Psychology, Cambridge, MA, USA
17
Carrigan EM, Coppola M. Successful communication does not drive language development: Evidence from adult homesign. Cognition 2016; 158:10-27. PMID: 27771538. DOI: 10.1016/j.cognition.2016.09.012.
Abstract
Constructivist accounts of language acquisition maintain that the language learner aims to match a target provided by mature users. Communicative problem solving in the context of social interaction and matching a linguistic target or model are presented as primary mechanisms driving the language development process. However, research on the development of homesign gesture systems by deaf individuals who have no access to a linguistic model suggests that aspects of language can develop even when typical input is unavailable. In four studies, we examined the role of communication in the genesis of homesign systems by assessing how well homesigners' family members comprehend homesign productions. In Study 1, homesigners' mothers showed poorer comprehension of homesign descriptions produced by their now-adult deaf child than of spoken Spanish descriptions of the same events produced by one of their adult hearing children. Study 2 found that the younger a family member was when they first interacted with their deaf relative, the better they understood the homesigner. Despite this, no family member comprehended homesign productions at levels that would be expected if family members co-generated homesign systems with their deaf relative via communicative interactions. Study 3 found that mothers' poor or incomplete comprehension of homesign was not a result of incomplete homesign descriptions. In Study 4 we demonstrated that Deaf native users of American Sign Language, who had no previous experience with the homesigners or their homesign systems, nevertheless comprehended homesign productions out of context better than the homesigners' mothers. This suggests that homesign has comprehensible structure, to which mothers and other family members are not fully sensitive. Taken together, these studies show that communicative problem solving is not responsible for the development of structure in homesign systems. The role of this mechanism must therefore be re-evaluated in constructivist theories of language development.
Collapse
|
18
|
Goldin-Meadow S, Brentari D, Coppola M, Horton L, Senghas A. Watching language grow in the manual modality: nominals, predicates, and handshapes. Cognition 2015; 136:381-95. [PMID: 25546342 PMCID: PMC4308574 DOI: 10.1016/j.cognition.2014.11.029] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2013] [Revised: 11/09/2014] [Accepted: 11/17/2014] [Indexed: 11/18/2022]
Abstract
All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition--terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language--Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates--they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object. 
This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.
Collapse
Affiliation(s)
| | | | - M Coppola
- University of Connecticut, United States
| | - L Horton
- University of Chicago, United States
| | | |
Collapse
|
19
|
Haviland JB. Hey! Top Cogn Sci 2015; 7:124-49. [DOI: 10.1111/tops.12126] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2013] [Revised: 12/06/2013] [Accepted: 02/28/2014] [Indexed: 11/29/2022]
|
20
|
Ozyürek A, Furman R, Goldin-Meadow S. On the way to language: event segmentation in homesign and gesture. JOURNAL OF CHILD LANGUAGE 2015; 42:64-94. [PMID: 24650738 PMCID: PMC4169751 DOI: 10.1017/s0305000913000512] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
Collapse
Affiliation(s)
- Asli Ozyürek
- Radboud University Nijmegen and Max Planck Institute for Psycholinguistics, the Netherlands
| | | | | |
Collapse
|
21
|
Goldin-Meadow S, Namboodiripad S, Mylander C, Özyürek A, Sancar B. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. JOURNAL OF COGNITION AND DEVELOPMENT 2014; 16:55-80. [PMID: 25663828 DOI: 10.1080/15248372.2013.803970] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language-the so-called resilient properties of language. We explored the resilience of structure built around the predicate-in particular, how manner and path are mapped onto the verb-in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
Collapse
Affiliation(s)
| | | | | | - Aslı Özyürek
- Max Planck Institute for Psycholinguistics, Nijmegen ; Radboud University, Nijmegen
| | | |
Collapse
|
22
|
Richie R, Yang C, Coppola M. Modeling the emergence of lexicons in homesign systems. Top Cogn Sci 2014; 6:183-95. [PMID: 24482343 DOI: 10.1111/tops.12076] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2013] [Accepted: 10/08/2013] [Indexed: 11/30/2022]
Abstract
It is largely acknowledged that natural languages emerge not just from human brains but also from rich communities of interacting human brains (Senghas, ). Yet the precise role of such communities and such interaction in the emergence of core properties of language has largely gone uninvestigated in naturally emerging systems, leaving the few existing computational investigations of this issue in artificial settings. Here, we take a step toward investigating the precise role of community structure in the emergence of linguistic conventions with both naturalistic empirical data and computational modeling. We first show conventionalization of lexicons in two different classes of naturally emerging signed systems: (a) protolinguistic "homesigns" invented by linguistically isolated Deaf individuals, and (b) a natural sign language emerging in a recently formed rich Deaf community. We find that the latter conventionalized faster than the former. Second, we model conventionalization as a population of interacting individuals who adjust their probability of sign use in response to other individuals' actual sign use, following an independently motivated model of language learning (Yang, , ). Simulations suggest that a richer social network, like that of natural (signed) languages, conventionalizes faster than a sparser social network, like that of homesign systems. We discuss our behavioral and computational results in light of other work on language emergence, and other work on behavior in complex networks.
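The simulation result summarized above (richer networks conventionalize faster than sparser ones) can be illustrated with a minimal agent-based sketch. This is not the authors' actual model: the linear reward-style update, the parameter values, and the two network shapes below are assumptions chosen only to show the qualitative mechanism of agents adjusting their probability of sign use in response to other agents' productions.

```python
import itertools
import random

def simulate(edges, n_agents, n_rounds=5000, gamma=0.05, seed=1):
    """Toy conventionalization dynamics on a social network.

    Each agent holds a probability p of producing sign variant A (vs. B)
    for a single meaning.  Each round, one edge interacts: the speaker
    produces a variant, and the listener nudges its own p toward what it
    heard (a linear reward-style update, loosely inspired by Yang-type
    learning models).  Returns the mean of |2p - 1| across agents:
    0.0 = everyone maximally undecided, 1.0 = everyone categorical.
    """
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]  # initial variant preferences
    for _ in range(n_rounds):
        speaker, listener = rng.choice(edges)
        if rng.random() < 0.5:                   # either side may speak
            speaker, listener = listener, speaker
        heard_a = rng.random() < p[speaker]
        if heard_a:
            p[listener] += gamma * (1 - p[listener])  # reward variant A
        else:
            p[listener] -= gamma * p[listener]        # reward variant B
    return sum(abs(2 * q - 1) for q in p) / n_agents

n = 6
# Rich network: everyone interacts with everyone (sign-language-like community).
dense = list(itertools.combinations(range(n), 2))
# Sparse network: isolated dyads around one individual (homesign-like).
sparse = [(0, i) for i in range(1, n)]
print(simulate(dense, n), simulate(sparse, n))
```

Under these assumptions, comparing the two scores across many seeds and round counts reproduces the qualitative claim that denser interaction drives faster convergence on a shared lexicon.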
Collapse
|
23
|
Goldin-Meadow S. Widening the lens: what the manual modality reveals about language, learning and cognition. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130295. [PMID: 25092663 PMCID: PMC4123674 DOI: 10.1098/rstb.2013.0295] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture's ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning.
Collapse
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA
| |
Collapse
|
24
|
Coppola M, Brentari D. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Front Psychol 2014; 5:830. [PMID: 25191283 PMCID: PMC4139701 DOI: 10.3389/fpsyg.2014.00830] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Accepted: 07/11/2014] [Indexed: 11/25/2022] Open
Abstract
Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent ("hand-as-hand" iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent ("hand-as-object" iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, "Julio," age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. 
Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.
Collapse
Affiliation(s)
- Marie Coppola
- Departments of Psychology and Linguistics, Language Creation Laboratory, University of Connecticut, Storrs, CT, USA
| | - Diane Brentari
- Department of Linguistics, Sign Language Laboratory, University of Chicago, Chicago, IL, USA
| |
Collapse
|
25
|
Goldin-Meadow S. In search of resilient and fragile properties of language. JOURNAL OF CHILD LANGUAGE 2014; 41 Suppl 1:64-77. [PMID: 25023497 PMCID: PMC4100075 DOI: 10.1017/s030500091400021x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with - to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.
Collapse
|
26
|
Fay N, Lister CJ, Ellison TM, Goldin-Meadow S. Creating a communication system from scratch: gesture beats vocalization hands down. Front Psychol 2014; 5:354. [PMID: 24808874 PMCID: PMC4010783 DOI: 10.3389/fpsyg.2014.00354] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2013] [Accepted: 04/04/2014] [Indexed: 11/30/2022] Open
Abstract
How does modality affect people's ability to create a communication system from scratch? The present study experimentally tests this question by having pairs of participants communicate a range of pre-specified items (emotions, actions, objects) over a series of trials to a partner using either non-linguistic vocalization, gesture or a combination of the two. Gesture-alone outperformed vocalization-alone, both in terms of successful communication and in terms of the creation of an inventory of sign-meaning mappings shared within a dyad (i.e., sign alignment). Combining vocalization with gesture did not improve performance beyond gesture-alone. In fact, for action items, gesture-alone was a more successful means of communication than the combined modalities. When people do not share a system for communication they can quickly create one, and gesture is the best means of doing so.
Collapse
Affiliation(s)
- Nicolas Fay
- School of Psychology, University of Western Australia, Crawley, WA, Australia
| | - Casey J Lister
- School of Psychology, University of Western Australia, Crawley, WA, Australia
| | - T Mark Ellison
- School of Psychology, University of Western Australia, Crawley, WA, Australia
| | | |
Collapse
|
27
|
Applebaum L, Coppola M, Goldin-Meadow S. Prosody in a communication system developed without a language model. SIGN LANGUAGE AND LINGUISTICS 2014; 17:181-212. [PMID: 25574153 PMCID: PMC4285364 DOI: 10.1075/sll.17.2.02app] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Prosody, the "music" of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, features known to mark phrase and utterance boundaries in established sign languages were used to consistently mark the ends of utterances, but not to mark phrase or utterance internal boundaries. A single child can thus develop the seeds of a prosodic system, but full elaboration may require more time, more users, or even more generations to blossom.
Collapse
|
28
|
|
29
|
Spaepen E, Coppola M, Flaherty M, Spelke E, Goldin-Meadow S. Generating a lexicon without a language model: Do words for number count? JOURNAL OF MEMORY AND LANGUAGE 2013; 69:496-505. [PMID: 24187432 PMCID: PMC3811965 DOI: 10.1016/j.jml.2013.05.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Homesigns are communication systems created by deaf individuals without access to conventional linguistic input. To investigate how homesign gestures for number function in short-term memory compared to homesign gestures for objects, actions, or attributes, we conducted memory span tasks with adult homesigners in Nicaragua, and with comparison groups of unschooled hearing Spanish speakers and deaf Nicaraguan Sign Language signers. There was no difference between groups in recall of gestures or words for objects, actions or attributes; homesign gestures therefore can function as word units in short-term memory. However, homesigners showed poorer recall of numbers than the other groups. Unlike the other groups, increasing the numerical value of the to-be-remembered quantities negatively affected recall in homesigners, but not controls. When developed without linguistic input, gestures for number do not seem to function as summaries of the cardinal values of the sets (four), but rather as indexes of items within a set (one-one-one-one).
Collapse
Affiliation(s)
- Elizabet Spaepen
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637
| | | | | | | | | |
Collapse
|
30
|
Coppola M, Spaepen E, Goldin-Meadow S. Communicating about quantity without a language model: number devices in homesign grammar. Cogn Psychol 2013; 67:1-25. [PMID: 23872365 PMCID: PMC3870334 DOI: 10.1016/j.cogpsych.2013.05.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2012] [Revised: 12/05/2012] [Accepted: 05/03/2013] [Indexed: 11/16/2022]
Abstract
All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input.
Collapse
Affiliation(s)
- Marie Coppola
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| | - Elizabet Spaepen
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| | - Susan Goldin-Meadow
- University of Chicago, Department of Psychology, 5848 S. University Ave., Chicago, IL 60637, United States
| |
Collapse
|
31
|
Hunsicker D, Goldin-Meadow S. How handshape type can distinguish between nouns and verbs in homesign. GESTURE (AMSTERDAM, NETHERLANDS) 2013; 13:354-376. [PMID: 25435844 PMCID: PMC4245027 DOI: 10.1075/gest.13.3.05hun] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.
Collapse
|
32
|
Marshall CR, Rowley K, Mason K, Herman R, Morgan G. Lexical organization in deaf children who use British Sign Language: evidence from a semantic fluency task. JOURNAL OF CHILD LANGUAGE 2013; 40:193-220. [PMID: 22717181 DOI: 10.1017/s0305000912000116] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
We adapted the semantic fluency task into British Sign Language (BSL). In Study 1, we present data from twenty-two deaf signers aged four to fifteen. We show that the same 'cognitive signatures' that characterize this task in spoken languages are also present in deaf children, for example, the semantic clustering of responses. In Study 2, we present data from thirteen deaf children with Specific Language Impairment (SLI) in BSL, in comparison to a subset of children from Study 1 matched for age and BSL exposure. The two groups' results were comparable in most respects. However, the group with SLI made occasional word-finding errors and gave fewer responses in the first 15 seconds. We conclude that deaf children with SLI do not differ from their controls in terms of the semantic organization of the BSL lexicon, but that they access signs less efficiently.
Collapse
|
33
|
So WC, Coppola M, Licciardello V, Goldin-Meadow S. The seeds of spatial grammar in the manual modality. Cogn Sci 2005; 29:1029-43. [PMID: 21702801 DOI: 10.1207/s15516709cog0000_38] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities-to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under 2 conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.
Collapse
|
34
|
Abstract
When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.
Collapse
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, Chicago, Illinois 60637, USA.
| | | |
Collapse
|
35
|
Conwell E, Morgan JL. Is It a Noun or Is It a Verb? Resolving the Ambicategoricality Problem. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2012; 8:87-112. [PMID: 34733122 PMCID: PMC8562707 DOI: 10.1080/15475441.2011.580236] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In many languages, significant numbers of words are used in more than one grammatical category; English, in particular, has many words that can be used as both nouns and verbs. Such ambicategoricality potentially poses problems for children trying to learn the grammatical properties of words and has been used to argue against the logical possibility of learning grammatical categories from syntactic distribution alone. This article addresses how often English-learning children hear words used across categories, whether young language learners might be sensitive to perceptual cues that differentiate noun and verb uses of such words and how young speakers use ambicategorical words. The findings suggest that children hear considerably less cross-category usage than is possible and are sensitive to perceptual cues that distinguish the two categories. Furthermore, in early language production, children's cross-category production mirrors the statistics of their linguistic environments, suggesting that they are distinguishing noun and verb uses of individual words in natural language exposure. Taken together, these results indicate that cues in the speech stream may help children resolve the ambicategoricality problem.
Collapse
Affiliation(s)
- Erin Conwell
- Department of Psychology, North Dakota State University, and Department of Cognitive and Linguistic Sciences, Brown University
| | - James L Morgan
- Department of Cognitive and Linguistic Sciences, Brown University
| |
Collapse
|
36
|
Brentari D, Coppola M, Mazzoni L, Goldin-Meadow S. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. NATURAL LANGUAGE & LINGUISTIC THEORY 2012; 30:1-31. [PMID: 23723534 PMCID: PMC3665423 DOI: 10.1007/s11049-011-9145-1] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.
Collapse
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, 1010 East 59th Street, Chicago, IL 60637-1512, USA
| | - Marie Coppola
- Departments of Psychology and Linguistics, University of Connecticut, Storrs, CT, USA
| | - Laura Mazzoni
- Linguistics Department, University of Pisa, Pisa, Italy
| | - Susan Goldin-Meadow
- Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL, USA
| |
Collapse
|
37
|
Franklin A, Giannakidou A, Goldin-Meadow S. Negation, questions, and structure building in a homesign system. Cognition 2010; 118:398-416. [PMID: 23630971 DOI: 10.1016/j.cognition.2010.08.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2009] [Revised: 08/09/2010] [Accepted: 08/27/2010] [Indexed: 10/18/2022]
Abstract
Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, ϕ) and yield another proposition that is more complex (¬ϕ for negation; ?ϕ for question). Combining ϕ with ¬ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child's gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process (Goldin-Meadow, 2003), and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.
Collapse
Affiliation(s)
- Amy Franklin
- University of Texas Health Science Center Houston, School of Biomedical Informatics, Center for Cognitive Informatics and Decision Making, Houston, Texas, United States.
| | | | | |
Collapse
|
38
|
Abstract
Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.
Collapse
|
39
|
Goldin-Meadow S. Le rôle des gestes dans la création et l’acquisition du langage [The role of gestures in the creation and acquisition of language]. ENFANCE 2010. [DOI: 10.3917/enf1.103.0239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
|
40
|
Núñez RE, Sweetser E. With the Future Behind Them: Convergent Evidence From Aymara Language and Gesture in the Crosslinguistic Comparison of Spatial Construals of Time. Cogn Sci 2006; 30:401-50. [DOI: 10.1207/s15516709cog0000_62] [Citation(s) in RCA: 449] [Impact Index Per Article: 32.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
41
|
Goldin-Meadow S. Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua: Commentary on Senghas. Hum Dev 2010; 53:303-311. [PMID: 22476199 DOI: 10.1159/000321294] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
42
|
Ozçalişkan S, Goldin-Meadow S, Gentner D, Mylander C. Does language about similarity play a role in fostering similarity comparison in children? Cognition 2009; 112:217-28. [PMID: 19524220 DOI: 10.1016/j.cognition.2009.05.010] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2007] [Revised: 03/06/2009] [Accepted: 05/05/2009] [Indexed: 10/20/2022]
Abstract
Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input--deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (point to cat+point to tiger) resembling those conveyed by 40 hearing children in early gesture+speech combinations (cat+point to tiger). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.
Collapse
|
43
|
Bernardis P, Salillas E, Caramelli N. Behavioural and neurophysiological evidence of semantic interaction between iconic gestures and words. Cogn Neuropsychol 2008; 25:1114-28. [DOI: 10.1080/02643290801921707] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
44
|
Capone NC. Tapping toddlers' evolving semantic representation via gesture. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2007; 50:732-45. [PMID: 17538112 DOI: 10.1044/1092-4388(2007/051)] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
PURPOSE This study presents evidence that gesture is a means to understanding the semantic representations of toddlers. METHOD The data were part of a study of toddlers' word learning conducted by N. C. Capone and K. K. McGregor (2005). The object function probe from that study was administered after 1 exposure and after 3 exposures to objects. Here, toddlers' gestures were described and their gesture-speech combinations were analyzed as a function of instruction and time. RESULTS A large proportion of toddlers gestured. Gestures were iconic and deictic, but toddlers produced more iconic gestures than previously reported. Consistent with studies of older children, toddlers produced gesture-speech combinations that reflected their learning state. CONCLUSION Gesture can be both a source of semantic knowledge and an expression of that knowledge. Gesture provides a window onto evolving semantic representations and, therefore, can be 1 method of assessing what a child knows at a time when oral language skills are limited and are, perhaps, an unreliable indicator of what the child knows. Embodied knowledge may underlie the use of gesture. Clinical implications are discussed.
Collapse
Affiliation(s)
- Nina C Capone
- Department of Speech-Language Pathology, School of Graduate Medical Education, Seton Hall University, 400 South Orange Avenue, Alfieri Hall, Room 33, South Orange, NJ 07079, USA.
| |
Collapse
|
45
|
Coppola M, Newport EL. Grammatical Subjects in home sign: Abstract linguistic structure in adult primary gesture systems without linguistic input. Proc Natl Acad Sci U S A 2005; 102:19249-53. [PMID: 16357199 PMCID: PMC1315276 DOI: 10.1073/pnas.0509306102] [Citation(s) in RCA: 63] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Language ordinarily emerges in young children as a consequence of both linguistic experience (for example, exposure to a spoken or signed language) and innate abilities (for example, the ability to acquire certain types of language patterns). One way to discern which aspects of language acquisition are controlled by experience and which arise from innate factors is to remove or manipulate linguistic input. However, experimental manipulations that involve depriving a child of language input are impossible. The present work examines the communication systems resulting from natural situations of language deprivation and thus explores the inherent tendency of humans to build communication systems of particular kinds, without any conventional linguistic input. We examined the gesture systems that three isolated deaf Nicaraguans (ages 14-23 years) have developed for use with their hearing families. These deaf individuals have had no contact with any conventional language, spoken or signed. To communicate with their families, they have each developed a gestural communication system within the home called "home sign." Our analysis focused on whether these systems show evidence of the grammatical category of Subject. Subjects are widely considered to be universal to human languages. Using specially designed elicitation tasks, we show that home signers also demonstrate the universal characteristics of Subjects in their gesture productions, despite the fact that their communicative systems have developed without exposure to a conventional language. These findings indicate that abstract linguistic structure, particularly the grammatical category of Subject, can emerge in the gestural modality without linguistic input.
Collapse
|
46
|
Goldin-Meadow S, Gelman SA, Mylander C. Expressing generic concepts with and without a language model. Cognition 2004; 96:109-26. [PMID: 15925572 DOI: 10.1016/j.cognition.2004.07.003] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2003] [Revised: 03/17/2004] [Accepted: 07/08/2004] [Indexed: 10/26/2022]
Abstract
Utterances expressing generic kinds ("birds fly") highlight qualities of a category that are stable and enduring, and thus provide insight into conceptual organization. To explore the role that linguistic input plays in children's production of generic nouns, we observed American and Chinese deaf children whose hearing losses prevented them from learning speech and whose hearing parents had not exposed them to sign. These children develop gesture systems that have language-like structure at many different levels. The specific question we addressed in this study was whether the gesture systems, developed without input from a conventional language model, would contain generics. We found that the deaf children used generics in the gestures they invented, and did so at about the same rate as hearing children growing up in the same cultures and learning English or Mandarin. Moreover, the deaf children produced more generics for animals than for artifacts, a bias found previously in adult English- and Mandarin-speakers and also found in both groups of hearing children in our current study. This bias has been hypothesized to reflect the different conceptual organizations underlying animal and artifact categories. Our results suggest that not only is a language model not necessary for young children to produce generic utterances, but the bias to produce more generics for animals than artifacts also does not require linguistic input to develop.
Collapse
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5730 South Woodlawn Avenue, Chicago, IL 60637, USA.
| | | | | |
Collapse
|
47
|
Capone NC, McGregor KK. Gesture development: a review for clinical and research practices. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2004; 47:173-186. [PMID: 15072537 DOI: 10.1044/1092-4388(2004/015)] [Citation(s) in RCA: 78] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The aim of this article is to provide clinicians and researchers a comprehensive overview of the development and functions of gesture in childhood and in select populations with developmental language impairments. Of significance is the growing body of evidence that gesture enhances, not hinders, language development. In both normal and impaired populations, gesture and language development parallel each other and share underlying symbolic abilities. Gesture serves several functions, including those of communication, compensation, and transition to spoken language. In clinical practice, gesture may play a valuable role in diagnosis, prognosis, goal selection, and intervention for children with language impairments. Where available, supporting evidence is presented. Needs for additional research on gesture are also highlighted.
Collapse
Affiliation(s)
- Nina C Capone
- New York Medical College, Speech-Language Pathology Program, Northwestern University, Chicago, IL 10595, USA.
| | | |
Collapse
|
48
|
|
49
|
Zheng M, Goldin-Meadow S. Thought before language: how deaf and hearing children express motion events across cultures. Cognition 2002; 85:145-75. [PMID: 12127697 DOI: 10.1016/s0010-0277(02)00105-1] [Citation(s) in RCA: 101] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Do children come to the language-learning situation with a predetermined set of ideas about motion events that they want to communicate? If so, is the expression of these ideas modified by exposure to a language model within a particular cultural context? We explored these questions by comparing the gestures produced by Chinese and American deaf children who had not been exposed to a usable conventional language model with the speech of hearing children learning Mandarin or English. We found that, even in the absence of any conventional language model, deaf children conveyed the central elements of a motion event in their communications. More surprisingly, deaf children growing up in an American culture used their gestures to express motion events in precisely the same ways as deaf children growing up in a Chinese culture. In contrast, hearing children in the two cultures expressed motion events differently, in accordance with the languages they were learning. The American children obeyed the patterns of English and rarely omitted words for figures or agents. The Chinese children had more flexibility as Mandarin permits (but does not demand) deletion. Interestingly, the Chinese hearing children's descriptions of motion events resembled the deaf children's descriptions more closely than did the American hearing children's. The thoughts that deaf children convey in their gestures thus may serve as the starting point and perhaps a default for all children as they begin the process of grammaticization--thoughts that have not yet been filtered through a language model.
Collapse
Affiliation(s)
- Mingyu Zheng
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA.
| | | |
Collapse
|
50
|
Abstract
Do language abilities develop in isolation? Are they mediated by a unique neural substrate, a "mental organ" devoted exclusively to language? Or is language built upon more general abilities, shared with other cognitive domains, and mediated by common neural systems? Here, we review results suggesting that language and gesture are "close family", then turn to evidence that raises questions about how real those "family resemblances" are, summarizing dissociations from our developmental studies of several different child populations. We then examine both these veins of evidence in light of some new findings from the adult neuroimaging literature and suggest a possible reinterpretation of these dissociations as well as new directions for research with both children and adults.
Collapse
Affiliation(s)
- Elizabeth Bates
- Center for Research in Language and Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093-0526, USA
| | | |
Collapse
|