1. Gaps in the Lexicon Restrict Communication. Open Mind (Camb) 2023; 7:412-434. PMID: 37637298; PMCID: PMC10449401; DOI: 10.1162/opmi_a_00089.
Abstract
Across languages, words carve up the world of experience in different ways. For example, English lacks an equivalent to the Chinese superordinate noun tiáowèipǐn, which is loosely translated as "ingredients used to season food while cooking." Do such differences matter? A conventional label may offer a uniquely effective way of communicating. On the other hand, lexical gaps may be easily bridged by the compositional power of language. After all, most of the ideas we want to express do not map onto simple lexical forms. We conducted a referential Director/Matcher communication task with adult speakers of Chinese and English. Directors provided a clue that Matchers used to select words from a word grid. The three target words corresponded to a superordinate term (e.g., beverages) in either Chinese or English but not both. We found that Matchers were more accurate at choosing the target words when their language lexicalized the target category. This advantage was driven entirely by the Directors' use/non-use of the intended superordinate term. The presence of a conventional superordinate had no measurable effect on speakers' within- or between-category similarity ratings. These results show that the ability to rely on a conventional term is surprisingly important despite the flexibility languages offer to communicate about non-lexicalized categories.
2. Visual resemblance and interaction history jointly constrain pictorial meaning. Nat Commun 2023; 14:2199. PMID: 37069160; PMCID: PMC10110538; DOI: 10.1038/s41467-023-37737-w.
Abstract
How do drawings, ranging from detailed illustrations to schematic diagrams, reliably convey meaning? Do viewers understand drawings based on how strongly they resemble an entity (i.e., as images) or based on socially mediated conventions (i.e., as symbols)? Here we evaluate a cognitive account of pictorial meaning in which visual and social information jointly support visual communication. Pairs of participants used drawings to repeatedly communicate the identity of a target object among multiple distractor objects. We manipulated social cues across three experiments and a full replication, finding that participants developed object-specific and interaction-specific strategies for communicating more efficiently over time, beyond what task practice or a resemblance-based account alone could explain. Leveraging model-based image analyses and crowdsourced annotations, we further determined that drawings did not drift toward "arbitrariness," as predicted by a pure convention-based account, but preserved visually diagnostic features. Taken together, these findings advance psychological theories of how successful graphical conventions emerge.
3. Simultaneous structures in sign languages: Acquisition and emergence. Front Psychol 2022; 13:992589. PMID: 36619119; PMCID: PMC9815181; DOI: 10.3389/fpsyg.2022.992589.
Abstract
The visual-gestural modality affords its users simultaneous movement of several independent articulators and thus lends itself to simultaneous encoding of information. Much research has focused on the fact that sign languages coordinate two manual articulators in addition to a range of non-manual articulators to present different types of linguistic information simultaneously, from phonological contrasts to inflection, spatial relations, and information structure. Children and adults acquiring a signed language arguably thus need to comprehend and produce simultaneous structures to a greater extent than individuals acquiring a spoken language. In this paper, we discuss the simultaneous encoding that is found in emerging and established sign languages; we also discuss places where sign languages are unexpectedly sequential. We explore potential constraints on simultaneity in cognition and motor coordination that might impact the acquisition and use of simultaneous structures.
4. The puzzle of ideography. Behav Brain Sci 2022; 46:e233. PMID: 36254782; DOI: 10.1017/s0140525x22002801.
Abstract
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously, not just as a mnemonic prop, to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: at their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem: we possess specialized cognitive resources for learning spoken language but lack them for graphic codes. I argue in favor of a different account: what is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repair of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller and more specialized.
5. Simultaneity as an Emergent Property of Efficient Communication in Language: A Comparison of Silent Gesture and Sign Language. Cogn Sci 2022; 46:e13133. PMID: 35613353; PMCID: PMC9287048; DOI: 10.1111/cogs.13133.
Abstract
Sign languages use multiple articulators and iconicity in the visual modality, which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. (2020) with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different from its use within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.
6. Investigating Word Order Emergence: Constraints From Cognition and Communication. Front Psychol 2022; 13:805144. PMID: 35529568; PMCID: PMC9072621; DOI: 10.3389/fpsyg.2022.805144.
Abstract
How do cognitive biases and mechanisms from learning and use interact when a system of language conventions emerges? We investigate this question by focusing on how transitive events are conveyed in silent gesture production and interaction. Silent gesture experiments (in which participants improvise using gesture without speech) have been used to investigate cognitive biases that shape utterances produced in the absence of a conventional language system. In this mode of communication, participants do not follow the dominant order of their native language (e.g., Subject-Verb-Object), and instead condition the structure on the semantic properties of the events they are conveying. An important source of structural variability in silent gesture is the property of reversibility. Reversible events typically have two animate participants whose roles can be reversed (girl kicks boy). Without a syntactic/conventional means of conveying who does what to whom, there is inherent unclarity about the agent and patient roles in the event (by contrast, this is less pressing for non-reversible events like girl kicks ball). In Experiment 1, we test a novel, fine-grained analysis of reversibility. Presenting a silent gesture production experiment, we show that the variability in word order depends on two factors (properties of the verb and properties of the direct object) that together determine how reversible an event is. We relate our experimental results to principles from information theory, showing that our data support the “noisy channel” account of constituent order. In Experiment 2, we focus on the influence of interaction on word order variability for reversible and non-reversible events. We show that when participants use silent gesture for communicative interaction, they become more consistent in their word order use over time; however, this pattern is less pronounced for events that are classified as strongly non-reversible. We conclude that full consistency in word order is theoretically a good strategy, but that word order use in practice is a more complex phenomenon.
7. Gesture is the primary modality for language creation. Proc Biol Sci 2022; 289:20220066. PMID: 35259991; PMCID: PMC8905156; DOI: 10.1098/rspb.2022.0066.
Abstract
How language began is one of the oldest questions in science, but theories remain speculative due to a lack of direct evidence. Here, we report two experiments that generate empirical evidence to inform gesture-first and vocal-first theories of language origin; in each, we tested modern humans' ability to communicate a range of meanings (995 distinct words) using either gesture or non-linguistic vocalization. Experiment 1 is a cross-cultural study, with signal Producers sampled from Australia (n = 30, mean age = 32.63, s.d. = 12.42) and Vanuatu (n = 30, mean age = 32.40, s.d. = 11.76). Experiment 2 is a cross-experiential study in which Producers were either sighted (n = 10, mean age = 39.60, s.d. = 11.18) or severely vision-impaired (n = 10, mean age = 39.40, s.d. = 10.37). A group of undergraduate student Interpreters (n = 140) guessed the meaning of the signals created by the Producers. Communication success was substantially higher in the gesture modality than the vocal modality (twice as high overall; 61.17% versus 29.04%). This was true within cultures, across cultures, and even for the signals produced by severely vision-impaired participants. The success of gesture is attributed in part to its greater universality (i.e., similarity in form across different Producers). Our results support the hypothesis that gesture is the primary modality for language creation.
8. Phonological characteristics of novel gesture production in children with developmental language disorder: Longitudinal findings. Appl Psycholinguist 2022; 43:333-362. PMID: 35342208; PMCID: PMC8955622; DOI: 10.1017/s0142716421000540.
Abstract
Children with developmental language disorder (DLD; also known as specific language impairment) are characterized on the basis of deficits in language, especially morphosyntax, in the absence of other explanatory conditions. However, deficits in speech production, as well as in fine and gross motor skill, have also been observed, implicating both the linguistic and motor systems. Situated at the intersection of these domains, and providing insight into both, is manual gesture. In the current work, we asked whether children with DLD showed phonological deficits in the production of novel gestures and whether gesture production at 4 years of age is related to language and motor outcomes two years later. Twenty-eight children (14 with DLD) participated in a two-year longitudinal novel gesture production study. At the first and final time points, language and fine motor skills were measured, and gestures were analyzed for phonological feature accuracy, including handshape, path, and orientation. Results indicated that, while early deficits in phonological accuracy did not persist for children with DLD, all children struggled with orientation, whereas handshape was the most accurate feature. Early handshape and orientation accuracy were also predictive of later language skill, but only for the children with DLD. Theoretical and clinical implications of these findings are discussed.
9. The effect of gestures on the interpretation of plural references. J Cogn Psychol 2021. DOI: 10.1080/20445911.2021.1998074.
10. Persuasive conversation as a new form of communication in Homo sapiens. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200196. PMID: 33745315; DOI: 10.1098/rstb.2020.0196.
Abstract
The aim of this paper is twofold: to propose that conversation is the distinctive feature of Homo sapiens' communication; and to show that the emergence of modern language is tied to the transition from pantomime to verbal and grammatically complex forms of narrative. It is suggested that (animal and human) communication is a form of persuasion and that storytelling was the best tool developed by humans to convince others. In the early stage of communication, archaic hominins used forms of pantomimic storytelling to persuade others. Although pantomime is a powerful tool for persuasive communication, it is proposed that it is not an effective tool for persuasive conversation: conversation is characterized by a form of reciprocal persuasion among peers; instead, pantomime has a mainly asymmetrical character. The selective pressure towards persuasive reciprocity of the conversational level is the evolutionary reason that allowed the transition from pantomime to grammatically complex codes in H. sapiens, which favoured the evolution of speech. This article is part of the theme issue 'Reconstructing prehistoric languages'.
11. Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign. Cognition 2021; 211:104608. PMID: 33581667; DOI: 10.1016/j.cognition.2021.104608.
Abstract
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, thus suggesting that children bring biases to language-learning that allow them to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large number of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign. The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language, shedding light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.
12.
Abstract
Many studies have sought approaches to overcome the Uncanny Valley. However, the focus on the influence of the robot's appearance leaves a large gap: the influence of the robot's nonverbal behavior. This impedes a complete exploration of the Uncanny Valley. In this study, we explored the Uncanny Valley from the viewpoint of the robot's nonverbal behavior. We observed the relationship between participants' ratings of the human-likeness of the robot's nonverbal behavior and their affinity toward that behavior, and define the point where affinity drops significantly as the Uncanny Valley. A human–robot interaction experiment was conducted in which participants interacted with a robot exhibiting different nonverbal behaviors, ranging from none (speech only) to a combination of three (gaze, head nodding, and gestures), and rated the perceived human-likeness of, and affinity toward, the robot's nonverbal behavior using a questionnaire. Additionally, participants' fixation duration was measured during the experiment. The results showed a biphasic relationship between the human-likeness and affinity ratings: a curve resembling the Uncanny Valley. This result was supported by the fixation data, which showed that participants fixated longest on the robot when it expressed the nonverbal behaviors that fall into the Uncanny Valley. This exploratory study provides evidence suggesting the existence of the Uncanny Valley from the viewpoint of the robot's nonverbal behavior.
13. Pantomime (Not Silent Gesture) in Multimodal Communication: Evidence From Children's Narratives. Front Psychol 2020; 11:575952. PMID: 33329222; PMCID: PMC7734346; DOI: 10.3389/fpsyg.2020.575952.
Abstract
Pantomime has long been considered distinct from co-speech gesture. It has therefore been argued that pantomime cannot be part of gesture-speech integration. We examine pantomime as distinct from silent gesture, focusing on non-co-speech gestures that occur in the midst of children’s spoken narratives. We propose that gestures with features of pantomime are an infrequent but meaningful component of a multimodal communicative strategy. We examined spontaneous non-co-speech representational gesture production in the narratives of 30 monolingual English-speaking children between the ages of 8 and 11 years. We compared the use of co-speech and non-co-speech gestures in both autobiographical and fictional narratives and examined viewpoint and the use of non-manual articulators, as well as the length of responses and narrative quality. The use of non-co-speech gestures was associated with longer narratives of equal or higher quality than those using only co-speech gestures. Non-co-speech gestures were most likely to adopt character-viewpoint and use non-manual articulators. The present study supports a deeper understanding of the term pantomime and its multimodal use by children in the integration of speech and gesture.
14. Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behav Res Methods 2020; 52:51-67. PMID: 30788798; PMCID: PMC7005091; DOI: 10.3758/s13428-019-01204-6.
Abstract
An unprecedented number of empirical studies have shown that iconic gestures, those that mimic the sensorimotor attributes of a referent, contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture-meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture's mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
15. The communicative importance of agent-backgrounding: Evidence from homesign and Nicaraguan Sign Language. Cognition 2020; 203:104332. PMID: 32559513; DOI: 10.1016/j.cognition.2020.104332.
Abstract
Some concepts are more essential for human communication than others. In this paper, we investigate whether the concept of agent-backgrounding is sufficiently important for communication that linguistic structures for encoding it are present in young sign languages. Agent-backgrounding constructions serve to reduce the prominence of the agent; the English passive sentence "a book was knocked over" is an example. Although these constructions are widely attested cross-linguistically, there is little prior research on the emergence of such devices in new languages. Here we studied how agent-backgrounding constructions emerge in Nicaraguan Sign Language (NSL) and adult homesign systems. We found that NSL signers have innovated both lexical and morphological devices for expressing agent-backgrounding, indicating that conveying a flexible perspective on events has deep communicative value. At the same time, agent-backgrounding devices did not emerge at the same time as agentive devices. This result suggests that agent-backgrounding does not have the same core cognitive status as agency. The emergence of agent-backgrounding morphology appears to depend on receiving as input a linguistic system in which devices for expressing agency are already well-established.
16. Do you understand what I want to tell you? Early sensitivity in bilinguals' iconic gesture perception and production. Dev Sci 2020; 23:e12943. PMID: 31991030; DOI: 10.1111/desc.12943.
Abstract
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre-schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production) and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate 'with their hands' to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.
17. Lexical Iconicity is differentially favored under transmission in a new sign language: The effect of type of iconicity. Sign Lang Linguist 2020; 23:73-95. PMID: 33613090; PMCID: PMC7894619; DOI: 10.1075/sll.00044.pye.
Abstract
Observations that iconicity diminishes over time in sign languages pose a puzzle: why should something so evidently useful and functional decrease? Using an archival dataset of signs elicited over 15 years from 4 first-cohort and 4 third-cohort signers of an emerging sign language (Nicaraguan Sign Language), we investigated changes in pantomimic (body-to-body) and perceptual (body-to-object) iconicity. We make three key observations: (1) there is greater variability in the signs produced by the first cohort compared to the third; (2) while both types of iconicity are evident, pantomimic iconicity is more prevalent than perceptual iconicity for both groups; and (3) across cohorts, pantomimic elements are dropped in greater proportion than perceptual elements. The higher rate of pantomimic iconicity in the first-cohort lexicon reflects the usefulness of body-as-body mapping in language creation. Yet its greater vulnerability to change over transmission suggests that it is less favored by children's language acquisition processes.
18. Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to signs. Cognition 2019; 191:103996. PMID: 31238248; DOI: 10.1016/j.cognition.2019.06.008.
Abstract
The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as a means of expression. Despite their striking differences, they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge on the acquisition of new schemas. Through these mechanisms, we propose that iconic gestures that overlap in form with signs may serve as a type of 'manual cognate' that helps non-signing adults break into a new language at first exposure.
Collapse
|
19
|
Abstract
The commentaries have led us to entertain expansions of our paradigm to include new theoretical questions, new criteria for what counts as a gesture, and new data and populations to study. The expansions further reinforce the approach we took in the target article: namely, that linguistic and gestural components are two distinct yet integral sides of communication, which need to be studied together.
Collapse
|
20
|
Comprehensibility and neural substrate of communicative gestures in severe aphasia. BRAIN AND LANGUAGE 2017; 171:62-71. [PMID: 28535366 DOI: 10.1016/j.bandl.2017.04.007] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2016] [Revised: 03/21/2017] [Accepted: 04/18/2017] [Indexed: 06/07/2023]
Abstract
Communicative gestures can compensate for the incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also affect the production of gestures. We compared the comprehensibility of the gestural communication of persons with severe aphasia and non-aphasic persons, and used voxel-based lesion-symptom mapping (VLSM) to determine the lesion sites responsible for poor gestural expression in aphasia. At the group level, persons with aphasia conveyed more information via gestures than controls, indicating a compensatory use of gestures in severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that these regions support the selection of, and flexible switching between, communication channels, different types of gestures, and the action and object features that gestures express.
Collapse
|
21
|
Production and Comprehension of Pantomimes Used to Depict Objects. Front Psychol 2017; 8:1095. [PMID: 28744232 PMCID: PMC5504161 DOI: 10.3389/fpsyg.2017.01095] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Accepted: 06/13/2017] [Indexed: 11/26/2022] Open
Abstract
Pantomime, gesture in the absence of speech, has no conventional meaning. Nevertheless, individuals seem able to produce pantomimes and to derive meaning from them. A number of studies have addressed the use of co-speech gesture, but little is known about pantomime. The question of how people construct and understand pantomimes therefore arises in gesture research. To determine how people use pantomimes, we asked participants to depict a set of objects using pantomime only, and we annotated which representation techniques they produced. Furthermore, using judgment tasks, we assessed the pantomimes' comprehensibility. Analyses showed that similar techniques were used to depict objects across individuals. Objects with a default depiction method were better comprehended than objects for which there was no such default; more specifically, tools and objects depicted using a handling technique were better understood. The open-answer experiment showed low interpretation accuracy, whereas the forced-choice experiment showed ceiling effects. These results suggest that similar strategies are deployed across individuals to produce pantomime, with the handling technique as the apparent preference. This might indicate that the production of pantomimes is based on mental representations that are intrinsically similar. Furthermore, pantomime conveys semantically rich but ambiguous information, and its interpretation is highly dependent on context. The pantomime database is available online: https://dataverse.nl/dataset.xhtml?persistentId=hdl:10411/QZHO6M. It can serve as a baseline against which clinical groups can be compared.
Collapse
|
22
|
Abstract
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal--that gesture arises from simulated action (Hostetter & Alibali Psychonomic Bulletin & Review, 15, 495-514, 2008)--has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause--the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically in supporting generalization and transfer of knowledge.
Collapse
|
23
|
|
24
|
Communicative effectiveness of pantomime gesture in people with aphasia. INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS 2017; 52:227-237. [PMID: 27417906 DOI: 10.1111/1460-6984.12268] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2015] [Revised: 04/26/2016] [Accepted: 04/26/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden gestures, the communicative value of these has not been adequately investigated. AIMS To investigate the communicative effectiveness of pantomime gesture produced spontaneously by individuals with aphasia during conversational discourse. METHODS & PROCEDURES Sixty-seven undergraduate students wrote down the messages conveyed by 11 people with aphasia who produced pantomime while engaged in conversational discourse. Students were presented with a speech-only, a gesture-only, and a combined speech and gesture condition, and guessed messages in both a free-description and a multiple-choice task. OUTCOMES & RESULTS As hypothesized, listener comprehension was more accurate in the combined pantomime gesture and speech condition than in the gesture-only or speech-only conditions. Participants achieved greater accuracy in the multiple-choice task than in the free-description task, but only in the gesture-only condition. The communicative effectiveness of the pantomime gestures increased as the fluency of the participants with aphasia decreased. CONCLUSIONS & IMPLICATIONS These results indicate that when pantomime gesture was presented with aphasic speech, the combination had strong communicative effectiveness. Future studies could investigate how pantomimes can be integrated into interventions for people with aphasia, particularly by eliciting pantomimes in as natural a context as possible and highlighting the opportunity for efficient message repair.
Collapse
|
25
|
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this modality offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties even in the hands of a child not exposed to a language model, but that it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Collapse
|
26
|
Abstract
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Collapse
|
27
|
Prosody Predicts Contest Outcome in Non-Verbal Dialogs. PLoS One 2016; 11:e0166953. [PMID: 27907039 PMCID: PMC5132166 DOI: 10.1371/journal.pone.0166953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2016] [Accepted: 11/07/2016] [Indexed: 11/24/2022] Open
Abstract
Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, the extent to which humans can spontaneously use rhythm and prosody as their sole communication tool is largely unknown. We analysed the human ability to resolve a conflict without verbal dialog, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with producing more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners could be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared with the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that, in the absence of other communication channels, individuals spontaneously use a subtle variation in sound accentuation (prosody), rather than merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.
Collapse
|
28
|
Successful communication does not drive language development: Evidence from adult homesign. Cognition 2016; 158:10-27. [PMID: 27771538 DOI: 10.1016/j.cognition.2016.09.012] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 07/07/2016] [Accepted: 09/28/2016] [Indexed: 10/20/2022]
Abstract
Constructivist accounts of language acquisition maintain that the language learner aims to match a target provided by mature users. Communicative problem solving in the context of social interaction and matching a linguistic target or model are presented as primary mechanisms driving the language development process. However, research on the development of homesign gesture systems by deaf individuals who have no access to a linguistic model suggests that aspects of language can develop even when typical input is unavailable. In four studies, we examined the role of communication in the genesis of homesign systems by assessing how well homesigners' family members comprehend homesign productions. In Study 1, homesigners' mothers showed poorer comprehension of homesign descriptions produced by their now-adult deaf child than of spoken Spanish descriptions of the same events produced by one of their adult hearing children. Study 2 found that the younger a family member was when they first interacted with their deaf relative, the better they understood the homesigner. Despite this, no family member comprehended homesign productions at levels that would be expected if family members co-generated homesign systems with their deaf relative via communicative interactions. Study 3 found that mothers' poor or incomplete comprehension of homesign was not a result of incomplete homesign descriptions. In Study 4 we demonstrated that Deaf native users of American Sign Language, who had no previous experience with the homesigners or their homesign systems, nevertheless comprehended homesign productions out of context better than the homesigners' mothers. This suggests that homesign has comprehensible structure, to which mothers and other family members are not fully sensitive. Taken together, these studies show that communicative problem solving is not responsible for the development of structure in homesign systems. The role of this mechanism must therefore be re-evaluated in constructivist theories of language development.
Collapse
|
29
|
Handshape complexity as a precursor to phonology: Variation, Emergence, and Acquisition. LANGUAGE ACQUISITION 2016; 24:283-306. [PMID: 33033424 PMCID: PMC7540628 DOI: 10.1080/10489223.2016.1187614] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
In this paper, two dimensions of handshape complexity are analyzed as potential building blocks of phonological contrast: joint complexity and finger group complexity. We ask whether sign language patterns are elaborations of those seen in the gestures produced by hearing people without speech (pantomime) or a more radical re-organization of them. Data from adults and children are analyzed to address issues of cross-linguistic variation, emergence, and acquisition. Study 1 addresses these issues in adult signers and gesturers from the United States, Italy, China, and Nicaragua. Study 2 addresses these issues in child and adult groups (signers and gesturers) from the United States, Italy, and Nicaragua. We argue that handshape undergoes a fairly radical reorganization, including loss and reorganization of iconicity and feature redistribution, as phonologization takes place in both of these dimensions. Moreover, while the patterns investigated here are not evidence of duality of patterning, we conclude that they are indeed phonological, and that they appear earlier than related morphosyntactic patterns that use the same types of handshape.
Collapse
|
30
|
Abstract
When people talk, they gesture. Typically, gesture is produced along with speech and forms a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced on its own, without speech. In these instances, gesture must take over the full burden of communication usually shared by the two modalities. What happens to gesture in this very different context? One possibility is that there are no differences in the forms gesture takes with speech and without it—that gesture is gesture no matter what its function. But that is not what we find. When gesture is produced on its own and assumes the full burden of communication, it takes on a language-like form. In contrast, when gesture is produced in conjunction with speech and shares the burden of communication with that speech, it takes on an unsegmented, imagistic form, often conveying information not found in speech. As such, gesture sheds light on how people think and can even play a role in changing those thoughts. Gesture can thus be part of language or it can itself be language, altering its form to fit its function.
Collapse
|
31
|
A multimodal parallel architecture: A cognitive framework for multimodal interactions. Cognition 2016; 146:304-23. [DOI: 10.1016/j.cognition.2015.10.007] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2014] [Revised: 09/03/2015] [Accepted: 10/11/2015] [Indexed: 11/29/2022]
|
32
|
Iconicity can ground the creation of vocal symbols. ROYAL SOCIETY OPEN SCIENCE 2015; 2:150152. [PMID: 26361547 PMCID: PMC4555852 DOI: 10.1098/rsos.150152] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2015] [Accepted: 07/10/2015] [Indexed: 05/05/2023]
Abstract
Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative 'vocal' charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic 'meaning template' we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
Collapse
|
33
|
The Continuity of Metaphor: Evidence From Temporal Gestures. Cogn Sci 2015; 40:481-95. [DOI: 10.1111/cogs.12254] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2014] [Revised: 12/30/2014] [Accepted: 01/02/2015] [Indexed: 11/27/2022]
|
34
|
Abstract
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Collapse
|
35
|
Abstract
In the last few years, researchers have begun to investigate the emergence of novel forms of human communication in the laboratory. I survey this growing line of research, which may be called experimental semiotics, from three distinct angles. First, I situate the new approach in its theoretical and historical context. Second, I review a sample of studies that exemplify experimental semiotics. Third, I present an empirical study that illustrates how the new approach can help us understand the socio-cognitive underpinnings of human communication. The main conclusion of the paper will be that, by reproducing micro samples of historical processes in the laboratory, experimental semiotics offers new powerful tools for investigating human communication as a form of joint action.
Collapse
|
36
|
Abstract
Until recently it was widely held that language, and its left-hemispheric representation in the brain, were uniquely human, emerging abruptly after the emergence of Homo sapiens. Changing views of language suggest that it was not a recent and sudden development in human evolution, but was adapted from dual-stream circuitry long predating hominins, including a system in nonhuman primates specialized for intentional grasping. This system was gradually tailored for skilled manual operations (praxis) and communication. As processing requirements grew more demanding, the neural circuits were increasingly lateralized, with the left hemisphere assuming dominance, at least in the majority of individuals. The trend toward complexity and lateralization was probably accelerated in hominins when bipedalism freed the hands for more complex manufacture and tool use, and more expressive communication. The incorporation of facial and vocal gestures led to the emergence of speech as the dominant mode of language, although gestural communication may have led to generative language before speech became dominant. This scenario provides a more Darwinian perspective on language and its lateralization than has been commonly assumed.
Collapse
|
37
|
The Influence of the Visual Modality on Language Structure and Conventionalization: Insights From Sign Language and Gesture. Top Cogn Sci 2015; 7:2-11. [DOI: 10.1111/tops.12127] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Revised: 11/13/2014] [Accepted: 11/13/2014] [Indexed: 11/29/2022]
|
38
|
Abstract
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
Collapse
|
39
|
The impact of time on predicate forms in the manual modality: signers, homesigners, and silent gesturers. Top Cogn Sci 2015; 7:169-84. [PMID: 25329421 PMCID: PMC4310783 DOI: 10.1111/tops.12119] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2013] [Revised: 11/11/2013] [Accepted: 02/07/2014] [Indexed: 11/27/2022]
Abstract
It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans (by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers) and finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.
Collapse
|
40
|
Cognitive, Cultural, and Linguistic Sources of a Handshape Distinction Expressing Agentivity. Top Cogn Sci 2014; 7:95-123. [DOI: 10.1111/tops.12123] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2013] [Revised: 11/21/2013] [Accepted: 01/24/2014] [Indexed: 11/27/2022]
|
41
|
On language acquisition in speech and sign: development of combinatorial structure in both modalities. Front Psychol 2014; 5:1217. [PMID: 25426085 PMCID: PMC4227467 DOI: 10.3389/fpsyg.2014.01217] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2014] [Accepted: 10/07/2014] [Indexed: 11/25/2022] Open
Abstract
Languages are composed of a conventionalized system of parts that allows speakers and signers to generate an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gesture. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure at similar rates and in similar patterns. This is interesting because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups face a similar problem: "how do I make a language with combinatorial structure?" In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and of children's productive use of verb morphology in both sign and speech.
Collapse
|
42
|
The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. JOURNAL OF COGNITION AND DEVELOPMENT 2014; 16:55-80. [PMID: 25663828 DOI: 10.1080/15248372.2013.803970] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language (the so-called resilient properties of language). We explored the resilience of structure built around the predicate (in particular, how manner and path are mapped onto the verb) in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
Collapse
|
43
|
Neuropsychological functions of hand movements and gestures change in the presence or absence of speech. JOURNAL OF COGNITIVE PSYCHOLOGY 2014. [DOI: 10.1080/20445911.2014.961925] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
44
|
Abstract
One reason for the apparent gulf between animal and human communication systems is that the focus has been on the presence or the absence of language as a complex expressive system built on speech. But language normally occurs embedded within an interactional exchange of multi-modal signals. If this larger perspective takes central focus, then it becomes apparent that human communication has a layered structure, where the layers may be plausibly assigned different phylogenetic and evolutionary origins--especially in the light of recent thoughts on the emergence of voluntary breathing and spoken language. This perspective helps us to appreciate the different roles that the different modalities play in human communication, as well as how they function as one integrated system despite their different roles and origins. It also offers possibilities for reconciling the 'gesture-first hypothesis' with that of gesture and speech having evolved together, hand in hand--or hand in mouth, rather--as one system.
Collapse
|
45
|
Creating a communication system from scratch: gesture beats vocalization hands down. Front Psychol 2014; 5:354. [PMID: 24808874 PMCID: PMC4010783 DOI: 10.3389/fpsyg.2014.00354] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2013] [Accepted: 04/04/2014] [Indexed: 11/30/2022] Open
Abstract
How does modality affect people's ability to create a communication system from scratch? The present study experimentally tests this question by having pairs of participants communicate a range of pre-specified items (emotions, actions, objects) over a series of trials to a partner using either non-linguistic vocalization, gesture or a combination of the two. Gesture-alone outperformed vocalization-alone, both in terms of successful communication and in terms of the creation of an inventory of sign-meaning mappings shared within a dyad (i.e., sign alignment). Combining vocalization with gesture did not improve performance beyond gesture-alone. In fact, for action items, gesture-alone was a more successful means of communication than the combined modalities. When people do not share a system for communication they can quickly create one, and gesture is the best means of doing so.
Collapse
|
46
|
|
47
|
The gestural misinformation effect: skewing eyewitness testimony through gesture. AMERICAN JOURNAL OF PSYCHOLOGY 2013; 126:301-14. [PMID: 24027944 DOI: 10.5406/amerjpsyc.126.3.0301] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The susceptibility of eyewitnesses to verbal suggestion has been well documented, although little attention has been paid to the role of nonverbal communication in misinformation. Three experiments are reported; in each, participants watched footage of a crime scene before being questioned about what they had observed. In Experiments 1 and 2, an on-screen interviewer accompanied identically worded questions with gestures that either conveyed accurate information about the scene or conveyed false, misleading information. The misleading gestures significantly influenced recall, and participants' responses were consistent with the gestured information. In Experiment 3, a live interview was conducted, and the gestural misinformation effect was found to be robust; participants were influenced by misleading gestures performed by the interviewer during questioning. These findings provide compelling evidence for the gestural misinformation effect, whereby subtle hand gestures can implant information and distort the testimony of eyewitnesses. The practical and legal implications of these findings are discussed.
Collapse
|
48
|
The relationship of aphasia type and gesture production in people with aphasia. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2013; 22:662-672. [PMID: 24018695 DOI: 10.1044/1058-0360(2013/12-0030)] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
PURPOSE: For many individuals with aphasia, gestures form a vital component of message transfer and are a target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia, in a consistent discourse sampling condition and with a detailed gesture coding system, to determine patterns of gesture production associated with specific types of aphasia. METHOD: The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production, and possible differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. RESULTS: A significantly higher proportion of individuals with aphasia gestured compared with typical controls, and for many individuals with aphasia these gestures were iconic and capable of carrying communicative load. Aphasia type had a significant effect on gesture type, in specific patterns detailed here. CONCLUSION: These type-specific patterns suggest an opportunity for gestures as targets of aphasia therapy.
Collapse
|
49
|
Predicate structures, gesture, and simultaneity in the representation of action in British Sign Language: evidence from deaf children and adults. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2013; 18:370-390. [PMID: 23670881 PMCID: PMC3943391 DOI: 10.1093/deafed/ent020] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2012] [Revised: 02/28/2013] [Accepted: 03/24/2013] [Indexed: 06/02/2023]
Abstract
British Sign Language (BSL) signers use a variety of structures, such as constructed action (CA), depicting constructions (DCs), or lexical verbs, to represent action and other verbal meanings. This study examines the use of these verbal predicate structures and their gestural counterparts, both separately and simultaneously, in narratives by deaf children with various levels of exposure to BSL (ages 5;1 to 7;5) and deaf adult native BSL signers. Results reveal that all groups used the same types of predicative structures, including children with minimal BSL exposure. However, adults used CA, DCs, and/or lexical signs simultaneously more frequently than children. These results suggest that simultaneous use of CA with lexical and depicting predicates is more complex than the use of these predicate structures alone and thus may take deaf children more time to master.
Collapse
|
50
|
How handshape type can distinguish between nouns and verbs in homesign. GESTURE (AMSTERDAM, NETHERLANDS) 2013; 13:354-376. [PMID: 25435844 PMCID: PMC4245027 DOI: 10.1075/gest.13.3.05hun] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.
Collapse
|