1
Stamp R, Cohn D, Hel-Or H, Sandler W. Kinect-ing the Dots: Using Motion-Capture Technology to Distinguish Sign Language Linguistic From Gestural Expressions. Lang Speech 2024; 67:255-276. [PMID: 37313985] [DOI: 10.1177/00238309231169502]
Abstract
Just as vocalization proceeds in a continuous stream in speech, so too do movements of the hands, face, and body in sign languages. Here, we use motion-capture technology to distinguish lexical signs in sign language from other common types of expression in the signing stream. One type of expression is constructed action, the enactment of (aspects of) referents and events by (parts of) the body. Another is classifier constructions, the manual representation of analogue and gradient motions and locations simultaneously with specified referent morphemes. The term signing is commonly used for all of these, but we show that not all visual signals in sign languages are of the same type. In this study of Israeli Sign Language, we use motion capture to show that the motion of lexical signs differs significantly along several kinematic parameters from that of the two other modes of expression: constructed action and the classifier forms. In so doing, we show how motion-capture technology can help to define the universal linguistic category "word," and to distinguish it from the expressive gestural elements that are commonly found across sign languages.
Affiliation(s)
- Rose Stamp
- Department of English Literature and Linguistics, Bar-Ilan University, Israel
- Hagit Hel-Or
- Department of Computer Science, University of Haifa, Israel
- Wendy Sandler
- Sign Language Research Lab, University of Haifa, Israel
2
Trujillo JP, Holler J. Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Sci Rep 2024; 14:2286. [PMID: 38280963] [PMCID: PMC10821935] [DOI: 10.1038/s41598-024-52589-0]
Abstract
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed a different meaning compared with utterances accompanied by a single visual signal. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.
Affiliation(s)
- James P Trujillo
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
3
Kelly SD, Ngo Tran QA. Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. [PMID: 37115518] [DOI: 10.1111/tops.12657]
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role, a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research that have varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
Affiliation(s)
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Center for Language and Brain, Colgate University, 13 Oak Dr., Hamilton, NY, 13346, United States
- Quang-Anh Ngo Tran
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405, United States
4
Berent I, Gervain J. Speakers aren't blank slates (with respect to sign-language phonology)! Cognition 2023; 232:105347. [PMID: 36528980] [DOI: 10.1016/j.cognition.2022.105347]
Abstract
A large literature has gauged the linguistic knowledge of signers by comparing sign-processing by signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge, and as such, they present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
Affiliation(s)
- Judit Gervain
- INCC, CNRS & Université Paris Cité, Paris, France; DPSS, University of Padua, Italy
5
Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210094. [PMID: 35876208] [PMCID: PMC9310176] [DOI: 10.1098/rstb.2021.0094]
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human "interaction engine": comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler
- Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands
- Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6
Wagner D, Bialystok E, Grundy JG. What Is a Language? Who Is Bilingual? Perceptions Underlying Self-Assessment in Studies of Bilingualism. Front Psychol 2022; 13:863991. [PMID: 35645938] [PMCID: PMC9134110] [DOI: 10.3389/fpsyg.2022.863991]
Abstract
Research on the cognitive consequences of bilingualism typically proceeds by labeling participants as "monolingual" or "bilingual" and comparing performance on some measures across these groups. It is well known that this approach has led to inconsistent results. However, the approach assumes that there are clear criteria to designate individuals as monolingual or bilingual, and more fundamentally, to determine whether a communication system counts as a unique language. Both of these assumptions may be incorrect. The problem is particularly acute when participants are asked to classify themselves or simply report how many languages they speak. Participants' responses to these questions are shaped by their personal perceptions of the criteria for making these judgments. This study investigated the perceptions underlying judgments of bilingualism by asking 528 participants to judge the extent to which a description of a fictional linguistic system constitutes a unique language and the extent to which a description of a fictional individual's linguistic competence qualifies that person as bilingual. The results show a range of responses for both concepts, indicating substantial ambiguity for these terms. Moreover, participants were asked to self-classify as monolingual or bilingual, and these decisions were not related to more objective information regarding the degree of bilingual experience obtained from a detailed questionnaire. These results are consistent with the notion that bilingualism is not categorical and that specific language experiences are important in determining the criteria for being bilingual. The results impact interpretations of research investigating group differences in the cognitive effects of bilingualism.
Affiliation(s)
- Danika Wagner
- Department of Psychology, York University, Toronto, ON, Canada
- Ellen Bialystok
- Department of Psychology, York University, Toronto, ON, Canada
- John G. Grundy
- Department of Psychology, Iowa State University, Ames, IA, United States
7
Pouw W, Proksch S, Drijvers L, Gamba M, Holler J, Kello C, Schaefer RS, Wiggins GA. Multilevel rhythms in multimodal communication. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200334. [PMID: 34420378] [PMCID: PMC8380971] [DOI: 10.1098/rstb.2020.0334]
Abstract
It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication, which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, and social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Shannon Proksch
- Cognitive and Information Sciences, University of California, Merced, CA, USA
- Linda Drijvers
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Marco Gamba
- Department of Life Sciences and Systems Biology, University of Turin, Turin, Italy
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Christopher Kello
- Cognitive and Information Sciences, University of California, Merced, CA, USA
- Rebecca S. Schaefer
- Health, Medical and Neuropsychology unit, Institute for Psychology, Leiden University, Leiden, The Netherlands
- Academy for Creative and Performing Arts, Leiden University, Leiden, The Netherlands
- Geraint A. Wiggins
- Vrije Universiteit Brussel, Brussels, Belgium
- Queen Mary University of London, London, UK
8
Pouw W, Dingemanse M, Motamedi Y, Özyürek A. A Systematic Investigation of Gesture Kinematics in Evolving Manual Languages in the Lab. Cogn Sci 2021; 45:e13014. [PMID: 34288069] [PMCID: PMC8365719] [DOI: 10.1111/cogs.13014]
Abstract
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form on the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained on semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen
- Mark Dingemanse
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Center for Language Studies, Radboud University Nijmegen
- Aslı Özyürek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen
- Center for Language Studies, Radboud University Nijmegen
10
The signing body: extensive sign language practice shapes the size of hands and face. Exp Brain Res 2021; 239:2233-2249. [PMID: 34028597] [PMCID: PMC8282562] [DOI: 10.1007/s00221-021-06121-9]
Abstract
The representation of the metrics of the hands is distorted, but is susceptible to malleability due to expert dexterity (magicians) and long-term tool use (baseball players). However, it remains unclear whether modulation leads to a stable representation of the hand that is adopted in every circumstance, or whether the modulation is closely linked to the spatial context where the expertise occurs. To this aim, a group of 10 experienced Sign Language (SL) interpreters was recruited to study the selective influence of expertise and spatial localisation on the metric representation of hands. Experiment 1 explored differences in hand-size representation between the SL interpreters and 10 age-matched controls in near-reaching (Condition 1) and far-reaching space (Condition 2), using the localisation task. SL interpreters presented reduced hand size in the near-reaching condition, with characteristic underestimation of finger lengths, and reduced overestimation of hand and wrist widths in comparison with controls. This difference was lost in far-reaching space, confirming that the effect of expertise on hand representations is closely linked to the spatial context where an action is performed. As SL interpreters are also experts in the use of their face for communicative purposes, the effects of expertise on the metrics of the face were also studied (Experiment 2). SL interpreters were more accurate than controls, with an overall reduction of width overestimation. Overall, expertise modifies the representation of relevant body parts in a specific and context-dependent manner. Hence, different representations of the same body part can coexist simultaneously.
11
Mineiro A, Báez-Montero IC, Moita M, Galhano-Rodrigues I, Castro-Caldas A. Disentangling Pantomime From Early Sign in a New Sign Language: Window Into Language Evolution Research. Front Psychol 2021; 12:640057. [PMID: 33935890] [PMCID: PMC8080026] [DOI: 10.3389/fpsyg.2021.640057]
Abstract
In this study, we aim to disentangle pantomime from early signs in a newly born sign language: Sao Tome and Principe Sign Language. Our results show that within 2 years of their first contact with one another, a community of 100 participants interacting every day was able to build a shared language. The growth of linguistic systematicity, which included a decrease in the use of pantomime, a reduction of the amplitude of signs, and an increase in articulation economy, showcases a learning and social interaction process that constitutes a continuum and not a cut-off system. The human cognitive system is endowed with mechanisms for symbolization that allow the process of arbitrariness to unfold and linguistic complexity to expand. Our study helps to clarify the role of pantomime in a new sign language and how this role might be linked with language itself, showing implications for language evolution research.
Affiliation(s)
- Ana Mineiro
- Catholic University of Portugal, Lisbon, Portugal
- Center of Interdisciplinary Research in Health, Catholic University of Portugal, Lisbon, Portugal
- Mara Moita
- Catholic University of Portugal, Lisbon, Portugal
- Center of Interdisciplinary Research in Health, Catholic University of Portugal, Lisbon, Portugal
- Linguistics Research Centre of the UNL (CLUNL), NOVA University of Lisbon, Lisbon, Portugal
- Isabel Galhano-Rodrigues
- University of Porto, Porto, Portugal
- Centro de Linguística da Universidade do Porto, University of Porto, Porto, Portugal
- Alexandre Castro-Caldas
- Catholic University of Portugal, Lisbon, Portugal
- Center of Interdisciplinary Research in Health, Catholic University of Portugal, Lisbon, Portugal
12
Luna S, Joubert S, Blondel M, Cecchetto C, Gagné JP. The Impact of Aging on Spatial Abilities in Deaf Users of a Sign Language. J Deaf Stud Deaf Educ 2021; 26:230-240. [PMID: 33221919] [DOI: 10.1093/deafed/enaa034]
Abstract
Research involving the general population of people who use a spoken language to communicate has demonstrated that older adults experience cognitive and physical changes associated with aging. Notwithstanding the differences in the cognitive processes involved in sign and spoken languages, it is possible that aging can also affect cognitive processing in deaf signers. This research aims to explore the impact of aging on spatial abilities among sign language users. Results showed that younger signers were more accurate than older signers on all spatial tasks. Therefore, the age-related impact on spatial abilities found in the older hearing population can be generalized to the population of signers. Potential implications for sign language production and comprehension are discussed.
Affiliation(s)
- Stéphanie Luna
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
- Sven Joubert
- Department of Psychology, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
- Marion Blondel
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
- Carlo Cecchetto
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
- Department of Psychology, University of Milan-Bicocca
- Jean-Pierre Gagné
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
13
De Campos D, Buso L. Deaf sign language hidden in the fresco The Crucifixion of Saint Peter by Michelangelo Buonarroti (1475-1564). Acta Biomed 2020; 91:e2020192. [PMID: 33525298] [PMCID: PMC7927487] [DOI: 10.23750/abm.v91i4.9069]
Abstract
Since antiquity, specialists have worked to facilitate communication for hearing-impaired individuals; according to the current literature, hearing impairment is among the disabilities with the greatest impact on quality of life. The system by which deaf people communicate is based essentially on sign language and the manual alphabet, employing gestures and facial and body expressions. Although there are no exact data on how many people communicated through sign language in ancient times, studies show that manual alphabets were used by deaf people in Europe in the early 15th century. Perhaps this reflects a significant number of deaf people living throughout Europe at that time who needed sign language to communicate. In this context, this manuscript demonstrates, for the first time, that the renowned Italian Renaissance artist and genius of human anatomy Michelangelo Buonarroti (1475-1564) may have used deaf sign language in the fresco The Crucifixion of Saint Peter (Cappella Paolina, Vatican City, Italy). This would demonstrate the engagement of one of the greatest Renaissance artists with a clinical condition that has been studied by numerous health specialists since ancient times.
Affiliation(s)
- Luciano Buso
- San Vito di Altivole, 31030, Treviso, Italy; scholar and scientific researcher in the field of art
14
Oña LS, Sandler W, Liebal K. A stepping stone to compositionality in chimpanzee communication. PeerJ 2019; 7:e7623. [PMID: 31565566] [PMCID: PMC6745191] [DOI: 10.7717/peerj.7623]
Abstract
Compositionality refers to a structural property of human language, according to which the meaning of a complex expression is a function of the meaning of its parts and the way they are combined. Compositionality is a defining characteristic of all human language, spoken and signed. Comparative research into the emergence of human language aims at identifying precursors to such key features of human language in the communication of other primates. While it is known that chimpanzees, our closest relatives, produce a variety of gestures, facial expressions and vocalizations in interactions with their group members, little is known about how these signals combine simultaneously. Therefore, the aim of the current study is to investigate whether there is evidence for compositional structures in the communication of chimpanzees. We investigated two semi-wild groups of chimpanzees, with a focus on their manual gestures and their combinations with facial expressions across different social contexts. If there are compositional structures in chimpanzee communication, adding a facial expression to a gesture should convey a different message than the gesture alone, a difference that we expect to be measurable by the recipient's response. Furthermore, we expect context-dependent usage of these combinations. Based on a form-based coding procedure of the collected video footage, we identified two frequently used manual gestures (stretched arm gesture and bent arm gesture) and two facial expressions (bared teeth face and funneled lip face). We analyzed whether the recipients' response varied depending on the signaler's usage of a given gesture + face combination and the context in which these were used. Overall, our results suggest that, in positive contexts, such as play or grooming, specific combinations had an impact on the likelihood of the occurrence of particular responses. Specifically, adding a bared teeth face to a gesture either increased the likelihood of affiliative behavior (for the stretched arm gesture) or eliminated the bias toward an affiliative response (for the bent arm gesture). We show for the first time that the components under study are recombinable, and that different combinations elicit different responses, a property that we refer to as componentiality. Yet our data do not suggest that the components have consistent meanings in each combination, a defining property of compositionality. We propose that the componentiality exhibited in this study represents a necessary stepping stone toward a fully evolved compositional system.
Affiliation(s)
- Linda S. Oña
- Max Planck Research Group ‘Naturalistic Social Cognition’, Max Planck Institute for Human Development, Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Wendy Sandler
- Sign Language Research Lab, University of Haifa, Haifa, Israel
- Katja Liebal
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
15
Dachkovsky S, Stamp R, Sandler W. Constructing Complexity in a Young Sign Language. Front Psychol 2018; 9:2202. [PMID: 30618892] [PMCID: PMC6306080] [DOI: 10.3389/fpsyg.2018.02202]
Abstract
A universally acknowledged, core property of language is its complexity, at each level of structure: sounds, words, phrases, clauses, utterances, and higher levels of discourse. How does this complexity originate and develop in a language? We cannot fully answer this question from spoken languages, since they are all thousands of years old or descended from old languages. However, sign languages of deaf communities can arise at any time and provide empirical data for testing hypotheses related to the emergence of language complexity. An added advantage of the signed modality is a correspondence between visible physical articulations and linguistic structures, providing a more transparent view of linguistic complexity and its emergence (Sandler, 2012). These essential characteristics of sign languages allow us to address the issue of emerging complexity by documenting the use of the body for linguistic purposes. We look at three types of discourse relations of increasing complexity motivated by research on spoken languages: additive, symmetric, and asymmetric (Mann and Thompson, 1988; Sanders et al., 1992). Each relation type can connect units at two different levels: within propositions (simpler) and across propositions (more complex). We hypothesized that these relations provide a measure for charting the time course of emergence of complexity, from simplest to most complex, in a new sign language. We test this hypothesis on Israeli Sign Language (ISL), a young language, some of whose earliest users are still available for recording. Taking advantage of the unique relation in sign languages between bodily articulations and linguistic form, we study fifteen ISL signers from three generations, and demonstrate that the predictions indeed hold. We also find that younger signers tend to converge on more systematic marking of relations, that they use fewer articulators for a given linguistic function than older signers, and that the form of articulations becomes reduced as the language matures. Mapping discourse relations to the bodily expression of linguistic components across age groups reveals how simpler, less constrained, and more gesture-like expressions become language.
Affiliation(s)
- Rose Stamp
- Sign Language Research Laboratory, University of Haifa, Haifa, Israel
- Wendy Sandler
- Sign Language Research Laboratory, University of Haifa, Haifa, Israel