1. Ozturk S, Özçalışkan Ş. Gesture's Role in the Communication of Adults With Different Types of Aphasia. American Journal of Speech-Language Pathology 2024:1-20. [PMID: 38625101 DOI: 10.1044/2024_ajslp-23-00046]
Abstract
PURPOSE: Adults with aphasia gesture more than adults without aphasia. However, less is known about the role of gesture in different discourse contexts for individuals with different types of aphasia. In this study, we asked whether patterns of speech and gesture production of individuals with aphasia vary by aphasia and discourse type and also differ from the speech and gestures produced by adults without aphasia.
METHOD: We compared the amount, diversity, and complexity of speech and gesture production in adults with anomic or Broca's aphasia and adults with no aphasia (n = 20/group) in their first- versus third-person narratives.
RESULTS: Adults with Broca's aphasia showed the lowest performance in their amount, diversity, and complexity of speech production, followed by adults with anomic aphasia and adults without aphasia. This pattern was reversed for gesture production. Speech and gesture production also varied by discourse context. Adults with either type of aphasia used a lower amount of and less diverse speech in third-person than in first-person narratives; this pattern was also reversed for gesture production.
CONCLUSIONS: Overall, our results provide evidence for a compensatory role of gesture in aphasia communication. Adults with Broca's aphasia, who showed the greatest speech production difficulties, also relied most on gesture, and this pattern was particularly pronounced in the third-person narrative context.
2. Bradley C, Wilbur R. Visual Form and Event Semantics Predict Transitivity in Silent Gestures: Evidence for Compositionality. Cogn Sci 2023; 47:e13331. [PMID: 37635624 DOI: 10.1111/cogs.13331]
Abstract
Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy's cognitive foundation.
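The classification step described above (training linear support vector machines on annotated manual features to make item-by-item transitivity predictions) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature names and toy data are hypothetical, and a simple sub-gradient (Pegasos-style) trainer stands in for whatever SVM implementation the study used.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300):
    """Pegasos-style sub-gradient training of a linear SVM.

    A constant 1.0 is appended to each feature vector so the bias
    is learned as an ordinary weight. Labels: +1 = transitive,
    -1 = intransitive.
    """
    random.seed(0)
    X = [x + [1.0] for x in X]              # fold bias into the weights
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                  # hinge-loss violation: step toward example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    """Classify one gesture item from its annotated manual features."""
    score = sum(wj * xj for wj, xj in zip(w, x + [1.0]))
    return 1 if score >= 0 else -1

# Hypothetical binary annotations per gesture item:
# [handling_handshape, two_hands_asymmetric, path_movement]
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0],   # transitive items
     [0, 0, 1], [0, 1, 0], [0, 0, 0], [0, 1, 1]]   # intransitive items
y = [1, 1, 1, 1, -1, -1, -1, -1]

w = train_linear_svm(X, y)
accuracy = sum(predict(w, x) == yi for x, yi in zip(X, y)) / len(X)
```

Because the prediction is a weighted sum of interpretable feature units, good accuracy from such a model is evidence that transitivity marking in silent gesture decomposes into subunits rather than being holistic, which is the paper's point.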
Affiliation(s)
- Ronnie Wilbur
- Department of Linguistics, Purdue University
- Department of Speech, Language, and Hearing Sciences, Purdue University
3. Slonimska A, Özyürek A, Capirci O. Simultaneity as an Emergent Property of Efficient Communication in Language: A Comparison of Silent Gesture and Sign Language. Cogn Sci 2022; 46:e13133. [PMID: 35613353 PMCID: PMC9287048 DOI: 10.1111/cogs.13133]
Abstract
Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality‐specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. (2020) with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality‐specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality‐specific adaptive capabilities.
Affiliation(s)
- Anita Slonimska
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
- Asli Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics; Donders Centre for Cognition, Radboud University
- Olga Capirci
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy
4. Ünal E, Manhardt F, Özyürek A. Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition 2022; 225:105127. [PMID: 35617850 DOI: 10.1016/j.cognition.2022.105127]
Abstract
During spoken language production, speakers' visual attention to events is guided by linguistic conceptualization of information, in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers' speech and gesture show language specificity, with path of motion mostly expressed within the main verb, accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
Affiliation(s)
- Ercenur Ünal
- Department of Psychology, Ozyegin University, Nişantepe Mahallesi Orman Sokak, 34794, Çekmeköy, Istanbul, Turkey; Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, the Netherlands.
- Francie Manhardt
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, the Netherlands.
- Aslı Özyürek
- Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Heyendaalseweg 135, 6525 AJ, Nijmegen, the Netherlands.
5. The Seeds of the Noun–Verb Distinction in the Manual Modality: Improvisation and Interaction in the Emergence of Grammatical Categories. Languages 2022. [DOI: 10.3390/languages7020095]
Abstract
The noun–verb distinction has long been considered a fundamental property of human language, and has been found in some form even in the earliest stages of language emergence, including homesign and the early generations of emerging sign languages. We present two experimental studies that use silent gesture to investigate how noun–verb distinctions develop in the manual modality through two key processes: (i) improvising using novel signals by individuals, and (ii) using those signals in the interaction between communicators. We operationalise communicative interaction in two ways: a setting in which members of the dyad were in separate booths and were given a comprehension test after each stimulus vs. a more naturalistic face-to-face conversation without comprehension checks. There were few differences between the two conditions, highlighting the robustness of the paradigm. Our findings from both experiments reflect patterns found in naturally emerging sign languages. Some formal distinctions arise in the earliest stages of improvisation and do not require interaction to develop. However, the full range of formal distinctions between nouns and verbs found in naturally emerging language did not appear with either improvisation or interaction, suggesting that transmitting the language to a new generation of learners might be necessary for these properties to emerge.
6. Emerson SN, Limia VD, Özçalışkan Ş. Cross-linguistic transfer in Turkish-English bilinguals' descriptions of motion events. Lingua 2021; 264:103153. [PMID: 35001974 PMCID: PMC8740906 DOI: 10.1016/j.lingua.2021.103153]
Abstract
Languages differ in how they express motion: Languages like English prefer to conflate manner and path into the same clause and express both elements frequently, while languages like Turkish prefer to express these elements separately, with a greater preference for the expression of path of motion. While typological patterns are well-established for monolingual speakers of a variety of languages, relatively less is known about motion expression in bilingual speakers. The current study examined the packaging (expressing each element in separate clauses or within the same clause) and lexical choices (amount and diversity of manner and path verbs) for motion expression in monolingual speakers of Turkish or English and advanced Turkish (L1)-English (L2) bilinguals in a narrative elicitation task. Bilinguals were successful in attaining many English-like patterns of expression in their L2 English but also showed some packaging and lexical choices that were intermediate between English and Turkish monolinguals, thus providing evidence of an L1-to-L2 cross-linguistic effect. Subtle effects of L2 on L1 were also found in bilinguals' lexical choices for the expression of motion in their L1 Turkish. Altogether, our results demonstrate bi-directional transfer effects of learning a typologically distinct language in advanced Turkish-English bilinguals.
Affiliation(s)
- Samantha N. Emerson
- Boys Town National Research Hospital, Center for Childhood Deafness, Learning, & Language, 555 North 30th St., Omaha, NE 68131, USA
- Valery D. Limia
- Florida Institute of Technology, Institutional Research & Effectiveness, 150 West University Blvd., Melbourne, FL 32901, USA
- Şeyda Özçalışkan
- Georgia State University, Department of Psychology, 140 Decatur St., Atlanta, GA 30303, USA
7. Stites L, Özçalışkan Ş. The Time is at Hand: Literacy Predicts Changes in Children's Gestures About Time. Journal of Psycholinguistic Research 2021; 50:967-983. [PMID: 33963464 DOI: 10.1007/s10936-021-09782-3]
Abstract
The metaphorical motion of time can be expressed in gesture along either a sagittal axis (with the future ahead of and the past behind the speaker) or a lateral axis (with the past to the left and the future to the right of the speaker) (Casasanto & Jasmin in CL 23(4): 643-674, 2012). Adult English speakers, when gesturing about time, show a preference for lateral gestures with left-to-right directionality, consistent with the directionality of the reading-writing system in English (Casasanto & Jasmin in CL 23(4): 643-674, 2012). In this study, we asked how early children would show a preference for left-to-right lateral gestures and whether literacy skills would predict the production of such gestures. Our findings showed developmental changes in both the orientation and directionality of children's gestures about time. Children increased their production of left-to-right lateral gestures over time, with a shift around ages 7-8. More importantly, literacy predicted children's production of such lateral gestures. Overall, these results suggest that the orientation and directionality of children's metaphorical gestures about time follow a developmental pattern that is largely influenced by changes in literacy.
Affiliation(s)
- Lauren Stites
- Georgia State University, 140 Decatur St., Atlanta, GA, 30303, United States.
- 3166 Lindmoor Dr., Decatur, GA, 30033, USA.
- Şeyda Özçalışkan
- Georgia State University, 140 Decatur St., Atlanta, GA, 30303, United States
8. Marentette P, Furman R, Suvanto ME, Nicoladis E. Pantomime (Not Silent Gesture) in Multimodal Communication: Evidence From Children's Narratives. Front Psychol 2020; 11:575952. [PMID: 33329222 PMCID: PMC7734346 DOI: 10.3389/fpsyg.2020.575952]
Abstract
Pantomime has long been considered distinct from co-speech gesture. It has therefore been argued that pantomime cannot be part of gesture-speech integration. We examine pantomime as distinct from silent gesture, focusing on non-co-speech gestures that occur in the midst of children's spoken narratives. We propose that gestures with features of pantomime are an infrequent but meaningful component of a multimodal communicative strategy. We examined spontaneous non-co-speech representational gesture production in the narratives of 30 monolingual English-speaking children between the ages of 8 and 11 years. We compared the use of co-speech and non-co-speech gestures in both autobiographical and fictional narratives and examined viewpoint and the use of non-manual articulators, as well as the length of responses and narrative quality. The use of non-co-speech gestures was associated with longer narratives of equal or higher quality than those using only co-speech gestures. Non-co-speech gestures were most likely to adopt character-viewpoint and use non-manual articulators. The present study supports a deeper understanding of the term pantomime and its multimodal use by children in the integration of speech and gesture.
Affiliation(s)
- Reyhan Furman
- School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Marcus E Suvanto
- Center for Studies in Behavioral Neuroscience, Concordia University, Montréal, QC, Canada
- Elena Nicoladis
- Department of Psychology, University of Alberta, Edmonton, AB, Canada
9. Goldin-Meadow S. Discovering the Biases Children Bring to Language Learning. Child Development Perspectives 2020. [DOI: 10.1111/cdep.12379]
10. Murteira A, Nickels L. Can gesture observation help people with aphasia name actions? Cortex 2019; 123:86-112. [PMID: 31760340 DOI: 10.1016/j.cortex.2019.10.005]
Abstract
It has been suggested that gesture can play a role in the treatment of naming impairments in aphasia; however, investigation is still sparse, especially when compared to research on verbal treatments. Critically, previous studies have included either verbal or gesture production in the training. However, while action naming is facilitated by gesture observation in speakers without language impairment, no study has yet systematically determined whether gesture observation alone influences word retrieval in people with aphasia. This is the aim of the research presented here. In a gesture priming experiment, participants with aphasia named actions that were preceded by the observation of videos of congruent or unrelated gestures or a non-gesture control condition. At the group level, action naming was facilitated by observation of congruent gestures. However, single-case analyses revealed variability in the extent to which the participants benefited from gesture cueing. The potential mechanisms underlying the effects of gesture observation on action picture naming in people with aphasia were examined by exploring participant-related and item-related predictors of improvement. It is concluded that gesture observation may facilitate verb retrieval at either semantic or lexical levels. In addition, and despite variability across individuals, gesture observation seems more likely to facilitate action naming in people with spared gesture semantics and mild-moderate deficits in lexical-semantic or post-semantic processing.
Affiliation(s)
- Ana Murteira
- Department of Cognitive Science, Macquarie University, Sydney, Australia; International Doctorate of Experimental Approaches to Language and Brain - IDEALAB, Universities of Trento, Groningen, Potsdam, Newcastle and Macquarie University, Australia.
- Lyndsey Nickels
- Department of Cognitive Science, Macquarie University, Sydney, Australia
11. Wortman-Jutt S, Edwards D. Poststroke Aphasia Rehabilitation: Why All Talk and No Action? Neurorehabil Neural Repair 2019; 33:235-244. [PMID: 30900528 DOI: 10.1177/1545968319834901]
Abstract
There is ample agreement across diverse areas of the scientific literature that language and movement are interrelated. In particular, it is widely held that the upper limb and hand play a key role in language use. Aphasia, a common, disabling language disorder frequently associated with stroke, requires new restorative methods. A combinatorial hand-arm-language paradigm that capitalizes on shared neural networks may therefore prove beneficial for aphasia recovery in stroke patients and requires further exploration.
Affiliation(s)
- Susan Wortman-Jutt
- Burke Rehabilitation Hospital, White Plains, NY, USA
- Burke Neurological Institute, White Plains, NY, USA
- Dylan Edwards
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Edith Cowan University, Joondalup, Western Australia, Australia
13. Glasser ML, Williamson RA, Özçalışkan Ş. Do Children Understand Iconic Gestures About Events as Early as Iconic Gestures About Entities? Journal of Psycholinguistic Research 2018; 47:741-754. [PMID: 29305747 DOI: 10.1007/s10936-017-9550-7]
Abstract
Children can understand iconic co-speech gestures that characterize entities by age 3 (Stanfield et al. in J Child Lang 40(2):1-10, 2014; e.g., "I'm drinking" accompanied by a hand tilted in a C-shape to the mouth as if holding a glass). In this study, we ask whether children understand co-speech gestures that characterize events as early as they do so for entities, and if so, whether their understanding is influenced by the patterns of gesture production in their native language. We examined this question by studying native English-speaking 3- to 4-year-old children and adults as they completed an iconic co-speech gesture comprehension task involving motion events across two studies. Our results showed that children understood iconic co-speech gestures about events at age 4, marking comprehension of gestures about events one year later than gestures about entities. Our findings also showed that native gesture production patterns influenced children's comprehension of gestures characterizing such events, with better comprehension for gestures that follow language-specific patterns compared to those that do not, particularly for manner of motion. Overall, these results highlight early emerging abilities in gesture comprehension about motion events.
Affiliation(s)
- Melissa L Glasser
- Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, 30302, USA.
- Rebecca A Williamson
- Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, 30302, USA
- Şeyda Özçalışkan
- Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, 30302, USA
14. Özçalışkan Ş, Lucero C, Goldin-Meadow S. Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture. Cogn Sci 2018; 42:1001-1014. [PMID: 28481418 DOI: 10.1111/cogs.12502]
Abstract
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture but not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language, an organization that relies on neither visuospatial cues nor language structure.
Affiliation(s)
- Ché Lucero
- Department of Psychology, University of Chicago
15. van Nispen K, van de Sandt-Koenderman WME, Krahmer E. The comprehensibility of pantomimes produced by people with aphasia. International Journal of Language & Communication Disorders 2018; 53:85-100. [PMID: 28691196 DOI: 10.1111/1460-6984.12328]
Abstract
BACKGROUND: People with aphasia (PWA) use pantomime, gesture in the absence of speech, differently from non-brain-damaged people (NBDP).
AIMS: To evaluate, through an exploratory study, the comprehensibility of PWA's pantomimes and to find out whether they can compensate for information PWA are unable to convey in speech.
METHODS & PROCEDURES: A total of 273 naïve observers participated in one of two judgement tasks: forced-choice and open-ended questions. These were used to determine the comprehensibility of pantomimes produced to depict objects by PWA as compared with NBDP. Furthermore, we compared the information conveyed in pantomime with the information in speech. We looked into factors influencing pantomime's comprehensibility: individual factors, manner of depiction and the information that needed to be depicted.
OUTCOME & RESULTS: Although comprehensibility scores for PWA's pantomimes were lower than for those produced by NBDP, all PWA were able to convey information in pantomime that they could not convey in speech. Comprehensibility of pantomimes was predicted by apraxia. The inability to use the right hand was related to slightly lower comprehensibility scores. Objects for which individuals depicted their use were best understood.
CONCLUSION & IMPLICATIONS: Our findings highlight the potential benefit of pantomime for clinical practice. Pantomimes, even though sometimes impaired, can convey information that PWA cannot convey in speech. Clinical implications are discussed.
Affiliation(s)
- Karin van Nispen
- Tilburg Center for Cognition and Communication (TiCC), Tilburg University, the Netherlands
- W. Mieke E. van de Sandt-Koenderman
- Tilburg Center for Cognition and Communication (TiCC), Tilburg University, the Netherlands
- Emiel Krahmer
- Tilburg Center for Cognition and Communication (TiCC), Tilburg University, the Netherlands
16. Janke V, Marshall CR. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition. Front Psychol 2017; 8:2007. [PMID: 29250001 PMCID: PMC5715371 DOI: 10.3389/fpsyg.2017.02007]
Abstract
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. Thirty sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners.
Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the grammar of the language being learned.
Affiliation(s)
- Vikki Janke
- English Language and Linguistics, University of Kent, Canterbury, United Kingdom
- Chloë R. Marshall
- Department of Psychology and Human Development, UCL Institute of Education, London, United Kingdom
17. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures. Behav Res Methods 2017; 50:1270-1284. [PMID: 28916988 PMCID: PMC5990570 DOI: 10.3758/s13428-017-0942-2]
Abstract
Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures performed the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.
18
Abstract
In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when they attended to an image, and they demonstrated the function of an object more often when they attended to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task’s goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.
19
Abstract
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal--that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008)--has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause--the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.
Affiliation(s)
- Miriam A Novack
- Department of Psychology, University of Chicago, Chicago, IL, 60637, USA.
20
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, Illinois 60637