1. Ozturk S, Özçalışkan Ş. Gesture's Role in the Communication of Adults With Different Types of Aphasia. Am J Speech Lang Pathol. 2024:1-20. PMID: 38625101. DOI: 10.1044/2024_ajslp-23-00046.
Abstract
PURPOSE Adults with aphasia gesture more than adults without aphasia. However, less is known about the role of gesture in different discourse contexts for individuals with different types of aphasia. In this study, we asked whether patterns of speech and gesture production of individuals with aphasia vary by aphasia and discourse type and also differ from the speech and gestures produced by adults without aphasia. METHOD We compared the amount, diversity, and complexity of speech and gesture production in adults with anomic or Broca's aphasia and adults with no aphasia (n = 20/group) in their first- versus third-person narratives. RESULTS Adults with Broca's aphasia showed the lowest performance in their amount, diversity, and complexity of speech production, followed by adults with anomic aphasia and adults without aphasia. This pattern was reversed for gesture production. Speech and gesture production also varied by discourse context. Adults with either type of aphasia used a lower amount of and less diverse speech in third-person than in first-person narratives; this pattern was also reversed for gesture production. CONCLUSIONS Overall, our results provide evidence for a compensatory role of gesture in aphasia communication. Adults with Broca's aphasia, who showed the greatest speech production difficulties, also relied most on gesture, and this pattern was particularly pronounced in the third-person narrative context.
2. Luo H, Du J, Yang P, Shi Y, Liu Z, Yang D, Zheng L, Chen X, Wang ZL. Human-Machine Interaction via Dual Modes of Voice and Gesture Enabled by Triboelectric Nanogenerator and Machine Learning. ACS Appl Mater Interfaces. 2023;15:17009-17018. PMID: 36947663. PMCID: PMC10080540. DOI: 10.1021/acsami.3c00566.
Abstract
With the development of science and technology, human-machine interaction has brought great benefits to society. Here, we design a voice and gesture signal translator (VGST) that translates natural actions into electrical signals and enables efficient communication at the human-machine interface. By spraying silk protein onto the copper surface of the device, the VGST achieves improved output and a wide frequency response of 20-2000 Hz with a high sensitivity of 167 mV/dB; its frequency-detection resolution reaches 0.1 Hz. Its resonant frequency and output voltage can be tuned through the design of its internal structure. The VGST can be used as a high-fidelity platform to effectively recover recorded music and can also be combined with machine learning algorithms to perform speech recognition with a high accuracy rate of 97%. It also has good antinoise performance, recognizing speech correctly even in noisy environments. In gesture recognition, the triboelectric translator is able to recognize simple hand gestures and to judge the distance between the hand and the VGST based on the principle of electrostatic induction. This work demonstrates that triboelectric nanogenerator (TENG) technology has great application prospects and significant advantages in human-machine interaction and high-fidelity platforms.
Affiliation(s)
- Hao Luo: College of Mathematics and Physics, Shanghai Key Laboratory of Materials Protection and Advanced Materials in Electric Power, Shanghai University of Electric Power, Shanghai 200090, China; Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China
- Jingyi Du: College of Mathematics and Physics, Shanghai Key Laboratory of Materials Protection and Advanced Materials in Electric Power, Shanghai University of Electric Power, Shanghai 200090, China; Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China
- Peng Yang: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
- Yuxiang Shi: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
- Zhaoqi Liu: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
- Dehong Yang: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
- Li Zheng: College of Mathematics and Physics, Shanghai Key Laboratory of Materials Protection and Advanced Materials in Electric Power, Shanghai University of Electric Power, Shanghai 200090, China
- Xiangyu Chen: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
- Zhong Lin Wang: Beijing Key Laboratory of Micro-nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing 100083, PR China; School of Nanoscience and Technology, University of Chinese Academy of Sciences, Beijing 100049, PR China
3. Mamus E, Speed LJ, Rissman L, Majid A, Özyürek A. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People. Cogn Sci. 2023;47:e13228. PMID: 36607157. PMCID: PMC10078191. DOI: 10.1111/cogs.13228.
Abstract
The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events, experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparing blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind from the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.
Affiliation(s)
- Ezgi Mamus: Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
- Lilia Rissman: Department of Psychology, University of Wisconsin-Madison
- Asifa Majid: Department of Experimental Psychology, University of Oxford
- Aslı Özyürek: Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics; Donders Center for Cognition, Radboud University
4. Cienki A. The study of gesture in cognitive linguistics: How it could inform and inspire other research in cognitive science. Wiley Interdiscip Rev Cogn Sci. 2022;13:e1623. PMID: 36148788. PMCID: PMC9788131. DOI: 10.1002/wcs.1623.
Abstract
Cognitive linguists are increasingly extending their paradigm to include the study of gestures. The bottom-up, usage-based approach in cognitive linguistics has advanced the methods for identifying gesture functions, starting from a detailed analysis of gesture forms. Theoretical notions from cognitive linguistics also help explain the means by which the forms of gestures can be interpreted as meaningful functions. Principles of conceptual metonymy explain how gestures indicate referents through the partial representation of their features that are relevant in the context of use. Conceptual metaphor theory sheds light on how abstract notions can be represented in gesture via comparison with physical source domains. Furthermore, every gestural representation inherently requires the gesturing speaker to employ a specific viewpoint for their depiction, something that is normally not expressed verbally. These aspects of gesture provide insights into processes of thinking for speaking that can be exploited in various fields of cognitive science research. Referential gestures also normally combine pragmatic and interactive functions (showing stance-taking, for example) with representational or deictic functions. The multiple functions of gesture combined with those of speech raise questions for further research about how viewing-listeners interpret and combine information from the multiple semiotic systems employed by gesturing-speakers. Finally, gesture use has been shown to correlate not only with lexical concepts but also, in some ways, with grammatical constructions. This gives rise to fundamental questions about what constitutes the grammar of a language. Gesture analysis thus raises issues for consideration in any cognitive science research that concerns spoken language. This article is categorized under: Linguistics > Cognitive Linguistics > Linguistic Theory; Psychology > Language.
Affiliation(s)
- Alan Cienki: Language, Literature and Communication, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
5. Stites L, Özçalışkan Ş. The Time is at Hand: Literacy Predicts Changes in Children's Gestures About Time. J Psycholinguist Res. 2021;50:967-983. PMID: 33963464. DOI: 10.1007/s10936-021-09782-3.
Abstract
The metaphorical motion of time can be expressed in gesture along either a sagittal axis, with the future ahead of and the past behind the speaker, or a lateral axis, with the past to the left and the future to the right of the speaker (Casasanto & Jasmin in CL 23(4): 643-674, 2012). Adult English speakers, when gesturing about time, show a preference for lateral gestures with left-to-right directionality, consistent with the directionality of the reading-writing system in English (Casasanto & Jasmin in CL 23(4): 643-674, 2012). In this study, we asked how early children would show a preference for left-to-right lateral gestures and whether literacy skills would predict the production of such gestures. Our findings showed developmental changes in both the orientation and directionality of children's gestures about time. Children increased their production of left-to-right lateral gestures over time, with a shift around ages 7-8. More importantly, literacy predicted children's production of such lateral gestures. Overall, these results suggest that the orientation and directionality of children's metaphorical gestures about time follow a developmental pattern that is largely influenced by changes in literacy.
Affiliation(s)
- Lauren Stites: Georgia State University, 140 Decatur St., Atlanta, GA 30303, USA; 3166 Lindmoor Dr., Decatur, GA 30033, USA
- Şeyda Özçalışkan: Georgia State University, 140 Decatur St., Atlanta, GA 30303, USA
6. Marentette P, Furman R, Suvanto ME, Nicoladis E. Pantomime (Not Silent Gesture) in Multimodal Communication: Evidence From Children's Narratives. Front Psychol. 2020;11:575952. PMID: 33329222. PMCID: PMC7734346. DOI: 10.3389/fpsyg.2020.575952.
Abstract
Pantomime has long been considered distinct from co-speech gesture. It has therefore been argued that pantomime cannot be part of gesture-speech integration. We examine pantomime as distinct from silent gesture, focusing on non-co-speech gestures that occur in the midst of children's spoken narratives. We propose that gestures with features of pantomime are an infrequent but meaningful component of a multimodal communicative strategy. We examined spontaneous non-co-speech representational gesture production in the narratives of 30 monolingual English-speaking children between the ages of 8 and 11 years. We compared the use of co-speech and non-co-speech gestures in both autobiographical and fictional narratives, and examined viewpoint and the use of non-manual articulators, as well as the length of responses and narrative quality. The use of non-co-speech gestures was associated with longer narratives of equal or higher quality than those using only co-speech gestures. Non-co-speech gestures were most likely to adopt character viewpoint and to use non-manual articulators. The present study supports a deeper understanding of the term pantomime and its multimodal use by children in the integration of speech and gesture.
Affiliation(s)
- Reyhan Furman: School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Marcus E Suvanto: Center for Studies in Behavioral Neuroscience, Concordia University, Montréal, QC, Canada
- Elena Nicoladis: Department of Psychology, University of Alberta, Edmonton, AB, Canada
7. Emerson SN, Conway CM, Özçalışkan Ş. Semantic P600-but not N400-effects index crosslinguistic variability in speakers' expectancies for expression of motion. Neuropsychologia. 2020;149:107638. PMID: 33007360. DOI: 10.1016/j.neuropsychologia.2020.107638.
Abstract
The expression of motion shows strong crosslinguistic variability; however, less is known about speakers' expectancies for lexicalizations of motion at the neural level. We examined event-related brain potentials (ERPs) in native English or Spanish speakers while they read grammatical sentences describing animations involving manner and path components of motion that did or did not violate language-specific patterns of expression. ERPs demonstrated different expectancies between speakers: Spanish speakers showed higher expectancies for motion verbs to encode path, and English speakers showed higher expectancies for motion verbs to encode manner followed by a secondary path expression. Interestingly, grammatical but infrequent motion expressions (manner verbs in Spanish; path verbs and secondary manner expressions in English) elicited semantic P600 effects, with or without post-N400 positivities, rather than the N400 effects typically associated with semantic processing. Overall, our findings provide the first empirical evidence for the effect of crosslinguistic variation on the processing of motion event descriptions at the neural level.
Affiliation(s)
- Samantha N Emerson: Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, USA
- Christopher M Conway: Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, USA
- Şeyda Özçalışkan: Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, USA