1
Language and gesture neural correlates: A meta-analysis of functional magnetic resonance imaging studies. International Journal of Language & Communication Disorders 2024; 59:902-912. PMID: 37971416. DOI: 10.1111/1460-6984.12987.
Abstract
BACKGROUND Humans often use co-speech gestures to promote effective communication. Attention has been paid to the cortical areas engaged in the processing of co-speech gestures. AIMS To investigate the neural network underpinning the processing of co-speech gestures and to observe whether there is a relationship between areas involved in language and gesture processing. METHODS & PROCEDURES We planned to include studies with neurotypical and/or stroke participants who underwent a bimodal task (i.e., processing of co-speech gestures with related speech) and a unimodal task (i.e., speech or gesture alone) during a functional magnetic resonance imaging (fMRI) session. After a database search, abstract and full-text screening were conducted. Qualitative and quantitative data were extracted, and a meta-analysis was performed with the software GingerALE 3.0.2, using contrast analyses of uni- and bimodal tasks. MAIN CONTRIBUTION The database search produced 1024 records. After the screening process, 27 studies were included in the review. Data from 15 studies were quantitatively analysed through meta-analysis. The meta-analysis found three clusters with significant activation: the left middle frontal gyrus and inferior frontal gyrus, and the bilateral middle occipital gyrus and inferior temporal gyrus. CONCLUSIONS There is a close link at the neural level for the semantic processing of auditory and visual information during communication. These findings encourage integrating co-speech gestures into aphasia treatment as a strategy to help people with aphasia communicate effectively. WHAT THIS PAPER ADDS What is already known on this subject Gestures are an integral part of human communication, and they may have a relationship with speech processing at the neural level.
What this paper adds to the existing knowledge During processing of bi- and unimodal communication, areas related to semantic processing and multimodal processing are activated, suggesting that there is a close link between co-speech gestures and spoken language at a neural level. What are the potential or actual clinical implications of this work? Knowledge of the functions related to gesture and speech processing neural networks will allow for the adoption of model-based neurorehabilitation programs to foster recovery from aphasia by strengthening the specific functions of these brain networks.
2
Individual differences in representational gesture production are associated with cognitive and empathy skills. Q J Exp Psychol (Hove) 2024:17470218241245831. PMID: 38531690. DOI: 10.1177/17470218241245831.
Abstract
Substantial individual variation exists in the frequency of gestures produced while speaking. This study investigated the associations of cognitive abilities, empathy levels, and personality traits with the frequency of representational gestures. A cartoon narration task and a social dilemma solving task were used to elicit gestures. Predictor variables were selected based on prior research on individual differences in gesture production and on the cognitive and communicative functions of gestures in speech. Our findings revealed that an increased frequency of representational gestures was associated with higher empathy levels in the cartoon narration task. However, in the social dilemma solving task, a higher frequency of representational gestures was associated with lower visuospatial working memory, spatial transformation, and inhibition control abilities. Moreover, no significant relationships were found between verbal working memory, personality traits, and the frequency of representational gestures in either task. These findings suggest that predictor variables for representational gesture production vary depending on the nature of the gesture elicitation task (e.g., spatiomotoric vs. abstract topics). Future research should examine the relationship between individuals' cognitive abilities, empathy, and gesture production across a broader range of topics and in more ecologically valid contexts.
3
Item-Based Analysis of Some ADOS-2 Items with Typically Developing Participants Might Help Improve Cross-Cultural Validity of ADOS-2. J Autism Dev Disord 2024; 54:109-120. PMID: 36323993. DOI: 10.1007/s10803-022-05791-w.
Abstract
Most internationally recognized instruments for the screening and diagnosis of autism spectrum disorder have been developed in the USA, which calls into question the degree of their cultural adaptation to diverse populations. The aim of this study is to examine the characteristics of social communication in typically developing Croatian-speaking participants (N = 220) using ADOS-2-defined item-level normative values. Croatian subjects showed the expected ("typical") results in the domain of verbal communication, slightly atypical results in nonverbal communication (primarily gesture use), and more significant deviations in pragmatics (offering and asking for information), relative to the expectations of the ADOS-2. As the ADOS-2 has become an important component of thorough ASD diagnostic evaluations worldwide, identifying methods for increasing its cross-cultural validity is essential.
4
Learning to express causal events in Mandarin Chinese: A multimodal perspective. Journal of Child Language 2024; 51:191-216. PMID: 36420637. DOI: 10.1017/s0305000922000447.
Abstract
Previous research has shown that language-specific features play a guiding role in how children develop expression of events with speech and gestures. This study adopts a multimodal approach and examines Mandarin Chinese, a language that features context use and verb serialization. Forty children (four to seven years old) and ten adults were asked to describe fourteen video stimuli depicting different types of causal events involving location/state changes. Participants' speech was segmented into clauses, and co-occurring gestures were analyzed in relation to causation. The results show that the older the children, the greater the use of contextual clauses that contribute meaning to event descriptions. It is not until the age of six that children used adult-like structures, namely single gestures representing causing actions aligned with verb serializations in single clauses. We discuss the implications of these findings for the guiding role of language specificity in multimodal language development.
5
Interpretations of meaningful and ambiguous hand gestures in autistic and non-autistic adults: A norming study. Behav Res Methods 2023. PMID: 38012511. DOI: 10.3758/s13428-023-02268-1.
Abstract
Gestures are ubiquitous in human communication, and a growing but inconsistent body of research suggests that people with autism spectrum disorder (ASD) may process co-speech gestures differently from neurotypical individuals. To facilitate research on this topic, we created a database of 162 gesture videos that have been normed for comprehensibility by both autistic and non-autistic raters. These videos portray an actor performing silent gestures that range from highly meaningful (e.g., iconic gestures) to ambiguous or meaningless. Each video was rated for meaningfulness and given a one-word descriptor by 40 autistic and 40 non-autistic adults, and analyses were conducted to assess the level of within- and across-group agreement. Across gestures, the meaningfulness ratings provided by raters with and without ASD correlated at r > 0.90, indicating a very high level of agreement. Overall, autistic raters produced a more diverse set of verbal labels for each gesture than did non-autistic raters. However, measures of within-gesture semantic similarity among the responses provided by each group did not differ, suggesting that increased variability within the ASD group may have occurred at the lexical rather than semantic level. This study is the first to compare gesture naming between autistic and non-autistic individuals, and the resulting dataset is the first gesture stimulus set for which both groups were equally represented in the norming process. This database also has broad applicability to other areas of research related to gesture processing and comprehension. The video database and accompanying norming data are available on the Open Science Framework.
6
Looking at gesture: The reciprocal influence between gesture and conversation. Journal of Communication Disorders 2023; 106:106379. PMID: 37769381. DOI: 10.1016/j.jcomdis.2023.106379.
Abstract
INTRODUCTION There is limited research on group communication treatment for people with aphasia, but existing studies report benefits of gesture in supporting conversation. Gesture supports conversation through recipient design features and by reducing the linguistic demands of lexical retrieval and formulation. Additionally, gesture serves an affiliative function. However, the relationship between gesture use and gestural capacity has not been widely examined. As part of a larger study on group cohesiveness and conversation, this investigation examined the patterns of co-speech gesture within authentic conversations among persons with aphasia to discern the functions of gesture use for the participants, changes in the use of gesture over time, and the relationship between gesture use and gesture ability. METHODS Conversation Analysis (CA) was applied in an embedded case-study design. Three participants received an academic semester of group and individual conversation-based treatment according to Facilitating Authentic Conversation (Damico et al., 2015). Four conversations from the treatment were selected and transcribed for multi-modality communication with CA conventions applied, and then cyclically analysed for patterns of gesture. RESULTS Participants demonstrated gesture that served social and linguistic functions: ratifying clinicians' proxy turns, turn allocation, turn repair, relaying novel visual information, emphasizing content, demonstrating affiliation with the prior speaker, demonstrating their assessment of others' talk, and demonstrating humor. All three participants showed an increased rate of gesture per turn and increasingly used gesture to repair conversation breakdown. Increased gesture use over the course of the semester coincided with increased scores for pantomime on the Porch Index of Communicative Ability (Porch, 1981, PICA).
CONCLUSION Individuals with aphasia demonstrated increased use of gesture for varied purposes and improved gestural processing following a semester of conversation-based treatment. This is significant because gesture is an effective support for the repair of conversation breakdown typical of persons with aphasia.
7
Abstract
Research and theory in nonverbal communication have made great advances toward understanding the patterns and functions of nonverbal behavior in social settings. Progress has been hindered, we argue, by presumptions about nonverbal behavior that follow from both received wisdom and faulty evidence. In this article, we document four persistent misconceptions about nonverbal communication, namely: that people communicate using decodable body language; that they have a stable personal space by which they regulate contact with others; that they express emotion using universal, evolved, iconic, categorical facial expressions; and that they can deceive and detect deception using dependable telltale clues. We show how these misconceptions permeate research as well as the practices of popular behavior experts, with consequences that extend from intimate relationships to the boardroom and courtroom and even to the arena of international security. Notwithstanding these misconceptions, existing frameworks of nonverbal communication are being challenged by more comprehensive systems approaches and by virtual technologies that ambiguate the roles and identities of interactants and the contexts of interaction.
8
The Role of Representational Gestures and Speech Synchronicity in Auditory Input by L2 and L1 Speakers. Journal of Psycholinguistic Research 2023; 52:1721-1735. PMID: 37171686. DOI: 10.1007/s10936-023-09947-2.
Abstract
Speech and gesture are two integrated and temporally coordinated systems. Manual gestures can help second language (L2) speakers with vocabulary learning and word retrieval. However, it remains under-investigated whether the synchronisation of speech and gesture helps listeners compensate for the difficulties of processing L2 aural information. In this paper, we tested, in two behavioural experiments, how L2 speakers process speech and gesture asynchronies in comparison to native (L1) speakers. L2 speakers responded significantly faster when gestures and the semantically relevant speech were synchronous than when they were asynchronous, and they responded significantly more slowly than L1 speakers regardless of speech/gesture synchronisation. L1 speakers, on the other hand, showed no significant difference between asynchronous and synchronous integration of gestures and speech. We conclude that gesture-speech asynchrony affects L2 speakers more than L1 speakers.
9
Abstract
Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and Prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
10
The role of manual gestures in second language comprehension: a simultaneous interpreting experiment. Front Psychol 2023; 14:1188628. PMID: 37441333. PMCID: PMC10333536. DOI: 10.3389/fpsyg.2023.1188628.
Abstract
Manual gestures and speech form a single integrated system during native language comprehension. However, it remains unclear whether this holds for second language (L2) comprehension, more specifically for simultaneous interpreting (SI), which involves comprehension in one language and simultaneous production in another. In a combined mismatch and priming paradigm, we presented Swedish speakers fluent in L2 English with multimodal stimuli in which speech was congruent or incongruent with a gesture. A picture prime was displayed before the stimuli. Participants had to decide whether the video was related to the prime, focusing either on the auditory or the visual information. Participants performed the task either during passive viewing or during SI into their L1 Swedish (order counterbalanced). Incongruent stimuli yielded longer reaction times than congruent stimuli, during both viewing and interpreting. Visual and audio targets were processed equally easily in both activities. However, in both activities incongruent speech was more disruptive for gesture processing than incongruent gesture was for speech processing. Thus, the data only partly support the expected mutual and obligatory interaction of gesture and speech in L2 comprehension. Interestingly, there were no differences between activities, suggesting that the language comprehension component in SI shares features with other (L2) comprehension tasks.
11
Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. PMID: 37115518. DOI: 10.1111/tops.12657.
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role, a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research with varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
12
Gestures and pauses to help thought: hands, voice, and silence in the tourist guide's speech. Cogn Process 2023; 24:25-41. PMID: 36495353. DOI: 10.1007/s10339-022-01116-y.
Abstract
In the body of research on the relationship between gesture and speech, some models propose that they form an integrated system, while others attribute to gestures a compensatory role in communication. This study addresses the gesture-speech relationship by taking disfluency phenomena as a case study. Since it is part of a project aimed at designing virtual agents to be employed in museums, an analysis was performed on the communicative behavior of tourist guides. Results reveal that gesturing is more frequent during speech than during pauses. Moreover, when comparing the types of gestures with the types of pauses they co-occur with, non-communicative gestures (idles and manipulators) turn out to be more frequent than communicatively meaningful gestures, which instead more often co-occur with speech. We discuss these findings as relevant for a theoretical model viewing speech and gesture as an integrated system.
13
The Temporal Alignment of Speech-Accompanying Eyebrow Movement and Voice Pitch: A Study Based on Late Night Show Interviews. Behav Sci (Basel) 2023; 13:52. PMID: 36661624. PMCID: PMC9854528. DOI: 10.3390/bs13010052.
Abstract
Previous research has shown that eyebrow movement during speech exhibits a systematic relationship with intonation: brow raises tend to be aligned with pitch accents, typically preceding them. The present study approaches the question of temporal alignment between brow movement and intonation from a new angle. The study makes use of footage from the Late Night Show with David Letterman, processed with 3D facial landmark detection. Pitch is modeled as a sinusoidal function whose parameters are correlated with the maximum height of the eyebrows in a brow raise. The results confirm some previous findings on audiovisual prosody but lead to new insights as well. First, the shape of the pitch signal in a region of approx. 630 ms before the brow raise is not random and tends to display a specific shape. Second, while being less informative than the post-peak pitch, the pitch signal in the pre-peak region also exhibits correlations with the magnitude of the associated brow raises. Both of these results point to early preparatory action in the speech signal, calling into question the visual-precedes-acoustic assumption. The results are interpreted as supporting a unified view of gesture/speech co-production that regards both signals as manifestations of a single communicative act.
14
Communicative constraints affect oro-facial gestures and acoustics: Whispered vs normal speech. The Journal of the Acoustical Society of America 2023; 153:613. PMID: 36732243. DOI: 10.1121/10.0015251.
Abstract
The present paper investigates a relationship between the acoustic signal and oro-facial expressions (gestures) when speakers (i) speak normally or whisper, (ii) do or do not see each other, and (iii) produce questions as opposed to statements. To this end, we conducted a motion capture experiment with 17 native speakers of German. The results provide partial support to the hypothesis that the most intensified oro-facial expressions occur when speakers whisper, do not see each other, and produce questions. The results are interpreted in terms of two hypotheses, i.e., the "hand-in-hand" and "trade-off" hypotheses. The relationship between acoustic properties and gestures does not provide straightforward support for one or the other hypothesis. Depending on the condition, speakers used more pronounced gestures and longer duration compensating for the lack of the fundamental frequency (supporting the trade-off hypothesis), but since the gestures were also enhanced when the listener was invisible, we conclude that they are not produced solely for the needs of the listener (supporting the hand-in-hand hypothesis), but rather they seem to help the speaker to achieve an overarching communicative goal.
15
Temporal Overlap Between Gestures and Speech in Poststroke Aphasia: Is There a Compensatory Effect? Journal of Speech, Language, and Hearing Research 2022; 65:4797-4811. PMID: 36455133. DOI: 10.1044/2022_jslhr-22-00130.
Abstract
PURPOSE If language production is impaired, will gestures compensate? Evidence in favor of this prediction has often been argued to come from aphasia, but it remains contested. Here, we tested whether thought content not present in speech due to language impairment is manifested in gestures, in 20 people with dysfluent (Broca's) aphasia, 20 people with fluent (Wernicke's) aphasia, and 20 matched neurotypical controls. METHOD A new annotation scheme was created distinguishing types of gestures and whether they co-occurred with fluent or dysfluent/absent speech and were temporally aligned in content with coproduced speech. RESULTS Across both aphasia types, noncontent (beat) gestures, which by their nature cannot compensate for lost speech content, constituted the greatest proportion of all types of gestures produced. Content (i.e., descriptive, referential, and metaphorical) gestures were largely coproduced with fluent rather than dysfluent speech and tended to be aligned with the content conveyed in speech. They also did not differ in quantity depending on whether the dysfluencies were eventually resolved or not. Neither aphasia severity nor comprehension ability had an impact on the total amount of content gesture produced in people with aphasia, which was instead positively correlated with speech fluency. CONCLUSIONS Together, these results suggest that gestures are unlikely to have a role in compensating for linguistic deficits and to serve as a representational system conveying thought content independent of language. Surprisingly, aphasia rather is a model of how gesture and language are inherently integrated and aligned: Even when language is impaired, it remains the essential provider of content.
16
Understanding conversational interaction in multiparty conversations: the EVA Corpus. Lang Resour Eval 2022. DOI: 10.1007/s10579-022-09627-y.
Abstract
This paper focuses on gaining new knowledge through observation, qualitative analytics, and cross-modal fusion of rich multi-layered conversational features expressed during multiparty discourse. The outlined research stems from the theory that speech and co-speech gestures originate from the same representation; however, the representation is not solely limited to the speech production process. Thus, the nature of how information is conveyed by synchronously fusing speech and gestures must be investigated in detail. Therefore, this paper introduces an integrated annotation scheme and methodology that open the opportunity to study verbal (i.e., speech) and non-verbal (i.e., visual cues with a communicative intent) components independently, yet still interconnected over a common timeline. To analyse this interaction between linguistic, paralinguistic, and non-verbal components in multiparty discourse and to help improve natural language generation in embodied conversational agents, a high-quality multimodal corpus, consisting of several annotation layers spanning syntax, POS, dialogue acts, discourse markers, sentiment, emotions, non-verbal behaviour, and gesture units, was built and is presented in detail. It is the first of its kind for the Slovenian language. Moreover, detailed case studies show the tendency of metadiscourse to coincide with non-verbal behaviour of non-propositional origin. The case analysis further highlights how the newly created conversational model and the corresponding information-rich, consistent corpus can be exploited to deepen the understanding of multiparty discourse.
17
Limitations of Variable-Oriented Methodologies: Challenges in Gesture Research and Recommendations for Future Improvements. Integr Psychol Behav Sci 2022; 56:930-953. PMID: 35567748. DOI: 10.1007/s12124-022-09697-1.
Abstract
One critical aspect of modern psychology involves limitations of the currently dominant variable-oriented methodological approach. In this paper, I address those limitations using gesture studies as an example. I first discuss the theoretical and methodological problems of this approach, which prevent a full understanding of the nature of gestures. Specifically, I explain how variable-oriented approaches do not allow researchers to understand initial behavior; how causal relationships do not demonstrate the mechanisms of the relationship between gestures and other psychological processes; and how the analysis of individual differences does not allow researchers to draw conclusions on an individual level. I argue that an alternative approach could benefit researchers' understanding of the nature of gestures, from both a theoretical and a methodological point of view. Based on Vygotskian principles and Luria's framework, I offer an example of how to establish the nature of gestures. Finally, I provide an example of alternative study designs and discuss possible further directions in gesture use studies.
18
Negative Requests Within Hair Salons: Grammar and Embodiment in Action Formation. Front Psychol 2022; 12:689563. PMID: 35979534. PMCID: PMC9377526. DOI: 10.3389/fpsyg.2021.689563.
Abstract
Although requests constitute a type of action that has been widely discussed within conversation analysis-oriented work, they have only recently begun to be explored in relation to the situated and multimodal dimensions in which they occur. The contribution of this paper resides in the integration of bodily-visual conduct (gaze and facial expression, gesture and locomotion, object manipulation) into a more grammatical account of requesting. Drawing on video recordings collected in two different hair salons located in the French-speaking part of Switzerland and in France (23 h in total), this paper analyzes clients' negative requests by exploring how they interface with the participants' embodied conduct. Contrary to what the literature describes for positively formulated requests, with negative requests clients challenge an expectable next action (or ongoing action) by the hairdresser. One linguistic format constitutes the focus of this article, roughly glossable as 'You don't do [action X] too much (huh)'. Our analysis of a consistent collection of such formatted turns shows that clients present them (and hairdressers tend to treat them) in different ways, depending on how they relate to embodied conduct: When these turns are used by the client as instructions, they are accompanied by manipulations of the client's own hair and tend to occur toward the initial phase of the encounter, at a stage when hairdressers and clients collaboratively negotiate the service in prospect. When uttered as directives, these turns are not accompanied by any touching practices from the client and are typically observable in subsequent phases of the encounter, making relevant an immediate linguistic and/or bodily response from the professional, as shown by the client actively pursuing mutual gaze with him/her.
Therefore, an action cannot be distinguished from another on the basis of the turn format alone: Its sequential placement and the participants' co-occurring embodied conduct contribute to its situated and shared understanding. By analyzing the clients' use of a specific linguistic format conjointly with the deployment of specific embodied resources, this study will advance our understanding of how verbal resources and embodiment operate in concert with each other in the formation and understanding of actions, thereby feeding into new areas of research on the grammar-body interface.
|
19
|
Iconicity as Multimodal, Polysemiotic, and Plurifunctional. Front Psychol 2022; 13:808896. [PMID: 35769755 PMCID: PMC9234520 DOI: 10.3389/fpsyg.2022.808896] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 11/04/2021] [Accepted: 03/18/2022] [Indexed: 11/13/2022] Open
Abstract
Investigations of iconicity in language, whereby interactants coordinate meaningful bodily actions to create resemblances, are prevalent across the human communication sciences. However, when it comes to analysing and comparing iconicity across different interactions (e.g., deaf, deafblind, hearing) and modes of communication (e.g., manual signs, speech, writing), it is not always clear we are looking at the same thing. For example, tokens of spoken ideophones and manual depicting actions may both be analysed as iconic forms. Yet spoken ideophones may signal depictive and descriptive qualities via speech, while manual actions may signal depictive, descriptive, and indexical qualities via the shape, movement, and placement of the hands in space. Furthermore, each may co-occur with other semiotics articulated with the face, hands, and body within composite utterances. The paradigm of iconicity as a single property is too broad and coarse for comparative semiotics, as important details necessary for understanding the range of human communicative potentialities may be masked. Here, we draw on semiotic approaches to language and communication, including the model of language as signalled via describing, indicating and/or depicting and the notion of non-referential indexicality, to illustrate the multidimensionality of iconicity in co-present interactions. This builds on our earlier proposal for analysing how different methods of semiotic signalling are combined in multimodal language use. We discuss some implications for the language and communication sciences and explain how this approach may inform a theory of biosemiotics.
|
20
|
Phonological characteristics of novel gesture production in children with developmental language disorder: Longitudinal findings. APPLIED PSYCHOLINGUISTICS 2022; 43:333-362. [PMID: 35342208 PMCID: PMC8955622 DOI: 10.1017/s0142716421000540] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 05/31/2023]
Abstract
Children with developmental language disorder (DLD; aka specific language impairment) are characterized based on deficits in language, especially morphosyntax, in the absence of other explanatory conditions. However, deficits in speech production, as well as fine and gross motor skill, have also been observed, implicating both the linguistic and motor systems. Situated at the intersection of these domains, and providing insight into both, is manual gesture. In the current work, we asked whether children with DLD showed phonological deficits in the production of novel gestures and whether gesture production at 4 years of age is related to language and motor outcomes two years later. Twenty-eight children (14 with DLD) participated in a two-year longitudinal novel gesture production study. At the first and final time points, language and fine motor skills were measured and gestures were analyzed for phonological feature accuracy, including handshape, path, and orientation. Results indicated that, while early deficits in phonological accuracy did not persist for children with DLD, all children struggled with orientation while handshape was the most accurate. Early handshape and orientation accuracy were also predictive of later language skill, but only for the children with DLD. Theoretical and clinical implications of these findings are discussed.
|
21
|
Autonomic system tuning during gesture observation and reproduction. Acta Psychol (Amst) 2022; 222:103477. [PMID: 34971949 DOI: 10.1016/j.actpsy.2021.103477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/07/2020] [Revised: 10/06/2021] [Accepted: 12/15/2021] [Indexed: 11/01/2022] Open
Abstract
Gestural communication conveys information about thoughts and feelings and characterizes face-to-face interactions, including non-verbal exchanges. In the present study, the autonomic responses and peripheral synchronization mechanisms of two individuals (encoder and decoder) were recorded simultaneously, using biofeedback in hyperscanning, during two experimental phases: the observation (watching videos) and the reproduction of positive and negative gestures of different types (affective, social, and informative), each supported by a linguistic context. The main aim of this study was thus to analyse the two individuals' simultaneous peripheral mechanisms during this complex joint action. Single-subject and inter-subject correlation analyses were conducted to observe the individuals' autonomic responses and physiological synchronization. Single-subject results revealed an increase in emotional arousal, indicated by increased electrodermal activity (skin conductance level, SCL, and skin conductance response, SCR), during both the observation and the reproduction of negative social and affective gestures contextualized by a linguistic context. Moreover, an increase in emotional engagement, expressed by increased heart rate (HR) activity, emerged in the encoder compared to the decoder during gesture reproduction. Inter-subject correlation results showed the presence of mirroring mechanisms, indicated by increased SCL, SCR, and HR synchronization, during the linguistic contexts and gesture observation. Furthermore, increased SCL and SCR synchronization emerged during the observation and reproduction of negative social and affective gestures. The present study thereby provides information on the mirroring mechanisms and physiological synchronization underlying the linguistic and gesture systems during non-verbal interaction.
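The inter-subject correlation analyses described above can be illustrated as a windowed Pearson correlation between the two participants' signals. The sketch below is a minimal illustration, not the authors' pipeline: the sampling rate, window length, and toy signals are all assumptions.

```python
import numpy as np

def intersubject_correlation(sig_a, sig_b, fs=4.0, win_s=10.0):
    """Mean windowed Pearson correlation between two physiological
    time series (e.g., encoder and decoder skin conductance level).
    fs: sampling rate in Hz; win_s: window length in seconds."""
    n = int(fs * win_s)                        # samples per window
    m = min(len(sig_a), len(sig_b)) // n       # number of full windows
    rs = []
    for k in range(m):
        a, b = sig_a[k * n:(k + 1) * n], sig_b[k * n:(k + 1) * n]
        if np.std(a) == 0 or np.std(b) == 0:   # skip flat windows
            continue
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

# Toy check: two noisy recordings driven by the same slow wave
# should show high synchronization.
t = np.linspace(0, 60, 240)                    # 60 s sampled at 4 Hz
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 0.05 * t)
encoder = common + 0.3 * rng.standard_normal(t.size)
decoder = common + 0.3 * rng.standard_normal(t.size)
sync = intersubject_correlation(encoder, decoder)
```

Windowing the correlation (rather than correlating the whole session at once) is one simple way to keep slow drifts in skin conductance from dominating the synchronization estimate.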
|
22
|
Children Use Non-referential Gestures in Narrative Speech to Mark Discourse Elements Which Update Common Ground. Front Psychol 2022; 12:661339. [PMID: 35087436 PMCID: PMC8787325 DOI: 10.3389/fpsyg.2021.661339] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 01/30/2021] [Accepted: 12/01/2021] [Indexed: 12/05/2022] Open
Abstract
While recent studies have claimed that non-referential gestures (i.e., gestures that do not visually represent any semantic content in speech) are used to mark discourse-new and/or -accessible referents and focused information in adult speech, to our knowledge, no prior investigation has studied the relationship between information structure (IS) and gesture referentiality in children's narrative speech from a developmental perspective. A longitudinal database consisting of 332 narratives performed by 83 children at two different time points in development was coded for IS and gesture referentiality (i.e., referential and non-referential gestures). Results revealed that at both time points, both referential and non-referential gestures were produced more with information that moves discourse forward (i.e., focus) and with predication (i.e., comment) than with topical or background information. Further, at 7–9 years of age, children tended to use more non-referential gestures than referential gestures to mark focus and comment constituents. In terms of marking the newness of discourse referents, non-referential gestures already seem to play a key role at 5–6 years of age, whereas referential gestures did not show any patterns; this relationship was even stronger at 7–9 years of age. All in all, our findings offer supporting evidence that, in contrast with referential gestures, non-referential gestures play a key role in marking IS, and that this relationship solidifies at a period in development that coincides with a spurt in non-referential gesture production.
|
23
|
The study of gesture in cognitive linguistics: How it could inform and inspire other research in cognitive science. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2022; 13:e1623. [PMID: 36148788 PMCID: PMC9788131 DOI: 10.1002/wcs.1623] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/07/2022] [Accepted: 08/16/2022] [Indexed: 12/30/2022]
Abstract
Cognitive linguists are increasingly extending their paradigm to include the study of gestures. The bottom-up, usage-based approach in cognitive linguistics has advanced the methods for identifying gesture functions, starting from a detailed analysis of gesture forms. Theoretical notions from cognitive linguistics also help explain the means by which the forms of gestures can be interpreted as meaningful functions. Principles of conceptual metonymy explain how gestures indicate referents through the partial representation of their features that are relevant in the context of use. Conceptual metaphor theory sheds light on how abstract notions can be represented in gesture via comparison with physical source domains. Furthermore, every gestural representation inherently requires the gesturing speaker to employ a specific viewpoint for their depiction, something which is normally not expressed verbally. These aspects of gesture provide insights into processes of thinking for speaking that can be exploited in various fields of cognitive science research. Referential gestures also normally combine pragmatic and interactive functions (showing stance-taking, for example) with representational or deictic functions. The multiple functions of gesture combined with those of speech raise questions for further research about how viewing-listeners interpret and combine information from the multiple semiotic systems employed by gesturing-speakers. Finally, gesture use has been shown to correlate not only with lexical concepts but also in some ways with grammatical constructions. This gives rise to fundamental questions about what constitutes the grammar of a language. Gesture analysis thus raises issues for consideration in any research in cognitive science that concerns spoken language. This article is categorized under: Linguistics > Cognitive Linguistics > Linguistic Theory; Psychology > Language.
|
24
|
Contribution of working memory to gesture production in toddlers. COGNITIVE DEVELOPMENT 2021. [DOI: 10.1016/j.cogdev.2021.101113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
|
25
|
Visual recognition of words learned with gestures induces motor resonance in the forearm muscles. Sci Rep 2021; 11:17278. [PMID: 34446772 PMCID: PMC8390650 DOI: 10.1038/s41598-021-96792-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/07/2020] [Accepted: 08/03/2021] [Indexed: 02/07/2023] Open
Abstract
According to theories of Embodied Cognition, memory for words is related to sensorimotor experiences collected during learning. At a neural level, words encoded with self-performed gestures are represented in distributed sensorimotor networks that resonate during word recognition. Here, we ask whether muscles involved in gesture execution also resonate during word recognition. Native German speakers encoded words by reading them (baseline condition) or by reading them in tandem with picture observation, gesture observation, or gesture observation and execution. Surface electromyogram (EMG) activity from both arms was recorded during the word recognition task and responses were detected using eye-tracking. The recognition of words encoded with self-performed gestures coincided with an increase in arm muscle EMG activity compared to the recognition of words learned under other conditions. This finding suggests that sensorimotor networks resonate into the periphery and provides new evidence for a strongly embodied view of recognition memory.
|
26
|
Gesture Helps, Only If You Need It: Inhibiting Gesture Reduces Tip-of-the-Tongue Resolution for Those With Weak Short-Term Memory. Cogn Sci 2021; 45:e12914. [PMID: 33389787 DOI: 10.1111/cogs.12914] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 01/03/2020] [Revised: 06/29/2020] [Accepted: 07/20/2020] [Indexed: 11/27/2022]
Abstract
People frequently gesture when a word is on the tip of their tongue (TOT), yet research is mixed as to whether and why gesture aids lexical retrieval. We tested three accounts: the lexical retrieval hypothesis, which predicts that semantically related gestures facilitate successful lexical retrieval; the cognitive load account, which predicts that matching gestures facilitate lexical retrieval only when retrieval is hard, as in the case of a TOT; and the motor movement account, which predicts that any motor movements should support lexical retrieval. In Experiment 1 (a between-subjects study; N = 90), gesture inhibition, but not neck inhibition, affected TOT resolution but not overall lexical retrieval; participants in the gesture-inhibited condition resolved fewer TOTs than participants who were allowed to gesture. When participants could gesture, they produced more representational gestures during resolved than unresolved TOTs, a pattern not observed for meaningless motor movements (e.g., beats). However, the effect of gesture inhibition on TOT resolution was not uniform; some participants resolved many TOTs, while others struggled. In Experiment 2 (a within-subjects study; N = 34), the effect of gesture inhibition was traced to individual differences in verbal, not spatial short-term memory (STM) span; those with weaker verbal STM resolved fewer TOTs when unable to gesture. This relationship between verbal STM and TOT resolution was not observed when participants were allowed to gesture. Taken together, these results fit the cognitive load account; when lexical retrieval is hard, gesture effectively reduces the cognitive load of TOT resolution for those who find the task especially taxing.
|
27
|
Easier Said Than Done? Task Difficulty's Influence on Temporal Alignment, Semantic Similarity, and Complexity Matching Between Gestures and Speech. Cogn Sci 2021; 45:e12989. [PMID: 34170013 PMCID: PMC8365723 DOI: 10.1111/cogs.12989] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/04/2021] [Revised: 04/08/2021] [Accepted: 04/25/2021] [Indexed: 11/28/2022]
Abstract
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures’ and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participant's speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
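As a rough illustration of the scaling analysis behind complexity matching, a minimal monofractal detrended fluctuation analysis (DFA) can be sketched as follows. The multifractal variant used in the study generalizes this by varying the order of the fluctuation statistic; the scale choices and test signals below are illustrative assumptions.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Monofractal DFA: returns the scaling exponent alpha, i.e., the
    slope of log F(s) against log s after removing local linear trends."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s                        # full segments of size s
        t = np.arange(s)
        f2 = []
        for k in range(n):
            seg = y[k * s:(k + 1) * s]
            coef = np.polyfit(t, seg, 1)       # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))     # fluctuation at scale s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)              # uncorrelated noise
brown = np.cumsum(white)                       # strongly persistent signal
alpha_white = dfa_exponent(white)              # expected near 0.5
alpha_brown = dfa_exponent(brown)              # expected near 1.5
```

Complexity matching between gesture and speech is then assessed by comparing such scaling properties across the two time series, rather than the raw signals themselves.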
|
28
|
Abstract
Individuals diagnosed with psychotic disorders exhibit abnormalities in the perception of expressive behaviors, which are linked to symptoms and visual information processing domains. Specifically, literature suggests these groups have difficulties perceiving gestures that accompany speech. While our understanding of gesture perception in psychotic disorders is growing, gesture perception abnormalities and clues about potential causes and consequences among individuals meeting criteria for a clinical high-risk (CHR) syndrome is limited. Presently, 29 individuals with a CHR syndrome and 32 healthy controls completed an eye-tracking gesture perception paradigm. In this task, participants viewed an actor using abstract and literal gestures while presenting a story and eye gaze data (e.g., fixation counts and total fixation time) was collected. Furthermore, relationships between fixation variables and both symptoms (positive, negative, anxiety, and depression) and measures of visual information processing (working memory and attention) were examined. Findings revealed that the CHR group gazed at abstract gestures fewer times than the control group. When individuals in the CHR group did gaze at abstract gestures, on average, they spent significantly less time fixating compared to controls. Furthermore, reduced fixation (i.e., count and time) was related to depression and slower response time on an attentional task. While a similar pattern of group differences in the same direction appeared for literal gestures, the effect was not significant. These data highlight the importance of integrating gesture perception abnormalities into vulnerability models of psychosis and inform the development of targeted treatments for social communicative deficits.
|
29
|
Creative Action at a Distance: A Conceptual Framework for Embodied Performance With Robotic Actors. Front Robot AI 2021; 8:662182. [PMID: 33996928 PMCID: PMC8120109 DOI: 10.3389/frobt.2021.662182] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 01/31/2021] [Accepted: 04/12/2021] [Indexed: 11/25/2022] Open
Abstract
Acting, stand-up and dancing are creative, embodied performances that nonetheless follow a script. Unless experimental or improvised, the performers draw their movements from much the same stock of embodied schemas. A slavish following of the script leaves no room for creativity, but active interpretation of the script does. It is the choices one makes, of words and actions, that make a performance creative. In this theory and hypothesis article, we present a framework for performance and interpretation within robotic storytelling. The performance framework is built upon movement theory, and defines a taxonomy of basic schematic movements and the most important gesture types. For the interpretation framework, we hypothesise that emotionally-grounded choices can inform acts of metaphor and blending, to elevate a scripted performance into a creative one. Theory and hypothesis are each grounded in empirical research, and aim to provide resources for other robotic studies of the creative use of movement and gestures.
|
30
|
Controlling Video Stimuli in Sign Language and Gesture Research: The OpenPoseR Package for Analyzing OpenPose Motion-Tracking Data in R. Front Psychol 2021; 12:628728. [PMID: 33679550 PMCID: PMC7932993 DOI: 10.3389/fpsyg.2021.628728] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 11/12/2020] [Accepted: 01/29/2021] [Indexed: 01/08/2023] Open
Abstract
Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
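The kind of motion quantification described, deriving velocity and acceleration purely from the actor's body-pose keypoints, can be sketched as follows. This is a simplified Python illustration: the actual OpenPoseR package is written in R, and its function names, defaults, and normalization steps differ; the confidence threshold and frame rate here are assumptions.

```python
import numpy as np

def motion_measures(keypoints, fps=25.0, conf_thresh=0.1):
    """Per-frame velocity (px/s) and acceleration (px/s^2) from pose
    keypoints shaped (frames, joints, 3) as (x, y, confidence), the
    per-frame layout OpenPose exports. Joints whose confidence falls
    below conf_thresh in either of two consecutive frames are ignored."""
    xy = keypoints[..., :2]
    ok = keypoints[..., 2] >= conf_thresh
    disp = np.linalg.norm(np.diff(xy, axis=0), axis=2)   # px per frame
    valid = ok[1:] & ok[:-1]                             # both frames reliable
    vel = np.where(valid, disp, np.nan) * fps            # px per second
    v = np.nanmean(vel, axis=1)                          # mean over joints
    acc = np.diff(v) * fps                               # px per second^2
    return v, acc

# Synthetic clip: one joint moving 2 px to the right per frame at 25 fps.
kp = np.zeros((10, 1, 3))
kp[:, 0, 0] = 2.0 * np.arange(10)   # x coordinate
kp[:, 0, 2] = 1.0                   # full confidence
v, acc = motion_measures(kp)        # constant velocity of 50 px/s, zero acceleration
```

Summary statistics of `v` and `acc` (e.g., their means) can then be compared across stimulus clips or experimental conditions, which is the control use case the package targets.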
|
31
|
Construing events first-hand: Gesture viewpoints interact with speech to shape the attribution and memory of agency. Mem Cognit 2021; 49:884-894. [PMID: 33415717 DOI: 10.3758/s13421-020-01135-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 12/21/2020] [Indexed: 11/08/2022]
Abstract
Beyond conveying objective content about objects and actions, what can co-speech iconic gestures reveal about a speaker's subjective relationship to that content? The present study explores this question by investigating how gesture viewpoints can inform a listener's construal of a speaker's agency. Forty native English speakers watched videos of an actor uttering sentences with different viewpoints, that of low agency or high agency, conveyed through both speech and gesture. Participants were asked to (1) rate the speaker's responsibility for the action described in each video (encoding task) and (2) complete a surprise memory test of the spoken sentences (recall task). For the encoding task, participants rated responsibility near ceiling when agency in speech was high, with a slight dip when accompanied by gestures of low agency. When agency in speech was low, responsibility ratings were raised markedly when accompanied by gestures of high agency. In the recall task, participants produced more incorrect recall of spoken agency when the viewpoints expressed through speech and gesture were inconsistent with one another. Our findings suggest that, beyond conveying objective content, co-speech iconic gestures can also guide listeners in gauging a speaker's agentic relationship to actions and events.
|
32
|
Abstract
Many studies have been conducted to find approaches to overcome the Uncanny Valley. However, the focus on the influence of the robot's appearance leaves a big missing part: the influence of the robot's nonverbal behaviour. This impedes the complete exploration of the Uncanny Valley. In this study, we explored the Uncanny Valley hypothesis from the viewpoint of the robot's nonverbal behaviour. We observed the relationship between participants' ratings of the human-likeness of, and affinity toward, the robot's nonverbal behaviour, and defined the point where affinity significantly drops as the Uncanny Valley. An experiment on human-robot interaction was conducted in which participants interacted with a robot displaying different nonverbal behaviours, ranging from zero nonverbal behaviours (speaking only) to a combination of three (gaze, head nodding, and gestures), and rated the perceived human-likeness of and affinity toward the robot's nonverbal behaviour using a questionnaire. Additionally, the participants' fixation duration was measured during the experiment. The results showed a biphasic relationship between the human-likeness and affinity ratings: a curve resembling the Uncanny Valley was found. This result was also supported by the participants' fixation durations, which were longest when the robot expressed the nonverbal behaviours that fall into the Uncanny Valley. This exploratory study provides evidence suggesting the existence of the Uncanny Valley from the viewpoint of the robot's nonverbal behaviour.
|
33
|
N400 amplitude, latency, and variability reflect temporal integration of beat gesture and pitch accent during language processing. Brain Res 2020; 1747:147059. [PMID: 32818527 PMCID: PMC7493208 DOI: 10.1016/j.brainres.2020.147059] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 03/25/2020] [Revised: 08/03/2020] [Accepted: 08/12/2020] [Indexed: 01/19/2023]
Abstract
This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or the absence of beat gesture (no beat). Across trials, increased amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations.
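The across-trial versus trial-by-trial measures at the heart of this design can be illustrated with a short sketch: the mean and standard deviation of single-trial N400 amplitude and peak latency in a 300-500 ms window. The window bounds, single-electrode setup, and minimum-based peak picking are simplifying assumptions, not the authors' exact pipeline.

```python
import numpy as np

def n400_measures(epochs, times, win=(0.3, 0.5)):
    """Across-trial (mean) and trial-by-trial (SD) amplitude and latency
    of the N400 from single-electrode EEG epochs.
    epochs: (n_trials, n_samples) voltages; times: sample times in s."""
    m = (times >= win[0]) & (times <= win[1])        # N400 time window
    amp = epochs[:, m].mean(axis=1)                  # single-trial amplitude
    lat = times[m][epochs[:, m].argmin(axis=1)]      # single-trial peak latency
    return {"mean_amp": amp.mean(), "amp_var": amp.std(ddof=1),
            "mean_lat": lat.mean(), "lat_var": lat.std(ddof=1)}

# Synthetic data: five identical trials with a negative peak at 400 ms.
times = np.linspace(0.0, 0.8, 401)                   # 2 ms resolution
wave = -np.exp(-((times - 0.4) / 0.05) ** 2)
epochs = np.tile(wave, (5, 1))
res = n400_measures(epochs, times)                   # mean_lat ~= 0.4, lat_var ~= 0
```

With real data, comparing `amp_var` and `lat_var` across the synchronous, asynchronous, and no-beat conditions is one way to express the trial-by-trial variability effects the study reports.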
|
34
|
Emblem Gestures Improve Perception and Evaluation of Non-native Speech. Front Psychol 2020; 11:574418. [PMID: 33071912 PMCID: PMC7536367 DOI: 10.3389/fpsyg.2020.574418] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 06/19/2020] [Accepted: 08/19/2020] [Indexed: 01/02/2023] Open
Abstract
Traditionally, much of the attention on the communicative effects of non-native accent has focused on the accent itself rather than how it functions within a more natural context. The present study explores how the bodily context of co-speech emblematic gestures affects perceptual and social evaluation of non-native accent. In two experiments in two different languages, Mandarin and Japanese, we filmed learners performing a short utterance in three different within-subjects conditions: speech alone, culturally familiar gesture, and culturally unfamiliar gesture. Native Mandarin participants watched videos of foreign-accented Mandarin speakers (Experiment 1), and native Japanese participants watched videos of foreign-accented Japanese speakers (Experiment 2). Following each video, native language participants were asked a set of questions targeting speech perception and social impressions of the learners. Results from both experiments demonstrate that familiar—and occasionally unfamiliar—emblems facilitated speech perception and enhanced social evaluations compared to the speech alone baseline. The variability in our findings suggests that gesture may serve varied functions in the perception and evaluation of non-native accent.
|
35
|
Evaluating Models of Gesture and Speech Production for People With Aphasia. Cogn Sci 2020; 44:e12890. [PMID: 32939773 DOI: 10.1111/cogs.12890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/08/2020] [Revised: 07/16/2020] [Accepted: 07/31/2020] [Indexed: 11/29/2022]
Abstract
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory, but instead iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We only found compensatory use of gesture in the people with aphasia, whereas the people without language impairments made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.
Collapse
|
36
|
Gesture, communication, and adult acquired hearing loss. JOURNAL OF COMMUNICATION DISORDERS 2020; 87:106030. [PMID: 32707420 DOI: 10.1016/j.jcomdis.2020.106030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 06/18/2020] [Accepted: 06/19/2020] [Indexed: 06/11/2023]
Abstract
Nonverbal communication, specifically hand and arm movements (commonly known as gesture), has long been recognized and explored as a significant element of human interaction and as a potential compensatory behavior for individuals with communication difficulties. The use of gesture as a compensatory communication method in expressive and receptive communication disorders has been the subject of much investigation. Yet within the context of adult acquired hearing loss, gesture has received limited research attention, and much remains unknown about patterns of nonverbal behavior in conversations in which hearing loss is a factor. This paper presents key elements of the background of gesture studies and of theories of gesture function and production, followed by a review of research on adults with hearing loss and the role of gesture and gaze in rehabilitation. This examination of co-speech gesture as a visual resource in the everyday interactions of adults with acquired hearing loss points to the need for an evidence base that can inform enhancements and changes in the way rehabilitation services are delivered.
Collapse
|
37
|
Seeing Iconic Gesture Promotes First- and Second-Order Verb Generalization in Preschoolers. Child Dev 2020; 92:124-141. [PMID: 32666515 DOI: 10.1111/cdev.13392] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
This study investigated whether seeing iconic gestures depicting verb referents promotes two types of generalization. We taught 3- to 4-year-olds novel locomotion verbs. Children who saw iconic manner gestures during training generalized more verbs to novel events (first-order generalization) than children who saw interactive gestures (Experiment 1, N = 48; Experiment 2, N = 48) and path-tracing gestures (Experiment 3, N = 48). Furthermore, immediately (Experiments 1 and 3) and after 1 week (Experiment 2), the iconic manner gesture group outperformed the control groups in subsequent generalization trials with different novel verbs (second-order generalization), although all groups saw interactive gestures. Thus, seeing iconic gestures that depict verb referents helps children (a) generalize individual verb meanings to novel events and (b) learn more verbs from the same subcategory.
Collapse
|
38
|
Ageing, working memory, and mental imagery: Understanding gestural communication in younger and older adults. Q J Exp Psychol (Hove) 2020; 74:29-44. [PMID: 32640872 DOI: 10.1177/1747021820944696] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Ageing affects both language and gestural communication skills. Although overall gesture use is similar between younger and older adults, the use of representational gestures (e.g., drawing a line with fingers in the air to indicate a road) decreases with age. This study investigates whether this change in the production of representational gestures is related to individuals' working memory and/or mental imagery skills. We used three gesture tasks (daily activity description, story completion, and address description) to obtain spontaneous co-speech gestures from younger and older individuals (N = 60). Participants also completed the Corsi working memory task and a mental imagery task. Results showed that although the two age groups' overall gesture frequencies were similar across the three tasks, the younger adults used relatively higher proportions of representational gestures than the older adults only in the address description task. Regardless of age, mental imagery scores, but not working memory scores, were associated with the use of representational gestures in this task only. However, the use of spatial words in the address description task did not differ between the two age groups, and neither mental imagery nor working memory scores were associated with spatial word use. These findings suggest that mental imagery can play a role in gesture production, and that gesture and speech production might follow separate timelines in how they are affected by the ageing process, particularly for spatial content.
Collapse
|
39
|
|
40
|
Lexical and gestural development in 5p deletion syndrome-A case report. JOURNAL OF COMMUNICATION DISORDERS 2020; 83:105949. [PMID: 31739224 DOI: 10.1016/j.jcomdis.2019.105949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 09/25/2019] [Accepted: 10/21/2019] [Indexed: 06/10/2023]
Abstract
PURPOSE Individuals with 5p deletion syndrome (also known as cri du chat syndrome) have various speech and language problems. The aim of this work was to examine early gestural and lexical development in a boy with this syndrome and to determine to what extent his skills in these areas were delayed and/or deviant compared to those of typically developing children. METHOD The participant's parents completed the Norwegian adaptation of the MacArthur-Bates Communicative Development Inventories (CDI) ten times over a period of five years. His scores were compared to those of typically developing infants aged eight to 20 months. RESULTS The subject followed a considerably delayed, but not deviant, developmental trajectory in three areas: receptive vocabulary, productive vocabulary, and communicative gestures. CONCLUSION The speech and language problems of individuals with 5p deletion syndrome, which have been documented in the domains of phonetics, phonology, and grammar, also extend to gestural and lexical development. These findings have clinical implications for assessment: a broad assessment of gestural and lexical skills should be carried out as early as possible as a basis for interventions to improve communicative skills.
Collapse
|
41
|
The Production of Gesture and Speech by People With Aphasia: Influence of Communicative Constraints. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:4417-4432. [PMID: 31710512 DOI: 10.1044/2019_jslhr-l-19-0020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose People with aphasia (PWA) use different kinds of gesture spontaneously when they communicate. Although there is evidence that the nature of the communicative task influences the linguistic performance of PWA, so far little is known about the influence of the communicative task on the production of gestures by PWA. We aimed to investigate the influence of varying communicative constraints on the production of gesture and spoken expression by PWA in comparison to persons without language impairment. Method Twenty-six PWA with varying aphasia severities and 26 control participants (CP) without language impairment participated in the study. Spoken expression and gesture production were investigated in 2 different tasks: (a) spontaneous conversation about topics of daily living and (b) a cartoon narration task, that is, retellings of short cartoon clips. The frequencies of words and gestures as well as of different gesture types produced by the participants were analyzed and tested for potential effects of group and task. Results Main results for task effects revealed that PWA and CP used more iconic gestures and pantomimes in the cartoon narration task than in spontaneous conversation. Metaphoric gestures, deictic gestures, number gestures, and emblems were more frequently used in spontaneous conversation than in cartoon narrations by both participant groups. Group effects show that, in both tasks, PWA's gesture-to-word ratios were higher than those for the CP. Furthermore, PWA produced more interactive gestures than the CP in both tasks, as well as more number gestures and pantomimes in spontaneous conversation. Conclusions The current results suggest that PWA use gestures to compensate for their verbal limitations under varying communicative constraints. The properties of the communicative task influence the use of different gesture types in people with and without aphasia. Thus, the influence of communicative constraints needs to be considered when assessing PWA's multimodal communicative abilities.
Collapse
|
42
|
Abstract
Digitally animated characters are promising tools in research studying how we integrate information from speech and visual sources such as gestures, because they allow specific gesture features to be manipulated in isolation. We present an approach combining motion capture and 3D-animated characters that allows us to manipulate natural individual gesture strokes for experimental purposes, for example to temporally shift and present gestures in ecologically valid sequences. We exemplify how such stimuli can be used in an experiment investigating implicit detection of speech–gesture (a)synchrony, and discuss the general applicability of the workflow for research in this domain.
Collapse
|
43
|
|
44
|
Gestural communication in olive baboons (Papio anubis): repertoire and intentionality. Anim Cogn 2019; 23:19-40. [DOI: 10.1007/s10071-019-01312-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 09/06/2019] [Accepted: 09/21/2019] [Indexed: 02/07/2023]
|
45
|
Gesture Analysis and Organizational Research: The Development and Application of a Protocol for Naturalistic Settings. ORGANIZATIONAL RESEARCH METHODS 2019. [DOI: 10.1177/1094428119877450] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Gestures are an underresearched but potentially significant aspect of organizational conduct that is relevant to researchers across a range of theoretical and empirical domains. In engaging the cross-disciplinary field of gesture studies, we develop and apply a protocol for analyzing gestures produced in naturalistic settings during ongoing streams of talk and embodied activity. Analyzing video recordings of entrepreneurial investor pitches, we work through this protocol and demonstrate its usefulness. While doing so, we also explore methodological tensions in gesture studies and draw out methodological arguments as they relate to the analysis of these fleeting and often intricate bodily movements. The article contributes a generally applicable protocol for the analysis of gestures in naturalistic settings, and it assesses the methodological implications of this protocol both for research on entrepreneurship and new venture creation and management and organization research more generally.
Collapse
|
46
|
Gestural acquisition in great apes: the Social Negotiation Hypothesis. Anim Cogn 2019; 22:551-565. [PMID: 29368287 PMCID: PMC6647412 DOI: 10.1007/s10071-017-1159-6] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 12/09/2017] [Accepted: 12/29/2017] [Indexed: 02/07/2023]
Abstract
Scientific interest in the acquisition of gestural signalling dates back to Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee subspecies and communities (Pan troglodytes verus and Pan troglodytes schweinfurthii) living in their natural environments. We conclude that the three most prominent perspectives on gestural acquisition (Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization) do not satisfactorily explain our current findings on gestural interactions in wild chimpanzees. In contrast, we argue that the role of interactional experience and social exposure in gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata to instigate more research in this intriguing domain.
Collapse
|
47
|
Pain communication during medical examination: beyond words. Minerva Anestesiol 2019; 85:1243-1244. [PMID: 31213048 DOI: 10.23736/s0375-9393.19.13784-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
48
|
Body-oriented gestures as a practitioner's window into interpreted communication. Soc Sci Med 2019; 233:171-180. [PMID: 31203145 DOI: 10.1016/j.socscimed.2019.05.040] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2018] [Revised: 05/21/2019] [Accepted: 05/24/2019] [Indexed: 11/22/2022]
Abstract
With increasing global migration, health care providers and patients may lack a shared language. Interpreters help to secure understanding. Doctors and patients cannot evaluate how the interpreter translates their utterances; however, they can see hand movements, which can provide a window into the interpretation process. While research on natural language use has acknowledged the semiotic contribution of co-speech gestures (i.e., spontaneous hand and arm movements that are tightly synchronized with speech), their role in interpreted interactions is unstudied. We aimed to reveal whether gestures could shed light on the interpreting process and to develop a systematic methodology for investigating gesture use in interpreted encounters. Using data from authentic, interpreted clinical interactions, we identified and analyzed gestures referring to the body (i.e., body-oriented gestures). Data were 76 min of video-recorded doctor-patient consultations at two UK inner-city general practices in 2009. Using microanalysis of face-to-face dialogue, we revealed how participants used body-oriented gestures and how interpreters transmitted them. Participants used 264 body-oriented gestures (doctors = 113, patients = 54, interpreters = 97). Gestures served an important semiotic function: On average, 70% of the doctors' and patients' gestures provided information not conveyed in speech. When interpreters repeated the primary participants' body-oriented gestures, they were highly likely to accompany the gesture with speech that retained the overall utterance meaning. Conversely, when interpreters did not repeat the gesture, their speech tended to lack that information as well. A qualitative investigation into the local effect of gesture transmission suggested a means for quality control: visible discrepancies in interpretation generated opportunities to check understanding. The findings suggest that clinical communication training could benefit from including skills to understand and attend to gestures. The analysis developed here provides a promising schema and method for future research informing clinical guidelines and training.
Collapse
|
49
|
To freeze or not to freeze: A culture-sensitive motion capture approach to detecting deceit. PLoS One 2019; 14:e0215000. [PMID: 30978207 PMCID: PMC6461255 DOI: 10.1371/journal.pone.0215000] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 03/25/2019] [Indexed: 11/19/2022] Open
Abstract
We present a new signal for detecting deception: full body motion. Previous work on detecting deception from body movement has relied either on human judges or on specific gestures (such as fidgeting or gaze aversion) that are coded by humans. While this research has helped to build the foundation of the field, results are often characterized by inconsistent and contradictory findings, with small-stakes lies under lab conditions detected at rates little better than guessing. We examine whether a full body motion capture suit, which records the position, velocity, and orientation of 23 points in the subject's body, could yield a better signal of deception. Interviewees of South Asian (n = 60) or White British culture (n = 30) were required to either tell the truth or lie about two experienced tasks while being interviewed by somebody from their own (n = 60) or a different culture (n = 30). We discovered that full body motion (the sum of joint displacements) was indicative of lying 74.4% of the time. Further analyses indicated that including individual limb data in our full body motion measurements can increase its discriminatory power to 82.2%. Furthermore, movement was related to guilt and penitence, and occurred independently of anxiety, cognitive load, and cultural background. It appears that full body motion can be an objective nonverbal indicator of deceit, showing that lying does not cause people to freeze.
Collapse
|
50
|
Editors' Introduction: Miscommunication. Top Cogn Sci 2018; 10:264-278. [PMID: 29749040 DOI: 10.1111/tops.12340] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Revised: 03/02/2018] [Accepted: 03/04/2018] [Indexed: 11/29/2022]
Abstract
Miscommunication is a neglected issue in the cognitive sciences, where it has often been discounted as noise in the system. This special issue argues for the opposite view: Miscommunication is a highly structured and ubiquitous feature of human interaction that systematically underpins people's ability to create and maintain shared languages. Contributions from conversation analysis, computational linguistics, experimental psychology, and formal semantics provide evidence for these claims. They highlight the multi-modal, multi-person character of miscommunication. They demonstrate the incremental, contingent, and locally adaptive nature of the processes people use to detect and deal with miscommunication. They show how these processes can drive language change. In doing so, these contributions introduce an alternative perspective on what successful communication is, new methods for studying it, and application areas where these ideas have a particular impact. We conclude that miscommunication is not noise but essential to the productive flexibility of human communication, especially our ability to respond constructively to new people and new situations.
Collapse
|