1. Ter Bekke M, Drijvers L, Holler J. Co-Speech Hand Gestures Are Used to Predict Upcoming Meaning. Psychol Sci 2025:9567976251331041. [PMID: 40261301 DOI: 10.1177/09567976251331041]
Abstract
In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders' language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (e.g., "type"). A Cloze experiment showed that gestures improved explicit predictions of upcoming target words. Moreover, an EEG experiment showed that gestures reduced alpha and beta power during the pause, indicating anticipation, and reduced N400 amplitudes, demonstrating facilitated semantic processing. Thus, comprehenders use iconic gestures to predict upcoming meaning. Theories of linguistic prediction should incorporate communicative bodily signals as predictive cues to capture how language is processed in face-to-face interaction.
Affiliation(s)
- Marlijn Ter Bekke: Donders Institute for Brain, Cognition, and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Linda Drijvers: Donders Institute for Brain, Cognition, and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Judith Holler: Donders Institute for Brain, Cognition, and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
2. Clough S, Brown-Schmidt S, Cho SJ, Duff MC. Reduced on-line speech gesture integration during multimodal language processing in adults with moderate-severe traumatic brain injury: Evidence from eye-tracking. Cortex 2024; 181:26-46. [PMID: 39488986 DOI: 10.1016/j.cortex.2024.08.008]
Abstract
BACKGROUND: Language is multimodal and situated in rich visual contexts. Language is also incremental, unfolding moment-to-moment in real time, yet few studies have examined how spoken language interacts with gesture and visual context during multimodal language processing. Gesture is a rich communication cue that is integrally related to speech and often depicts concrete referents from the visual world. Using eye-tracking in an adapted visual world paradigm, we examined how participants with and without moderate-severe traumatic brain injury (TBI) use gesture to resolve temporary referential ambiguity.
METHODS: Participants viewed a screen with four objects and one video. The speaker in the video produced sentences (e.g., "The girl will eat the very good sandwich"), paired with either a meaningful gesture (e.g., sandwich-holding gesture) or a meaningless grooming movement (e.g., arm scratch) at the verb "will eat." We measured participants' gaze to the target object (e.g., sandwich), a semantic competitor (e.g., apple), and two unrelated distractors (e.g., piano, guitar) during the critical window between movement onset in the gesture modality and onset of the spoken referent in speech.
RESULTS: Participants both with and without TBI were more likely to fixate the target when the speaker produced a gesture compared to a grooming movement; however, relative to non-injured participants, the effect was significantly attenuated in the TBI group.
DISCUSSION: We demonstrated evidence of reduced speech-gesture integration in participants with TBI relative to non-injured peers. This study advances our understanding of the communicative abilities of adults with TBI and could lead to a more mechanistic account of the communication difficulties adults with TBI experience in rich communication contexts that require the processing and integration of multiple co-occurring cues. This work has the potential to increase the ecological validity of language assessment and provide insights into the cognitive and neural mechanisms that support multimodal language processing.
Affiliation(s)
- Sharice Clough: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Multimodal Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Sarah Brown-Schmidt: Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee, USA
- Sun-Joo Cho: Department of Psychology and Human Development, Vanderbilt University, Nashville, Tennessee, USA
- Melissa C Duff: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3. Cacciante L, Pregnolato G, Salvalaggio S, Federico S, Kiper P, Smania N, Turolla A. Language and gesture neural correlates: A meta-analysis of functional magnetic resonance imaging studies. International Journal of Language & Communication Disorders 2024; 59:902-912. [PMID: 37971416 DOI: 10.1111/1460-6984.12987]
Abstract
BACKGROUND: Humans often use co-speech gestures to promote effective communication. Attention has been paid to the cortical areas engaged in the processing of co-speech gestures.
AIMS: To investigate the neural network underpinning the processing of co-speech gestures and to observe whether there is a relationship between areas involved in language and gesture processing.
METHODS & PROCEDURES: We planned to include studies with neurotypical and/or stroke participants who underwent a bimodal task (i.e., processing of co-speech gestures with relative speech) and a unimodal task (i.e., speech or gesture alone) during a functional magnetic resonance imaging (fMRI) session. After a database search, abstract and full-text screening were conducted. Qualitative and quantitative data were extracted, and a meta-analysis was performed with the software GingerALE 3.0.2, performing contrast analyses of uni- and bimodal tasks.
MAIN CONTRIBUTION: The database search produced 1024 records. After the screening process, 27 studies were included in the review. Data from 15 studies were quantitatively analysed through meta-analysis. The meta-analysis found three clusters with significant activation of the left middle frontal gyrus and inferior frontal gyrus, and the bilateral middle occipital gyrus and inferior temporal gyrus.
CONCLUSIONS: There is a close link at the neural level for the semantic processing of auditory and visual information during communication. These findings encourage the integration of co-speech gestures into aphasia treatment as a strategy to help people with aphasia communicate effectively.
WHAT THIS PAPER ADDS: What is already known on this subject: Gestures are an integral part of human communication, and they may be related to speech processing at the neural level. What this paper adds to the existing knowledge: During processing of bi- and unimodal communication, areas related to semantic processing and multimodal processing are activated, suggesting that there is a close link between co-speech gestures and spoken language at the neural level. What are the potential or actual clinical implications of this work? Knowledge of the neural networks underlying gesture and speech processing will allow the adoption of model-based neurorehabilitation programmes that foster recovery from aphasia by strengthening the specific functions of these brain networks.
Affiliation(s)
- Luisa Cacciante: Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Giorgia Pregnolato: Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Silvia Salvalaggio: Laboratory of Computational Neuroimaging, IRCCS San Camillo Hospital, Venice, Italy; Padova Neuroscience Center, Università degli Studi di Padova, Padua, Italy
- Sara Federico: Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Pawel Kiper: Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Nicola Smania: Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Andrea Turolla: Department of Biomedical and Neuromotor Sciences-DIBINEM, Alma Mater Studiorum Università di Bologna, Bologna, Italy; Unit of Occupational Medicine, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
4. Ter Bekke M, Drijvers L, Holler J. Hand Gestures Have Predictive Potential During Conversation: An Investigation of the Timing of Gestures in Relation to Speech. Cogn Sci 2024; 48:e13407. [PMID: 38279899 DOI: 10.1111/cogs.13407]
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
Affiliation(s)
- Marlijn Ter Bekke: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Linda Drijvers: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Judith Holler: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
5. Morett LM. Observing gesture at learning enhances subsequent phonological and semantic processing of L2 words: An N400 study. Brain and Language 2023; 246:105327. [PMID: 37804717 DOI: 10.1016/j.bandl.2023.105327]
Abstract
This study employed the N400 event-related potential (ERP) to investigate how observing different types of gestures at learning affects the subsequent processing of L2 Mandarin words differing in lexical tone by L1 English speakers. The effects of pitch gestures conveying lexical tones (e.g., upwards diagonal movements for rising tone), semantic gestures conveying word meanings (e.g., waving goodbye for "to wave"), and no gesture were compared. In a lexical tone discrimination task, larger N400s for Mandarin target words mismatching vs. matching Mandarin prime words in lexical tone were observed for words learned with pitch gesture. In a meaning discrimination task, larger N400s for English target words mismatching vs. matching Mandarin prime words in meaning were observed for words learned with pitch and semantic gesture. These findings provide the first neural evidence that observing gestures during L2 word learning enhances subsequent phonological and semantic processing of learned L2 words.
Affiliation(s)
- Laura M Morett: Department of Speech, Language and Hearing Sciences, University of Missouri, 421 Lewis Hall, Columbia, MO 65211, United States; Department of Educational Studies in Psychology, Research Methodology, and Counseling, University of Alabama, United States
6. Clough S, Padilla VG, Brown-Schmidt S, Duff MC. Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia 2023; 189:108665. [PMID: 37619936 PMCID: PMC10592037 DOI: 10.1016/j.neuropsychologia.2023.108665]
Abstract
PURPOSE: Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying "He searched for a new recipe" while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury (TBI) across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and whether information from gesture persists across delays.
METHODS: 60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 min later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., "He searched for a new recipe"), a Gesture Match (e.g., "He searched for a new recipe online"), or Other (e.g., "He looked for a new recipe"). We also examined whether participants produced representative gestures themselves when retelling these details.
RESULTS: Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and to produce representative gestures themselves one week later compared with immediately after hearing the story.
CONCLUSION: We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
Affiliation(s)
- Sharice Clough: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Victoria-Grace Padilla: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Sarah Brown-Schmidt: Department of Psychology and Human Development, Vanderbilt University, United States
- Melissa C Duff: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
7. Cavicchio F, Busà MG. The Role of Representational Gestures and Speech Synchronicity in Auditory Input by L2 and L1 Speakers. Journal of Psycholinguistic Research 2023; 52:1721-1735. [PMID: 37171686 DOI: 10.1007/s10936-023-09947-2]
Abstract
Speech and gesture are two integrated and temporally coordinated systems. Manual gestures can help second language (L2) speakers with vocabulary learning and word retrieval. However, it is still under-investigated whether the synchronisation of speech and gesture has a role in helping listeners compensate for the difficulties in processing L2 aural information. In this paper, we tested, in two behavioural experiments, how L2 speakers process speech and gesture asynchronies in comparison to native (L1) speakers. L2 speakers responded significantly faster when gestures and the semantically relevant speech were synchronous than asynchronous. They responded significantly slower than L1 speakers regardless of speech/gesture synchronisation. On the other hand, L1 speakers did not show a significant difference between asynchronous and synchronous integration of gestures and speech. We conclude that gesture-speech asynchrony affects L2 speakers more than L1 speakers.
Affiliation(s)
- Maria Grazia Busà: Dipartimento di Studi Linguistici e Letterari, Università degli Studi di Padova, Padova, Italy
8. Zhao W. TMS reveals a two-stage priming circuit of gesture-speech integration. Front Psychol 2023; 14:1156087. [PMID: 37228338 PMCID: PMC10203497 DOI: 10.3389/fpsyg.2023.1156087]
Abstract
Introduction: Naturalistically, multisensory information from gesture and speech is intrinsically integrated to enable coherent comprehension. Such cross-modal semantic integration is temporally misaligned, with the onset of the gesture preceding the relevant speech segment. It has been proposed that gestures prime subsequent speech. However, there are unresolved questions regarding the roles and time courses that the two sources of information play in integration.
Methods: In two between-subject experiments with healthy college students, we segmented the gesture-speech integration period into 40-ms time windows (TWs) based on two separate division criteria, while interrupting the activity of the integration nodes, the left posterior middle temporal gyrus (pMTG) and the left inferior frontal gyrus (IFG), with double-pulse transcranial magnetic stimulation (TMS). In Experiment 1, we created fixed time-advances of gesture over speech and divided the TWs from the onset of speech. In Experiment 2, we differentiated the processing stages of gesture and speech and segmented the TWs in reference to the speech lexical identification point (IP), while speech onset occurred at the gesture semantic discrimination point (DP).
Results: The results showed a TW-selective interruption of the pMTG and IFG only in Experiment 2, with the pMTG involved in TW1 (-120 ~ -80 ms relative to the speech IP), TW2 (-80 ~ -40 ms), TW6 (80 ~ 120 ms) and TW7 (120 ~ 160 ms) and the IFG involved in TW3 (-40 ~ 0 ms) and TW6. Meanwhile, no significant disruption of gesture-speech integration was found in Experiment 1.
Discussion: We determined that after the representation of the gesture has been established, gesture-speech integration occurs such that speech is first primed in a phonological processing stage before gestures are unified with speech to form a coherent meaning. Our findings provide new insights into multisensory speech and co-speech gesture integration by tracking the causal contributions of the two sources of information.
Affiliation(s)
- Wanying Zhao: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
9. Yao R, Guan CQ, Smolen ER, MacWhinney B, Meng W, Morett LM. Gesture-Speech Integration in Typical and Atypical Adolescent Readers. Front Psychol 2022; 13:890962. [PMID: 35719574 PMCID: PMC9204151 DOI: 10.3389/fpsyg.2022.890962]
Abstract
This study investigated gesture-speech integration (GSI) among adolescents who are deaf or hard of hearing (DHH) and those with typical hearing. Thirty-eight adolescents (19 with hearing loss) performed a Stroop-like task in which they watched 120 short video clips of gestures and actions twice at random. Participants were asked to press one button if the visual content of the speaker's movements was related to a written word and to press another button if it was unrelated to a written word while accuracy rates and response times were recorded. We found stronger GSI effects among DHH participants than hearing participants. The semantic congruency effect was significantly larger in DHH participants than in hearing participants, and results of our experiments indicated a significantly larger gender congruency effect in DHH participants as compared to hearing participants. Results of this study shed light on GSI among DHH individuals and suggest future avenues for research examining the impact of gesture on language processing and communication in this population.
Affiliation(s)
- Ru Yao: China National Institute of Education Sciences, Beijing, China
- Connie Qun Guan: School of Foreign Studies, Beijing Language and Culture University, Beijing, China
- Elaine R. Smolen: Teachers College, Columbia University, New York, NY, United States
- Brian MacWhinney: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Wanjin Meng: Department of Moral, Psychological and Special Education, China National Institute of Education Sciences, Beijing, China
- Laura M. Morett: Department of Educational Studies in Psychology, Research Methodology, and Counseling, University of Alabama, Tuscaloosa, AL, United States
10. Wu YC, Müller HM, Coulson S. Visuospatial Working Memory and Understanding Co-Speech Iconic Gestures: Do Gestures Help to Paint a Mental Picture? Discourse Processes 2022. [DOI: 10.1080/0163853x.2022.2028087]
Affiliation(s)
- Ying Choon Wu: Institute for Neural Computation, University of California, San Diego
- Horst M. Müller: Faculty of Linguistics and Literary Studies, Bielefeld University
- Seana Coulson: Cognitive Science Department, University of California, San Diego
11. Schubotz L, Holler J, Drijvers L, Özyürek A. Aging and working memory modulate the ability to benefit from visible speech and iconic gestures during speech-in-noise comprehension. Psychological Research 2021; 85:1997-2011. [PMID: 32627053 PMCID: PMC8289811 DOI: 10.1007/s00426-020-01363-8]
Abstract
When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker's mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults' comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.
Affiliation(s)
- Louise Schubotz: Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands
- Judith Holler: Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
- Linda Drijvers: Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
- Aslı Özyürek: Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, P.O. Box 9103, 6500 HD, Nijmegen, The Netherlands
12. Momsen J, Gordon J, Wu YC, Coulson S. Event related spectral perturbations of gesture congruity: Visuospatial resources are recruited for multimodal discourse comprehension. Brain and Language 2021; 216:104916. [PMID: 33652372 PMCID: PMC11296609 DOI: 10.1016/j.bandl.2021.104916]
Abstract
Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.
Affiliation(s)
- Jacob Momsen: Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon: Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu: Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson: Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States; Cognitive Science Department, UC San Diego, United States
13. Momsen J, Gordon J, Wu YC, Coulson S. Verbal working memory and co-speech gesture processing. Brain Cogn 2020; 146:105640. [PMID: 33171343 PMCID: PMC11299644 DOI: 10.1016/j.bandc.2020.105640]
Abstract
Multimodal discourse requires an assembly of cognitive processes that are uniquely recruited for language comprehension in social contexts. In this study, we investigated the role of verbal working memory for the online integration of speech and iconic gestures. Participants memorized and rehearsed a series of auditorily presented digits in low (one digit) or high (four digits) memory load conditions. To observe how verbal working memory load impacts online discourse comprehension, ERPs were recorded while participants watched discourse videos containing either congruent or incongruent speech-gesture combinations during the maintenance portion of the memory task. While expected speech-gesture congruity effects were found in the low memory load condition, high memory load trials elicited enhanced frontal positivities that indicated a unique interaction between online speech-gesture integration and the availability of verbal working memory resources. This work contributes to an understanding of discourse comprehension by demonstrating that language processing in a multimodal context is subject to the relationship between cognitive resource availability and the degree of controlled processing required for task performance. We suggest that verbal working memory is less important for speech-gesture integration than it is for mediating speech processing under high task demands.
Affiliation(s)
- Jacob Momsen: Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon: Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu: Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson: Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States; Cognitive Science Department, UC San Diego, United States
14. Morett LM, Landi N, Irwin J, McPartland JC. N400 amplitude, latency, and variability reflect temporal integration of beat gesture and pitch accent during language processing. Brain Res 2020; 1747:147059. [PMID: 32818527 PMCID: PMC7493208 DOI: 10.1016/j.brainres.2020.147059]
Abstract
This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or the absence of beat gesture (no beat). Across trials, increased amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations.
Affiliation(s)
- Nicole Landi: Haskins Laboratories, University of Connecticut, United States
- Julia Irwin: Haskins Laboratories, Southern Connecticut State University, United States
15. Rohrer PL, Delais-Roussarie E, Prieto P. Beat Gestures for Comprehension and Recall: Differential Effects of Language Learners and Native Listeners. Front Psychol 2020; 11:575929. [PMID: 33192882 PMCID: PMC7605175 DOI: 10.3389/fpsyg.2020.575929]
Abstract
Previous work has shown how native listeners benefit from observing iconic gestures during speech comprehension tasks of both degraded and non-degraded speech. By contrast, effects of the use of gestures in non-native listener populations are less clear and studies have mostly involved iconic gestures. The current study aims to complement these findings by testing the potential beneficial effects of beat gestures (non-referential gestures which are often used for information- and discourse marking) on language recall and discourse comprehension using a narrative-drawing task carried out by native and non-native listeners. Using a within-subject design, 51 French intermediate learners of English participated in a narrative-drawing task. Each participant was assigned 8 videos to watch, where a native speaker describes the events of a short comic strip. Videos were presented in random order, in four conditions: in Native listening conditions with frequent, naturally-modeled beat gestures, in Native listening conditions without any gesture, in Non-native listening conditions with frequent, naturally-modeled beat gestures, and in Non-native listening conditions without any gesture. Participants watched each video twice and then immediately recreated the comic strip through their own drawings. Participants' drawings were then evaluated for discourse comprehension (via their ability to convey the main goals of the narrative through their drawings) and recall (via the number of gesturally-marked elements in the narration that were included in their drawings). Results showed that for native listeners, beat gestures had no significant effect on either recall or comprehension. In non-native speech, however, beat gestures led to significantly lower comprehension and recall scores. These results suggest that frequent, naturally-modeled beat gestures in longer discourses may increase cognitive load for language learners, resulting in negative effects on both memory and language understanding. These findings add to the growing body of literature that suggests that gesture benefits are not a "one-size-fits-all" solution, but rather may be contingent on factors such as language proficiency and gesture rate, particularly in that whenever beat gestures are repeatedly used in discourse, they inherently lose their saliency as markers of important information.
Affiliation(s)
- Patrick Louis Rohrer: Université de Nantes, UMR 6310, Laboratoire de Linguistique de Nantes (LLING), Nantes, France; Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain
- Pilar Prieto: Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
16. Holler J, Levinson SC. Multimodal Language Processing in Human Communication. Trends Cogn Sci 2019; 23:639-652. [PMID: 31235320 DOI: 10.1016/j.tics.2019.05.006]
Abstract
The natural ecology of human language is face-to-face interaction comprising the exchange of a plethora of multimodal signals. Trying to understand the psycholinguistic processing of language in its natural niche raises new issues, first and foremost the binding of multiple, temporally offset signals under tight time constraints posed by a turn-taking system. This might be expected to overload and slow our cognitive system, but the reverse is in fact the case. We propose cognitive mechanisms that may explain this phenomenon and call for a multimodal, situated psycholinguistic framework to unravel the full complexities of human language processing.
Affiliation(s)
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Stephen C Levinson: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
17. He Y, Nagels A, Schlesewsky M, Straube B. The Role of Gamma Oscillations During Integration of Metaphoric Gestures and Abstract Speech. Front Psychol 2018; 9:1348. [PMID: 30104995 PMCID: PMC6077537 DOI: 10.3389/fpsyg.2018.01348]
Abstract
Metaphoric (MP) co-speech gestures are commonly used during daily communication. They communicate about abstract information by referring to gestures that are clearly concrete (e.g., raising a hand for “the level of the football game is high”). To understand MP co-speech gestures, multisensory integration at the semantic level is necessary between abstract speech and concrete gestures. While semantic gesture-speech integration has been extensively investigated using functional magnetic resonance imaging, evidence from electroencephalography (EEG) is rare. In the current study, we conducted an EEG experiment investigating the processing of MP vs. iconic (IC) co-speech gestures in different contexts, to reveal the oscillatory signature of MP gesture integration. German participants (n = 20) viewed video clips with an actor performing both types of gestures, accompanied by either comprehensible German or incomprehensible Russian (R) speech, or speaking German sentences without any gestures. Time-frequency analysis of the EEG data showed that, when gestures were accompanied by comprehensible German speech, MP gestures elicited decreased gamma band power (50–70 Hz) between 500 and 700 ms in the parietal electrodes when compared to IC gestures, and the source of this effect was localized to the right middle temporal gyrus. This difference is likely to reflect integration processes, as it was reduced in the R language and no-gesture conditions. Our findings provide the first empirical evidence suggesting the functional relationship between gamma band oscillations and higher-level semantic processes in a multisensory setting.
Affiliation(s)
- Yifei He: Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Marburg Center for Mind, Brain and Behavior, Philipps-University Marburg, Marburg, Germany
- Arne Nagels: Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- Matthias Schlesewsky: School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, SA, Australia
- Benjamin Straube: Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Marburg Center for Mind, Brain and Behavior, Philipps-University Marburg, Marburg, Germany
18. Drijvers L, Özyürek A. Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language 2018; 177-178:7-17. [PMID: 29421272 DOI: 10.1016/j.bandl.2018.01.003]
Abstract
Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
Affiliation(s)
- Linda Drijvers: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- Asli Özyürek: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
19. Gunter TC, Weinbrenner JED. When to Take a Gesture Seriously: On How We Use and Prioritize Communicative Cues. J Cogn Neurosci 2017; 29:1355-1367. [PMID: 28358659 DOI: 10.1162/jocn_a_01125]
Abstract
When people talk, their speech is often accompanied by gestures. Although it is known that co-speech gestures can influence face-to-face communication, it is currently unclear to what extent they are actively used and under which premises they are prioritized to facilitate communication. We investigated these open questions in two experiments that varied how pointing gestures disambiguate the utterances of an interlocutor. Participants, whose event-related brain responses were measured, watched a video, where an actress was interviewed about, for instance, classical literature (e.g., Goethe and Shakespeare). While responding, the actress pointed systematically to the left side to refer to, for example, Goethe, or to the right to refer to Shakespeare. Her final statement was ambiguous and combined with a pointing gesture. The P600 pattern found in Experiment 1 revealed that, when pointing was unreliable, gestures were only monitored for their cue validity and not used for reference tracking related to the ambiguity. However, when pointing was a valid cue (Experiment 2), it was used for reference tracking, as indicated by a reduced N400 for pointing. In summary, these findings suggest that a general prioritization mechanism is in use that constantly monitors and evaluates the use of communicative cues against communicative priors on the basis of accumulated error information.
Affiliation(s)
- Thomas C Gunter: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
20. Drijvers L, Özyürek A. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension. Journal of Speech, Language, and Hearing Research 2017; 60:212-222. [PMID: 27960196 DOI: 10.1044/2016_jslhr-h-16-0101]
Abstract
PURPOSE: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.
METHOD: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).
RESULTS: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
CONCLUSIONS: When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
Affiliation(s)
- Linda Drijvers: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Asli Özyürek: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
21. Biau E, Morís Fernández L, Holle H, Avila C, Soto-Faraco S. Hand gestures as visual prosody: BOLD responses to audio–visual alignment are modulated by the communicative nature of the stimuli. Neuroimage 2016; 132:129-137. [DOI: 10.1016/j.neuroimage.2016.02.018]
22. Obermeier C, Gunter TC. Multisensory integration: the case of a time window of gesture-speech integration. J Cogn Neurosci 2015; 27:292-307. [PMID: 25061929 DOI: 10.1162/jocn_a_00688]
Abstract
This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this would imply that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.
Affiliation(s)
- Christian Obermeier: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
23. Speaker's hand gestures modulate speech perception through phase resetting of ongoing neural oscillations. Cortex 2015; 68:76-85. [DOI: 10.1016/j.cortex.2014.11.018]
24. The EEG and fMRI signatures of neural integration: An investigation of meaningful gestures and corresponding speech. Neuropsychologia 2015; 72:27-42. [DOI: 10.1016/j.neuropsychologia.2015.04.018]
25. Özyürek A. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2015; 369:20130296. [PMID: 25092664 DOI: 10.1098/rstb.2013.0296]
Abstract
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information coming from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
Affiliation(s)
- Aslı Özyürek: Department of Linguistics, Radboud University Nijmegen, Erasmus Plain 1, 6500 HD, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 JT, The Netherlands
26. Obermeier C, Kelly SD, Gunter TC. A speaker's gesture style can affect language comprehension: ERP evidence from gesture-speech integration. Soc Cogn Affect Neurosci 2015; 10:1236-43. [PMID: 25688095 DOI: 10.1093/scan/nsv011]
Abstract
In face-to-face communication, speech is typically enriched by gestures. Clearly, not all people gesture in the same way, and the present study explores whether such individual differences in gesture style are taken into account during the perception of gestures that accompany speech. Participants were presented with one speaker who gestured in a straightforward way and another who also produced self-touch movements. Adding trials with such grooming movements makes the gesture information a much weaker cue compared with the gestures of the non-grooming speaker. The electroencephalogram (EEG) was recorded as participants watched videos of the individual speakers. Event-related potentials elicited by the speech signal revealed that adding grooming movements attenuated the impact of gesture for this particular speaker. Thus, these data suggest that there is sensitivity to the personal communication style of a speaker and that this affects the extent to which gesture and speech are integrated during language comprehension.
Affiliation(s)
- Christian Obermeier: Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
- Spencer D Kelly: Department of Psychology, Colgate University, Hamilton, NY 13346, USA
- Thomas C Gunter: Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
27. Gunter TC, Weinbrenner JED, Holle H. Inconsistent use of gesture space during abstract pointing impairs language comprehension. Front Psychol 2015; 6:80. [PMID: 25709591 PMCID: PMC4321330 DOI: 10.3389/fpsyg.2015.00080]
Abstract
Pointing toward concrete objects is a well-known and efficient communicative strategy. Much less is known about the communicative effectiveness of abstract pointing, where the pointing gestures are directed to “empty space.” McNeill's (2003) observations suggest that abstract pointing can be used to establish referents in gesture space, without the referents being physically present. Recently, however, it has been shown that abstract pointing typically provides redundant information to the uttered speech, thereby suggesting a very limited communicative value (So et al., 2009). In a first approach to tackling this issue, we were interested to know whether perceivers are sensitive at all to this gesture cue or whether it is completely discarded as irrelevant add-on information. Sensitivity to, for instance, a gesture-speech mismatch would suggest a potential communicative function of abstract pointing. Therefore, we devised a mismatch paradigm in which participants watched a video where a female was interviewed on various topics. During her responses, she established two concepts in space using abstract pointing (e.g., pointing to the left when saying Donald, and pointing to the right when saying Mickey). In the last response to each topic, the pointing gesture accompanying a target word (e.g., Donald) was either consistent or inconsistent with the previously established location. Event-related brain potentials showed an increased N400 and P600 when gesture and speech referred to different referents, indicating that inconsistent use of gesture space impairs language comprehension. Abstract pointing was found to influence comprehension even though gesture was not crucial to understanding the sentences or conducting the experimental task. These data suggest that a referent was retrieved via abstract pointing and that abstract pointing can potentially be used for referent indication in a discourse. We conclude that abstract pointing has a potential communicative function.
Collapse
Affiliation(s)
- Thomas C Gunter
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany
| | - J E Douglas Weinbrenner
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany
| | - Henning Holle
- Department of Psychology, University of Hull Hull, UK
| |
Collapse
|
28
|
Vainiger D, Labruna L, Ivry RB, Lavidor M. Beyond words: evidence for automatic language–gesture integration of symbolic gestures but not dynamic landscapes. Psychol Res 2013; 78:55-69. [DOI: 10.1007/s00426-012-0475-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2012] [Accepted: 12/27/2012] [Indexed: 11/24/2022]
|
29
|
Straube B, Green A, Weis S, Kircher T. A supramodal neural network for speech and gesture semantics: an fMRI study. PLoS One 2012; 7:e51207. [PMID: 23226488 PMCID: PMC3511386 DOI: 10.1371/journal.pone.0051207] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2012] [Accepted: 10/30/2012] [Indexed: 12/03/2022] Open
Abstract
In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and the visual modality depends on the same or on different brain networks remains largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, as well as bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view. Rather, our results indicate that these regions constitute a supramodal semantic processing network.
Collapse
Affiliation(s)
- Benjamin Straube
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany.
| | | | | | | |
Collapse
|
30
|
Obermeier C, Dolk T, Gunter TC. The benefit of gestures during communication: Evidence from hearing and hearing-impaired individuals. Cortex 2012; 48:857-70. [DOI: 10.1016/j.cortex.2011.02.007] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2010] [Revised: 10/30/2010] [Accepted: 02/07/2011] [Indexed: 10/18/2022]
|
31
|
Holle H, Obermeier C, Schmidt-Kassow M, Friederici AD, Ward J, Gunter TC. Gesture facilitates the syntactic analysis of speech. Front Psychol 2012; 3:74. [PMID: 22457657 PMCID: PMC3307377 DOI: 10.3389/fpsyg.2012.00074] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2011] [Accepted: 02/28/2012] [Indexed: 12/02/2022] Open
Abstract
Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research has focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the event-related brain potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are integrated systems. Whereas previous studies have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, such as prosodic emphasis or a moving visual stimulus with the same trajectory as the gesture. This suggests that language comprehension is influenced only by visual emphasis produced with a communicative intention in mind (i.e., beat gestures), not by a simple visual movement lacking such an intention.
Collapse
Affiliation(s)
- Henning Holle
- Department of Psychology, University of Hull Hull, UK
| | | | | | | | | | | |
Collapse
|
32
|
Ibáñez A, Cardona JF, Dos Santos YV, Blenkmann A, Aravena P, Roca M, Hurtado E, Nerguizian M, Amoruso L, Gómez-Arévalo G, Chade A, Dubrovsky A, Gershanik O, Kochen S, Glenberg A, Manes F, Bekinschtein T. Motor-language coupling: direct evidence from early Parkinson's disease and intracranial cortical recordings. Cortex 2012; 49:968-84. [PMID: 22482695 DOI: 10.1016/j.cortex.2012.02.014] [Citation(s) in RCA: 105] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2012] [Revised: 02/28/2012] [Accepted: 02/28/2012] [Indexed: 11/24/2022]
Abstract
Language and action systems are functionally coupled in the brain, as demonstrated by converging evidence from functional magnetic resonance imaging (fMRI), electroencephalography (EEG), transcranial magnetic stimulation (TMS), and lesion studies. In particular, this coupling has been demonstrated using the action-sentence compatibility effect (ACE), in which motor activity and language interact. The ACE task requires participants to listen to sentences describing actions typically performed with an open hand (e.g., clapping), a closed hand (e.g., hammering), or without any hand action (neutral), and to press a large button with either an open or a closed hand position immediately upon comprehending each sentence. The ACE is defined as a longer reaction time (RT) in the action-sentence incompatible conditions than in the compatible conditions. Here we investigated direct motor-language coupling in two novel and uniquely informative ways: first, by measuring the behavioural ACE in patients with motor impairment (early Parkinson's disease, EPD), and second, in epileptic patients with direct electrocorticography (ECoG) recordings. In experiment 1, EPD participants with a preserved general cognitive repertoire showed a much diminished ACE relative to non-EPD volunteers. Moreover, a correlation between ACE performance and action-verb processing (kissing and dancing test, KDT) was observed. Direct cortical recordings (ECoG) in motor and language areas (experiment 2) demonstrated simultaneous bidirectional effects: motor preparation affected language processing (N400 at the left inferior frontal gyrus and middle/superior temporal gyrus), and language processing affected activity in movement-related areas (motor potential at premotor cortex and M1). Our findings show that the ACE paradigm requires ongoing integration of preserved motor and language coupling (abolished in EPD) and engages motor-temporal cortices in a bidirectional way. In addition, both experiments suggest the presence of a motor-language network that is not restricted to somatotopically defined brain areas. These results open new pathways in the fields of motor diseases, theoretical approaches to language understanding, and models of action-perception coupling.
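Because the ACE is simply a difference in mean reaction times between incompatible and compatible action-sentence pairings, it can be computed per participant from a trial-level table. The sketch below is purely illustrative and is not the authors' analysis code; the table layout, column names (participant, condition, rt_ms), and the example values are hypothetical.

# Minimal sketch of computing an action-sentence compatibility effect (ACE)
# score per participant. Hypothetical trial table: one row per trial with
# columns 'participant', 'condition' ('compatible' or 'incompatible'),
# and 'rt_ms' (button-press reaction time in milliseconds).
import pandas as pd

trials = pd.DataFrame({
    "participant": ["p01", "p01", "p01", "p01", "p02", "p02", "p02", "p02"],
    "condition":   ["compatible", "incompatible"] * 4,
    "rt_ms":       [612, 675, 598, 660, 701, 699, 688, 705],
})

# Mean RT per participant and condition.
mean_rt = (
    trials
    .groupby(["participant", "condition"])["rt_ms"]
    .mean()
    .unstack("condition")
)

# ACE = mean RT(incompatible) - mean RT(compatible); a positive value
# reflects the usual compatibility effect, while values near zero would
# correspond to the diminished ACE reported for the EPD group.
mean_rt["ACE_ms"] = mean_rt["incompatible"] - mean_rt["compatible"]
print(mean_rt)

In this toy example the first (hypothetical) participant shows a clear positive ACE and the second shows almost none, which is the kind of per-participant contrast the study compares between EPD patients and non-EPD volunteers.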
Collapse
Affiliation(s)
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Cognitive Neurology (INECO); Favaloro University, Buenos Aires, Argentina.
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|