1. Bluet A, Reynaud E, Federico G, Bryche C, Lesourd M, Fournel A, Lamberton F, Ibarrola D, Rossetti Y, Osiurak F. The technical-reasoning network is recruited when people observe others make or teach how to make tools: An fMRI study. iScience 2025;28:111870. PMID: 39995878; PMCID: PMC11848787; DOI: 10.1016/j.isci.2025.111870.
Abstract
Cumulative technological culture is defined as the increase in the efficiency and complexity of tools over generations. The role of social cognitive skills in cultural transmission has long been acknowledged. However, recent accounts have emphasized the importance of non-social cognitive skills during the social transmission of technical content, with a focus on technical reasoning. Here, we contribute to this double-process approach by reporting an fMRI study on the neurocognitive origins of social learning. Participants watched videos depicting tool-making episodes in three social-learning conditions: reverse engineering, observation, and teaching. Our results showed that the technical-reasoning network, centered on area PF of the left inferior parietal cortex, was preferentially activated when watching tool-making episodes. Additionally, teaching elicited activity in the right middle temporal gyrus. This study suggests that technical reasoning underpins technological culture, while social cognition enhances learners' technical reasoning by guiding attention to key aspects of the technology.
Affiliation(s)
- Alexandre Bluet: Laboratoire d'Étude des Mécanismes Cognitifs, Université de Lyon, Bron, France; Karolinska Institutet, Stockholm, Sweden
- Emanuelle Reynaud: Laboratoire d'Étude des Mécanismes Cognitifs, Université de Lyon, Bron, France
- Giovanni Federico: Laboratory of Experimental and Cognitive Neuroscience, Suor Orsola Benincasa University, Naples, Italy
- Chloé Bryche: Laboratoire d'Étude des Mécanismes Cognitifs, Université de Lyon, Bron, France
- Mathieu Lesourd: Université Marie et Louis Pasteur, INSERM, UMR 1322 LINC, F-25000 Besançon, France
- Arnaud Fournel: Laboratoire d'Étude des Mécanismes Cognitifs, Université de Lyon, Bron, France
- Franck Lamberton: CERMEP-Imagerie du vivant, MRI Department and CNRS UMS3453, Lyon, France
- Danielle Ibarrola: CERMEP-Imagerie du vivant, MRI Department and CNRS UMS3453, Lyon, France
- Yves Rossetti: Centre de Recherche en Neurosciences de Lyon (CRNL), Trajectoires Team (Inserm UMR_S 1028, CNRS UMR 5292, Université de Lyon), Bron, France; Mouvement et Handicap and Neuro-Immersion, Hospices Civils de Lyon et Centre de Recherche en Neurosciences de Lyon, Hôpital Henry Gabrielle, Saint-Genis-Laval, France
- François Osiurak: Laboratoire d'Étude des Mécanismes Cognitifs, Université de Lyon, Bron, France; Institut Universitaire de France, Paris, France
2. Lee Masson H, Chang L, Isik L. Multidimensional neural representations of social features during movie viewing. Soc Cogn Affect Neurosci 2024;19:nsae030. PMID: 38722755; PMCID: PMC11130526; DOI: 10.1093/scan/nsae030.
Abstract
The social world is dynamic and contextually embedded. Yet most studies use simple stimuli that do not capture the complexity of everyday social episodes. To address this, we implemented a movie-viewing paradigm and investigated how everyday social episodes are processed in the brain. Participants watched one of two movies during an MRI scan. Neural patterns were extracted from brain regions involved in social perception, mentalization, action observation and sensory processing. Representational similarity analysis revealed that several labeled social features (including social interaction, mentalization, the actions of others, and characters talking about themselves, about others and about objects) were represented in the superior temporal gyrus (STG) and middle temporal gyrus (MTG). The mentalization feature was also represented throughout the theory-of-mind network, and characters talking about others engaged the temporoparietal junction (TPJ), suggesting that listeners may spontaneously infer the mental states of those being talked about. In contrast, we did not observe action representations in the frontoparietal regions of the action observation network. The current findings indicate that the STG and MTG serve as key regions for social processing, and that listening to characters talk about others elicits spontaneous mental-state inference in the TPJ during natural movie viewing.
Affiliation(s)
- Lucy Chang: Department of Cognitive Science, Johns Hopkins University, Baltimore 21218, USA
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore 21218, USA
3. Trujillo JP, Holler J. Interactionally Embedded Gestalt Principles of Multimodal Human Communication. Perspect Psychol Sci 2023;18:1136-1159. PMID: 36634318; PMCID: PMC10475215; DOI: 10.1177/17456916221141422.
Abstract
Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena, such as verbal language processing and visual scene perception, but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and Prägnanz. Finally, we provide testable predictions that arise from this multimodal, interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
Affiliation(s)
- James P. Trujillo: Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Judith Holler: Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
4. Hintz F, Khoe YH, Strauß A, Psomakas AJA, Holler J. Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cogn Affect Behav Neurosci 2023;23:340-353. PMID: 36823247; PMCID: PMC9949912; DOI: 10.3758/s13415-023-01074-8.
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses, and when they are accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram (EEG) from 60 Dutch adults while they watched videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
Affiliation(s)
- Florian Hintz: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Deutscher Sprachatlas, Philipps University of Marburg, Marburg, Germany
- Yung Han Khoe: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, The Netherlands
5. Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022;377:20210094. PMID: 35876208; PMCID: PMC9310176; DOI: 10.1098/rstb.2021.0094.
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may already have been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler: Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands; Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6. Perniss P, Vinson D, Vigliocco G. Making Sense of the Hands and Mouth: The Role of "Secondary" Cues to Meaning in British Sign Language and English. Cogn Sci 2020;44:e12868. PMID: 32619055; DOI: 10.1111/cogs.12868.
Abstract
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.
Affiliation(s)
- David Vinson: Division of Psychology and Language Sciences, University College London
7. Fröhlich M, Bartolotta N, Fryns C, Wagner C, Momon L, Jaffrezic M, Mitra Setia T, van Noordwijk MA, van Schaik CP. Multicomponent and multisensory communicative acts in orang-utans may serve different functions. Commun Biol 2021;4:917. PMID: 34316012; PMCID: PMC8316500; DOI: 10.1038/s42003-021-02429-y.
Abstract
From early infancy, human face-to-face communication is multimodal, comprising a plethora of interlinked communicative and sensory modalities. Although there is also growing evidence for this in nonhuman primates, previous research has rarely disentangled the production of signals from their perception. Consequently, the functions of integrating articulators (i.e., production organs involved in multicomponent acts) and sensory channels (i.e., modalities involved in multisensory acts) remain poorly understood. Here, we studied close-range social interactions within and beyond mother-infant pairs of Bornean and Sumatran orang-utans living in wild and captive settings, to examine the use of, and responses to, multicomponent and multisensory communication. From the production perspective, results showed that multicomponent acts were used more than the respective unicomponent acts when the presumed goal did not match the dominant outcome for a specific communicative act, and were more common among non-mother-infant dyads and Sumatran orang-utans. From the perception perspective, we found that multisensory acts were more effective than the respective unisensory acts, and were used more in wild than in captive populations. We argue that multisensory acts primarily facilitate effectiveness, whereas multicomponent acts become relevant when interaction outcomes are less predictable. These different functions underscore the importance of distinguishing between production and perception in studies of communication.
Affiliation(s)
- Marlen Fröhlich: Department of Anthropology, University of Zurich, Zurich, Switzerland
- Caroline Fryns: Department of Anthropology, University of Zurich, Zurich, Switzerland
- Colin Wagner: DEPE-IPHC - Département Ecologie, Physiologie et Ethologie, University of Strasbourg, Strasbourg, France
- Laurene Momon: DEPE-IPHC - Département Ecologie, Physiologie et Ethologie, University of Strasbourg, Strasbourg, France
- Marvin Jaffrezic: DEPE-IPHC - Département Ecologie, Physiologie et Ethologie, University of Strasbourg, Strasbourg, France
- Carel P van Schaik: Department of Anthropology, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
8. Godel M, Andrews DS, Amaral DG, Ozonoff S, Young GS, Lee JK, Wu Nordahl C, Schaer M. Altered Gray-White Matter Boundary Contrast in Toddlers at Risk for Autism Relates to Later Diagnosis of Autism Spectrum Disorder. Front Neurosci 2021;15:669194. PMID: 34220428; PMCID: PMC8248433; DOI: 10.3389/fnins.2021.669194.
Abstract
BACKGROUND: Recent neuroimaging studies have highlighted differences in cerebral maturation in individuals with autism spectrum disorder (ASD) compared with typical development. For instance, the contrast of the gray-white matter boundary is decreased in adults with ASD. To determine how gray-white matter boundary integrity relates to early ASD phenotypes, we used a regional structural MRI index of gray-white matter contrast (GWC) in a sample of toddlers at high familial risk for ASD.
MATERIALS AND METHODS: We used a surface-based approach to compute vertex-wise GWC in a longitudinal cohort of toddlers at high risk for ASD imaged twice between 12 and 24 months (n = 20). A full clinical assessment of ASD-related symptoms was performed in conjunction with imaging, and again at 3 years of age for diagnostic outcome. Three outcome groups were defined (ASD, n = 9; typical development, n = 8; non-typical development, n = 3).
RESULTS: ASD diagnostic outcome at age 3 was associated with widespread increases in GWC between 12 and 24 months. Many cortical regions were affected, including regions implicated in social processing and language acquisition. In parallel, we found that early onset of ASD symptoms (i.e., prior to 18 months) was specifically associated with slower rates of GWC change during the second year of life. These alterations were found in areas mainly belonging to the central executive network.
LIMITATIONS: Our study is the first to measure maturational changes in GWC in toddlers who developed autism, but given the limited size of our sample, the results should be considered exploratory and warrant replication in independent and larger samples.
CONCLUSION: These preliminary results suggest that ASD is linked to early alterations of the gray-white matter boundary in widespread brain regions. Early onset of ASD symptoms constitutes an independent clinical parameter associated with a specific neurobiological developmental trajectory. Altered neural migration and/or altered myelination processes potentially explain these findings.
Affiliation(s)
- Michel Godel: Department of Psychiatry, University of Geneva School of Medicine, Geneva, Switzerland
- Derek S. Andrews: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- David G. Amaral: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- Sally Ozonoff: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- Gregory S. Young: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- Joshua K. Lee: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- Christine Wu Nordahl: Department of Psychiatry and Behavioral Sciences, The Medical Investigation of Neurodevelopmental Disorders (MIND) Institute, UC Davis School of Medicine, University of California, Davis, Sacramento, CA, United States
- Marie Schaer: Department of Psychiatry, University of Geneva School of Medicine, Geneva, Switzerland
9. Kandana Arachchige KG, Simoes Loureiro I, Blekic W, Rossignol M, Lefebvre L. The Role of Iconic Gestures in Speech Comprehension: An Overview of Various Methodologies. Front Psychol 2021;12:634074. PMID: 33995189; PMCID: PMC8118122; DOI: 10.3389/fpsyg.2021.634074.
Abstract
Iconic gesture-speech integration is a relatively recent field of investigation, and the results obtained by the numerous researchers studying its various aspects are just as diverse. The definition of iconic gestures is often overlooked in the interpretation of results. Furthermore, while most behavioral studies have demonstrated an advantage of bimodal presentation, brain-activity studies show diverse results regarding the brain regions involved in processing this integration. Clinical studies also yield mixed results, some suggesting parallel processing channels, others a unique and integrated channel. This review aims to draw attention to the methodological variations in research on iconic gesture-speech integration and how they impact conclusions regarding the underlying phenomena. It also attempts to draw together the findings from other relevant research and suggests potential areas for further investigation, in order to better understand the processes at play during gesture-speech integration.
Affiliation(s)
- Wivine Blekic: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
- Mandy Rossignol: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
- Laurent Lefebvre: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
10. He Y, Luell S, Muralikrishnan R, Straube B, Nagels A. Gesture's body orientation modulates the N400 for visual sentences primed by gestures. Hum Brain Mapp 2020;41:4901-4911. PMID: 32808721; PMCID: PMC7643362; DOI: 10.1002/hbm.25166.
Abstract
The body orientation of a gesture conveys social-communicative intention and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication. To date, despite the emergence of neuroscientific literature on the role of body orientation in hand-action perception, few studies have directly investigated the role of body orientation in the interaction between gesture and language. To address this question, we carried out an electroencephalography (EEG) experiment in which participants (n = 21) viewed 5-s videos of frontal and lateral communicative hand gestures (e.g., raising a hand), followed by visually presented sentences that were either congruent or incongruent with the gesture (e.g., "the mountain is high/low…"). Participants performed a semantic probe task, judging whether a target word was related or unrelated to the gesture-sentence event. The EEG results suggest that, during the perception phase of hand gestures, both frontal and lateral gestures elicited a power decrease in the alpha (8-12 Hz) and beta (16-24 Hz) bands, but lateral gestures elicited a smaller beta-band power decrease than frontal gestures, source-localized to the medial prefrontal cortex. For sentence comprehension, at the critical word whose meaning was congruent or incongruent with the gesture prime, frontal gestures elicited an N400 effect for gesture-sentence incongruency. More importantly, this incongruency effect was significantly reduced for lateral gestures. These findings suggest that body orientation plays an important role in gesture perception, and that its inferred social-communicative intention may influence gesture-language interaction at the semantic level.
Affiliation(s)
- Yifei He: Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Svenja Luell: Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- R. Muralikrishnan: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Benjamin Straube: Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Arne Nagels: Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
11. Trujillo JP, Simanova I, Bekkering H, Özyürek A. The communicative advantage: how kinematic signaling supports semantic comprehension. Psychol Res 2020;84:1897-1911. PMID: 31079227; PMCID: PMC7772160; DOI: 10.1007/s00426-019-01198-y.
Abstract
Humans are unique in their ability to communicate information through representational gestures that visually simulate an action (e.g., moving the hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. Whether and how this modulation influences addressees' comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early or late identification. Participants (n = 20) watched videos of representational gestures produced in a more-communicative (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors' faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early from late identification. Accuracy and response time quantified the main effects, and kinematic modulation was tested for correlations with task performance. We found higher identification performance for more-communicative than for less-communicative gestures. However, early identification was enhanced only within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. The results provide insights into processes of mutual understanding, as well as into the creation of artificial communicative agents.
Affiliation(s)
- James P Trujillo: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525 GR Nijmegen, The Netherlands; Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Irina Simanova: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525 GR Nijmegen, The Netherlands
- Harold Bekkering: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Montessorilaan 3, B.01.25, 6525 GR Nijmegen, The Netherlands
- Asli Özyürek: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
12. Suffel A, Nagels A, Steines M, Kircher T, Straube B. Feeling addressed! The neural processing of social communicative cues in patients with major depression. Hum Brain Mapp 2020;41:3541-3554. PMID: 32432387; PMCID: PMC7416026; DOI: 10.1002/hbm.25027.
Abstract
The feeling of being addressed is the first step in a complex processing stream enabling successful social communication. Social impairments are a relevant characteristic of patients with major depressive disorder (MDD). Here, we investigated a mechanism which, if impaired, might contribute to withdrawal or isolation in MDD, namely the neural processing of social cues such as body orientation and gesture. During functional magnetic resonance imaging (fMRI) data acquisition, 33 patients with MDD and 43 healthy control subjects watched video clips of a speaking actor: one version in which a gesture accompanied the speech and one without a gesture. Videos were filmed simultaneously from two different viewpoints: one with the actor facing the viewer head-on (frontal) and one side-view (lateral). After every clip, the participants were instructed to evaluate whether or not they felt addressed. Despite overall comparable addressment ratings and a large overlap between MDD patients and healthy subjects in activation patterns for gesture processing, the anterior cingulate cortex, bilateral superior/middle frontal cortex, and right angular gyrus were more strongly activated in patients than in healthy subjects for the frontal conditions. Our analyses revealed that patients showed specifically higher activation than healthy subjects for the frontal condition without gesture in regions including the posterior cingulate cortex, left prefrontal cortex, and left hippocampus. We conclude that MDD patients can recognize and interpret social cues such as gesture or body orientation; however, they seem to require more neural resources to do so. This additional effort might affect successful communication and contribute to social isolation in MDD.
Affiliation(s)
- Anne Suffel: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Arne Nagels: Department of English and Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- Miriam Steines: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Marburg, Germany
- Tilo Kircher: Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Marburg, Germany; Systems Neuroscience, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Benjamin Straube: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Marburg, Germany
13. Macuch Silva V, Holler J, Ozyurek A, Roberts SG. Multimodality and the origin of a novel communication system in face-to-face interaction. R Soc Open Sci 2020;7:182056. PMID: 32218922; PMCID: PMC7029942; DOI: 10.1098/rsos.182056.
Abstract
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and is therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how the vocal and visual modalities (i.e., gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment in which pairs of participants performed a referential communication task, describing unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only, and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication than vocalization, but that there are also advantages to multimodality compared with using gesture alone. Participants with the option to produce multimodal signals were as accurate as those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
Affiliation(s)
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Asli Ozyurek: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Seán G. Roberts: Department of Archaeology and Anthropology (excd.lab), University of Bristol, Bristol, UK
14. Jouravlev O, Zheng D, Balewski Z, Le Arnz Pongos A, Levan Z, Goldin-Meadow S, Fedorenko E. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019;132:107132. PMID: 31276684; PMCID: PMC6708375; DOI: 10.1016/j.neuropsychologia.2019.107132.
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low (at or slightly above the fixation baseline) when silent videos were processed, again regardless of whether they contained gestures or grooming movements. Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
Affiliation(s)
- Olessia Jouravlev: Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Carleton University, Ottawa, ON K1S 5B6, Canada
- David Zheng: Princeton University, Princeton, NJ 08544, USA
- Zuzanna Balewski: Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Zena Levan: University of Chicago, Chicago, IL 60637, USA
- Evelina Fedorenko: Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, Cambridge, MA 02139, USA; Massachusetts General Hospital, Boston, MA 02114, USA
15. Holler J, Levinson SC. Multimodal Language Processing in Human Communication. Trends Cogn Sci 2019;23:639-652. PMID: 31235320; DOI: 10.1016/j.tics.2019.05.006.
Abstract
The natural ecology of human language is face-to-face interaction comprising the exchange of a plethora of multimodal signals. Trying to understand the psycholinguistic processing of language in its natural niche raises new issues, first and foremost the binding of multiple, temporally offset signals under tight time constraints posed by a turn-taking system. This might be expected to overload and slow our cognitive system, but the reverse is in fact the case. We propose cognitive mechanisms that may explain this phenomenon and call for a multimodal, situated psycholinguistic framework to unravel the full complexities of human language processing.
Affiliation(s)
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Stephen C Levinson: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
16. Demir-Lira ÖE, Asaridou SS, Raja Beharelle A, Holt AE, Goldin-Meadow S, Small SL. Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Dev Sci 2018. PMID: 29516653; DOI: 10.1111/desc.12648.
Abstract
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture-speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in the inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture-speech integration in children overlaps with, but is broader than, the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.
Affiliation(s)
- Salomi S Asaridou: Department of Neurology, University of California, Irvine, Irvine, California, USA
- Anjali Raja Beharelle: Laboratory for Social and Neural Systems Research, Department of Economics, University of Zurich, Zurich, Switzerland
- Anna E Holt: Department of Neurology, University of California, Irvine, Irvine, California, USA
- Steven L Small: Department of Neurology, University of California, Irvine, Irvine, California, USA
17. McGettigan C, Jasmin K, Eisner F, Agnew ZK, Josephs OJ, Calder AJ, Jessop R, Lawson RP, Spielmann M, Scott SK. You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech. Neuropsychologia 2017;100:51-63. PMID: 28400328; PMCID: PMC5446325; DOI: 10.1016/j.neuropsychologia.2017.04.013.
Abstract
Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal-lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze; further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.
Highlights
- Talker gaze is an important social cue during speech comprehension.
- Neural responses to gaze were measured during perception of degraded sentences.
- Gaze direction modulated activation in left-lateralized superior temporal cortex.
- Left lateralization became stronger when speech was less intelligible.
- Results suggest task-dependent flexibility in cortical responses to gaze.
Affiliation(s)
- Carolyn McGettigan: Department of Psychology, Royal Holloway University of London, Egham Hill, Egham TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Kyle Jasmin: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Frank Eisner: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Donders Institute, Radboud University, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- Zarinah K Agnew: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Department of Otolaryngology, University of California, San Francisco, 513 Parnassus Avenue, San Francisco, CA, USA
- Oliver J Josephs: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Andrew J Calder: MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Rosemary Jessop: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Rebecca P Lawson: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Mona Spielmann: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
18. Redcay E, Velnoskey KR, Rowe ML. Perceived communicative intent in gesture and language modulates the superior temporal sulcus. Hum Brain Mapp 2016;37:3444-3461. PMID: 27238550; PMCID: PMC6867447; DOI: 10.1002/hbm.23251.
Abstract
Behavioral evidence and theory suggest that gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates that both gesture and language recruit regions along perisylvian cortex, relatively little work has tested functional segregation within these regions at the individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while they viewed videos of an experimenter producing communicative, Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG; (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective; and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region-of-interest analyses identified neural activation shared between gesture processing (PDG vs. SG) and language processing, using two different language contrasts. Conjunction analyses of gesture (PDG vs. SG) and Third-person Sentences versus Jabberwocky revealed overlap within the left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences versus Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and the left inferior frontal gyrus. Further, parametric modulation using participants' ratings of the stimuli revealed sensitivity of the left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent conveyed through gesture and language.
Affiliation(s)
- Elizabeth Redcay: Department of Psychology, University of Maryland, College Park, Maryland
- Meredith L. Rowe: Graduate School of Education, Harvard University, Cambridge, Massachusetts
19. de C Hamilton AF. Gazing at me: the importance of social meaning in understanding direct-gaze cues. Philos Trans R Soc Lond B Biol Sci 2016;371:20150080. PMID: 26644598; DOI: 10.1098/rstb.2015.0080.
Abstract
Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered.
20. Theofanopoulou C. Implications of Oxytocin in Human Linguistic Cognition: From Genome to Phenome. Front Neurosci 2016;10:271. PMID: 27378840; PMCID: PMC4906233; DOI: 10.3389/fnins.2016.00271.
Abstract
The neurohormone oxytocin (OXT) has been found to mediate the regulation of complex socioemotional cognition in multiple ways, both in humans and in other animals. Recent studies have investigated the effects of OXT at different levels of analysis (from genetic to behavioral), chiefly targeting its impact on the social component of our socio-interactive abilities and only indirectly indicating its implications for their other components. This article aims to shed light on how OXT might modulate the multimodality that characterizes our higher-order linguistic abilities (vocal-auditory-attentional-memory-social systems). Based on evidence from genetic, EEG, fMRI, and behavioral studies, I attempt to establish the promise of this perspective, with the goal of stressing the need for neuropeptide treatments to enter clinical practice.
21.
Abstract
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
22. Rice K, Redcay E. Interaction matters: A perceived social partner alters the neural processing of human speech. Neuroimage 2016;129:480-488. PMID: 26608245; DOI: 10.1016/j.neuroimage.2015.11.041.
Abstract
Mounting evidence suggests that social interaction changes how communicative behaviors (e.g., spoken language, gaze) are processed, but the precise neural bases by which social-interactive context may alter communication remain unknown. Various perspectives suggest that live interactions are more rewarding, more attention-grabbing, or require increased mentalizing, that is, thinking about the thoughts of others. Dissociating between these possibilities is difficult because most extant neuroimaging paradigms examining social interaction have not directly compared live paradigms to conventional "offline" (or recorded) paradigms. We developed a novel fMRI paradigm to assess whether and how an interactive context changes the processing of speech matched in content and vocal characteristics. Participants listened to short vignettes, which contained no reference to people or mental states, believing that some vignettes were prerecorded and that others were presented over a real-time audio feed by a live social partner. In actuality, all speech was prerecorded. Simply believing that speech was live increased activation in each participant's own mentalizing regions, defined using a functional localizer. Contrasting live with recorded speech did not reveal significant differences in attention or reward regions. Further, higher levels of autistic-like traits were associated with altered neural specialization for live interaction. These results suggest that humans engage in ongoing mentalizing about social partners even when such mentalizing is not explicitly required, illustrating how social context shapes social cognition. Understanding communication in social context has important implications for typical and atypical social processing, especially for disorders like autism, where social difficulties are more acute in live interaction.
Collapse
Affiliation(s)
- Katherine Rice, Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Elizabeth Redcay, Department of Psychology, University of Maryland, College Park, MD 20742, USA
23
Rifkin-Graboi A, Kong L, Sim LW, Sanmugam S, Broekman BFP, Chen H, Wong E, Kwek K, Saw SM, Chong YS, Gluckman PD, Fortier MV, Pederson D, Meaney MJ, Qiu A. Maternal sensitivity, infant limbic structure volume and functional connectivity: a preliminary study. Transl Psychiatry 2015; 5:e668. [PMID: 26506054 PMCID: PMC4930120 DOI: 10.1038/tp.2015.133] [Citation(s) in RCA: 71] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/21/2015] [Revised: 07/02/2015] [Accepted: 07/22/2015] [Indexed: 11/30/2022] Open
Abstract
Mechanisms underlying the profound parental effects on cognitive, emotional and social development in humans remain poorly understood. Studies with nonhuman models suggest that variations in parental care affect the limbic system, which is influential in learning, autobiographical memory and emotional regulation. In some research, nonoptimal care is related to decreases in neurogenesis, although other work suggests that early-postnatal social adversity accelerates the maturation of limbic structures associated with emotional learning. We explored whether maternal sensitivity predicts limbic system development and functional connectivity patterns in a small sample of human infants. When infants were 6 months of age, 20 mother-infant dyads attended a laboratory-based observational session, and the infants underwent neuroimaging at the same age. After accounting for age at imaging, household income and postnatal maternal anxiety, regression analyses demonstrated significant indirect associations between maternal sensitivity and bilateral hippocampal volume at 6 months; most associations between sensitivity and the amygdala showed similar indirect, but nonsignificant, results. Moreover, functional analyses revealed direct associations between maternal sensitivity and connectivity between the hippocampus and areas important for emotional regulation and socio-emotional functioning. Sensitivity additionally predicted indirect associations between limbic structures and regions related to autobiographical memory. Our volumetric results are consistent with research indicating accelerated limbic development in response to early social adversity; in combination with our functional results, and if replicated in a larger sample, they may suggest that subtle but important variations in maternal care influence neuroanatomical trajectories important to future cognitive and emotional functioning.
Affiliation(s)
- A Rifkin-Graboi, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Brenner Centre for Molecular Medicine, 30 Medical Drive, Singapore 117609, Singapore
- L Kong, Department of Biomedical Engineering and Clinical Imaging Research Center, National University of Singapore, Singapore, Singapore
- L W Sim, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore, Singapore
- S Sanmugam, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore, Singapore
- B F P Broekman, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore; Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, National University Health System, Singapore, Singapore
- H Chen, Department of Psychological Medicine, KK Women's and Children's Hospital, Duke-National University of Singapore, Singapore, Singapore
- E Wong, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore, Singapore
- K Kwek, Department of Maternal Fetal Medicine, KK Women's and Children's Hospital, Singapore, Singapore
- S-M Saw, Department of Epidemiology, Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
- Y-S Chong, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore; Department of Obstetrics & Gynaecology, Yong Loo Lin School of Medicine, National University of Singapore, National University Health System, Singapore, Singapore
- P D Gluckman, Human Development, Singapore Institute for Clinical Sciences, Singapore, Singapore; Liggins Institute, University of Auckland, Auckland, New Zealand
- M V Fortier, Department of Diagnostic and Interventional Imaging, KK Women's and Children's Hospital, Singapore, Singapore
- D Pederson, Department of Psychology, University of Western Ontario, London, Ontario, Canada
- M J Meaney, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore; Department of Neurosciences, Ludmer Centre for Neuroinformatics and Mental Health, Douglas Mental Health University Institute, McGill University, Montreal, Quebec, Canada; Sackler Program for Epigenetics and Psychobiology, Douglas Mental Health University Institute, McGill University, Montreal, Quebec, Canada
- A Qiu, Integrative Neuroscience Program, Singapore Institute for Clinical Sciences, Singapore; Clinical Imaging Research Center, National University of Singapore, Singapore; Department of Biomedical Engineering, National University of Singapore, 9 Engineering Drive 1, Block EA #03-12, Singapore 117576, Singapore
24
The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies. Neurosci Biobehav Rev 2015; 57:88-104. [DOI: 10.1016/j.neubiorev.2015.08.006] [Citation(s) in RCA: 65] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2015] [Revised: 07/13/2015] [Accepted: 08/06/2015] [Indexed: 11/18/2022]
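A note on the bracketed metrics in these entries: the "Impact Index Per Article" values (e.g., 65 citations yielding 6.5 here, 32 yielding 3.2 for Rice & Redcay above) are consistent with a simple citations-per-year-since-publication ratio. The following is a minimal Python sketch under that assumed definition, with a 2025 indexing year also assumed; it illustrates the arithmetic rather than documenting the aggregator's actual formula.

def impact_index_per_article(citations: int, pub_year: int, index_year: int = 2025) -> float:
    """Citations accrued per year since publication (assumed definition)."""
    years = index_year - pub_year
    if years <= 0:
        raise ValueError("index year must be later than publication year")
    return round(citations / years, 1)

# Spot-checks against 2015 entries in this list:
assert impact_index_per_article(65, 2015) == 6.5  # meta-analysis entry above
assert impact_index_per_article(32, 2015) == 3.2  # Rice & Redcay
assert impact_index_per_article(71, 2015) == 7.1  # Rifkin-Graboi et al.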
25
Nagels A, Kircher T, Steines M, Straube B. Feeling addressed! The role of body orientation and co-speech gesture in social communication. Hum Brain Mapp 2015; 36:1925-36. [PMID: 25640962 PMCID: PMC6869376 DOI: 10.1002/hbm.22746] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2014] [Revised: 01/12/2015] [Accepted: 01/14/2015] [Indexed: 11/06/2022] Open
Abstract
During face-to-face communication, body orientation and coverbal gestures influence how information is conveyed. The neural pathways underpinning the comprehension of such nonverbal social cues in everyday interaction are still partly unknown. During fMRI data acquisition, 37 participants were presented with video clips showing an actor speaking short sentences. The actor produced speech-associated iconic gestures (IC) or no gestures (NG) while he was visible either from an egocentric (ego) or from an allocentric (allo) position. Participants were asked to indicate via button press whether they felt addressed or not. We found a significant interaction of body orientation and gesture on addressment evaluations, indicating that participants evaluated the IC-ego condition as the most addressing. The anterior cingulate cortex (ACC) and left fusiform gyrus were more strongly activated for the egocentric than for the allocentric actor position in the gesture context. The activation increase in the ACC for IC-ego>IC-allo further correlated positively with increased addressment ratings in the egocentric gesture condition. Gesture-related activation increases in the supplementary motor area, left inferior frontal gyrus and right insula correlated positively with gesture-related increases in addressment evaluations in the egocentric context. These results indicate that gesture use and body orientation contribute to the feeling of being addressed and together influence neural processing in brain regions involved in motor simulation, empathy and mentalizing.
Affiliation(s)
- Arne Nagels, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Str. 8, 35039 Marburg, Germany
- Tilo Kircher, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Str. 8, 35039 Marburg, Germany
- Miriam Steines, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Str. 8, 35039 Marburg, Germany
- Benjamin Straube, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Rudolf-Bultmann-Str. 8, 35039 Marburg, Germany
26
Özyürek A. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2015; 369:20130296. [PMID: 25092664 DOI: 10.1098/rstb.2013.0296] [Citation(s) in RCA: 74] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about the processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the N400 electrophysiological component, which is sensitive to the ease of semantic integration of a word into the previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information coming from both channels recruits brain areas such as the left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulates the integration process. Whether these findings are specific to gestures or are shared with actions, other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
Affiliation(s)
- Aslı Özyürek, Department of Linguistics, Radboud University Nijmegen, Erasmus Plain 1, 6500 HD Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 JT Nijmegen, The Netherlands
27
Obermeier C, Kelly SD, Gunter TC. A speaker's gesture style can affect language comprehension: ERP evidence from gesture-speech integration. Soc Cogn Affect Neurosci 2015; 10:1236-43. [PMID: 25688095 DOI: 10.1093/scan/nsv011] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2014] [Accepted: 02/09/2015] [Indexed: 11/13/2022] Open
Abstract
In face-to-face communication, speech is typically enriched by gestures. Clearly, not all people gesture in the same way, and the present study explores whether such individual differences in gesture style are taken into account during the perception of gestures that accompany speech. Participants were presented with one speaker who gestured in a straightforward way and another who also produced self-touch movements. Adding trials with such grooming movements makes the gesture information a much weaker cue for this speaker than for the non-grooming speaker. The electroencephalogram (EEG) was recorded as participants watched videos of the individual speakers. Event-related potentials elicited by the speech signal revealed that adding grooming movements attenuated the impact of gesture for this particular speaker. Thus, these data suggest that listeners are sensitive to the personal communication style of a speaker and that this sensitivity affects the extent to which gesture and speech are integrated during language comprehension.
Collapse
Affiliation(s)
- Christian Obermeier, Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
- Spencer D Kelly, Colgate University, Department of Psychology, Hamilton, NY 13346, USA
- Thomas C Gunter, Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany