1
Orlandi A, Candidi M. Toward a neuroaesthetics of interactions: Insights from dance on the aesthetics of individual and interacting bodies. iScience 2025; 28:112365. PMID: 40330884; PMCID: PMC12051600; DOI: 10.1016/j.isci.2025.112365.
Abstract
Neuroaesthetics has advanced our understanding of the neural processes underlying human aesthetic evaluation of crafted and natural entities, including the human body. While much research has examined the neurocognitive mechanisms behind evaluating "single-body" forms and movements, the perception and aesthetic evaluation of multiple individuals moving together have only recently gained attention. This review examines the neural foundations of static and dynamic body perception and how neural representations of observed and executed movements influence their aesthetic evaluation. Focusing on dance, it describes the role of stimulus features and individual characteristics in movement aesthetics. We review neural systems supporting visual processing of social interactions and propose a role for these systems in the aesthetic evaluation of interpersonal interactions, defined as the neuroaesthetics of interactions. Our goal is to highlight the benefits of integrating insights and methods from social cognition, neuroscience, and neuroaesthetics to understand mechanisms underlying interaction aesthetics, while addressing future challenges.
Affiliation(s)
- Andrea Orlandi
- Department of Psychology, Sapienza University, Rome, Italy
- IRCCS Santa Lucia Foundation, Rome, Italy
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Matteo Candidi
- Department of Psychology, Sapienza University, Rome, Italy
- IRCCS Santa Lucia Foundation, Rome, Italy
2
Liu S, Yu L, Ren J, Zhang M, Luo W. The neural representation of body orientation and emotion from biological motion. Neuroimage 2025; 310:121163. PMID: 40118232; DOI: 10.1016/j.neuroimage.2025.121163.
Abstract
The perception of human body orientation and emotion in others provides crucial insights into their intentions. While significant research has explored the brain's representation of body orientation and emotion processing, their possible combined representation remains less well understood. In this study, functional magnetic resonance imaging was employed to investigate this issue. Participants were shown point-light displays and tasked with recognizing both body emotion and orientation. The analysis of functional activation revealed that the extrastriate body area encoded emotion, while the precentral gyrus and postcentral gyrus encoded body orientation. Additionally, results from multivariate pattern analysis and representational similarity analysis demonstrated that the lingual gyrus, precentral gyrus, and postcentral gyrus played a critical role in processing body orientation, whereas the lingual gyrus and extrastriate body area were crucial for processing emotion. Furthermore, a commonality analysis found that the neural representations of emotion and body orientation in the lingual and precentral gyri were not interacting but competing. Lastly, a notable interaction between hemisphere and body orientation in the connectivity analysis showed that coupling between the inferior parietal lobule and the left precentral gyrus was more sensitive to a 90° body orientation, while coupling between the inferior parietal lobule and the right precentral gyrus was sensitive to 0° and 45° body orientations. Overall, these findings suggest a competing relationship between the neural representations of body orientation and emotion in the lingual gyrus and precentral gyrus when point-light displays are viewed, and that the two hemispheres play different roles in encoding different body orientations.
Affiliation(s)
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Lu Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Jie Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
3
Reilly J, Shain C, Borghesani V, Kuhnke P, Vigliocco G, Peelle JE, Mahon BZ, Buxbaum LJ, Majid A, Brysbaert M, Borghi AM, De Deyne S, Dove G, Papeo L, Pexman PM, Poeppel D, Lupyan G, Boggio P, Hickok G, Gwilliams L, Fernandino L, Mirman D, Chrysikou EG, Sandberg CW, Crutch SJ, Pylkkänen L, Yee E, Jackson RL, Rodd JM, Bedny M, Connell L, Kiefer M, Kemmerer D, de Zubicaray G, Jefferies E, Lynott D, Siew CSQ, Desai RH, McRae K, Diaz MT, Bolognesi M, Fedorenko E, Kiran S, Montefinese M, Binder JR, Yap MJ, Hartwigsen G, Cantlon J, Bi Y, Hoffman P, Garcea FE, Vinson D. What we mean when we say semantic: Toward a multidisciplinary semantic glossary. Psychon Bull Rev 2025; 32:243-280. PMID: 39231896; PMCID: PMC11836185; DOI: 10.3758/s13423-024-02556-7.
Abstract
Tulving characterized semantic memory as a vast repository of meaning that underlies language and many other cognitive processes. This perspective on lexical and conceptual knowledge galvanized a new era of research undertaken by numerous fields, each with their own idiosyncratic methods and terminology. For example, "concept" has different meanings in philosophy, linguistics, and psychology. As such, many fundamental constructs used to delineate semantic theories remain underspecified and/or opaque. Weak construct specificity is among the leading causes of the replication crisis now facing psychology and related fields. Term ambiguity hinders cross-disciplinary communication, falsifiability, and incremental theory-building. Numerous cognitive subdisciplines (e.g., vision, affective neuroscience) have recently addressed these limitations via the development of consensus-based guidelines and definitions. The project to follow represents our effort to produce a multidisciplinary semantic glossary consisting of succinct definitions, background, principled dissenting views, ratings of agreement, and subjective confidence for 17 target constructs (e.g., abstractness, abstraction, concreteness, concept, embodied cognition, event semantics, lexical-semantic, modality, representation, semantic control, semantic feature, simulation, semantic distance, semantic dimension). We discuss potential benefits and pitfalls (e.g., implicit bias, prescriptiveness) of these efforts to specify a common nomenclature that other researchers might index in specifying their own theoretical perspectives (e.g., They said X, but I mean Y).
Affiliation(s)
- Cory Shain
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Philipp Kuhnke
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Laurel J Buxbaum
- Thomas Jefferson University, Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Guy Dove
- University of Louisville, Louisville, KY, USA
- Liuba Papeo
- Centre National de La Recherche Scientifique (CNRS), University Claude-Bernard Lyon, Lyon, France
- Paulo Boggio
- Universidade Presbiteriana Mackenzie, São Paulo, Brazil
- Eiling Yee
- University of Connecticut, Storrs, CT, USA
- Ken McRae
- Western University, London, ON, Canada
- Melvin J Yap
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- National University of Singapore, Singapore, Singapore
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Yanchao Bi
- University of Edinburgh, Edinburgh, UK
- Beijing Normal University, Beijing, China
4
Abassi E, Bognár A, de Gelder B, Giese M, Isik L, Lappe A, Mukovskiy A, Solanas MP, Taubert J, Vogels R. Neural Encoding of Bodies for Primate Social Perception. J Neurosci 2024; 44:e1221242024. PMID: 39358024; PMCID: PMC11450534; DOI: 10.1523/jneurosci.1221-24.2024.
Abstract
Primates, as social beings, have evolved complex brain mechanisms to navigate intricate social environments. This review explores the neural bases of body perception in both human and nonhuman primates, emphasizing the processing of social signals conveyed by body postures, movements, and interactions. Early studies identified selective neural responses to body stimuli in macaques, particularly within and ventral to the superior temporal sulcus (STS). These regions, known as body patches, represent visual features that are present in bodies but do not appear to be semantic body detectors; they provide information about the posture and viewpoint of the body. Recent research using dynamic stimuli has expanded the understanding of the body-selective network, highlighting its complexity and the interplay between static and dynamic processing. In humans, body-selective areas such as the extrastriate body area (EBA) and fusiform body area (FBA) have been implicated in the perception of bodies and their interactions. Moreover, studies on social interactions reveal that regions in the human STS are also tuned to the perception of dyadic interactions, suggesting a specialized social lateral pathway. Computational work has developed models of body recognition and social interaction, providing insights into the underlying neural mechanisms. Despite these advances, significant gaps remain in understanding the neural mechanisms of body perception and social interaction. Overall, this review underscores the importance of integrating findings across species to comprehensively understand the neural foundations of body perception, and the value of the interplay between computational modeling and neural recording.
Affiliation(s)
- Etienne Abassi
- The Neuro, Montreal Neurological Institute-Hospital, McGill University, Montréal, QC H3A 2B4, Canada
- Anna Bognár
- Department of Neuroscience, KU Leuven, Leuven 3000, Belgium
- Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
- Bea de Gelder
- Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, Netherlands
- Martin Giese
- Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Leyla Isik
- Cognitive Science, Johns Hopkins University, Baltimore, Maryland 21218
- Alexander Lappe
- Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Albert Mukovskiy
- Section Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Centre for Integrative Neuroscience, University Clinic Tuebingen, Tuebingen D-72076, Germany
- Marta Poyo Solanas
- Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, Netherlands
- Jessica Taubert
- The School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- Rufin Vogels
- Department of Neuroscience, KU Leuven, Leuven 3000, Belgium
- Leuven Brain Institute, KU Leuven, Leuven 3000, Belgium
5
Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024; 60:3557-3571. PMID: 38706370; DOI: 10.1111/ejn.16356.
Abstract
Extensive research has shown that observers are able to efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles - synchrony and common fate - on the grouping of biological movements. In Experiment 1, we found that brain responses coupled to four point-light figures walking together were enhanced when the figures moved in sync versus out of sync, but only when they were presented upright. In contrast, we found no effect of movement direction (i.e., common fate). In Experiment 2, we ruled out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
6
Tsantani M, Yon D, Cook R. Neural Representations of Observed Interpersonal Synchrony/Asynchrony in the Social Perception Network. J Neurosci 2024; 44:e2009222024. PMID: 38527811; PMCID: PMC11097257; DOI: 10.1523/jneurosci.2009-22.2024.
Abstract
The visual perception of individuals is thought to be mediated by a network of regions in the occipitotemporal cortex that supports specialized processing of faces, bodies, and actions. In comparison, we know relatively little about the neural mechanisms that support the perception of multiple individuals and the interactions between them. The present study sought to elucidate the visual processing of social interactions by identifying which regions of the social perception network represent interpersonal synchrony. In an fMRI study with 32 human participants (26 female, 6 male), we used multivoxel pattern analysis to investigate whether activity in face-selective, body-selective, and interaction-sensitive regions across the social perception network supports the decoding of synchronous versus asynchronous head-nodding and head-shaking. Several regions were found to support significant decoding of synchrony/asynchrony, including extrastriate body area (EBA), face-selective and interaction-sensitive mid/posterior right superior temporal sulcus, and occipital face area. We also saw robust cross-classification across actions in the EBA, suggestive of movement-invariant representations of synchrony/asynchrony. Exploratory whole-brain analyses also identified a region of the right fusiform cortex that responded more strongly to synchronous than to asynchronous motion. Critically, perceiving interpersonal synchrony/asynchrony requires the simultaneous extraction and integration of dynamic information from more than one person. Hence, the representation of synchrony/asynchrony cannot be attributed to augmented or additive processing of individual actors. Our findings therefore provide important new evidence that social interactions recruit dedicated visual processing within the social perception network that extends beyond that engaged by the faces and bodies of the constituent individuals.
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Daniel Yon
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Richard Cook
- School of Psychology, University of Leeds, Leeds LS2 9JU, United Kingdom
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
7
Papeo L. What is abstract about seeing social interactions? Trends Cogn Sci 2024; 28:390-391. PMID: 38632008; DOI: 10.1016/j.tics.2024.02.004.
Affiliation(s)
- Liuba Papeo
- Institute of Cognitive Sciences Marc Jeannerod - UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, France
8
Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. PMID: 38124013; PMCID: PMC10860595; DOI: 10.1523/jneurosci.0250-23.2023.
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
9
Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024; 34:343-351.e5. PMID: 38181794; DOI: 10.1016/j.cub.2023.12.009.
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than that of non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA, thus, causally supports the efficient perception of social interactions.
Affiliation(s)
- Marco Gandolfo
- Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands; Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Etienne Abassi
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Eva Balgova
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK; Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
10
McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. PMID: 37918399; PMCID: PMC10841461; DOI: 10.1016/j.cub.2023.10.015.
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream, and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
11
McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. PMID: 37805385; PMCID: PMC10841760; DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
12
Le DT, Tsuyuhara M, Kuwamura H, Kitano K, Nguyen TD, Duc Nguyen T, Fujita N, Watanabe T, Nishijo H, Mihara M, Urakawa S. Regional activity and effective connectivity within the frontoparietal network during precision walking with visual cueing: an fNIRS study. Cereb Cortex 2023; 33:11157-11169. PMID: 37757479; DOI: 10.1093/cercor/bhad354.
Abstract
Precision walking (PW) incorporates precise step adjustments into regular walking patterns to navigate challenging surroundings. However, the brain processes involved in PW control, which encompass cortical regions and interregional interactions, are not fully understood. This study aimed to investigate the changes in regional activity and effective connectivity within the frontoparietal network associated with PW. Functional near-infrared spectroscopy data were recorded from adult subjects during treadmill walking tasks, including normal walking (NOR) and PW with visual cues, wherein the intercue distance was either fixed (FIX) or randomly varied (VAR) across steps. The superior parietal lobule (SPL), dorsal premotor area (PMd), supplementary motor area (SMA), and dorsolateral prefrontal cortex (dlPFC) were specifically targeted. The results revealed higher activities in SMA and left PMd, as well as left-to-right SPL connectivity, in VAR than in FIX. Activities in SMA and right dlPFC, along with dlPFC-to-SPL connectivity, were higher in VAR than in NOR. Overall, these findings provide insights into the roles of different brain regions and connectivity patterns within the frontoparietal network in facilitating gait control during PW, providing a useful baseline for further investigations into brain networks involved in locomotion.
Affiliation(s)
- Duc Trung Le
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Department of Neurology, Vietnam Military Medical University, No. 261 Phung Hung Street, Ha Dong District, Hanoi 12108, Vietnam
- Masato Tsuyuhara
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Hiroki Kuwamura
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Kento Kitano
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Thu Dang Nguyen
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Thuan Duc Nguyen
- Department of Neurology, Vietnam Military Medical University, No. 261 Phung Hung Street, Ha Dong District, Hanoi 12108, Vietnam
- Naoto Fujita
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
- Tatsunori Watanabe
- Faculty of Health Sciences, Aomori University of Health and Welfare, 58-1 Mase, Hamadate, Aomori-city, Aomori 030-8505, Japan
- Hisao Nishijo
- Department of System Emotional Science, Graduate School of Medicine and Pharmaceutical Science, University of Toyama, Sugitani 2630, Toyama 930-0194, Japan
- Faculty of Human Sciences, University of East Asia, 2-12-1 Ichinomiya Gakuen-cho, Shimonoseki City, Yamaguchi 751-8503, Japan
- Masahito Mihara
- Department of Neurology, Kawasaki Medical School, 577 Matsushima, Kurashiki City, Okayama 701-0192, Japan
- Susumu Urakawa
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima City, Hiroshima 734-8553, Japan
13
|
Nudnou I, Post A, Saville A, Balas B. Putting people in context: ERP responses to bodies in natural scenes. PLoS One 2023; 18:e0283673. [PMID: 37883414 PMCID: PMC10602242 DOI: 10.1371/journal.pone.0283673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 03/13/2023] [Indexed: 10/28/2023] Open
Abstract
The N190 is a body-sensitive ERP component that responds to images of human bodies in different poses. In natural settings, bodies vary in posture and appear within complex, cluttered environments, frequently with other people. In many studies, however, such variability is absent. How does the N190 response change when observers see images that incorporate these sources of variability? In two experiments (N = 16 each), we varied the natural appearance of upright and inverted bodies to examine how the N190 amplitude, latency, and the Body-Inversion Effect (BIE) were affected by natural variability. In Experiment 1, we varied the number of people present in upright and inverted naturalistic scenes such that only one body, a subitizable number of bodies, or a "crowd" was present. In Experiment 2, we varied the natural body appearance by presenting bodies either as silhouettes or with photographic detail. Further, we varied the natural background appearance by either removing it or presenting individual bodies within a rich environment. Using component-based analyses of the N190, we found that increasing the number of bodies in a scene reduced the N190 amplitude but did not affect the BIE (Experiment 1). Naturalistic body and background appearance (Experiment 2) also affected the N190, such that component amplitude was dramatically reduced by naturalistic appearance. To complement this analysis, we examined the contribution of spatiotemporal features (i.e., electrode × time point amplitude) via SVM decoding. This technique allows us to examine which timepoints across the entire waveform contribute the most to successful decoding of body orientation in each condition. This analysis revealed that later timepoints (after 300 ms) contribute most to successful orientation decoding. These results demonstrate that natural appearance variability affects body processing at the N190 and that later ERP components may make important contributions to body processing in natural scenes.
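The time-resolved decoding approach described in this abstract can be sketched on synthetic data (this is an illustration, not the authors' pipeline): a linear SVM is trained at each timepoint on electrode amplitudes, and cross-validated accuracy is compared between early and late windows. All variable names, dimensions, and the injected "late" signal are placeholders.

```python
# Hedged sketch of time-resolved ERP decoding. Synthetic data stand in
# for real EEG recordings; a class difference is injected only at late
# timepoints, mimicking late-window decodability of body orientation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_times = 80, 32, 50
X = rng.normal(size=(n_trials, n_electrodes, n_times))
y = rng.integers(0, 2, size=n_trials)   # 0 = upright, 1 = inverted (placeholder labels)
X[y == 1, :, 30:] += 0.8                # simulated late-latency class signal

# One linear SVM per timepoint, 5-fold cross-validated accuracy
accuracy = np.array([
    cross_val_score(SVC(kernel="linear"), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"early-window mean accuracy: {accuracy[:25].mean():.2f}")
print(f"late-window mean accuracy:  {accuracy[35:].mean():.2f}")
```

Because the simulated signal is confined to late samples, decoding accuracy is near chance early and high late, which is the pattern of evidence the authors report for orientation decoding after 300 ms.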
Affiliation(s)
- Ilya Nudnou: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Abigail Post: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Alyson Saville: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Benjamin Balas: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America

14
Zhang M, Yu L, Zhang K, Du B, Zhan B, Jia S, Chen S, Han F, Li Y, Liu S, Yi X, Liu S, Luo W. Construction and validation of the Dalian emotional movement open-source set (DEMOS). Behav Res Methods 2023; 55:2353-2366. [PMID: 35931937 DOI: 10.3758/s13428-022-01887-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/24/2022] [Indexed: 11/08/2022]
Abstract
Human body movements are important for emotion recognition and social communication and have received extensive attention from researchers. In this field, emotional biological motion stimuli, as depicted by point-light displays, are widely used. However, the number of stimuli in the existing material library is small, and there is a lack of standardized indicators, which subsequently limits experimental design and conduction. Therefore, based on our prior kinematic dataset, we constructed the Dalian Emotional Movement Open-source Set (DEMOS) using computational modeling. The DEMOS has three views (i.e., frontal 0°, left 45°, and left 90°) and in total comprises 2664 high-quality videos of emotional biological motion, each displaying happiness, sadness, anger, fear, disgust, and neutral. All stimuli were validated in terms of recognition accuracy, emotional intensity, and subjective movement. The objective movement for each expression was also calculated. The DEMOS can be downloaded for free from https://osf.io/83fst/ . To our knowledge, this is the largest multi-view emotional biological motion set based on the whole body. The DEMOS can be applied in many fields, including affective computing, social cognition, and psychiatry.
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, China
- Bixuan Du: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Bin Zhan: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shuxin Jia: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shaohua Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Fengxu Han: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yiwen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xi Yi: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China

15
Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? IMAGING NEUROSCIENCE (CAMBRIDGE, MASS.) 2023; 1:1-20. [PMID: 37719835 PMCID: PMC10503480 DOI: 10.1162/imag_a_00003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 09/19/2023]
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel: Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn: Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom

16
Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. [PMID: 37279640 DOI: 10.1016/j.cortex.2023.04.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 03/31/2023] [Accepted: 04/30/2023] [Indexed: 06/08/2023]
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
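The frequency-tagging logic in this abstract (integration signaled by responses at intermodulation frequencies nF1 ± mF2) can be sketched in a few lines. The tagging frequencies below (2.5 Hz and 3.0 Hz) are placeholders, not the values used in the study.

```python
# Illustrative computation of intermodulation (IM) frequencies nF1 ± mF2
# for two tagging frequencies F1 and F2. A response at an IM frequency
# implies a non-linear combination of the two tagged inputs, i.e.,
# integration of the two flickering bodies into one representation.
def intermodulation_freqs(f1, f2, max_order=2):
    """Return sorted positive terms n*f1 + m*f2 and |n*f1 - m*f2| for n, m >= 1."""
    ims = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            ims.add(round(n * f1 + m * f2, 6))
            diff = abs(n * f1 - m * f2)
            if diff > 0:  # exclude the zero-frequency (DC) term
                ims.add(round(diff, 6))
    return sorted(ims)

print(intermodulation_freqs(2.5, 3.0))
# [0.5, 1.0, 2.0, 3.5, 5.5, 8.0, 8.5, 11.0]
```

In an analysis like the one described, spectral power would be compared at these IM frequencies between the face-to-face and back-to-back conditions, with the fundamentals (F1, F2) and their harmonics serving as per-body responses.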
Affiliation(s)
- Nicolas Goupil: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France

17
Wang R, Lu X, Jiang Y. Distributed and hierarchical neural encoding of multidimensional biological motion attributes in the human brain. Cereb Cortex 2023; 33:8510-8522. [PMID: 37118887 PMCID: PMC10786095 DOI: 10.1093/cercor/bhad136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/31/2023] [Accepted: 04/01/2023] [Indexed: 04/30/2023] Open
Abstract
The human visual system can efficiently extract distinct physical, biological, and social attributes (e.g. facing direction, gender, and emotional state) from biological motion (BM), but how these attributes are encoded in the brain remains largely unknown. In the current study, we used functional magnetic resonance imaging to investigate this issue when participants viewed multidimensional BM stimuli. Using multiple regression representational similarity analysis, we identified distributed brain areas, respectively, related to the processing of facing direction, gender, and emotional state conveyed by BM. These brain areas are governed by a hierarchical structure in which the respective neural encoding of facing direction, gender, and emotional state is modulated by each other in descending order. We further revealed that a portion of the brain areas identified in representational similarity analysis was specific to the neural encoding of each attribute and correlated with the corresponding behavioral results. These findings unravel the brain networks for encoding BM attributes in consideration of their interactions, and highlight that the processing of multidimensional BM attributes is recurrently interactive.
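The multiple regression representational similarity analysis (RSA) named in this abstract can be sketched as follows. This is a minimal illustration with simulated labels and a simulated neural RDM, not the study's data or code: each attribute (facing direction, gender, emotion) yields a model dissimilarity matrix, and regression weights estimate each attribute's unique contribution to the neural RDM.

```python
# Hedged sketch of multiple-regression RSA. Binary/categorical attribute
# labels are simulated; the "neural" RDM is built to depend mostly on
# facing direction, so its regression weight should dominate.
import numpy as np

rng = np.random.default_rng(1)
n_stim = 24
direction = rng.integers(0, 2, n_stim)   # placeholder attribute labels
gender = rng.integers(0, 2, n_stim)
emotion = rng.integers(0, 3, n_stim)

def model_rdm(labels):
    """Model dissimilarity: 1 where two stimuli differ on the attribute, else 0."""
    return (labels[:, None] != labels[None, :]).astype(float)

iu = np.triu_indices(n_stim, k=1)        # unique stimulus pairs (upper triangle)
X = np.column_stack([model_rdm(a)[iu] for a in (direction, gender, emotion)])

# Simulated neural RDM: driven by direction (0.9) and weakly by gender (0.2)
neural = 0.9 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, X.shape[0])

design = np.column_stack([np.ones(len(neural)), X])   # intercept + 3 model RDMs
beta, *_ = np.linalg.lstsq(design, neural, rcond=None)
print(f"direction={beta[1]:.2f}, gender={beta[2]:.2f}, emotion={beta[3]:.2f}")
```

Regressing the neural RDM on all model RDMs simultaneously, rather than correlating with each in turn, is what lets the analysis separate attributes whose model RDMs are themselves correlated.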
Affiliation(s)
- Ruidi Wang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Xiqian Lu: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Yi Jiang: State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China; Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China

18
Preißler L, Keck J, Krüger B, Munzert J, Schwarzer G. Recognition of emotional body language from dyadic and monadic point-light displays in 5-year-old children and adults. J Exp Child Psychol 2023; 235:105713. [PMID: 37331307 DOI: 10.1016/j.jecp.2023.105713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 04/13/2023] [Accepted: 05/16/2023] [Indexed: 06/20/2023]
Abstract
Most child studies on emotion perception used faces and speech as emotion stimuli, but little is known about children's perception of emotions conveyed by body movements, that is, emotional body language (EBL). This study aimed to investigate whether processing advantages for positive emotions in children and negative emotions in adults found in studies on emotional face and term perception also occur in EBL perception. We also aimed to uncover which specific movement features of EBL contribute to emotion perception from interactive dyads compared with noninteractive monads in children and adults. We asked 5-year-old children and adults to categorize happy and angry point-light displays (PLDs), presented as pairs (dyads) and single actors (monads), in a button-press task. By applying representational similarity analyses, we determined intra- and interpersonal movement features of the PLDs and their relation to the participants' emotional categorizations. Results showed significantly higher recognition of happy PLDs in 5-year-olds and of angry PLDs in adults in monads but not in dyads. In both age groups, emotion recognition depended significantly on kinematic and postural movement features such as limb contraction and vertical movement in monads and dyads, whereas in dyads recognition also relied on interpersonal proximity measures such as interpersonal distance. Thus, EBL processing in monads seems to undergo a similar developmental shift from a positivity bias to a negativity bias, as was previously found for emotional faces and terms. Despite these age-specific processing biases, children and adults seem to use similar movement features in EBL processing.
Affiliation(s)
- Lucie Preißler: Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany
- Johannes Keck: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Britta Krüger: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Jörn Munzert: Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Gudrun Schwarzer: Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany

19
Landsiedel J, Daughters K, Downing PE, Koldewyn K. The role of motion in the neural representation of social interactions in the posterior temporal cortex. Neuroimage 2022; 262:119533. [PMID: 35931309 PMCID: PMC9485464 DOI: 10.1016/j.neuroimage.2022.119533] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 07/15/2022] [Accepted: 08/01/2022] [Indexed: 11/30/2022] Open
Abstract
Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), amongst others, as an important region for processing social interaction. This research, however, has presented images or videos, and thus the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely with static cues. Regions in the social brain, including SI-pSTS and extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than is EBA, both bilateral SI-pSTS and EBA showed a greater response to social interactions compared to non-interactions and both regions responded more strongly to videos than static images. Indeed, both regions showed higher responses to interactions than independent actions in videos and intact sequences, but not in other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations, and selectivity for static images of bodies, make positive and independent contributions to this effect across the LOTC region. 
Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
Affiliation(s)
- Paul E Downing: School of Human and Behavioural Sciences, Bangor University
- Kami Koldewyn: School of Human and Behavioural Sciences, Bangor University

20
Yin J, Csibra G, Tatone D. Structural asymmetries in the representation of giving and taking events. Cognition 2022; 229:105248. [PMID: 35961163 DOI: 10.1016/j.cognition.2022.105248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 07/28/2022] [Accepted: 08/01/2022] [Indexed: 11/15/2022]
Abstract
Across languages, GIVE and TAKE verbs have different syntactic requirements: GIVE mandates a patient argument to be made explicit in the clause structure, whereas TAKE does not. Experimental evidence suggests that this asymmetry is rooted in prelinguistic assumptions about the minimal number of event participants that each action entails. The present study provides corroborating evidence for this proposal by investigating whether the observation of giving and taking actions modulates the inclusion of patients in the represented event. Participants were shown events featuring an agent (A) transferring an object to, or collecting it from, an animate target (B) or an inanimate target (a rock), and their sensitivity to changes in pair composition (AB vs. AC) and action role (AB vs. BA) was measured. Change sensitivity was affected by the type of target approached when the agent transferred the object (Experiment 1), but not when she collected it (Experiment 2), or when an outside force carried out the transfer (Experiment 3). Although these object-displacing actions could be equally interpreted as interactive (i.e., directed towards B), this construal was adopted only when B could be perceived as putative patient of a giving action. This evidence buttresses the proposal that structural asymmetries in giving and taking, as reflected in their syntactic requirements, may originate from prelinguistic assumptions about the minimal event participants required for each action to be teleologically well-formed.
Affiliation(s)
- Jun Yin: Department of Psychology, Ningbo University, Ningbo, PR China
- Gergely Csibra: Department of Cognitive Science, Central European University, Vienna, Austria; Department of Psychological Sciences, Birkbeck, University of London, UK
- Denis Tatone: Department of Cognitive Science, Central European University, Vienna, Austria

21
|
Abassi E, Papeo L. Behavioral and neural markers of visual configural processing in social scene perception. Neuroimage 2022; 260:119506. [PMID: 35878724 DOI: 10.1016/j.neuroimage.2022.119506] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 07/18/2022] [Accepted: 07/21/2022] [Indexed: 11/19/2022] Open
Abstract
Research on face perception has revealed highly specialized visual mechanisms such as configural processing, and provided markers of interindividual differences -including disease risks and alterations- in visuo-perceptual abilities that traffic in social cognition. Is face perception unique in degree or kind of mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing -i.e., stronger response to facing (vs. non-facing) dyads, and greater susceptibility to stimulus inversion for facing (vs. non-facing) dyads-, and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that the individual performance in body-dyad perception is reliable, stable-over-time and correlated with the individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms -and brain areas-, analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic (visual) processes that lay the foundations for understanding others, their relationships and interactions.
Affiliation(s)
- Etienne Abassi: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France
- Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France

22
|
Rolls ET, Deco G, Huang CC, Feng J. The human language effective connectome. Neuroimage 2022; 258:119352. [PMID: 35659999 DOI: 10.1016/j.neuroimage.2022.119352] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 05/31/2022] [Indexed: 01/07/2023] Open
Abstract
To advance understanding of brain networks involved in language, the effective connectivity between 26 cortical regions implicated in language by a community analysis and 360 cortical regions was measured in 171 humans from the Human Connectome Project, and complemented with functional connectivity and diffusion tractography, all using the HCP multimodal parcellation atlas. A (semantic) network (Group 1) involving inferior cortical regions of the superior temporal sulcus cortex (STS) with the adjacent inferior temporal visual cortex TE1a and temporal pole TG, and the connected parietal PGi region, has effective connectivity with inferior temporal visual cortex (TE) regions; with parietal PFm which also has visual connectivity; with posterior cingulate cortex memory-related regions; with the frontal pole, orbitofrontal cortex, and medial prefrontal cortex; with the dorsolateral prefrontal cortex; and with 44 and 45 for output regions. It is proposed that this system can build in its temporal lobe (STS and TG) and parietal parts (PGi and PGs) semantic representations of objects incorporating especially their visual and reward properties. Another (semantic) network (Group 3) involving superior regions of the superior temporal sulcus cortex and more superior temporal lobe regions including STGa, auditory A5, TPOJ1, the STV and the Peri-Sylvian Language area (PSL) has effective connectivity with auditory areas (A1, A4, A5, Pbelt); with relatively early visual areas involved in motion, e.g., MT and MST, and faces/words (FFC); with somatosensory regions (frontal opercular FOP, insula and parietal PF); with other TPOJ regions; and with the inferior frontal gyrus regions (IFJa and IFSp). 
It is proposed that this system builds semantic representations specialising in auditory and related facial motion information useful in theory of mind and somatosensory / body image information, with outputs directed not only to regions 44 and 45, but also to premotor 55b and midcingulate premotor cortex. Both semantic networks (Groups 1 and 3) have access to the hippocampal episodic memory system via parahippocampal TF. A third largely frontal network (Group 2) (44, 45, 47l; 55b; the Superior Frontal Language region SFL; and including temporal pole TGv) receives effective connectivity from the two semantic systems, and is implicated in syntax and speech output.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco: Department of Information and Communication Technologies, Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China

23
Dima DC, Tomita TM, Honey CJ, Isik L. Social-affective features drive human representations of observed actions. eLife 2022; 11:75027. [PMID: 35608254 PMCID: PMC9159752 DOI: 10.7554/elife.75027] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 05/24/2022] [Indexed: 11/13/2022] Open
Abstract
Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend to when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
Collapse
Affiliation(s)
- Diana C Dima
- Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
| | - Tyler M Tomita
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
| | - Christopher J Honey
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
| | - Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
| |
Collapse
|
24
|
Affiliation(s)
- Ilenia Paparella
- Institut des Sciences Cognitives—Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Lyon, France
| | - Liuba Papeo
- Institut des Sciences Cognitives—Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Lyon, France
| |
Collapse
|
25
|
Pesquita A, Bernardet U, Richards BE, Jensen O, Shapiro K. Isolating Action Prediction from Action Integration in the Perception of Social Interactions. Brain Sci 2022; 12:432. [PMID: 35447965 PMCID: PMC9031105 DOI: 10.3390/brainsci12040432] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 03/08/2022] [Accepted: 03/21/2022] [Indexed: 02/01/2023] Open
Abstract
Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners' actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action-reaction interaction between two actors. At the start of each action-reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor's reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates from it by 150 ms or 300 ms. Fifty percent of the action-reaction trials were semantically congruent, and the remaining were incongruent; e.g., one actor offers to shake hands and the other reciprocally shakes their hand (congruent action-reaction), versus one actor offers to shake hands and the other leans down (incongruent action-reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action-reaction sequences is aided by temporal predictions. The findings supported this hypothesis: linear speed-accuracy scores showed that congruency judgments were facilitated when the occlusion duration and reaction frame were temporally aligned, compared to 300 ms deviations, suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between higher autistic traits and participants' sensitivity to temporal deviations.
Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions in isolation from action integration confounds.
Collapse
Affiliation(s)
- Ana Pesquita
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Ulysses Bernardet
- Aston Institute of Urban Technology and the Environment (ASTUTE), Aston University, Birmingham B4 7ET, UK;
| | - Bethany E. Richards
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Kimron Shapiro
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| |
Collapse
|
26
|
Goupil N, Papeo L, Hochmann JR. Visual perception grounding of social cognition in preverbal infants. Infancy 2022; 27:210-231. [DOI: 10.1111/infa.12453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 11/22/2021] [Accepted: 01/02/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives—Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Bron, France
| | - Liuba Papeo
- Institut des Sciences Cognitives—Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Bron, France
| | - Jean‐Rémy Hochmann
- Institut des Sciences Cognitives—Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Bron, France
| |
Collapse
|