1. Morgan AM, Devinsky O, Doyle WK, Dugan P, Friedman D, Flinker A. A magnitude-independent neural code for linguistic information during sentence production. bioRxiv 2025:2024.06.20.599931. [PMID: 38948730 PMCID: PMC11212956 DOI: 10.1101/2024.06.20.599931]
Abstract
Humans are the only species with the ability to convey an unbounded number of novel thoughts by combining words into sentences. This process is guided by complex syntactic and semantic processes. Despite their centrality to human cognition, the neural mechanisms underlying these systems remain obscured by inherent limitations of non-invasive brain measures and a near total focus on comprehension paradigms. Here, we address these limitations with high-resolution neurosurgical recordings (electrocorticography) and a controlled sentence production experiment. We uncover distinct cortical networks encoding word-level, semantic, and syntactic information. These networks are broadly distributed across traditional language areas, but with focal sensitivity to syntactic structure in middle and inferior frontal gyri. In contrast to previous findings from comprehension studies, we find that these networks are largely non-overlapping, each specialized for just one of the three linguistic constructs we investigate. Most strikingly, our data reveal an unexpected property of syntax and semantics: they are encoded independently of neural activity levels. We hypothesize that this "magnitude-independent" coding scheme likely reflects the distributed nature of these higher-order cognitive constructs.
Affiliation(s)
- Adam M. Morgan
- Neurology Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Orrin Devinsky
- Neurosurgery Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Werner K. Doyle
- Neurology Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Patricia Dugan
- Neurology Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Daniel Friedman
- Neurology Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Adeen Flinker
- Neurology Department, NYU Grossman School of Medicine, 550 1st Ave, New York, NY 10016, USA
- Biomedical Engineering Department, NYU Tandon School of Engineering, 6 MetroTech Center Ave, Brooklyn, NY 11201, USA
2. Reger M, Vrabie O, Volberg G, Lingnau A. Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm. Cogn Affect Behav Neurosci 2025:10.3758/s13415-025-01272-6. [PMID: 40011402 DOI: 10.3758/s13415-025-01272-6]
Abstract
Being able to quickly recognize other people's actions lies at the heart of our ability to efficiently interact with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, the stimulus presentation times required to extract information about actions, objects, and scenes have, to our knowledge, not yet been directly compared. To address this gap in the literature, we compared the recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than objects (68 ms) and scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest thresholds for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these are presented in the same scene but accumulates faster for actions that reflect static body posture recognition than for objects and scenes.
Affiliation(s)
- Maximilian Reger
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Oleg Vrabie
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Gregor Volberg
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Angelika Lingnau
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
3. Tan L, Qiu Y, Qiu L, Lin S, Li J, Liao J, Zhang Y, Zou W, Huang R. The medial and lateral orbitofrontal cortex jointly represent the cognitive map of task space. Commun Biol 2025; 8:163. [PMID: 39900714 PMCID: PMC11791032 DOI: 10.1038/s42003-025-07588-w]
Abstract
A cognitive map is an internal model of the world's causal structure, crucial for adaptive behaviors. The orbitofrontal cortex (OFC) is a central node in decision-making and cognitive map representation. However, it remains unclear how the medial OFC (mOFC) and lateral OFC (lOFC) contribute to the formation of cognitive maps in humans. By performing a multi-step sequential task and multivariate analyses of functional magnetic resonance imaging (fMRI) data, we found that the mOFC and lOFC play complementary but dissociable roles in this process. Specifically, the mOFC represents all hidden task state components. The lOFC and dorsolateral prefrontal cortex (dlPFC) encode abstract rules governing structure knowledge across task states. Furthermore, the two orbitofrontal subregions are functionally connected to share hidden-state information for constructing a representation of the task structure. Collectively, these findings provide an account that can increase our understanding of how the brain constructs abstract cognitive maps in a task-relevant space.
Affiliation(s)
- Liwei Tan
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Yidan Qiu
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Lixin Qiu
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Shuting Lin
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Jinhui Li
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Jiajun Liao
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Yuting Zhang
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Wei Zou
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Ruiwang Huang
- School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
4. Reilly J, Shain C, Borghesani V, Kuhnke P, Vigliocco G, Peelle JE, Mahon BZ, Buxbaum LJ, Majid A, Brysbaert M, Borghi AM, De Deyne S, Dove G, Papeo L, Pexman PM, Poeppel D, Lupyan G, Boggio P, Hickok G, Gwilliams L, Fernandino L, Mirman D, Chrysikou EG, Sandberg CW, Crutch SJ, Pylkkänen L, Yee E, Jackson RL, Rodd JM, Bedny M, Connell L, Kiefer M, Kemmerer D, de Zubicaray G, Jefferies E, Lynott D, Siew CSQ, Desai RH, McRae K, Diaz MT, Bolognesi M, Fedorenko E, Kiran S, Montefinese M, Binder JR, Yap MJ, Hartwigsen G, Cantlon J, Bi Y, Hoffman P, Garcea FE, Vinson D. What we mean when we say semantic: Toward a multidisciplinary semantic glossary. Psychon Bull Rev 2025; 32:243-280. [PMID: 39231896 PMCID: PMC11836185 DOI: 10.3758/s13423-024-02556-7]
Abstract
Tulving characterized semantic memory as a vast repository of meaning that underlies language and many other cognitive processes. This perspective on lexical and conceptual knowledge galvanized a new era of research undertaken by numerous fields, each with their own idiosyncratic methods and terminology. For example, "concept" has different meanings in philosophy, linguistics, and psychology. As such, many fundamental constructs used to delineate semantic theories remain underspecified and/or opaque. Weak construct specificity is among the leading causes of the replication crisis now facing psychology and related fields. Term ambiguity hinders cross-disciplinary communication, falsifiability, and incremental theory-building. Numerous cognitive subdisciplines (e.g., vision, affective neuroscience) have recently addressed these limitations via the development of consensus-based guidelines and definitions. The project to follow represents our effort to produce a multidisciplinary semantic glossary consisting of succinct definitions, background, principled dissenting views, ratings of agreement, and subjective confidence for 17 target constructs (e.g., abstractness, abstraction, concreteness, concept, embodied cognition, event semantics, lexical-semantic, modality, representation, semantic control, semantic feature, simulation, semantic distance, semantic dimension). We discuss potential benefits and pitfalls (e.g., implicit bias, prescriptiveness) of these efforts to specify a common nomenclature that other researchers might index in specifying their own theoretical perspectives (e.g., They said X, but I mean Y).
Affiliation(s)
- Cory Shain
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Philipp Kuhnke
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Laurel J Buxbaum
- Thomas Jefferson University, Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Guy Dove
- University of Louisville, Louisville, KY, USA
- Liuba Papeo
- Centre National de La Recherche Scientifique (CNRS), University Claude-Bernard Lyon, Lyon, France
- Paulo Boggio
- Universidade Presbiteriana Mackenzie, São Paulo, Brazil
- Eiling Yee
- University of Connecticut, Storrs, CT, USA
- Ken McRae
- Western University, London, ON, Canada
- Melvin J Yap
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- National University of Singapore, Singapore, Singapore
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Yanchao Bi
- University of Edinburgh, Edinburgh, UK
- Beijing Normal University, Beijing, China
5. Han J, Chauhan V, Philip R, Taylor MK, Jung H, Halchenko YO, Gobbini MI, Haxby JV, Nastase SA. Behaviorally-relevant features of observed actions dominate cortical representational geometry in natural vision. bioRxiv 2024:2024.11.26.624178. [PMID: 39651248 PMCID: PMC11623629 DOI: 10.1101/2024.11.26.624178]
Abstract
We effortlessly extract behaviorally relevant information from dynamic visual input in order to understand the actions of others. In the current study, we develop and test a number of models to better understand the neural representational geometries supporting action understanding. Using fMRI, we measured brain activity as participants viewed a diverse set of 90 different video clips depicting social and nonsocial actions in real-world contexts. We developed five behavioral models using arrangement tasks: two models reflecting behavioral judgments of the purpose (transitivity) and the social content (sociality) of the actions depicted in the video stimuli; and three models reflecting behavioral judgments of the visual content (people, objects, and scene) depicted in still frames of the stimuli. We evaluated how well these models predict neural representational geometry and tested them against semantic models based on verb and nonverb embeddings and visual models based on gaze and motion energy. Our results revealed that behavioral judgments of similarity better reflect neural representational geometry than semantic or visual models throughout much of cortex. The sociality and transitivity models in particular captured a large portion of unique variance throughout the action observation network, extending into regions not typically associated with action perception, like ventral temporal cortex. Overall, our findings expand the action observation network and indicate that the social content and purpose of observed actions are predominant in cortical representation.
6. Vlasceanu AM, de la Rosa S, Barraclough NE. Perceptual discrimination of action formidableness and friendliness and the impact of autistic traits. Sci Rep 2024; 14:25554. [PMID: 39462021 PMCID: PMC11513001 DOI: 10.1038/s41598-024-76488-6]
Abstract
The ability to determine whether the actions of other individuals are friendly or formidable is key to successfully navigating our complex social environment. In this study we measured perceptual performance when discriminating actions that vary in their friendliness or formidableness, and whether performance was related to the autistic traits of individuals. To do this, we developed an action morphing method to generate novel actions that lay along the action quality dimensions of formidableness and friendliness. In Experiment 1 we show that actions that vary along the formidableness or friendliness continua were rated as varying monotonically along the respective quality. In Experiment 2 we measured the ability of individuals with different levels of autistic traits to discriminate action formidableness and friendliness using adaptive 2-AFC procedures. We found considerable variation in perceptual thresholds when discriminating action formidableness (~540% interindividual variation) or friendliness (~1100% interindividual variation). Importantly, we found no evidence that autistic traits influenced perceptual discrimination of these action qualities. These results confirm that sensory enhancements associated with autistic traits are limited to lower-level stimuli, and suggest that the perceptual processing of these complex social signals is not affected by autistic traits.
Affiliation(s)
- Alessia M Vlasceanu
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Stephan de la Rosa
- Department of Social Sciences, IU University of Applied Sciences, Juri-Gagarin-Ring 152, 99084 Erfurt, Germany
- Nick E Barraclough
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
7. Dima DC, Janarthanan S, Culham JC, Mohsenzadeh Y. Shared representations of human actions across vision and language. Neuropsychologia 2024; 202:108962. [PMID: 39047974 DOI: 10.1016/j.neuropsychologia.2024.108962]
Abstract
Humans can recognize and communicate about many actions performed by others. How are actions organized in the mind, and is this organization shared across vision and language? We collected similarity judgments of human actions depicted through naturalistic videos and sentences, and tested four models of action categorization, defining actions at different levels of abstraction ranging from specific (action verb) to broad (action target: whether an action is directed towards an object, another person, or the self). The similarity judgments reflected a shared organization of action representations across videos and sentences, determined mainly by the target of actions, even after accounting for other semantic features. Furthermore, language model embeddings predicted the behavioral similarity of action videos and sentences, and captured information about the target of actions alongside unique semantic information. Together, our results show that action concepts are similarly organized in the mind across vision and language, and that this organization reflects socially relevant goals.
Affiliation(s)
- Diana C Dima
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Jody C Culham
- Dept of Psychology, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
8. Liu S, He W, Zhang M, Li Y, Ren J, Guan Y, Fan C, Li S, Gu R, Luo W. Emotional concepts shape the perceptual representation of body expressions. Hum Brain Mapp 2024; 45:e26789. [PMID: 39185719 PMCID: PMC11345699 DOI: 10.1002/hbm.26789]
Abstract
Emotion perception interacts with how we think and speak, including our concept of emotions. Body expression is an important way of emotion communication, but it is unknown whether and how its perception is modulated by conceptual knowledge. In this study, we employed representational similarity analysis and conducted three experiments combining semantic similarity, a mouse-tracking task, and a one-back behavioral task with electroencephalography and functional magnetic resonance imaging techniques, the results of which show that conceptual knowledge predicted the perceptual representation of body expressions. Further, this prediction effect occurred at approximately 170 ms post-stimulus. The neural encoding of body expressions in the fusiform gyrus and lingual gyrus was impacted by emotion concept knowledge. Taken together, our results indicate that conceptual knowledge of emotion categories shapes the configural representation of body expressions in the ventral visual cortex, which offers compelling evidence for the constructed emotion theory.
Affiliation(s)
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jie Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yuanhao Guan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Cong Fan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Ruolei Gu
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
9. Simonelli F, Handjaras G, Benuzzi F, Bernardi G, Leo A, Duzzi D, Cecchetti L, Nichelli PF, Porro CA, Pietrini P, Ricciardi E, Lui F. Sensitivity and specificity of the action observation network to kinematics, target object, and gesture meaning. Hum Brain Mapp 2024; 45:e26762. [PMID: 39037079 PMCID: PMC11261593 DOI: 10.1002/hbm.26762]
Abstract
Hierarchical models have been proposed to explain how the brain encodes actions, whereby different areas represent different features, such as gesture kinematics, target object, action goal, and meaning. The visual processing of action-related information is distributed over a well-known network of brain regions spanning separate anatomical areas, attuned to specific stimulus properties, and referred to as action observation network (AON). To determine the brain organization of these features, we measured representational geometries during the observation of a large set of transitive and intransitive gestures in two independent functional magnetic resonance imaging experiments. We provided evidence for a partial dissociation between kinematics, object characteristics, and action meaning in the occipito-parietal, ventro-temporal, and lateral occipito-temporal cortex, respectively. Importantly, most of the AON showed low specificity to all the explored features, and representational spaces sharing similar information content were spread across the cortex without being anatomically adjacent. Overall, our results support the notion that the AON relies on overlapping and distributed coding and may act as a unique representational space instead of mapping features in a modular and segregated manner.
Affiliation(s)
- Francesca Benuzzi
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Andrea Leo
- IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Duzzi
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Paolo F. Nichelli
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Carlo A. Porro
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Fausta Lui
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
10. Zhu H, Ge Y, Bratch A, Yuille A, Kay K, Kersten D. Natural scenes reveal diverse representations of 2D and 3D body pose in the human brain. Proc Natl Acad Sci U S A 2024; 121:e2317707121. [PMID: 38830105 PMCID: PMC11181088 DOI: 10.1073/pnas.2317707121]
Abstract
Human pose, defined as the spatial relationships between body parts, carries instrumental information supporting the understanding of motion and action of a person. A substantial body of previous work has identified cortical areas responsive to images of bodies and different body parts. However, the neural basis underlying the visual perception of body part relationships has received less attention. To broaden our understanding of body perception, we analyzed high-resolution fMRI responses to a wide range of poses from over 4,000 complex natural scenes. Using ground-truth annotations and an application of three-dimensional (3D) pose reconstruction algorithms, we compared similarity patterns of cortical activity with similarity patterns built from human pose models with different levels of depth availability and viewpoint dependency. Targeting the challenge of explaining variance in complex natural image responses with interpretable models, we achieved statistically significant correlations between pose models and cortical activity patterns (though performance levels are substantially lower than the noise ceiling). We found that the 3D view-independent pose model, compared with two-dimensional models, better captures the activation from distinct cortical areas, including the right posterior superior temporal sulcus (pSTS). These areas, together with other pose-selective regions in the lateral occipitotemporal cortex (LOTC), form a broader, distributed cortical network with greater view-tolerance in more anterior patches. We interpret these findings in light of the computational complexity of natural body images, the wide range of visual tasks supported by pose structures, and possible shared principles for view-invariant processing between articulated objects and ordinary, rigid objects.
Affiliation(s)
- Hongru Zhu
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218
- Yijun Ge
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
- Laboratory for Consciousness, Riken Center for Brain Science, Wako, Saitama 3510198, Japan
- Alexander Bratch
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
- Alan Yuille
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218
- Kendrick Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Daniel Kersten
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
11. Kabulska Z, Zhuang T, Lingnau A. Overlapping representations of observed actions and action-related features. Hum Brain Mapp 2024; 45:e26605. [PMID: 38379447 PMCID: PMC10879913 DOI: 10.1002/hbm.26605]
Abstract
The lateral occipitotemporal cortex (LOTC) has previously been shown to capture the representational structure of a smaller range of actions. In the current study, we carried out an fMRI experiment in which we presented human participants with images depicting 100 different actions and used representational similarity analysis (RSA) to determine which brain regions capture the semantic action space established using judgments of action similarity. Moreover, to determine the contribution of a wide range of action-related features to the neural representation of the semantic action space, we constructed an action feature model on the basis of ratings of 44 different features. We found that the semantic action space model and the action feature model are best captured by overlapping activation patterns in bilateral LOTC and ventral occipitotemporal cortex (VOTC). An RSA on eight dimensions resulting from principal component analysis carried out on the action feature model revealed partly overlapping representations within bilateral LOTC, VOTC, and the parietal lobe. Our results suggest spatially overlapping representations of the semantic action space of a wide range of actions and the corresponding action-related features. Together, our results add to our understanding of the kind of representations along the LOTC that support action understanding.
Affiliation(s)
- Zuzanna Kabulska
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Tonghe Zhuang
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Angelika Lingnau
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
12. Vinton LC, Preston C, de la Rosa S, Mackie G, Tipper SP, Barraclough NE. Four fundamental dimensions underlie the perception of human actions. Atten Percept Psychophys 2024; 86:536-558. [PMID: 37188862 PMCID: PMC10185378 DOI: 10.3758/s13414-023-02709-1]
Abstract
We evaluate the actions of other individuals based upon a variety of movements that reveal critical information to guide decision making and behavioural responses. These signals convey a range of information about the actor, including their goals, intentions and internal mental states. Although progress has been made to identify cortical regions involved in action processing, the organising principles underlying our representation of actions still remain unclear. In this paper we investigated the conceptual space that underlies action perception by assessing which qualities are fundamental to the perception of human actions. We recorded 240 different actions using motion capture and used these data to animate a volumetric avatar that performed the different actions. 230 participants then viewed these actions and rated the extent to which each action demonstrated 23 different action characteristics (e.g., avoiding-approaching, pulling-pushing, weak-powerful). We analysed these data using exploratory factor analysis to examine the latent factors underlying visual action perception. The best-fitting model was a four-dimensional model with oblique rotation. We named the factors: friendly-unfriendly, formidable-feeble, planned-unplanned, and abduction-adduction. The first two factors of friendliness and formidableness explained approximately 22% of the variance each, compared to planned and abduction, which explained approximately 7-8% of the variance each; as such, we interpret this representation of action space as having 2 + 2 dimensions. A closer examination of the first two factors suggests a similarity to the principal factors underlying our evaluation of facial traits and emotions, whilst the last two factors of planning and abduction appear unique to actions.
Affiliation(s)
- Laura C Vinton: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Catherine Preston: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Stephan de la Rosa: Department of Social Sciences, IU University of Applied Sciences, Juri-Gagarin-Ring 152, 99084, Erfurt, Germany
- Gabriel Mackie: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Steven P Tipper: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- Nick E Barraclough: Department of Psychology, University of York, Heslington, York, YO10 5DD, UK

13
Zheng XY, Hebart MN, Grill F, Dolan RJ, Doeller CF, Cools R, Garvert MM. Parallel cognitive maps for multiple knowledge structures in the hippocampal formation. Cereb Cortex 2024; 34:bhad485. PMID: 38204296; PMCID: PMC10839836; DOI: 10.1093/cercor/bhad485.
Abstract
The hippocampal-entorhinal system uses cognitive maps to represent spatial knowledge and other types of relational information. However, objects can often be characterized by different types of relations simultaneously. How does the hippocampal formation handle the embedding of stimuli in multiple relational structures that differ vastly in their mode and timescale of acquisition? Does the hippocampal formation integrate different stimulus dimensions into one conjunctive map, or is each dimension represented in a parallel map? Here, we reanalyzed human functional magnetic resonance imaging data from Garvert et al. (2017) that had previously revealed a map in the hippocampal formation coding for a newly learnt transition structure. Using functional magnetic resonance imaging adaptation analysis, we found that the degree of representational similarity in the bilateral hippocampus also decreased as a function of the semantic distance between presented objects. Importantly, while both map-like structures localized to the hippocampal formation, the semantic map was located in more posterior regions of the hippocampal formation than the transition structure and was thus anatomically distinct. This finding supports the idea that the hippocampal-entorhinal system forms parallel cognitive maps that reflect the embedding of objects in diverse relational structures.
Affiliation(s)
- Xiaochen Y Zheng: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands
- Martin N Hebart: Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; Department of Medicine, Justus Liebig University, 35390, Giessen, Germany
- Filip Grill: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands; Radboud University Medical Center, Department of Neurology, 6525 GA, Nijmegen, the Netherlands
- Raymond J Dolan: Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, University College London, London WC1B 5EH, United Kingdom
- Christian F Doeller: Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer's Disease, NTNU, 7491, Trondheim, Norway; Wilhelm Wundt Institute of Psychology, Leipzig University, 04109, Leipzig, Germany
- Roshan Cools: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands; Radboud University Medical Center, Department of Psychiatry, 6525 GA, Nijmegen, the Netherlands
- Mona M Garvert: Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, 14195, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Faculty of Human Sciences, Julius-Maximilians-Universität Würzburg, Würzburg, Germany

14
McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. PMID: 37918399; PMCID: PMC10841461; DOI: 10.1016/j.cub.2023.10.015.
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing each other), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for the representation of increasingly abstract social visual content along the lateral visual stream, consistent with hierarchical organization, and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
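The encoding-model approach mentioned in the abstract above has a simple core: regress each voxel's response onto the stimulus feature annotations and evaluate the predictions on held-out stimuli. A minimal sketch with synthetic data follows; ridge regression is one common estimator for such models, not necessarily the exact one used in the study, and all dimensions and names here are illustrative.

```python
import numpy as np

def fit_ridge(features, responses, alpha=1.0):
    """Closed-form ridge regression: weights mapping stimulus features
    (n_stim x n_feat) to voxel responses (n_stim x n_vox)."""
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(n_feat),
                           features.T @ responses)

def encoding_score(features, responses, weights):
    """Per-voxel Pearson r between predicted and measured responses."""
    pred = features @ weights
    pz = (pred - pred.mean(0)) / pred.std(0)
    rz = (responses - responses.mean(0)) / responses.std(0)
    return (pz * rz).mean(0)

rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, 12))           # 200 clips x 12 labeled features
true_w = rng.standard_normal((12, 50))             # synthetic ground-truth weights
Y_train = X_train @ true_w + 0.1 * rng.standard_normal((200, 50))
X_test = rng.standard_normal((100, 12))            # held-out clips
Y_test = X_test @ true_w + 0.1 * rng.standard_normal((100, 50))

w = fit_ridge(X_train, Y_train)
scores = encoding_score(X_test, Y_test, w)         # one r per simulated voxel
```

"Unique variance" analyses of the kind described above then compare such scores between a full feature set and one with the feature family of interest removed.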
Affiliation(s)
- Emalie McMahon: Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Michael F Bonner: Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik: Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA

15
McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. PMID: 37805385; PMCID: PMC10841760; DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converges to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA

16
Zhuang T, Kabulska Z, Lingnau A. The Representation of Observed Actions at the Subordinate, Basic, and Superordinate Level. J Neurosci 2023; 43:8219-8230. PMID: 37798129; PMCID: PMC10697398; DOI: 10.1523/jneurosci.0700-22.2023.
Abstract
Actions can be planned and recognized at different hierarchical levels, ranging from very specific (e.g., to swim backstroke) to very broad (e.g., locomotion). Understanding the corresponding neural representation is an important prerequisite to reveal how our brain flexibly assigns meaning to the world around us. To address this question, we conducted an event-related fMRI study in male and female human participants in which we examined distinct representations of observed actions at the subordinate, basic and superordinate level. Using multiple regression representational similarity analysis (RSA) in predefined regions of interest, we found that the three different taxonomic levels were best captured by patterns of activations in bilateral lateral occipitotemporal cortex (LOTC), showing the highest similarity with the basic level model. A whole-brain multiple regression RSA revealed that information unique to the basic level was captured by patterns of activation in dorsal and ventral portions of the LOTC and in parietal regions. By contrast, the unique information for the subordinate level was limited to bilateral occipitotemporal cortex, while no single cluster was obtained that captured unique information for the superordinate level. The behaviorally established action space was best captured by patterns of activation in the LOTC and superior parietal cortex, and the corresponding neural patterns of activation showed the highest similarity with patterns of activation corresponding to the basic level model. Together, our results suggest that occipitotemporal cortex shows a preference for the basic level model, with flexible access across the subordinate and the basic level.
SIGNIFICANCE STATEMENT: The human brain captures information at varying levels of abstraction. It is debated which brain regions host representations across different hierarchical levels, with some studies emphasizing parietal and premotor regions, while other studies highlight the role of the lateral occipitotemporal cortex (LOTC). To shed light on this debate, here we examined the representation of observed actions at the three taxonomic levels suggested by Rosch et al. (1976). Our results highlight the role of the LOTC, which hosts a shared representation across the subordinate and the basic level, with the highest similarity with the basic level model. These results shed new light on the hierarchical organization of observed actions and provide insights into the neural basis underlying the basic level advantage.
Affiliation(s)
- Tonghe Zhuang: Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, 93053 Regensburg, Germany
- Zuzanna Kabulska: Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, 93053 Regensburg, Germany
- Angelika Lingnau: Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, 93053 Regensburg, Germany

17
Moreau Q, Parrotta E, Pesci UG, Era V, Candidi M. Early categorization of social affordances during the visual encoding of bodily stimuli. Neuroimage 2023; 274:120151. PMID: 37191657; DOI: 10.1016/j.neuroimage.2023.120151.
Abstract
Interpersonal interactions rely on various communication channels, both verbal and non-verbal, through which information regarding one's intentions and emotions is perceived. Here, we investigated the neural correlates underlying the visual processing of hand postures conveying social affordances (i.e., hand-shaking), compared to control stimuli such as hands performing non-social actions (i.e., grasping) or showing no movement at all. Combining univariate and multivariate analysis of electroencephalography (EEG) data, our results indicate that occipito-temporal electrodes show early differential processing of stimuli conveying social information compared to non-social ones. First, the amplitude of the Early Posterior Negativity (EPN, an event-related potential related to the perception of body parts) is modulated differently during the perception of social and non-social content carried by hands. Moreover, our multivariate classification analysis (multivariate pattern analysis, MVPA) expanded the univariate results by revealing early (<200 ms) categorization of social affordances over occipito-parietal sites. In conclusion, we provide new evidence suggesting that the encoding of socially relevant hand gestures is categorized in the early stages of visual processing.
Affiliation(s)
- Q Moreau: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- E Parrotta: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- U G Pesci: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- V Era: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- M Candidi: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy

18
Zhou M, Gong Z, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for human action recognition. Sci Data 2023; 10:415. PMID: 37369643; DOI: 10.1038/s41597-023-02325-6.
Abstract
Human action recognition is a critical capability for our survival, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few action categories from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer a comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
Affiliation(s)
- Ming Zhou: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zhengxin Gong: Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai: Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen: Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen: Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China

19
Kabulska Z, Lingnau A. The cognitive structure underlying the organization of observed actions. Behav Res Methods 2023; 55:1890-1906. PMID: 35788973; PMCID: PMC10250259; DOI: 10.3758/s13428-022-01894-5.
Abstract
In daily life, we frequently encounter actions performed by other people. Here we aimed to examine the key categories and features underlying the organization of a wide range of actions in three behavioral experiments (N = 378 participants). In Experiment 1, we used a multi-arrangement task of 100 different actions. Inverse multidimensional scaling and hierarchical clustering revealed 11 action categories, including Locomotion, Communication, and Aggressive actions. In Experiment 2, we used a feature-listing paradigm to obtain a wide range of action features that were subsequently reduced to 59 key features and used in a rating study (Experiment 3). A direct comparison of the feature ratings obtained in Experiment 3 between actions belonging to the categories identified in Experiment 1 revealed a number of features that appear to be critical for the distinction between these categories, e.g., the features Harm and Noise for the category Aggressive actions, and the features Targeting a person and Contact with others for the category Interaction. Finally, we found that a part of the category-based organization is explained by a combination of weighted features, whereas a significant proportion of variability remained unexplained, suggesting that there are additional sources of information that contribute to the categorization of observed actions. The characterization of action categories and their associated features serves as an important extension of previous studies examining the cognitive structure of actions. Moreover, our results may serve as the basis for future behavioral, neuroimaging and computational modeling studies.
Affiliation(s)
- Zuzanna Kabulska: Department of Psychology, Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053, Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053, Regensburg, Germany

20
Köster M, Meyer M. Down and up! Does the mu rhythm index a gating mechanism in the developing motor system? Dev Cogn Neurosci 2023; 60:101239. PMID: 37030147; PMCID: PMC10113759; DOI: 10.1016/j.dcn.2023.101239.
Abstract
Developmental research on action processing in the motor cortex relies on a key neural marker: a decrease in 6-12 Hz activity (coined mu suppression). However, recent evidence points towards an increase in mu power, specific to the observation of others' actions. Complementing the findings on mu suppression, this raises the critical question of the functional role of the mu rhythm in the developing motor system. We here discuss a potential solution to this seeming controversy by suggesting a gating function of the mu rhythm: a decrease in mu power may index the facilitation, while an increase may index the inhibition, of motor processes, which are critical during action observation. This account may advance our conception of action understanding in early brain development and points towards critical directions for future research.
Affiliation(s)
- Moritz Köster: University of Regensburg, Institute of Psychology, Sedanstraße 1, 93055 Regensburg, Germany
- Marlene Meyer: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, the Netherlands; Department of Psychology, University of Chicago, USA

21
Dima DC, Hebart MN, Isik L. A data-driven investigation of human action representations. Sci Rep 2023; 13:5171. PMID: 36997625; PMCID: PMC10063663; DOI: 10.1038/s41598-023-32192-5.
Abstract
Understanding actions performed by others requires us to integrate different types of information about people, scenes, objects, and their interactions. What organizing dimensions does the mind use to make sense of this complex action space? To address this question, we collected intuitive similarity judgments across two large-scale sets of naturalistic videos depicting everyday actions. We used cross-validated sparse non-negative matrix factorization to identify the structure underlying action similarity judgments. A low-dimensional representation, consisting of nine to ten dimensions, was sufficient to accurately reconstruct human similarity judgments. The dimensions were robust to stimulus set perturbations and reproducible in a separate odd-one-out experiment. Human labels mapped these dimensions onto semantic axes relating to food, work, and home life; social axes relating to people and emotions; and one visual axis related to scene setting. While highly interpretable, these dimensions did not share a clear one-to-one correspondence with prior hypotheses of action-relevant dimensions. Together, our results reveal a low-dimensional set of robust and interpretable dimensions that organize intuitive action similarity judgments and highlight the importance of data-driven investigations of behavioral representations.
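Non-negative matrix factorization, the dimensionality-reduction step described above, can be illustrated with the classic Lee-Seung multiplicative-update algorithm. Note this is the plain, unregularized version (the study used a cross-validated *sparse* variant), and the similarity matrix here is synthetic rather than derived from behavioral judgments.

```python
import numpy as np

def nmf(S, k, n_iter=1000, seed=0, eps=1e-9):
    """Plain multiplicative-update NMF: factor a nonnegative matrix
    S (n x m) as W @ H with k latent dimensions. Updates preserve
    nonnegativity because they multiply by nonnegative ratios."""
    rng = np.random.default_rng(seed)
    n, m = S.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ S) / (W.T @ W @ H + eps)
        W *= (S @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(2)
true_W = rng.random((30, 4))            # 30 actions, 4 hypothetical latent dimensions
S = true_W @ true_W.T                   # synthetic nonnegative similarity matrix
W, H = nmf(S, k=4)
recon_err = np.linalg.norm(S - W @ H) / np.linalg.norm(S)
```

In the cross-validated setting described in the abstract, `k` would be chosen by how well held-out similarity judgments are reconstructed, rather than fixed in advance.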
Affiliation(s)
- Diana C Dima: Department of Cognitive Science, Johns Hopkins University, Baltimore, USA; Department of Computer Science, Western University, London, Canada
- Martin N Hebart: Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore, USA

22
Santavirta S, Karjalainen T, Nazari-Farsani S, Hudson M, Putkinen V, Seppälä K, Sun L, Glerean E, Hirvonen J, Karlsson HK, Nummenmaa L. Functional organization of social perception in the human brain. Neuroimage 2023; 272:120025. PMID: 36958619; PMCID: PMC10112277; DOI: 10.1016/j.neuroimage.2023.120025.
Abstract
Humans rapidly extract diverse and complex information from ongoing social interactions, but the perceptual and neural organization of the different aspects of social perception remains unresolved. We showed short movie clips with rich social content to 97 healthy participants while their haemodynamic brain activity was measured with fMRI. The clips were annotated moment-to-moment for a large set of social features, 45 of which were evaluated reliably between annotators. Cluster analysis of the social features revealed that 13 dimensions were sufficient for describing the social perceptual space. Three different analysis methods were used to map the social perceptual processes in the human brain. Regression analysis mapped regional neural response profiles for different social dimensions. Multivariate pattern analysis then established the spatial specificity of the responses, and intersubject correlation analysis connected social perceptual processing with neural synchronization. The results revealed a gradient in the processing of social information in the brain. Posterior temporal and occipital regions were broadly tuned to most social dimensions, and the classifier revealed that these responses showed spatial specificity for social dimensions; in contrast, Heschl's gyri and parietal areas were also broadly associated with different social signals, yet the spatial patterns of responses did not differentiate social dimensions. Frontal and subcortical regions responded only to a limited number of social dimensions, and the spatial response patterns did not differentiate social dimensions. Altogether, these results highlight the distributed nature of social processing in the brain.
Affiliation(s)
- Severi Santavirta: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Tomi Karjalainen: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Sanaz Nazari-Farsani: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Matthew Hudson: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; School of Psychology, University of Plymouth, Plymouth, United Kingdom
- Vesa Putkinen: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Kerttu Seppälä: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Medical Physics, Turku University Hospital, Turku, Finland
- Lihua Sun: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Nuclear Medicine, Pudong Hospital, Fudan University, Shanghai, China
- Enrico Glerean: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Jussi Hirvonen: Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland; Medical Imaging Center, Department of Radiology, Tampere University and Tampere University Hospital, Tampere, Finland
- Henry K Karlsson: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Lauri Nummenmaa: Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Psychology, University of Turku, Turku, Finland

23
Yargholi E, Hossein-Zadeh GA, Vaziri-Pashkam M. Two distinct networks containing position-tolerant representations of actions in the human brain. Cereb Cortex 2023; 33:1462-1475. PMID: 35511702; PMCID: PMC10310977; DOI: 10.1093/cercor/bhac149.
Abstract
Humans can recognize others' actions in the social environment. This action recognition ability is rarely hindered by the movement of people in the environment. The neural basis of this position tolerance for observed actions is not fully understood. Here, we aimed to identify brain regions capable of generalizing representations of actions across different positions and investigate the representational content of these regions. In a functional magnetic resonance imaging experiment, participants viewed point-light displays of different human actions. Stimuli were presented in either the upper or the lower visual field. Multivariate pattern analysis and a surface-based searchlight approach were employed to identify brain regions that contain position-tolerant action representation: Classifiers were trained with patterns in response to stimuli presented in one position and were tested with stimuli presented in another position. Results showed above-chance classification in the left and right lateral occipitotemporal cortices, right intraparietal sulcus, and right postcentral gyrus. Further analyses exploring the representational content of these regions showed that responses in the lateral occipitotemporal regions were more related to subjective judgments, while those in the parietal regions were more related to objective measures. These results provide evidence for two networks that contain abstract representations of human actions with distinct representational content.
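The cross-position decoding logic described above (train a classifier on patterns evoked in one visual-field position, test on the other) can be sketched with simulated data. Here a nearest-centroid classifier stands in for the classifiers typically used in such MVPA studies; in the simulation, actions share a position-invariant code plus a position-specific offset and noise, so above-chance transfer indicates position tolerance. All numbers and names are illustrative.

```python
import numpy as np

def nearest_centroid_crossdecode(X_train, y_train, X_test, y_test):
    """Fit class centroids on training patterns, classify test patterns
    by nearest centroid, and return accuracy."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(0) for c in classes])
    d = ((X_test[:, None, :] - centroids[None]) ** 2).sum(-1)
    pred = classes[d.argmin(1)]
    return (pred == y_test).mean()

rng = np.random.default_rng(3)
n_actions, n_vox = 4, 60
action_code = rng.standard_normal((n_actions, n_vox))  # position-invariant action code

def simulate(position_offset):
    """20 noisy trials per action, shifted by a position-specific offset."""
    y = np.repeat(np.arange(n_actions), 20)
    X = action_code[y] + position_offset + 0.8 * rng.standard_normal((len(y), n_vox))
    return X, y

X_up, y_up = simulate(0.3 * rng.standard_normal(n_vox))    # upper visual field
X_low, y_low = simulate(0.3 * rng.standard_normal(n_vox))  # lower visual field
acc = nearest_centroid_crossdecode(X_up, y_up, X_low, y_low)
```

In a searchlight analysis, this train-on-one-position, test-on-the-other accuracy would be computed per cortical neighborhood and compared against the chance level (here 1/4).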
Affiliation(s)
- Elahé Yargholi: School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran 1956836484, Iran; Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven 3714, Belgium
- Gholam-Ali Hossein-Zadeh: School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran 1956836484, Iran; School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran
- Maryam Vaziri-Pashkam: Laboratory of Brain and Cognition, National Institute of Mental Health (NIMH), Bethesda, MD 20814, United States
|
24
|
Zhang Y, Lemarchand R, Asyraff A, Hoffman P. Representation of motion concepts in occipitotemporal cortex: fMRI activation, decoding and connectivity analyses. Neuroimage 2022; 259:119450. [PMID: 35798252 DOI: 10.1016/j.neuroimage.2022.119450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Embodied theories of semantic cognition predict that brain regions involved in motion perception are engaged when people comprehend motion concepts expressed in language. Left lateral occipitotemporal cortex (LOTC) is implicated in both motion perception and motion concept processing but prior studies have produced mixed findings on which parts of this region are engaged by motion language. We scanned participants performing semantic judgements about sentences describing motion events and static events. We performed univariate analyses, multivariate pattern analyses (MVPA) and psychophysiological interaction (PPI) analyses to investigate the effect of motion on activity and connectivity in different parts of LOTC. In multivariate analyses that decoded whether a sentence described motion or not, the middle and posterior parts of LOTC showed above-chance level performance, with performance exceeding that of other brain regions. Univariate ROI analyses found the middle part of LOTC was more active for motion events than static ones. Finally, PPI analyses found that when processing motion events, the middle and posterior parts of LOTC (overlapping with motion perception regions), increased their connectivity with cognitive control regions. Taken together, these results indicate that the more posterior parts of LOTC, including motion perception cortex, respond differently to motion vs. static events. These findings are consistent with embodiment accounts of semantic processing, and suggest that understanding verbal descriptions of motion engages areas of the occipitotemporal cortex involved in perceiving motion.
Affiliation(s)
- Yueyang Zhang: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
- Rafael Lemarchand: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
- Aliff Asyraff: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
- Paul Hoffman: School of Philosophy, Psychology & Language Sciences, University of Edinburgh, UK
|
25
|
Dima DC, Tomita TM, Honey CJ, Isik L. Social-affective features drive human representations of observed actions. eLife 2022; 11:75027. [PMID: 35608254 PMCID: PMC9159752 DOI: 10.7554/elife.75027] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7]
Abstract
Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
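The variance partitioning idea in this abstract (how much variance in similarity judgments is uniquely explained by social-affective features, over and above visual and action features) can be illustrated with a minimal in-sample sketch. The study's actual analysis was cross-validated; here all data are synthetic and every variable name is hypothetical.

```python
import numpy as np

def r2(X, y):
    """In-sample R-squared of an ordinary least squares fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def unique_variance(y, full, reduced):
    """Variance explained by predictors present in `full` but not in `reduced`."""
    return r2(full, y) - r2(reduced, y)

rng = np.random.default_rng(1)
n_pairs = 200  # pairwise similarity judgments between video pairs
visual = rng.normal(size=n_pairs)
action = rng.normal(size=n_pairs)
social = rng.normal(size=n_pairs)
# Toy judgments dominated by the social-affective feature:
judgments = 0.2 * visual + 0.3 * action + 0.8 * social + rng.normal(0, 0.5, n_pairs)

full = np.column_stack([visual, action, social])
no_social = np.column_stack([visual, action])
u_social = unique_variance(judgments, full, no_social)
print(round(u_social, 2))  # unique contribution of the social-affective predictor
```

Dropping a predictor set and measuring the loss in explained variance is the core move; the published analysis partitions all shared and unique components across the three feature families in the same spirit.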
Affiliation(s)
- Diana C Dima: Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
- Tyler M Tomita: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
- Christopher J Honey: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
|
26
|
Zimmermann M, Lomoriello AS, Konvalinka I. Intra-individual behavioural and neural signatures of audience effects and interactions in a mirror-game paradigm. R Soc Open Sci 2022; 9:211352. [PMID: 35223056 PMCID: PMC8847899 DOI: 10.1098/rsos.211352] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
We often perform actions while observed by others, yet the behavioural and neural signatures of audience effects remain understudied. Performing actions while being observed has been shown to result in more emphasized movements in musicians and dancers, as well as during communicative actions. Here, we investigate the behavioural and neural mechanisms of observed actions in relation to individual actions in isolation and interactive joint actions. Movement kinematics and EEG were recorded in 42 participants (21 pairs) during a mirror-game paradigm, while participants produced improvised movements alone, while observed by a partner, or by synchronizing movements with the partner. Participants produced largest movements when being observed, and observed actors and dyads in interaction produced slower and less variable movements in contrast with acting alone. On a neural level, we observed increased mu suppression during interaction, as well as to a lesser extent during observed actions, relative to individual actions. Moreover, we observed increased widespread functional brain connectivity during observed actions relative to both individual and interactive actions, suggesting increased intra-individual monitoring and action-perception integration as a result of audience effects. These results suggest that observed actors take observers into account in their action plans by increasing self-monitoring; on a behavioural level, observed actions are similar to emergent interactive actions, characterized by slower and more predictable movements.
Affiliation(s)
- Marius Zimmermann: Section for Cognitive Systems, DTU Compute, Kongens Lyngby, Denmark; Institute of Psychology, University of Regensburg, Regensburg, Germany
- Ivana Konvalinka: Section for Cognitive Systems, DTU Compute, Kongens Lyngby, Denmark
|
27
|
Zhuang T, Lingnau A. The characterization of actions at the superordinate, basic and subordinate level. Psychol Res 2021; 86:1871-1891. [PMID: 34907466 PMCID: PMC9363348 DOI: 10.1007/s00426-021-01624-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe these actions and the speed of processing. To which degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1-3). Experiments 4-6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three different levels. In addition, we examined the number of shared (i.e., provided for more than one category) and distinct (i.e., provided for one category only) features. Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than those at the superordinate level. Actions at the superordinate and basic level were described with more distinct features compared to those provided at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue corresponding to the basic and subordinate level, but not for superordinate level cues, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action. 
Using a category verification task (Experiment 6), we found that participants were faster and more accurate to verify action categories (depicted as images) at the basic and subordinate level in comparison to the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
Affiliation(s)
- Tonghe Zhuang: Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
- Angelika Lingnau: Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
|
28
|
Tarhan L, De Freitas J, Konkle T. Behavioral and neural representations en route to intuitive action understanding. Neuropsychologia 2021; 163:108048. [PMID: 34653497 PMCID: PMC8649031 DOI: 10.1016/j.neuropsychologia.2021.108048] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5]
Abstract
When we observe another person's actions, we process many kinds of information - from how their body moves to the intention behind their movements. What kinds of information underlie our intuitive understanding about how similar actions are to each other? To address this question, we measured the intuitive similarities among a large set of everyday action videos using multi-arrangement experiments, then used a modeling approach to predict this intuitive similarity space along three hypothesized properties. We found that similarity in the actors' inferred goals predicted the intuitive similarity judgments the best, followed by similarity in the actors' movements, with little contribution from the videos' visual appearance. In opportunistic fMRI analyses assessing brain-behavior correlations, we found suggestive evidence for an action processing hierarchy, in which these three kinds of action similarities are reflected in the structure of brain responses along a posterior-to-anterior gradient on the lateral surface of the visual cortex. Altogether, this work joins existing literature suggesting that humans are naturally tuned to process others' intentions, and that the visuo-motor cortex computes the perceptual precursors of the higher-level representations over which intuitive action perception operates.
Affiliation(s)
- Leyla Tarhan: Department of Psychology, Harvard University, USA
- Talia Konkle: Department of Psychology, Harvard University, USA
|
29
|
Wurm MF, Caramazza A. Two 'what' pathways for action and object recognition. Trends Cogn Sci 2021; 26:103-116. [PMID: 34702661 DOI: 10.1016/j.tics.2021.10.003] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8]
Abstract
The ventral visual stream is conceived as a pathway for object recognition. However, we also recognize the actions an object can be involved in. Here, we show that action recognition critically depends on a pathway in lateral occipitotemporal cortex, partially overlapping and topographically aligned with object representations that are precursors for action recognition. By contrast, object features that are more relevant for object recognition, such as color and texture, are typically found in ventral occipitotemporal cortex. We argue that occipitotemporal cortex contains similarly organized lateral and ventral 'what' pathways for action and object recognition, respectively. This account explains a number of observed phenomena, such as the duplication of object domains and the specific representational profiles in lateral and ventral cortex.
Affiliation(s)
- Moritz F Wurm: Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
- Alfonso Caramazza: Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy; Department of Psychology, Harvard University, 33 Kirkland St, Cambridge, MA 02138, USA
|
30
|
Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Angry facial expressions bias towards aversive actions. PLoS One 2021; 16:e0256912. [PMID: 34469494 PMCID: PMC8409676 DOI: 10.1371/journal.pone.0256912] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
Abstract
Social interaction requires fast and efficient processing of another person's intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants' recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants' action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.
Affiliation(s)
- Leon O. H. Kroczek: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind: Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff: Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Andreas Mühlberger: Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
|
31
|
de Gelder B, Poyo Solanas M. A computational neuroethology perspective on body and expression perception. Trends Cogn Sci 2021; 25:744-756. [PMID: 34147363 DOI: 10.1016/j.tics.2021.05.010] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5]
Abstract
Survival prompts organisms to prepare adaptive behavior in response to environmental and social threat. However, what are the specific features of the appearance of a conspecific that trigger such adaptive behaviors? For social species, the prime candidates for triggering defense systems are the visual features of the face and the body. We propose a novel approach for studying the ability of the brain to gather survival-relevant information from seeing conspecific body features. Specifically, we propose that behaviorally relevant information from bodies and body expressions is coded at the levels of midlevel features in the brain. These levels are relatively independent from higher-order cognitive and conscious perception of bodies and emotions. Instead, our approach is embedded in an ethological framework and mobilizes computational models for feature discovery.
Affiliation(s)
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands; Department of Computer Science, University College London, London WC1E 6BT, UK
- Marta Poyo Solanas: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
|
32
|
Expert Tool Users Show Increased Differentiation between Visual Representations of Hands and Tools. J Neurosci 2021; 41:2980-2989. [PMID: 33563728 PMCID: PMC8018880 DOI: 10.1523/jneurosci.2489-20.2020] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0]
Abstract
The idea that when we use a tool we incorporate it into the neural representation of our body (embodiment) has been a major inspiration for philosophy, science, and engineering. While theoretically appealing, there is little direct evidence for tool embodiment at the neural level. Using functional magnetic resonance imaging (fMRI) in male and female human subjects, we investigated whether expert tool users (London litter pickers: n = 7) represent their expert tool more like a hand (neural embodiment) or less like a hand (neural differentiation), as compared with a group of tool novices (n = 12). During fMRI scans, participants viewed first-person videos depicting grasps performed by either a hand, litter picker, or a non-expert grasping tool. Using representational similarity analysis (RSA), differences in the representational structure of hands and tools were measured within occipitotemporal cortex (OTC). Contrary to the neural embodiment theory, we find that the expert group represents their own tool less like a hand (not more) relative to novices. Using a case-study approach, we further replicated this effect, independently, in five of the seven individual expert litter pickers, as compared with the novices. An exploratory analysis in left parietal cortex, a region implicated in visuomotor representations of hands and tools, also indicated that experts do not visually represent their tool more similarly to hands, compared with novices. Together, our findings suggest that extensive tool use leads to increased neural differentiation between visual representations of hands and tools. This evidence provides an important alternative framework to the prominent tool embodiment theory.
SIGNIFICANCE STATEMENT: It is commonly thought that tool use leads to the assimilation of the tool into the neural representation of the body, a process referred to as embodiment.
Here, we demonstrate that expert tool users (London litter pickers) neurally represent their own tool less like a hand (not more), compared with novices. Our findings advance our current understanding for how experience shapes functional organization in high-order visual cortex. Further, this evidence provides an alternative framework to the prominent tool embodiment theory, suggesting instead that experience with tools leads to more distinct, separable hand and tool representations.
|
33
|
Levine SM, Schwarzbach JV. Individualizing Representational Similarity Analysis. Front Psychiatry 2021; 12:729457. [PMID: 34707520 PMCID: PMC8542717 DOI: 10.3389/fpsyt.2021.729457] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Representational similarity analysis (RSA) is a popular multivariate analysis technique in cognitive neuroscience that uses functional neuroimaging to investigate the informational content encoded in brain activity. As RSA is increasingly being used to investigate more clinically-geared questions, the focus of such translational studies turns toward the importance of individual differences and their optimization within the experimental design. In this perspective, we focus on two design aspects: applying individual vs. averaged behavioral dissimilarity matrices to multiple participants' neuroimaging data and ensuring the congruency between tasks when measuring behavioral and neural representational spaces. Incorporating these methods permits the detection of individual differences in representational spaces and yields a better-defined transfer of information from representational spaces onto multivoxel patterns. Such design adaptations are prerequisites for optimal translation of RSA to the field of precision psychiatry.
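The design point about individual versus averaged behavioral dissimilarity matrices can be illustrated with a toy simulation: when behavioral structure is partly idiosyncratic, correlating each participant's neural RDM with their own behavioral RDM yields a stronger brain-behavior fit than correlating it with the group average. A hedged sketch with synthetic data; all names and parameters are hypothetical, and Spearman correlation is computed on ranks without tie handling.

```python
import numpy as np

def rank(v):
    """Simple ranks (continuous toy data, so no tie handling needed)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the ranks."""
    return np.corrcoef(rank(a), rank(b))[0, 1]

rng = np.random.default_rng(2)
n_subj, n_pairs = 10, 45  # 45 = lower triangle of a 10-condition RDM
# Each participant's behavioral RDM = shared group structure + an idiosyncratic part.
group = rng.normal(size=n_pairs)
idios = rng.normal(size=(n_subj, n_pairs))
behav = group + idios
# Toy neural RDMs that track each individual's own behavioral structure:
neural = behav + rng.normal(0, 0.5, size=(n_subj, n_pairs))

avg_behav = behav.mean(axis=0)
fit_individual = np.mean([spearman(neural[s], behav[s]) for s in range(n_subj)])
fit_averaged = np.mean([spearman(neural[s], avg_behav) for s in range(n_subj)])
print(fit_individual > fit_averaged)  # individual RDMs capture the idiosyncratic part
```

The gap between the two fits is exactly the individual-differences signal that the averaged-matrix design discards, which is the perspective article's core argument for individualized RSA.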
Affiliation(s)
- Seth M Levine: Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jens V Schwarzbach: Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
|
34
|
Sawamura H, Urgen BA, Corbo D, Orban GA. A parietal region processing numerosity of observed actions: An fMRI study. Eur J Neurosci 2020; 52:4732-4750. [PMID: 32745369 PMCID: PMC7818403 DOI: 10.1111/ejn.14930] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6]
Abstract
When observing others' behavior, it is important to perceive not only the identity of the observed actions (OAs), but also the number of times they were performed. Given the mounting evidence implicating posterior parietal cortex in action observation, and in particular that of manipulative actions, the aim of this study was to identify the parietal region, if any, that contributes to the processing of observed manipulative action (OMA) numerosity, using the functional magnetic resonance imaging technique. Twenty‐one right‐handed healthy volunteers performed two discrimination tasks while in the scanner, responding to video stimuli in which an actor performed manipulative actions on colored target balls that appeared four times consecutively. The subjects discriminated between two small numerosities of either OMAs (“Action” condition) or colors of balls (“Ball” condition). A significant difference between the “Action” and “Ball” conditions was observed in occipito‐temporal cortex and the putative human anterior intraparietal sulcus (phAIP) area as well as the third topographic map of numerosity‐selective neurons at the post‐central sulcus (NPC3) of the left parietal cortex. A further region of interest analysis of the group‐average data showed that at the single voxel level the latter area, more than any other parietal or occipito‐temporal numerosity map, favored numerosity of OAs. These results suggest that phAIP processes the identity of OMAs, while neighboring NPC3 likely processes the numerosity of the identified OAs.
Affiliation(s)
- Hiromasa Sawamura: Department of Medicine and Surgery, University of Parma, Parma, Italy; Department of Ophthalmology, The University of Tokyo Graduate School of Medicine, Tokyo, Japan
- Burcu A Urgen: Department of Medicine and Surgery, University of Parma, Parma, Italy; Department of Psychology, Bilkent University, Ankara, Turkey; Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center and National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Daniele Corbo: Department of Medicine and Surgery, University of Parma, Parma, Italy; Neuroradiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Guy A Orban: Department of Medicine and Surgery, University of Parma, Parma, Italy
|
35
|
Investigating Emotional Similarity: A Comment on Riberto, Pobric, and Talmi (2019). Brain Topogr 2020; 33:285-287. [PMID: 32253572 DOI: 10.1007/s10548-020-00766-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2]
Abstract
A recent review of the neuroimaging literature on emotional similarity brought to light some of the drawbacks of the latest studies. The authors discussed important methodological considerations for future work in this field, which predominantly involved stimulus selection. In general, we feel that their suggestions are valuable, but we hold that, depending on the specific scientific question(s) under investigation (e.g., individual differences), some of the suggestions may not meaningfully contribute to the scope of the study and might even introduce artificial constraints that could reduce the researchers' ability to discover effects of interest. Here we indicate one way to potentially circumvent such stimulus-related issues in neuroimaging studies and furthermore present a few scenarios in which additional controlling of the stimulus set may not be necessary or possible when investigating individual differences. This commentary serves to supplement the important methodological points raised by the authors by providing a caveat in potentially applying such points to all future experiments investigating emotional similarity.
|