1. Rolls ET, Turova TS. Visual cortical networks for "What" and "Where" to the human hippocampus revealed with dynamical graphs. Cereb Cortex 2025;35:bhaf106. PMID: 40347158. DOI: 10.1093/cercor/bhaf106.
Abstract
Key questions for understanding hippocampal function in memory and navigation in humans concern the type and source of visual information that reaches the human hippocampus. We measured bidirectional pairwise effective connectivity with functional magnetic resonance imaging between 360 cortical regions while 956 Human Connectome Project participants viewed scenes, faces, tools, or body parts. We developed a method using deterministic dynamical graphs to define whole cortical networks and the flow in both directions between their cortical regions over timesteps after a signal is applied to V1. We revealed that a ventromedial cortical visual "Where" network from V1 via the retrosplenial and medial parahippocampal scene areas reaches the hippocampus when scenes are viewed. A ventrolateral "What" visual cortical network reaches the hippocampus from V1 via V2-V4, the fusiform face cortex, and lateral parahippocampal region TF when faces/objects are viewed. There are major implications for understanding the computations of the human vs. rodent hippocampus in memory and navigation: primates, with their fovea and highly developed cortical visual processing networks, process information about the location of faces, objects, and landmarks in viewed scenes, whereas in rodents the representations in the hippocampal system are mainly about the place where the individual is located and self-motion between places.
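For intuition about how a dynamical-graph analysis of this kind can proceed, the sketch below propagates activity from a seed region over a directed effective-connectivity matrix and records the timestep at which each region is first driven. The matrix, seed index, threshold, and normalisation are illustrative placeholders, not the authors' algorithm or the HCP-MMP data.

```python
# Illustrative propagation of activity over a directed effective-connectivity graph.
# The matrix, seed index, threshold, and normalisation below are placeholders,
# not the authors' algorithm or the HCP-MMP connectivity data.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 360
# Hypothetical sparse effective-connectivity matrix: C[i, j] = strength of j -> i.
C = rng.random((n_regions, n_regions)) * (rng.random((n_regions, n_regions)) < 0.05)

v1 = 0                                    # index of the seed region (assumed V1)
activity = np.zeros(n_regions)
activity[v1] = 1.0

threshold = 0.01
first_reached = np.full(n_regions, -1)    # timestep at which each region is first driven
first_reached[v1] = 0

for t in range(1, 10):
    activity = C @ activity                   # one propagation timestep
    activity /= max(activity.max(), 1e-12)    # keep values bounded
    newly_driven = (activity > threshold) & (first_reached < 0)
    first_reached[newly_driven] = t

early = np.flatnonzero((first_reached > 0) & (first_reached <= 3))
print("regions reached within 3 timesteps:", early[:10], "...")
```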
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute for the Science and Technology of Brain Inspired Intelligence, Fudan University, China
2. Georgiev C, Legrand T, Mongold SJ, Fiedler-Valenta M, Guittard F, Bourguignon M. An open-access database of video stimuli for action observation research in neuroimaging settings: psychometric evaluation and motion characterization. Front Psychol 2024;15:1407458. PMID: 39386138. PMCID: PMC11461298. DOI: 10.3389/fpsyg.2024.1407458.
Abstract
Video presentation has become ubiquitous in paradigms investigating the neural and behavioral responses to observed actions. Despite the great interest in uncovering the processing of observed bodily movements and actions in neuroscience and cognitive science, no standardized set of video stimuli for action observation research in neuroimaging settings currently exists. To facilitate future action observation research, we developed an open-access database of 135 high-definition videos of a male actor performing object-oriented actions. Actions directed toward 15 different objects were filmed from three angles and fell into three categories: kinematically natural and goal-intact (Normal), kinematically unnatural and goal-intact (How), and kinematically natural and goal-violating (What). Psychometric evaluation of the database revealed high video recognition accuracy (mean accuracy = 88.61%) and substantial inter-rater agreement (Fleiss' kappa = 0.702), establishing excellent validity and reliability. Each video's exact timing of motion onset was identified using a custom frame-differencing motion detection procedure, and the videos were edited so that motion begins at the second frame of each video. The videos' timing of category recognition was also identified using a novel behavioral up-down staircase procedure. The identified timings can be incorporated into future experimental designs to counteract jittered stimulus onsets, thus improving the sensitivity of neuroimaging experiments. All videos, their psychometric evaluations, and the timing of their frame of category recognition, as well as our custom programs for performing these evaluations on our or other similar video databases, are available at the Open Science Framework (https://osf.io/zexc4/).
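The frame-differencing idea can be illustrated with a minimal sketch: compute the mean absolute intensity change between consecutive frames and report the first frame at which that change exceeds a threshold. The threshold value and the synthetic video below are assumptions for illustration, not the authors' implementation.

```python
# Minimal frame-differencing sketch: mean absolute intensity change between
# consecutive frames, with motion onset defined as the first frame whose change
# exceeds a threshold. The threshold and the synthetic video are illustrative.
import numpy as np

def motion_onset(frames: np.ndarray, threshold: float = 2.0) -> int:
    """Return the index of the first frame that differs from the previous frame
    by more than `threshold` mean absolute grayscale units, or -1 if none does."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    above = np.flatnonzero(diffs > threshold)
    return int(above[0]) + 1 if above.size else -1

# Synthetic stand-in for a video: 30 identical frames, then movement from frame 30.
rng = np.random.default_rng(1)
video = np.tile(rng.integers(0, 255, (1, 64, 64)), (60, 1, 1)).astype(float)
video[30:] += rng.normal(0, 20, (30, 64, 64))

print("detected motion onset at frame", motion_onset(video))
```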
Affiliation(s)
- Christian Georgiev
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Thomas Legrand
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Scott J. Mongold
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Manoa Fiedler-Valenta
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Frédéric Guittard
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon
- Laboratory of Neurophysiology and Movement Biomechanics, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Laboratoire de Neuroanatomie et Neuroimagerie Translationnelles, UNI – ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastian, Spain
3. Dima DC, Janarthanan S, Culham JC, Mohsenzadeh Y. Shared representations of human actions across vision and language. Neuropsychologia 2024;202:108962. PMID: 39047974. DOI: 10.1016/j.neuropsychologia.2024.108962.
Abstract
Humans can recognize and communicate about many actions performed by others. How are actions organized in the mind, and is this organization shared across vision and language? We collected similarity judgments of human actions depicted through naturalistic videos and sentences, and tested four models of action categorization, defining actions at different levels of abstraction ranging from specific (action verb) to broad (action target: whether an action is directed towards an object, another person, or the self). The similarity judgments reflected a shared organization of action representations across videos and sentences, determined mainly by the target of actions, even after accounting for other semantic features. Furthermore, language model embeddings predicted the behavioral similarity of action videos and sentences, and captured information about the target of actions alongside unique semantic information. Together, our results show that action concepts are similarly organized in the mind across vision and language, and that this organization reflects socially relevant goals.
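A minimal sketch of the kind of comparison described here, correlating a behavioral dissimilarity matrix of actions with candidate model matrices such as an action-target model and a language-model-embedding model, is shown below; all matrices are random placeholders and the feature dimensions are assumptions.

```python
# Illustrative comparison of a behavioral dissimilarity matrix of actions with two
# candidate model matrices: an action-target model and a language-model-embedding
# model. All matrices below are random placeholders with assumed dimensions.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_items = 40

behavior_rdm = squareform(pdist(rng.random((n_items, 8))))                       # behavioral judgments
target_rdm = squareform(pdist(rng.integers(0, 3, (n_items, 1)).astype(float)))   # object/person/self
embedding_rdm = squareform(pdist(rng.random((n_items, 300)), metric="cosine"))   # sentence embeddings

def upper_triangle(m):
    return m[np.triu_indices_from(m, k=1)]

for name, model_rdm in [("action target", target_rdm), ("language embeddings", embedding_rdm)]:
    rho, p = spearmanr(upper_triangle(behavior_rdm), upper_triangle(model_rdm))
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3g})")
```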
Affiliation(s)
- Diana C Dima
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada.
- Jody C Culham
- Dept of Psychology, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
4. Rolls ET, Feng J, Zhang R. Selective activations and functional connectivities to the sight of faces, scenes, body parts and tools in visual and non-visual cortical regions leading to the human hippocampus. Brain Struct Funct 2024;229:1471-1493. PMID: 38839620. PMCID: PMC11176242. DOI: 10.1007/s00429-024-02811-6.
Abstract
Connectivity maps are now available for the 360 cortical regions in the Human Connectome Project Multimodal Parcellation atlas. Here we add function to these maps by measuring selective fMRI activations and functional connectivity increases to stationary visual stimuli of faces, scenes, body parts and tools from 956 HCP participants. Faces activate regions in the ventrolateral visual cortical stream (FFC), in the superior temporal sulcus (STS) visual stream for face and head motion, and in inferior parietal visual (PGi) and somatosensory (PF) regions. Scenes activate ventromedial visual stream VMV and PHA regions in the parahippocampal scene area, medial (7m) and lateral parietal (PGp) regions, and the reward-related medial orbitofrontal cortex. Body parts activate the inferior temporal cortex object regions (TE1p, TE2p), but also visual motion regions (MT, MST, FST), the inferior parietal visual (PGi, PGs) and somatosensory (PF) regions, and the unpleasant-related lateral orbitofrontal cortex. Tools activate an intermediate ventral stream area (VMV3, VVC, PHA3), visual motion regions (FST), somatosensory (1, 2), and auditory (A4, A5) cortical regions. The findings add function to cortical connectivity maps and show how stationary visual stimuli activate other cortical regions related to their associations, including visual motion, somatosensory, auditory, semantic, and value-related orbitofrontal regions.
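A functional connectivity increase of the kind measured here can be illustrated by correlating two regions' BOLD time series within stimulus blocks versus the remaining timepoints; the simulated time series, coupling values, and block boundaries below are assumptions, not the HCP data or the authors' pipeline.

```python
# Toy illustration of a functional-connectivity increase: correlate two regions'
# BOLD time series during stimulus blocks versus the remaining timepoints.
# The time series, coupling values, and block boundaries are simulated assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_tr = 400
face_blocks = np.zeros(n_tr, dtype=bool)
face_blocks[50:100] = True                            # assumed face-viewing periods
face_blocks[200:250] = True

shared = rng.normal(size=n_tr)                        # common driving signal
coupling = np.where(face_blocks, 0.9, 0.2)            # stronger shared drive during face blocks
region_a = coupling * shared + rng.normal(scale=0.5, size=n_tr)
region_b = coupling * shared + rng.normal(scale=0.5, size=n_tr)

r_faces = np.corrcoef(region_a[face_blocks], region_b[face_blocks])[0, 1]
r_other = np.corrcoef(region_a[~face_blocks], region_b[~face_blocks])[0, 1]
print(f"FC during face blocks: {r_faces:.2f}; otherwise: {r_other:.2f}; increase: {r_faces - r_other:.2f}")
```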
Affiliation(s)
- Edmund T Rolls
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, 200403, China.
- Oxford Centre for Computational Neuroscience, Oxford, UK.
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, 200403, China
- Ruohan Zhang
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
5. Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024;160:105650. PMID: 38574782. DOI: 10.1016/j.neubiorev.2024.105650.
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream utilising connectivity from the temporal and parietal visual cortex responds to moving objects and faces, and face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also provides coordinate transforms to provide inputs to the parahippocampal scene area for self-motion update of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China.
6. McLeod J, Chavan A, Lee H, Sattari S, Kurry S, Wake M, Janmohamed Z, Hodges NJ, Virji-Babul N. Distinct Effects of Brain Activation Using tDCS and Observational Practice: Implications for Motor Rehabilitation. Brain Sci 2024;14:175. PMID: 38391749. PMCID: PMC10886768. DOI: 10.3390/brainsci14020175.
Abstract
Complex motor skills can be acquired by observing a model without physical practice. Transcranial direct-current stimulation (tDCS) applied to the primary motor cortex (M1) also facilitates motor learning. However, the effectiveness of observational practice for bimanual coordination skills is debated. We compared behavioural performance and brain causal connectivity patterns following three interventions applied while acquiring a bimanual two-ball juggling skill: primary motor cortex stimulation (M1-tDCS), action observation (AO), and a combination of the two (AO+M1-tDCS). Thirty healthy young adults with no juggling experience were randomly assigned to video observation of a skilled juggler, anodal M1-tDCS, or video observation combined with M1-tDCS. Thirty trials of juggling were performed and scored after the intervention. Resting-state EEG data were collected before and after the intervention. An information flow rate measure was applied to EEG source data to quantify causal connectivity. The two observation groups were more accurate than the tDCS-alone group. In the AO condition, there was strong information exchange from (L) parietal to (R) parietal regions, strong bidirectional information exchange between (R) parietal and (R) occipital regions, and an extensive network of activity that was (L)-lateralized. The M1-tDCS condition was characterized by bilateral long-range connections, with the strongest information exchange from the (R) occipital region to the (R) temporal and (L) occipital regions. AO+M1-tDCS induced strong bidirectional information exchange in occipital and temporal regions of both hemispheres, and it was the only condition characterized by information exchange between the (R) frontal and central regions. This study provides new results about the distinct network dynamics of stimulating the brain for skill acquisition, offering insights for motor rehabilitation.
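The study quantifies directed coupling with an information flow rate measure applied to EEG source signals; as a rough stand-in rather than that measure, the sketch below runs an ordinary pairwise Granger causality F-test on two simulated source time series, with assumed lag order and coefficients.

```python
# Stand-in for the directed-connectivity analysis: an ordinary pairwise Granger
# causality F-test on two simulated source time series (NOT the information flow
# rate measure used in the study). Lag order and coefficients are assumptions.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(4)
n, lag = 2000, 2
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(lag, n):                    # y is driven by the past of x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - lag] + rng.normal(scale=0.5)

def lagged(series, lags, n_obs):
    """Columns of past values: series(t-1), ..., series(t-lags)."""
    return np.column_stack([series[lags - k : lags - k + n_obs] for k in range(1, lags + 1)])

n_obs = n - lag
Y = y[lag:]
X_restricted = np.column_stack([np.ones(n_obs), lagged(y, lag, n_obs)])   # past of y only
X_full = np.column_stack([X_restricted, lagged(x, lag, n_obs)])           # plus past of x

rss_r = np.sum((Y - X_restricted @ np.linalg.lstsq(X_restricted, Y, rcond=None)[0]) ** 2)
rss_f = np.sum((Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]) ** 2)

df1, df2 = lag, n_obs - X_full.shape[1]
F = ((rss_r - rss_f) / df1) / (rss_f / df2)
print(f"x -> y: Granger F = {F:.1f}, p = {f_dist.sf(F, df1, df2):.3g}")
```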
Affiliation(s)
- Julianne McLeod
- Rehabilitation Science, Faculty of Medicine, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Anuj Chavan
- Electronics and Telecommunication Engineering, Sardar Patel Institute of Technology, Mumbai 400058, India
- Harvey Lee
- Schulich School of Medicine & Dentistry, Western University, London, ON N6A 5C1, Canada
- Sahar Sattari
- Biomedical Engineering, Faculty of Applied Science and Faculty of Medicine, University of British Columbia, Vancouver, BC V6T 2B9, Canada
- Simrut Kurry
- Neuroscience, Faculty of Science, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Miku Wake
- Neuroscience, Faculty of Science, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Zia Janmohamed
- Neuroscience, Faculty of Science, McGill University, Montreal, QC H3A 2B4, Canada
- Nicola Jane Hodges
- School of Kinesiology, Faculty of Education, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Naznin Virji-Babul
- Physical Therapy, Faculty of Medicine, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Djavad Mowafaghian Centre for Brain Health, Vancouver, BC V6T 1Z3, Canada
7. Nizamoglu H, Urgen BA. Neural processing of bottom-up perception of biological motion under attentional load. Vision Res 2024;214:108328. PMID: 37926626. DOI: 10.1016/j.visres.2023.108328.
Abstract
Given its importance for survival and its social significance, biological motion (BM) perception is assumed to occur automatically. Previous behavioral results showed that task-irrelevant BM in the periphery interfered with task performance at the fovea. Under selective attention, BM perception is supported by a network of regions including the occipito-temporal (OTC), parietal, and premotor cortices. Retinotopy studies using BM stimuli have shown distinct maps for its processing under and away from selective attention. Based on these findings, we investigated how bottom-up perception of BM is processed in the human brain under attentional load when BM is shown away from the focus of attention as a task-irrelevant stimulus. Participants (N = 31) underwent an fMRI study in which they performed an attentionally demanding visual detection task at the fovea while intact or scrambled point-light displays of BM were shown in the periphery. Our results showed a main effect of attentional load in fronto-parietal regions, and both univariate activity maps and multivariate pattern analysis supported attentional load modulation of responses to the task-irrelevant peripheral stimuli. However, this effect was not specific to intact BM stimuli and generalized to motion stimuli, as evidenced by involvement of the motion-sensitive OTC in the presence of dynamic stimuli in the periphery. These results confirm and extend previous work by showing that task-irrelevant distractors can be processed by stimulus-specific regions when enough attentional resources are available. We discuss the implications of these results for future studies.
Affiliation(s)
- Hilal Nizamoglu
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Psychology, Justus Liebig University in Giessen, Giessen, Germany.
- Burcu A Urgen
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Psychology, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center and National Magnetic Resonance Imaging Center, Bilkent University, Ankara, Turkey
8. Rolls ET, Deco G, Huang CC, Feng J. The human posterior parietal cortex: effective connectome, and its relation to function. Cereb Cortex 2023;33:3142-3170. PMID: 35834902. PMCID: PMC10401905. DOI: 10.1093/cercor/bhac266.
Abstract
The effective connectivity between 21 regions in the human posterior parietal cortex and 360 cortical regions was measured in 171 Human Connectome Project (HCP) participants using the HCP atlas, and complemented with functional connectivity and diffusion tractography. Intraparietal areas LIP, VIP, MIP, and AIP have connectivity from early cortical visual regions and to visuomotor regions such as the frontal eye fields, consistent with functions in eye saccades and tracking. Five superior parietal area 7 regions receive inputs from similar areas and from the intraparietal areas, but also receive somatosensory inputs and connect with premotor areas including area 6, consistent with functions in performing actions to reach for, grasp, and manipulate objects. In the anterior inferior parietal cortex, PFop, PFt, and PFcm are mainly somatosensory, and PF in addition receives visuo-motor and visual object information and is implicated in multimodal shape and body image representations. In the posterior inferior parietal cortex, PFm and PGs combine visuo-motor, visual object, and reward input and connect with the hippocampal system. PGi in addition provides a route to motion-related superior temporal sulcus regions involved in social interactions. PGp has connectivity with intraparietal regions involved in coordinate transforms and may be involved in idiothetic update of hippocampal visual scene representations.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, Institute of Brain and Education Innovation, East China Normal University, Shanghai 200602, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
9. Yargholi E, Hossein-Zadeh GA, Vaziri-Pashkam M. Two distinct networks containing position-tolerant representations of actions in the human brain. Cereb Cortex 2023;33:1462-1475. PMID: 35511702. PMCID: PMC10310977. DOI: 10.1093/cercor/bhac149.
Abstract
Humans can recognize others' actions in the social environment. This action recognition ability is rarely hindered by the movement of people in the environment. The neural basis of this position tolerance for observed actions is not fully understood. Here, we aimed to identify brain regions capable of generalizing representations of actions across different positions and investigate the representational content of these regions. In a functional magnetic resonance imaging experiment, participants viewed point-light displays of different human actions. Stimuli were presented in either the upper or the lower visual field. Multivariate pattern analysis and a surface-based searchlight approach were employed to identify brain regions that contain position-tolerant action representation: Classifiers were trained with patterns in response to stimuli presented in one position and were tested with stimuli presented in another position. Results showed above-chance classification in the left and right lateral occipitotemporal cortices, right intraparietal sulcus, and right postcentral gyrus. Further analyses exploring the representational content of these regions showed that responses in the lateral occipitotemporal regions were more related to subjective judgments, while those in the parietal regions were more related to objective measures. These results provide evidence for two networks that contain abstract representations of human actions with distinct representational content.
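The cross-position generalization test can be sketched as follows: train a classifier on response patterns to actions presented in one visual-field position and test it on patterns from the other position. The simulated voxel patterns, noise level, and two-category design below are assumptions for illustration.

```python
# Sketch of the position-tolerance test: train a classifier on response patterns to
# actions shown in one visual-field position and test on the other position.
# The voxel patterns, noise level, and two-category design are simulated assumptions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_trials, n_voxels = 80, 200
labels = rng.integers(0, 2, n_trials)                 # two action categories

category_patterns = rng.normal(size=(2, n_voxels))    # category signal shared across positions
upper_field = category_patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
lower_field = category_patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = LinearSVC(dual=False).fit(upper_field, labels)  # train on one position
accuracy = clf.score(lower_field, labels)             # test on the other position
print(f"cross-position decoding accuracy: {accuracy:.2f}")
```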
Affiliation(s)
- Elahé Yargholi
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran 1956836484, Iran
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven 3714, Belgium
- Gholam-Ali Hossein-Zadeh
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran 1956836484, Iran
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health (NIMH), Bethesda, MD 20814, United States
10. Rolls ET, Wirth S, Deco G, Huang C, Feng J. The human posterior cingulate, retrosplenial, and medial parietal cortex effective connectome, and implications for memory and navigation. Hum Brain Mapp 2023;44:629-655. PMID: 36178249. PMCID: PMC9842927. DOI: 10.1002/hbm.26089.
Abstract
The human posterior cingulate, retrosplenial, and medial parietal cortex are involved in memory and navigation. The functional anatomy underlying these cognitive functions was investigated by measuring the effective connectivity of these Posterior Cingulate Division (PCD) regions in the Human Connectome Project-MMP1 atlas in 171 HCP participants, and complemented with functional connectivity and diffusion tractography. First, the postero-ventral parts of the PCD (31pd, 31pv, 7m, d23ab, and v23ab) have effective connectivity with the temporal pole, inferior temporal visual cortex, cortex in the superior temporal sulcus implicated in auditory and semantic processing, with the reward-related vmPFC and pregenual anterior cingulate cortex, with the inferior parietal cortex, and with the hippocampal system. This connectivity implicates it in hippocampal episodic memory, providing routes for "what," reward and semantic schema-related information to access the hippocampus. Second, the antero-dorsal parts of the PCD (especially 31a and 23d, PCV, and also RSC) have connectivity with early visual cortical areas including those that represent spatial scenes, with the superior parietal cortex, with the pregenual anterior cingulate cortex, and with the hippocampal system. This connectivity implicates it in the "where" component for hippocampal episodic memory and for spatial navigation. The dorsal-transitional-visual (DVT) and ProStriate regions where the retrosplenial scene area is located have connectivity from early visual cortical areas to the parahippocampal scene area, providing a ventromedial route for spatial scene information to reach the hippocampus. These connectivities provide important routes for "what," reward, and "where" scene-related information for human hippocampal episodic memory and navigation. The midcingulate cortex provides a route from the anterior dorsal parts of the PCD and the supracallosal part of the anterior cingulate cortex to premotor regions.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China
- Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
- Sylvia Wirth
- Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, CNRS and University of Lyon, Bron, France
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Fudan University, Ministry of Education, Shanghai, China
- Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
11. A Large Video Set of Natural Human Actions for Visual and Cognitive Neuroscience Studies and Its Validation with fMRI. Brain Sci 2022;13:61. PMID: 36672043. PMCID: PMC9856703. DOI: 10.3390/brainsci13010061.
Abstract
The investigation of the perception of others' actions and underlying neural mechanisms has been hampered by the lack of a comprehensive stimulus set covering the human behavioral repertoire. To fill this void, we present a video set showing 100 human actions recorded in natural settings, covering the human repertoire except for emotion-driven (e.g., sexual) actions and those involving implements (e.g., tools). We validated the set using fMRI and showed that observation of the 100 actions activated the well-established action observation network. We also quantified the videos' low-level visual features (luminance, optic flow, and edges). Thus, this comprehensive video set is a valuable resource for perceptual and neuronal studies.
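Two of the low-level features mentioned (mean luminance and edge density) can be quantified per frame with a few lines of code, as sketched below; the frames are random placeholders, the edge threshold is an assumption, and optic flow (the third feature) would additionally require an optical-flow estimator.

```python
# Per-frame quantification of two of the low-level features mentioned above (mean
# luminance and edge density); the frames are random placeholders and the edge
# threshold is an assumption. Optic flow would require an optical-flow estimator.
import numpy as np

rng = np.random.default_rng(6)
frames = rng.integers(0, 256, size=(90, 120, 160)).astype(float)   # grayscale video stand-in

mean_luminance = frames.mean(axis=(1, 2))                          # one value per frame

grad_y, grad_x = np.gradient(frames, axis=(1, 2))                  # spatial intensity gradients
gradient_magnitude = np.hypot(grad_x, grad_y)
edge_density = (gradient_magnitude > 50).mean(axis=(1, 2))         # fraction of "edge" pixels

print(f"luminance range: {mean_luminance.min():.1f} to {mean_luminance.max():.1f}")
print(f"median edge density: {np.median(edge_density):.3f}")
```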
12. Watanabe R, Kim Y, Kuruma H, Takahashi H. Imitation encourages empathic capacity toward other individuals with physical disabilities. Neuroimage 2022;264:119710. PMID: 36283544. DOI: 10.1016/j.neuroimage.2022.119710.
Abstract
Many people have difficulty empathizing with others who have dissimilar characteristics, such as physical disabilities. We hypothesized that people with no disabilities imitating the movements of individuals with disabilities could improve their empathic capacity toward the difficulties those individuals face. To evaluate this hypothesis, we used functional magnetic resonance imaging to measure the neural activity patterns of 26 healthy participants while they felt the difficulties of individuals with hemiplegia by adopting their perspective. The participants initially either imitated or observed hemiplegic hand movements shown in video clips. Subsequently, the videos were rewatched and their difficulties were rated. Analysis of the subjective rating scores indicated that after imitating the hemiplegic movements, the participants empathized with the difficulties of hemiplegia better than when they had simply observed the movements. A cross-validation approach with multivoxel pattern analysis demonstrated that information about the effect of imitation on empathizing with these difficulties was represented in specific activation patterns of brain regions involved in the mirror neuron system and cognitive empathy, relative to conditions that did not contain this information. A cross-classification approach detected activation patterns in brain regions involved in affective and cognitive empathy that were shared between imitating the hemiplegic movements and subsequently feeling them, indicating a common representation related to these two types of empathy across imitation and feeling. Furthermore, representational similarity analysis revealed that activity patterns in the anterior cingulate cortex linked to affective empathy were tuned to the subjective assessment of hemiplegic movements. Our findings indicate that imitating the movements of individuals with hemiplegia triggered an affective empathic response and improved the cognitive empathic response toward them. The affective empathic response also linked the subjective assessment to the difficulties of hemiplegia and was especially modulated by the experience of imitation. Imitating the movements of individuals with disabilities thus likely encourages empathic capacity in both affective and cognitive aspects, helping people without disabilities to feel more precisely what individuals with disabilities are feeling.
Affiliation(s)
- Rui Watanabe
- Department of Psychiatry and Behavioral Sciences, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yusima, Bunkyo-ku, Tokyo 113-8549, Japan; Department of Physical Therapy Science, Division of Human Health Science, Graduate School of Tokyo Metropolitan University, 7-2-10 Higashiogu, Arakawa-ku, Tokyo 116-8551, Japan
- Yuri Kim
- Department of Diagnostics and Therapeutics for Brain Diseases, Molecular Neuroscience Research Center, Shiga University of Medical Science, Setatsukinowacho, Otsu, Shiga 520-2121, Japan
- Hironobu Kuruma
- Department of Physical Therapy Science, Division of Human Health Science, Graduate School of Tokyo Metropolitan University, 7-2-10 Higashiogu, Arakawa-ku, Tokyo 116-8551, Japan
- Hidehiko Takahashi
- Department of Psychiatry and Behavioral Sciences, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yusima, Bunkyo-ku, Tokyo 113-8549, Japan
13. Shahdloo M, Çelik E, Urgen BA, Gallant JL, Çukur T. Task-Dependent Warping of Semantic Representations during Search for Visual Action Categories. J Neurosci 2022;42:6782-6799. PMID: 35863889. PMCID: PMC9436022. DOI: 10.1523/jneurosci.1372-21.2022.
Abstract
Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (one female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity toward target actions and that tuning shifts are a general feature of conceptual representations in the brain. SIGNIFICANCE STATEMENT: The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans that relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here, we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.
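The notion of a tuning shift can be illustrated by fitting a voxel's responses to a set of semantic features separately under two attention conditions and comparing the weight on the attended feature; the simulated features, responses, and ridge penalty below are assumptions, not the authors' voxelwise modelling pipeline.

```python
# Toy illustration of a voxelwise tuning shift: fit a voxel's responses to semantic
# features separately under 'attend' and 'ignore' conditions and compare the weight
# on the attended feature. Features, responses, and the ridge penalty are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n_time, n_features = 500, 20
target_feature = 3                                    # assumed index of the searched-for category

X = rng.normal(size=(n_time, n_features))             # semantic feature time courses
base_weights = rng.normal(size=n_features)

# Simulated voxel whose weight on the target feature grows when that category is attended.
y_attend = X @ (base_weights + 1.5 * np.eye(n_features)[target_feature]) + rng.normal(size=n_time)
y_ignore = X @ base_weights + rng.normal(size=n_time)

w_attend = Ridge(alpha=1.0).fit(X, y_attend).coef_
w_ignore = Ridge(alpha=1.0).fit(X, y_ignore).coef_

print(f"tuning shift toward attended feature: {w_attend[target_feature] - w_ignore[target_feature]:.2f}")
```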
Affiliation(s)
- Mo Shahdloo
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford OX3 9DU, United Kingdom
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, 06800 Ankara, Turkey
- Emin Çelik
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Burcu A Urgen
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Department of Psychology, Bilkent University, 06800 Ankara, Turkey
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
- Tolga Çukur
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, 06800 Ankara, Turkey
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
14. Michalowski B, Buchwald M, Klichowski M, Ras M, Kroliczak G. Action goals and the praxis network: an fMRI study. Brain Struct Funct 2022;227:2261-2284. PMID: 35731447. PMCID: PMC9418102. DOI: 10.1007/s00429-022-02520-y.
Abstract
The praxis representation network (PRN) of the left cerebral hemisphere is typically linked to the control of functional interactions with familiar tools. Surprisingly, little is known about PRN engagement in the planning and execution of tool-directed actions motivated by non-functional but purposeful action goals. Here we used functional neuroimaging to perform both univariate and multi-voxel pattern analyses (MVPA) in 20 right-handed participants who planned and later executed, with their dominant and non-dominant hands, disparate grasps of tools for different goals, including: (1) planning simple vs. demanding functional grasps of conveniently vs. inconveniently oriented tools with an intention to immediately use them, (2) planning simple but non-functional grasps of inconveniently oriented tools with a goal to pass them to a different person, (3) planning reaching movements directed at such tools with an intention to move/push them with the back of the hand, and (4) pantomimed execution of the earlier planned tasks. While PRN contributed to the studied interactions with tools, the engagement of its critical nodes, and/or complementary right hemisphere processing, was differently modulated by task type. For example, planning non-functional/structural grasp-to-pass movements of inconveniently oriented tools, regardless of the hand, invoked the left parietal and prefrontal nodes significantly more than simple, non-demanding functional grasps. MVPA corroborated the decoding capabilities of critical PRN areas and some of their right hemisphere counterparts. Our findings shed new light on how the performance of disparate action goals influences the extraction of object affordances, and how or to what extent it modulates neural activity within parieto-frontal brain networks.
Affiliation(s)
- Bartosz Michalowski
- Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Wydział Psychologii i Kognitywistyki UAM, ul. Szamarzewskiego 89, 60-568, Poznan, Poland
- Mikolaj Buchwald
- Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Wydział Psychologii i Kognitywistyki UAM, ul. Szamarzewskiego 89, 60-568, Poznan, Poland
- Michal Klichowski
- Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Wydział Psychologii i Kognitywistyki UAM, ul. Szamarzewskiego 89, 60-568, Poznan, Poland; Learning Laboratory, Faculty of Educational Studies, Adam Mickiewicz University, Poznan, Poland
- Maciej Ras
- Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Wydział Psychologii i Kognitywistyki UAM, ul. Szamarzewskiego 89, 60-568, Poznan, Poland
- Gregory Kroliczak
- Action and Cognition Laboratory, Faculty of Psychology and Cognitive Science, Adam Mickiewicz University, Wydział Psychologii i Kognitywistyki UAM, ul. Szamarzewskiego 89, 60-568, Poznan, Poland