1
McEwan J, Kritikos A, Zeljko M. Crossmodal correspondence of elevation/pitch and size/pitch is driven by real-world features. Atten Percept Psychophys 2024;86:2821-2833. PMID: 39461934. DOI: 10.3758/s13414-024-02975-7.
Abstract
Crossmodal correspondences are consistent associations between sensory features from different modalities; theories disagree on whether they reflect statistical correlations in the environment or stem from innate neural structures. This study addresses that question by examining whether retinotopic or representational features of stimuli induce crossmodal congruency effects. Participants completed an auditory pitch discrimination task paired with visual stimuli that were either sensory (retinotopic) or representational (scene-integrated) in nature, for both the elevation/pitch and the size/pitch correspondence. Only representational visual stimuli produced crossmodal congruency effects on pitch discrimination. These results support an environmental-statistics account, suggesting that crossmodal correspondences rely on real-world features rather than on sensory representations.
Affiliation(s)
- John McEwan
- School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Ada Kritikos
- School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
- Mick Zeljko
- School of Psychology, The University of Queensland, St. Lucia, QLD 4072, Australia
2
Yang L, Jin M, Zhang C, Qian N, Zhang M. Distributions of Visual Receptive Fields from Retinotopic to Craniotopic Coordinates in the Lateral Intraparietal Area and Frontal Eye Fields of the Macaque. Neurosci Bull 2024;40:171-181. PMID: 37573519. PMCID: PMC10838878. DOI: 10.1007/s12264-023-01097-8.
Abstract
Even though retinal images of objects change their locations following each eye movement, we perceive a stable and continuous world. One possible mechanism by which the brain achieves such visual stability is to construct a craniotopic coordinate by integrating retinal and extraretinal information. There have been several proposals on how this may be done, including eye-position modulation (gain fields) of retinotopic receptive fields (RFs) and craniotopic RFs. In the present study, we investigated coordinate systems used by RFs in the lateral intraparietal (LIP) cortex and frontal eye fields (FEF) and compared the two areas. We mapped the two-dimensional RFs of neurons in detail under two eye fixations and analyzed how the RF of a given neuron changes with eye position to determine its coordinate representation. The same recording and analysis procedures were applied to the two brain areas. We found that, in both areas, RFs were distributed from retinotopic to craniotopic representations. There was no significant difference between the distributions in the LIP and FEF. Only a small fraction of neurons was fully craniotopic, whereas most neurons were between the retinotopic and craniotopic representations. The distributions were strongly biased toward the retinotopic side but with significant craniotopic shifts. These results suggest that there is only weak evidence for craniotopic RFs in the LIP and FEF, and that transformation from retinotopic to craniotopic coordinates in these areas must rely on other factors such as gain fields.
Affiliation(s)
- Lin Yang
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
- Min Jin
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
- Cong Zhang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Ning Qian
- Department of Neuroscience and Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Mingsha Zhang
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
3
Rolls ET, Deco G, Huang CC, Feng J. The human posterior parietal cortex: effective connectome, and its relation to function. Cereb Cortex 2023;33:3142-3170. PMID: 35834902. PMCID: PMC10401905. DOI: 10.1093/cercor/bhac266.
Abstract
The effective connectivity between 21 regions of the human posterior parietal cortex and 360 cortical regions was measured in 171 Human Connectome Project (HCP) participants using the HCP atlas, and complemented with functional connectivity and diffusion tractography. Intraparietal areas LIP, VIP, MIP, and AIP receive connectivity from early visual cortical regions and connect to visuomotor regions such as the frontal eye fields, consistent with functions in eye saccades and tracking. Five superior parietal area 7 regions receive from similar areas and from the intraparietal areas, but also receive somatosensory inputs and connect with premotor areas including area 6, consistent with functions in performing actions to reach for, grasp, and manipulate objects. In the anterior inferior parietal cortex, PFop, PFt, and PFcm are mainly somatosensory, whereas PF in addition receives visuomotor and visual object information and is implicated in multimodal shape and body image representations. In the posterior inferior parietal cortex, PFm and PGs combine visuomotor, visual object, and reward input and connect with the hippocampal system. PGi in addition provides a route to motion-related superior temporal sulcus regions involved in social interactions. PGp has connectivity with intraparietal regions involved in coordinate transforms and may be involved in idiothetic update of hippocampal visual scene representations.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, Institute of Brain and Education Innovation, East China Normal University, Shanghai 200602, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
4
Rolls ET, Wirth S, Deco G, Huang C, Feng J. The human posterior cingulate, retrosplenial, and medial parietal cortex effective connectome, and implications for memory and navigation. Hum Brain Mapp 2023;44:629-655. PMID: 36178249. PMCID: PMC9842927. DOI: 10.1002/hbm.26089.
Abstract
The human posterior cingulate, retrosplenial, and medial parietal cortex are involved in memory and navigation. The functional anatomy underlying these cognitive functions was investigated by measuring the effective connectivity of these Posterior Cingulate Division (PCD) regions in the Human Connectome Project MMP1 atlas in 171 HCP participants, complemented with functional connectivity and diffusion tractography. First, the postero-ventral parts of the PCD (31pd, 31pv, 7m, d23ab, and v23ab) have effective connectivity with the temporal pole, inferior temporal visual cortex, cortex in the superior temporal sulcus implicated in auditory and semantic processing, the reward-related vmPFC and pregenual anterior cingulate cortex, the inferior parietal cortex, and the hippocampal system. This connectivity implicates them in hippocampal episodic memory, providing routes for "what," reward, and semantic schema-related information to access the hippocampus. Second, the antero-dorsal parts of the PCD (especially 31a and 23d, PCV, and also RSC) have connectivity with early visual cortical areas including those that represent spatial scenes, the superior parietal cortex, the pregenual anterior cingulate cortex, and the hippocampal system. This connectivity implicates them in the "where" component of hippocampal episodic memory and in spatial navigation. The dorsal-transitional-visual (DVT) and ProStriate regions, where the retrosplenial scene area is located, have connectivity from early visual cortical areas to the parahippocampal scene area, providing a ventromedial route for spatial scene information to reach the hippocampus. These connectivities provide important routes for "what," reward, and "where" scene-related information for human hippocampal episodic memory and navigation. The midcingulate cortex provides a route from the anterior dorsal parts of the PCD and the supracallosal part of the anterior cingulate cortex to premotor regions.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Ministry of Education, Fudan University, Shanghai, China
- Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
- Sylvia Wirth
- Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, CNRS and University of Lyon, Bron, France
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain Inspired Intelligence, Ministry of Education, Fudan University, Shanghai, China
- Fudan ISTBI-ZJNU Algorithm Centre for Brain-Inspired Intelligence, Zhejiang Normal University, Jinhua, China
5
Zhao Z, Ahissar E, Victor JD, Rucci M. Inferring visual space from ultra-fine extra-retinal knowledge of gaze position. Nat Commun 2023;14:269. PMID: 36650146. PMCID: PMC9845343. DOI: 10.1038/s41467-023-35834-4.
Abstract
It has long been debated how humans resolve fine details and perceive a stable visual world despite the incessant fixational motion of their eyes. Current theories assume these processes to rely solely on the visual input to the retina, without contributions from motor and/or proprioceptive sources. Here we show that contrary to this widespread assumption, the visual system has access to high-resolution extra-retinal knowledge of fixational eye motion and uses it to deduce spatial relations. Building on recent advances in gaze-contingent display control, we created a spatial discrimination task in which the stimulus configuration was entirely determined by oculomotor activity. Our results show that humans correctly infer geometrical relations in the absence of spatial information on the retina and accurately combine high-resolution extraretinal monitoring of gaze displacement with retinal signals. These findings reveal a sensory-motor strategy for encoding space, in which fine oculomotor knowledge is used to interpret the fixational input to the retina.
Affiliation(s)
- Zhetuo Zhao
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ehud Ahissar
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY, USA
- Michele Rucci
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
6
Jovanovic L, McGraw PV, Roach NW, Johnston A. The spatial properties of adaptation-induced distance compression. J Vis 2022;22:7. PMID: 36223110. PMCID: PMC9583746. DOI: 10.1167/jov.22.11.7.
Abstract
Exposure to a dynamic texture reduces the perceived separation between objects, altering the mapping between physical relations in the environment and their neural representations. Here we investigated the spatial tuning and spatial frame of reference of this aftereffect to understand the stage(s) of processing where adaptation-induced changes occur. In Experiment 1, we measured apparent separation at different positions relative to the adapted area, revealing a strong but tightly tuned compression effect. We next tested the spatial frame of reference of the effect, either by introducing a gaze shift between adaptation and test phase (Experiment 2) or by decoupling the spatial selectivity of adaptation in retinotopic and world-centered coordinates (Experiment 3). Results across the two experiments indicated that both retinotopic and world-centered adaptation effects can occur independently. Spatial attention to the location of the adaptor alone could not account for the world-centered transfer we observed, and retinotopic adaptation did not transfer to world-centered coordinates after a saccade (Experiment 4). Finally, we found that aftereffects in different reference frames have a similar, narrow spatial tuning profile (Experiment 5). Together, our results suggest that the neural representation of local separation resides early in the visual cortex, but it can also be modulated by activity in higher visual areas.
Affiliation(s)
| | - Paul V McGraw
- School of Psychology, University of Nottingham, Nottingham, UK.,
| | - Neil W Roach
- School of Psychology, University of Nottingham, Nottingham, UK.,
| | - Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, UK.,
| |
7
The posterior parietal area V6A: an attentionally-modulated visuomotor region involved in the control of reach-to-grasp action. Neurosci Biobehav Rev 2022;141:104823. PMID: 35961383. DOI: 10.1016/j.neubiorev.2022.104823.
Abstract
In the macaque, the posterior parietal area V6A is involved in the control of all phases of reach-to-grasp actions: the transport phase, given that reaching neurons are sensitive to the direction and amplitude of arm movement, and the grasping phase, since reaching neurons are also sensitive to wrist orientation and hand shaping. Reaching and grasping activity are corollary discharges which, together with the somatosensory and visual signals related to the same movement, allow V6A to act as a state estimator that signals discrepancies during the motor act in order to maintain consistency between the ongoing movement and the desired one. Area V6A is also able to encode the target of an action because of gaze-dependent visual neurons and real-position cells. Here, we advance the hypothesis that V6A also uses the spotlight of attention to guide goal-directed movements of the hand, and hosts a priority map that is specific for the guidance of reaching arm movement, combining bottom-up inputs such as visual responses with top-down signals such as reaching plans.
8
Vision for action: thalamic and cortical inputs to the macaque superior parietal lobule. Brain Struct Funct 2021;226:2951-2966. PMID: 34524542. PMCID: PMC8541979. DOI: 10.1007/s00429-021-02377-7.
Abstract
The dorsal visual stream, the cortical circuit that in the primate brain is mainly dedicated to the visual control of actions, is split into two routes, a lateral and a medial one, both involved in coding different aspects of the sensorimotor control of actions. The lateral route, named the "lateral grasping network", is mainly involved in the control of the distal part of prehension, namely grasping and manipulation. The medial route, named the "reach-to-grasp network", is involved in the control of the full deployment of the prehension act, from the direction of arm movement to the shaping of the hand according to the object to be grasped. In macaque monkeys, the reach-to-grasp network (the target of this review) includes areas of the superior parietal lobule (SPL) that host visual and somatosensory neurons well suited to control goal-directed limb movements toward stationary as well as moving objects. After a brief summary of the functional properties of neurons in these areas, we analyze their cortical and thalamic inputs, identified using retrograde neuronal tracers separately injected into the SPL areas V6, V6A, PEc, and PE. These areas receive visual and somatosensory information distributed in a caudorostral, visuosomatic trend, and some of them are directly connected with the dorsal premotor cortex. This review is particularly focused on the origin and type of visual information reaching the SPL, and on the functional role this information can play in guiding limb interaction with objects in structured and dynamic environments.
9
The Neural Bases of Egocentric Spatial Representation for Extracorporeal and Corporeal Tasks: An fMRI Study. Brain Sci 2021;11:963. PMID: 34439582. PMCID: PMC8394366. DOI: 10.3390/brainsci11080963.
Abstract
(1) Background: Humans use reference frames to elaborate the spatial representations needed for all space-oriented behaviors such as postural control, walking, or grasping. We investigated the neural bases of two egocentric tasks: the extracorporeal subjective straight-ahead task (SSA) and the corporeal subjective longitudinal body plane task (SLB) in healthy participants using functional magnetic resonance imaging (fMRI). This work was an ancillary part of a study involving stroke patients. (2) Methods: Seventeen healthy participants underwent a 3T fMRI examination. During the SSA, participants had to divide the extracorporeal space into two equal parts. During the SLB, they had to divide their body along the midsagittal plane. (3) Results: Both tasks elicited a parieto-occipital network encompassing the superior and inferior parietal lobules and lateral occipital cortex, with a right hemispheric dominance. Additionally, the SLB > SSA contrast revealed activations of the left angular and premotor cortices. These areas, involved in attention and motor imagery suggest a greater complexity of corporeal processes engaging body representation. (4) Conclusions: This was the first fMRI study to explore the SLB-related activity and its complementarity with the SSA. Our results pave the way for the exploration of spatial cognitive impairment in patients.
10
Masselink J, Lappe M. Visuomotor learning from postdictive motor error. eLife 2021;10:e64278. PMID: 33687328. PMCID: PMC8057815. DOI: 10.7554/eLife.64278.
Abstract
Sensorimotor learning adapts motor output to maintain movement accuracy. For saccadic eye movements, learning also alters space perception, suggesting a dissociation between the performed saccade and its internal representation derived from corollary discharge (CD). This is critical since learning is commonly believed to be driven by CD-based visual prediction error. We estimate the internal saccade representation through pre- and trans-saccadic target localization, showing that it decouples from the actual saccade during learning. We present a model that explains motor and perceptual changes by collective plasticity of spatial target percept, motor command, and a forward dynamics model that transforms CD from motor into visuospatial coordinates. We show that learning does not follow visual prediction error but instead a postdictive update of space after saccade landing. We conclude that trans-saccadic space perception guides motor learning via CD-based postdiction of motor error under the assumption of a stable world.
Affiliation(s)
- Jana Masselink
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
11
Breveglieri R, Bosco A, Borgomaneri S, Tessari A, Galletti C, Avenanti A, Fattori P. Transcranial Magnetic Stimulation Over the Human Medial Posterior Parietal Cortex Disrupts Depth Encoding During Reach Planning. Cereb Cortex 2021;31:267-280. PMID: 32995831. DOI: 10.1093/cercor/bhaa224.
Abstract
Accumulating evidence supports the view that the medial part of the posterior parietal cortex (mPPC) is involved in the planning of reaching, but while many studies have investigated reaching toward different directions, only a few have examined reaching at different depths. Here, we investigated the causal role of the mPPC (putatively, human area V6A-hV6A) in encoding the depth and direction of reaching. Specifically, we applied single-pulse transcranial magnetic stimulation (TMS) over the left hV6A at different time points while 15 participants were planning immediate, visually guided reaching using different eye-hand configurations. We found that TMS delivered over hV6A 200 ms after the Go signal affected the encoding of the depth of reaching, decreasing the accuracy of movements toward targets located farther than the gazed position, but only when those targets were also far from the body. The effectiveness of both retinotopic (farther than the gaze) and spatial (far from the body) position is in agreement with the presence in monkey V6A of neurons employing retinotopic, spatial, or mixed reference frames during reach planning. This work provides the first causal evidence of the critical role of hV6A in the planning of visually guided reaching movements in depth.
Affiliation(s)
- Rossella Breveglieri
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Annalisa Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Sara Borgomaneri
- Center for Studies and Research in Cognitive Neuroscience, University of Bologna, 47521 Cesena, Italy
- IRCCS Santa Lucia Foundation, 00179 Rome, Italy
- Alessia Tessari
- Department of Psychology, University of Bologna, 40127 Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Alessio Avenanti
- Center for Studies and Research in Cognitive Neuroscience, University of Bologna, 47521 Cesena, Italy
- Center for Research in Neuropsychology and Cognitive Neurosciences, Catholic University of Maule, 3460000 Talca, Chile
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
12
Greulich RS, Adam R, Everling S, Scherberger H. Shared functional connectivity between the dorso-medial and dorso-ventral streams in macaques. Sci Rep 2020;10:18610. PMID: 33122655. PMCID: PMC7596572. DOI: 10.1038/s41598-020-75219-x.
Abstract
Manipulation of an object requires us to transport our hand toward the object (reach) and close our digits around that object (grasp). In current models, reach-related information is propagated in the dorso-medial stream from posterior parietal area V6A to the medial intraparietal area, dorsal premotor cortex, and primary motor cortex. Grasp-related information is processed in the dorso-ventral stream from the anterior intraparietal area to ventral premotor cortex and the hand area of primary motor cortex. However, recent studies have cast doubt on the validity of this division into separate processing streams. We investigated the whole-brain functional connectivity of these areas in 10 male rhesus macaques using resting-state fMRI at 7 T. Although we found a clear separation between dorso-medial and dorso-ventral network connectivity in support of the two-stream hypothesis, we also found evidence of shared connectivity between these networks. The dorso-ventral network was distinctly correlated with high-order somatosensory areas and feeding-related areas, whereas the dorso-medial network was correlated with visual areas and trunk/hindlimb motor areas. Shared connectivity was found in the superior frontal and precentral gyrus, central sulcus, intraparietal sulcus, precuneus, and insular cortex. These results suggest that while the sensorimotor processing streams are functionally separated, they can access information through shared areas.
Affiliation(s)
- R Stefan Greulich
- Deutsches Primatenzentrum GmbH, Kellnerweg 4, 37077 Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
- Ramina Adam
- Robarts Research Institute, University of Western Ontario, London, Canada
- Graduate Program in Neuroscience, University of Western Ontario, London, Canada
- Stefan Everling
- Robarts Research Institute, University of Western Ontario, London, Canada
- Department of Physiology and Pharmacology, University of Western Ontario, London, Canada
- Hansjörg Scherberger
- Deutsches Primatenzentrum GmbH, Kellnerweg 4, 37077 Göttingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Göttingen, Germany
13
Diomedi S, Vaccari FE, Filippini M, Fattori P, Galletti C. Mixed Selectivity in Macaque Medial Parietal Cortex during Eye-Hand Reaching. iScience 2020;23:101616. PMID: 33089104. PMCID: PMC7559278. DOI: 10.1016/j.isci.2020.101616.
Abstract
The activity of neurons in the medial posterior parietal area V6A of macaque monkeys is modulated by many aspects of the reach task. Past research mostly focused on the modulatory effect of single parameters upon the activity of V6A cells. Here, we used Generalized Linear Models (GLMs) to simultaneously test the contribution of several factors upon V6A cells during a fix-to-reach task. This approach resulted in the definition of a representative "functional fingerprint" for each neuron. We first studied how the features are distributed in the population. Our analysis highlighted the virtual absence of units strictly selective for only one factor and revealed that most cells are characterized by "mixed selectivity." Then, exploiting our GLM framework, we investigated the dynamics of spatial parameters encoded within V6A. We found that the tuning is not static but changes along the trial, indicating the sequential occurrence of visuospatial transformations that help guide arm movement.
Highlights:
- The parietal cortex integrates a variety of sensorimotor inputs to guide reaching
- GLMs disentangled the effect of various reaching parameters upon cell activity
- V6A neurons were not functionally clustered, but characterized by mixed selectivity
- Spatial selectivity was dynamic and reached its peak during the movement phase
Affiliation(s)
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Francesco E. Vaccari
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Matteo Filippini (corresponding author)
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Patrizia Fattori (corresponding author)
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
14
Freud E, Behrmann M, Snow JC. What Does Dorsal Cortex Contribute to Perception? Open Mind (Camb) 2020;4:40-56. PMID: 33225195. PMCID: PMC7672309. DOI: 10.1162/opmi_a_00033.
Abstract
According to the influential "Two Visual Pathways" hypothesis, the cortical visual system is segregated into two pathways, with the ventral, occipitotemporal pathway subserving object perception, and the dorsal, occipitoparietal pathway subserving the visuomotor control of action. However, growing evidence suggests that the dorsal pathway also plays a functional role in object perception. In the current article, we present evidence that the dorsal pathway contributes uniquely to the perception of a range of visuospatial attributes that are not redundant with representations in ventral cortex. We describe how dorsal cortex is recruited automatically during perception, even when no explicit visuomotor response is required. Importantly, we propose that dorsal cortex may selectively process visual attributes that can inform the perception of potential actions on objects and environments, and we consider plausible developmental and cognitive mechanisms that might give rise to these representations. As such, we consider whether naturalistic stimuli, such as real-world solid objects, might engage dorsal cortex more so than simplified or artificial stimuli such as images that do not afford action, and how the use of suboptimal stimuli might limit our understanding of the functional contribution of dorsal cortex to visual perception.
Affiliation(s)
- Erez Freud: Department of Psychology and the Centre for Vision Research, York University
- Marlene Behrmann: Department of Psychology and the Neuroscience Institute, Carnegie Mellon University
15
Neupane S, Guitton D, Pack CC. Perisaccadic remapping: What? How? Why? Rev Neurosci 2020; 31:505-520. [PMID: 32242834 DOI: 10.1515/revneuro-2019-0097] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Accepted: 12/31/2019] [Indexed: 11/15/2022]
Abstract
About 25 years ago, the discovery of receptive field (RF) remapping in the parietal cortex of nonhuman primates revealed that visual RFs, widely assumed to have a fixed retinotopic organization, can change position before every saccade. Measuring such changes can be deceptively difficult. As a result, studies that followed have generated a fascinating but somewhat confusing picture of the phenomenon. In this review, we describe how observations of RF remapping depend on the spatial and temporal sampling of visual RFs and saccade directions. Further, we summarize some of the theories of how remapping might occur in neural circuitry. Finally, based on neurophysiological and psychophysical observations, we discuss the ways in which remapping information might facilitate computations in downstream brain areas.
Affiliation(s)
- Sujaya Neupane: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Daniel Guitton: Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3A2B4, Canada
- Christopher C Pack: Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3A2B4, Canada
16
Hilo-Merkovich R, Yuval-Greenberg S. The coordinate system of endogenous spatial attention during smooth pursuit. J Vis 2020; 20:26. [PMID: 32720972 PMCID: PMC7424112 DOI: 10.1167/jov.20.7.26] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 03/18/2020] [Indexed: 11/24/2022] Open
Abstract
A central question in vision is whether spatial attention is represented in an eye-centered (retinotopic) or world-centered (spatiotopic) reference frame. Most previous studies of this question focused on how coordinates are modulated across saccades. In the present study, we investigated the reference frame of attention across smooth pursuit eye movements using a goal-directed saccade task. In two experiments, participants were asked to pursue a moving target while attending to one or two grating stimuli. On each trial, one stimulus was constant in its retinal position and the other was constant in its spatial position. Upon detection of a slight change in stimulus orientation, participants were asked to stop pursuing and perform a fast saccade toward the modified stimulus. In the focused attention condition, they attended a single predefined stimulus; in the divided attention condition, they attended both. In Experiment 1, the angle of the orientation change marking the target event was constant across participants and conditions. In Experiment 2, the angle was individually adapted to equate performance across participants and conditions. Findings of the two experiments were consistent and showed that the enhancement of mean visual sensitivity in the focused relative to the divided attention condition was similar in magnitude for retinotopic and spatiotopic targets. This indicates that during smooth pursuit, endogenous attention is divided proportionally between targets in retinotopic and spatiotopic frames of reference.
Affiliation(s)
- Shlomit Yuval-Greenberg: School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel; Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
17
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020; 23:1004-1015. [PMID: 32541964 PMCID: PMC7474851 DOI: 10.1038/s41593-020-0656-0] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Accepted: 05/14/2020] [Indexed: 12/28/2022]
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
18
Navarro DM, Smithson HE, Stringer SM. A Modeling Study of the Emergence of Eye Position Gain Fields Modulating the Responses of Visual Neurons in the Brain. Front Neural Circuits 2020; 14:30. [PMID: 32528255 PMCID: PMC7264117 DOI: 10.3389/fncir.2020.00030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 04/28/2020] [Indexed: 11/13/2022] Open
Abstract
The responses of many cortical neurons to visual stimuli are modulated by the position of the eye. This form of gain modulation by eye position does not change the retinotopic selectivity of the responses, only their amplitude. Particularly in the case of cortical responses, this eye-position gain modulation has been observed to be multiplicative. Multiplicatively gain-modulated responses are crucial for encoding information relevant to high-level visual functions, such as stable spatial awareness, eye-movement planning, visuomotor behavior, and coordinate transformation. Here we first present a hardwired model of different functional forms of gain modulation, including peaked and monotonic modulation by eye position. We use a biologically realistic Gaussian function to model the influence of eye position on the internal activation of visual neurons. Next, we show how different functional forms of gain modulation by eye position may develop in a self-organizing neural network model of visual neurons. A further contribution of our work is an investigation of how the width of the eye-position tuning curve influences which forms of gain modulation develop. Our simulation results show how this width shapes the development of different forms of gain modulation of visual responses by the position of the eye.
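The multiplicative gain-field idea in this abstract can be sketched in a few lines. Below, a Gaussian retinotopic tuning curve is scaled by a Gaussian eye-position gain field; all widths and peaks are invented for illustration and are not the paper's parameters. The key property is that eye position rescales the response without shifting the preferred retinal position.

```python
import numpy as np

# Retinotopic tuning: Gaussian over retinal position, peaking at 0 deg.
retinal_pos = np.linspace(-40, 40, 81)
tuning = np.exp(-retinal_pos**2 / (2 * 10.0**2))

def gain(eye_pos, width=20.0, peak=15.0):
    # Gaussian eye-position gain field (illustrative numbers): a scalar
    # that depends on where the eye is, not on retinal position.
    return np.exp(-(eye_pos - peak)**2 / (2 * width**2))

# Multiplicative modulation: amplitude changes with eye position,
# but the preferred retinal position does not.
resp_a = tuning * gain(eye_pos=15.0)   # eye at the gain-field peak
resp_b = tuning * gain(eye_pos=-10.0)  # eye away from the peak

print(retinal_pos[np.argmax(resp_a)], retinal_pos[np.argmax(resp_b)])  # same peak location
print(resp_a.max(), resp_b.max())                                      # different amplitudes
```

Monotonic (rather than peaked) gain modulation would replace the Gaussian in `gain` with, for example, a sigmoid of eye position; the multiplicative structure stays the same.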
Affiliation(s)
- Daniel M Navarro: Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Perception Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Hannah E Smithson: Oxford Perception Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Simon M Stringer: Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
19
Zhu SD, Zhang LA, von der Heydt R. Searching for object pointers in the visual cortex. J Neurophysiol 2020; 123:1979-1994. [PMID: 32292110 DOI: 10.1152/jn.00112.2020] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We perceive objects as permanent and stable despite frequent occlusions and eye movements, but their representation in the visual cortex is neither permanent nor stable. Feature-selective cells respond only as long as objects are visible, and their responses depend on eye position. We explored the hypothesis that the system maintains object pointers that provide permanence and stability. Pointers should send facilitatory signals to the feature cells of an object, and these signals should persist across temporary occlusions and remap to compensate for image displacements caused by saccades. Here, we searched for such signals in monkey areas V2 and V4 (Macaca mulatta). We developed a new paradigm in which a monkey freely inspects an array of objects in search of reward while some of the objects are temporarily occluded by opaque drifting strips. Two types of objects were used to manipulate attention. The results were as follows. 1) Eye movements indicated a robust representation of the location and type of the occluded objects; 2) in neurons of V4, but not V2, occluded objects produced elevated activity relative to a blank condition; 3) this elevation of activity was reduced for objects that had been fixated immediately before the current fixation ('inhibition of return'); and 4) when attended, or when the target of a saccade, visible objects produced enhanced responses in V4, but occluded objects produced no modulation. Although results 1-3 confirm the hypothesis, the absence of modulation under occlusion is not consistent with it. Further experiments are needed to resolve this discrepancy.
NEW & NOTEWORTHY: The way we perceive objects as permanent contrasts with the short-lived responses of visual cortical neurons. A theory postulates pointers that give objects continuity, predicting a class of neurons that respond not only to visual objects but also when an occluded object moves into their receptive field. Here, we tested this theory with a novel paradigm in which a monkey freely scans an array of objects while some of them are transiently occluded.
Affiliation(s)
- Shude D Zhu: Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, Maryland
- Li Alex Zhang: Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, Maryland
- Rüdiger von der Heydt: Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, Maryland; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland
20
Karnath HO, Kriechel I, Tesch J, Mohler BJ, Mölbert SC. Caloric vestibular stimulation has no effect on perceived body size. Sci Rep 2019; 9:11411. [PMID: 31388079 PMCID: PMC6684593 DOI: 10.1038/s41598-019-47897-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 07/25/2019] [Indexed: 12/04/2022] Open
Abstract
It has been suggested that the vestibular system not only plays a role in our sense of balance and postural control but might also modulate higher-order body representations, such as the perceived shape and size of our body. Recent findings using virtual reality (VR) to realistically manipulate the length of whole extremities of first-person biometric avatars under vestibular stimulation did not support this assumption. It has been argued that these negative findings were due to the availability of visual feedback on the subjects' virtual arms and legs. The present study tested this hypothesis by excluding the latter information. A newly recruited group of healthy subjects had to adjust the position of blocks in the 3D space of a VR scenario such that they had the feeling they could just touch them with their left/right hand/heel. Caloric vestibular stimulation did not alter the perceived size of participants' own extremities. The findings suggest that vestibular signals do not serve to scale the internal representation of (large parts of) our body's metric properties. This is in clear contrast to the egocentric representation of our body midline, which allows us to perceive and adjust the position of our body with respect to the surroundings. These two qualia appear to belong to different systems of body representation in humans.
Affiliation(s)
- Hans-Otto Karnath: Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC, 29208, USA
- Isabel Kriechel: Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Joachim Tesch: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Betty J Mohler: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Institute of Sports Science, Technical University Darmstadt, Darmstadt, Germany
- Simone Claire Mölbert: Centre of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Psychosomatic Medicine and Psychotherapy, Medical University Hospital Tübingen, University of Tübingen, Tübingen, Germany
21
Abstract
Our vision depends upon shifting our high-resolution fovea to objects of interest in the visual field. Each saccade displaces the image on the retina, which should produce a chaotic scene with jerks occurring several times per second. It does not. This review examines how an internal signal in the primate brain (a corollary discharge) contributes to visual continuity across saccades. The article begins with a review of evidence for a corollary discharge in the monkey and evidence from inactivation experiments that it contributes to perception. The next section examines a specific neuronal mechanism for visual continuity, based on corollary discharge that is referred to as visual remapping. Both the basic characteristics of this anticipatory remapping and the factors that control it are enumerated. The last section considers hypotheses relating remapping to the perceived visual continuity across saccades, including remapping's contribution to perceived visual stability across saccades.
Affiliation(s)
- Robert H Wurtz: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892-4435, USA
22
Ugolini G, Prevosto V, Graf W. Ascending vestibular pathways to parietal areas MIP and LIPv and efference copy inputs from the medial reticular formation: Functional frameworks for body representations updating and online movement guidance. Eur J Neurosci 2019; 50:2988-3013. [DOI: 10.1111/ejn.14426] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 03/25/2019] [Accepted: 04/04/2019] [Indexed: 11/28/2022]
Affiliation(s)
- Gabriella Ugolini: Paris-Saclay Institute of Neuroscience (UMR9197), CNRS, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France
- Vincent Prevosto: Paris-Saclay Institute of Neuroscience (UMR9197), CNRS, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France; Department of Biomedical Engineering, Pratt School of Engineering, Durham, North Carolina; Department of Neurobiology, Duke School of Medicine, Duke University, Durham, North Carolina
- Werner Graf: Department of Physiology and Biophysics, Howard University, Washington, District of Columbia
23
Navarro DM, Mender BMW, Smithson HE, Stringer SM. Self-organising coordinate transformation with peaked and monotonic gain modulation in the primate dorsal visual pathway. PLoS One 2018; 13:e0207961. [PMID: 30496225 PMCID: PMC6264903 DOI: 10.1371/journal.pone.0207961] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 11/08/2018] [Indexed: 11/20/2022] Open
Abstract
We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to a head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule, but only under limited conditions. Specifically, it fails when incorporating visual input neurons with monotonic gain modulation by eye position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye position.
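A standard trace learning rule of the kind this model builds on can be sketched as follows. This is a minimal illustration with made-up parameters, not the paper's network: a postsynaptic memory trace carries activity across successive inputs, so weights strengthen onto whichever output neurons were recently active, which is what lets temporally adjacent retinal views become bound to the same head-centred output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative hyperparameters (not from the paper).
eta, alpha = 0.05, 0.8      # learning rate, trace decay
n_in, n_out = 20, 5
W = rng.random((n_out, n_in)) * 0.1   # input -> output weights
y_bar = np.zeros(n_out)               # postsynaptic memory trace

def step(x, W, y_bar):
    y = W @ x                                      # feedforward activation
    y_bar = alpha * y_bar + (1 - alpha) * y        # exponential trace of activity
    W = W + eta * np.outer(y_bar, x)               # trace-modulated Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # weight normalization
    return W, y_bar

for _ in range(100):
    x = rng.random(n_in)   # stand-in for a gain-modulated retinal input vector
    W, y_bar = step(x, W, y_bar)

print(W.shape)
```

The modified rules the paper explores alter how the trace enters this update (for example, which term carries the trace), while keeping the same basic temporal-binding principle.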
Affiliation(s)
- Daniel M. Navarro: Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom; Oxford Perception Lab, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Bedeho M. W. Mender: Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Hannah E. Smithson: Oxford Perception Lab, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
- Simon M. Stringer: Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, Oxfordshire, United Kingdom
24
Schindler A, Bartels A. Human V6 Integrates Visual and Extra-Retinal Cues during Head-Induced Gaze Shifts. iScience 2018; 7:191-197. [PMID: 30267680 PMCID: PMC6153141 DOI: 10.1016/j.isci.2018.09.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2018] [Revised: 07/13/2018] [Accepted: 09/04/2018] [Indexed: 11/18/2022] Open
Abstract
A key question in vision research concerns how the brain compensates for self-induced eye and head movements to form the world-centered, spatiotopic representations we perceive. Although human V3A and V6 integrate eye movements with vision, it is unclear which areas integrate head-motion signals with retinotopic visual representations, as fMRI typically prevents the execution of head movements. Here we examined whether human early visual areas V3A and V6 integrate these signals. A previously introduced paradigm allowed participants to move their heads during trials but stabilized the head during data acquisition, exploiting the delay between blood-oxygen-level-dependent (BOLD) and neural signals. Visual stimuli simulated either a stable environment or one with arbitrary head-coupled visual motion. Importantly, both conditions were matched in retinal and head motion. Contrasts revealed differential responses in human V6. Given the lack of vestibular responses in primate V6, these results suggest multimodal integration of vision with neck efference copy signals or proprioception in V6.
Highlights: Setup with head-mounted goggles and head movement during fMRI. Simulation of forward flow in a stable or unstable world during head rotation. Human V6 integrates visual self-motion with head-motion signals. Likely mediated by efference copy or proprioception, as V6 lacks vestibular input.
Affiliation(s)
- Andreas Schindler: Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Centre for Integrative Neuroscience & MEG Center, University of Tübingen, Tübingen 72076, Germany
- Andreas Bartels: Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
25
Garcea FE, Chen Q, Vargas R, Narayan DA, Mahon BZ. Task- and domain-specific modulation of functional connectivity in the ventral and dorsal object-processing pathways. Brain Struct Funct 2018; 223:2589-2607. [PMID: 29536173 PMCID: PMC6252262 DOI: 10.1007/s00429-018-1641-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2017] [Accepted: 03/01/2018] [Indexed: 01/08/2023]
Abstract
A whole-brain network of regions collectively supports the ability to recognize and use objects: the Tool Processing Network. Little is known about how functional interactions within the Tool Processing Network are modulated in a task-dependent manner. We designed an fMRI experiment in which participants were required either to generate object pantomimes or to carry out a picture-matching task over the same images of tools, while holding all aspects of stimulus presentation constant across the tasks. The Tool Processing Network was defined with an independent functional localizer, and functional connectivity within the network was measured during the pantomime and picture-matching tasks. Relative to tool picture matching, tool pantomiming led to an increase in functional connectivity between ventral stream regions and left parietal and frontal-motor areas; in contrast, the matching task was associated with an increase in functional connectivity among regions in ventral temporo-occipital cortex, and between ventral temporal regions and the left inferior parietal lobule. Graph-theory analyses over the functional connectivity data indicated that the left premotor cortex and left lateral occipital complex were hub-like (exhibited high betweenness centrality) during tool pantomiming, while ventral stream regions (left medial fusiform gyrus and left posterior middle temporal gyrus) were hub-like during the picture-matching task. These results demonstrate task-specific modulation of functional interactions among a common set of regions, and indicate dynamic coupling of anatomically remote regions in a task-dependent manner.
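Betweenness centrality, the hub measure used in the graph-theory analyses above, counts the fraction of shortest paths that pass through each node. The toy network below reuses region labels from the study for flavor, but its edges are invented for illustration and are not the study's connectivity results; only the centrality computation itself is standard.

```python
from collections import deque
from itertools import combinations

# Toy undirected network of five "regions"; edges stand in for
# suprathreshold functional connectivity (illustrative only).
edges = {("PMC", "LOC"), ("LOC", "pMTG"), ("PMC", "IPL"),
         ("IPL", "pMTG"), ("LOC", "mFus")}
nodes = sorted({n for e in edges for n in e})
adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def shortest_paths(s, t):
    # BFS enumerating all shortest paths from s to t.
    paths, best = [], None
    q = deque([[s]])
    while q:
        p = q.popleft()
        if best is not None and len(p) > best:
            break                      # all shortest paths already found
        if p[-1] == t:
            best = len(p)
            paths.append(p)
            continue
        for n in adj[p[-1]]:
            if n not in p:
                q.append(p + [n])
    return paths

# Betweenness: for each node, the fraction of shortest paths between
# every other pair of nodes that pass through it.
bc = {n: 0.0 for n in nodes}
for s, t in combinations(nodes, 2):
    sps = shortest_paths(s, t)
    for n in nodes:
        if n not in (s, t):
            bc[n] += sum(n in p for p in sps) / len(sps)

print(max(bc, key=bc.get))  # the most hub-like region in this toy graph
```

In this toy graph most shortest paths are forced through LOC, so it comes out as the hub; on real connectivity matrices the same computation is typically run per task condition to compare hub structure across tasks.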
Affiliation(s)
- Frank E Garcea: Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA; Center for Visual Science, University of Rochester, Rochester, USA; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Quanjing Chen: Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA
- Roger Vargas: School of Mathematical Sciences, Rochester Institute of Technology, Rochester, USA
- Darren A Narayan: School of Mathematical Sciences, Rochester Institute of Technology, Rochester, USA
- Bradford Z Mahon: Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA; Center for Visual Science, University of Rochester, Rochester, USA; Department of Neurosurgery, University of Rochester Medical Center, Rochester, USA; Department of Neurology, University of Rochester Medical Center, Rochester, USA
26
Representing the location of manipulable objects in shape-selective occipitotemporal cortex: Beyond retinotopic reference frames? Cortex 2018; 106:132-150. [PMID: 29940399 DOI: 10.1016/j.cortex.2018.05.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2017] [Revised: 03/05/2018] [Accepted: 05/03/2018] [Indexed: 10/16/2022]
Abstract
When interacting with objects, we have to represent their location relative to our bodies. To facilitate bodily reactions, location may be encoded in the brain not just with respect to the retina (retinotopic reference frame), but also in relation to the head, trunk or arm (collectively, spatiotopic reference frames). While spatiotopic reference frames for location encoding can be found in brain areas for action planning, such as parietal areas, there is debate about the existence of spatiotopic reference frames in higher-level occipitotemporal visual areas. In an extensive multi-voxel pattern analysis (MVPA) fMRI study using face, headless-body and scene stimuli, Golomb and Kanwisher (2012) did not find evidence for spatiotopic reference frames in shape-selective occipitotemporal cortex. This finding is important for theories of how stimulus location is encoded in the brain. It is possible, however, that their failure to find spatiotopic reference frames is related to their stimuli: we typically do not manipulate faces, headless bodies or scenes. It is plausible that we only represent body-centred location when viewing objects that are typically manipulated. Here, we tested for object location encoding in shape-selective occipitotemporal cortex using manipulable object stimuli (balls and cups) in an MVPA fMRI study. We employed Bayesian analyses to determine sample size and to evaluate the sensitivity of our data to test the hypothesis that location can be encoded in a spatiotopic reference frame in shape-selective occipitotemporal cortex over the null hypothesis of no spatiotopic location encoding. We found strong evidence for retinotopic location encoding, consistent with previous findings that retinotopic reference frames are common neural representations of object location. In contrast, when testing for spatiotopic encoding, we found evidence that object location information for small manipulable objects is not decodable in relation to the body in shape-selective occipitotemporal cortex. Post-hoc exploratory analyses suggested that spatiotopic aspects might modulate retinotopic location encoding. Overall, our findings provide evidence that there is no spatiotopic encoding independent of retinotopic location in shape-selective occipitotemporal cortex.
27
Erlikhman G, Caplovitz GP, Gurariy G, Medina J, Snow JC. Towards a unified perspective of object shape and motion processing in human dorsal cortex. Conscious Cogn 2018; 64:106-120. [PMID: 29779844 DOI: 10.1016/j.concog.2018.04.016] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2018] [Revised: 04/20/2018] [Accepted: 04/26/2018] [Indexed: 01/06/2023]
Abstract
Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.
Affiliation(s)
- Gennadiy Gurariy: Department of Psychology, University of Nevada, Reno, USA; Department of Psychology, University of Wisconsin, Milwaukee, USA
- Jared Medina: Department of Psychological and Brain Sciences, University of Delaware, USA
28
Seidel Malkinson T, Bartolomeo P. Fronto-parietal organization for response times in inhibition of return: The FORTIOR model. Cortex 2018; 102:176-192. [DOI: 10.1016/j.cortex.2017.11.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Revised: 09/10/2017] [Accepted: 11/07/2017] [Indexed: 10/18/2022]
29
Julian JB, Keinath AT, Frazzetta G, Epstein RA. Human entorhinal cortex represents visual space using a boundary-anchored grid. Nat Neurosci 2018; 21:191-194. [PMID: 29311745 PMCID: PMC5801057 DOI: 10.1038/s41593-017-0049-1] [Citation(s) in RCA: 87] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 11/22/2017] [Indexed: 12/22/2022]
Abstract
When participants performed a visual search task, fMRI responses in entorhinal cortex (EC) exhibited a 6-fold periodic modulation by gaze movement direction. The orientation of this modulation was determined by the shape and orientation of the bounded search space. These results indicate that human EC represents visual space using a boundary-anchored grid, analogous to that used to represent navigable space in rodents.
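The 6-fold periodic (hexadirectional) modulation reported here is commonly estimated by regressing the response on cos(6θ) and sin(6θ) of the movement direction θ, from which the grid orientation falls out of the two coefficients. The sketch below runs this standard quadrature regression on simulated data; the amplitude, orientation, and noise level are invented for illustration and are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated response with 6-fold modulation by gaze-movement direction
# theta, around a "grid orientation" phi (illustrative values).
theta = rng.uniform(0, 2 * np.pi, 500)
phi = np.deg2rad(10.0)
y = 1.0 + 0.5 * np.cos(6 * (theta - phi)) + 0.2 * rng.standard_normal(theta.size)

# Quadrature regression: cos(6*theta) and sin(6*theta) regressors.
X = np.column_stack([np.ones_like(theta), np.cos(6 * theta), np.sin(6 * theta)])
b = np.linalg.lstsq(X, y, rcond=None)[0]

phi_hat = np.arctan2(b[2], b[1]) / 6   # recovered grid orientation (mod 60 deg)
amp_hat = np.hypot(b[1], b[2])         # recovered modulation amplitude

print(np.rad2deg(phi_hat), amp_hat)
```

Because the modulation is 6-fold, the recovered orientation is only defined modulo 60 degrees; in the study it is this orientation that was compared against the shape and orientation of the bounded search space.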
Affiliation(s)
- Joshua B Julian
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Giulia Frazzetta
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Russell A Epstein
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
30
Gamberini M, Dal Bò G, Breveglieri R, Briganti S, Passarelli L, Fattori P, Galletti C. Sensory properties of the caudal aspect of the macaque's superior parietal lobule. Brain Struct Funct 2017; 223:1863-1879. [PMID: 29260370 DOI: 10.1007/s00429-017-1593-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 12/12/2017] [Indexed: 11/26/2022]
Abstract
In the superior parietal lobule (SPL), the anterior part (area PE) is known to process somatosensory information, while the caudalmost part (areas V6Av and V6) processes visual information. Here we studied the visual and somatosensory properties of areas PEc and V6Ad, located between the somatosensory and visual domains of the SPL. About 1500 neurons were extracellularly recorded in 19 hemispheres of 12 monkeys (Macaca fascicularis). Visual and somatosensory properties of single neurons were generally studied separately, while in a subpopulation of neurons both sensory properties were tested. Visual neurons were more strongly represented in V6Ad and somatosensory neurons in PEc. The visual neurons of these two areas showed similar properties and represented a large part of the contralateral visual field, mostly its lower part. In contrast, somatosensory neurons showed remarkable differences. The arms were overrepresented in both areas, but V6Ad represented only the upper limbs, whereas PEc represented both the upper and lower limbs. Interestingly, we found that in both areas, bimodal visual-somatosensory cells represented the proximal part of the arms. We suggest that PEc is involved in locomotion and in the control of hand/foot interaction with objects in the environment, while V6Ad is involved in the control of object prehension performed specifically with the upper limbs. Neuroimaging and lesion studies from the literature support a strict homology with humans.
Affiliation(s)
- Michela Gamberini
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy; Biomedical and Neuromotor Sciences, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Giulia Dal Bò
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Rossella Breveglieri
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy; Biomedical and Neuromotor Sciences, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Sofia Briganti
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Lauretta Passarelli
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy; Biomedical and Neuromotor Sciences, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy; Biomedical and Neuromotor Sciences, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
31
Abstract
The ability to perceive the visual world around us as spatially stable despite frequent eye movements is one of the long-standing mysteries of neuroscience. The existence of neural mechanisms processing spatiotopic information is indispensable for a successful interaction with the external world. However, how the brain handles spatiotopic information remains a matter of debate. We here combined behavioral and fMRI adaptation to investigate the coding of spatiotopic information in the human brain. Subjects were adapted by a prolonged presentation of a tilted grating. Thereafter, they performed a saccade followed by the brief presentation of a probe. This procedure allowed dissociating adaptation aftereffects at retinal and spatiotopic positions. We found significant behavioral and functional adaptation in both retinal and spatiotopic positions, indicating information transfer into a spatiotopic coordinate system. The brain regions involved were located in ventral visual areas V3, V4, and VO. Our findings suggest that spatiotopic representations involved in maintaining visual stability are constructed by dynamically remapping visual feature information between retinotopic regions within early visual areas.
SIGNIFICANCE STATEMENT Why do we perceive the visual world as stable, although we constantly perform saccadic eye movements? We investigated how the visual system codes object locations in spatiotopic (i.e., external world) coordinates. We combined visual adaptation, in which prolonged exposure to a specific visual feature alters perception, with fMRI adaptation, where the repeated presentation of a stimulus leads to a reduction in the BOLD amplitude. Functionally, adaptation was found in visual areas representing the retinal location of an adaptor but also at representations corresponding to its spatiotopic position. The results suggest that an active dynamic shift transports information in visual cortex to counteract the retinal displacement associated with saccadic eye movements.
32
Affiliation(s)
- M. W. Spratling
- Department of Informatics, King's College London, London, UK
33
Chen Y, Crawford JD. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An FMRI Study. Front Syst Neurosci 2017; 11:44. [PMID: 28690501 PMCID: PMC5481872 DOI: 10.3389/fnsys.2017.00044] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2017] [Accepted: 06/06/2017] [Indexed: 11/13/2022] Open
Abstract
A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such landmark-centered representations. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase, where only target location, not saccade direction, was specified. The paradigm included three tasks with identical displays of visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex as compared to the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
Affiliation(s)
- Ying Chen
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
34
Nishimoto S, Huth AG, Bilenko NY, Gallant JL. Eye movement-invariant representations in the human visual system. J Vis 2017; 17:11. [PMID: 28114479 PMCID: PMC5256465 DOI: 10.1167/17.1.11] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
Affiliation(s)
- Shinji Nishimoto
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Center for Information and Neural Networks, NICT and Osaka University, Osaka
- Alexander G Huth
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Natalia Y Bilenko
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, CA, USA
35
Li D, Rorden C, Karnath HO. "Nonspatial" Attentional Deficits Interact with Spatial Position in Neglect. J Cogn Neurosci 2017; 29:911-918. [PMID: 28129062 PMCID: PMC11731523 DOI: 10.1162/jocn_a_01101] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A widely debated question concerns whether or not spatial and nonspatial components of visual attention interact in attentional performance. Spatial neglect is a common consequence of brain injury where individuals fail to respond to stimuli presented on their contralesional side. It has been argued that, beyond the spatial bias, these individuals also tend to exhibit nonspatial perceptual deficits. Here we demonstrate that the "nonspatial" deficits affecting the temporal dynamics of attentional deployment are in fact modulated by spatial position. Specifically, we observed that the pathological attentional blink of chronic neglect is enhanced when stimuli are presented on the contralesional side of the trunk while keeping retinal and head-centered coordinates constant. We did not find this pattern in right brain-damaged patients without neglect or in patients who had recovered from neglect. Our work suggests that the nonspatial attentional deficits observed in neglect are heavily modulated by egocentric spatial position. This provides strong evidence against models that suggest independent modules for spatial and nonspatial attentional functions while also providing strong evidence that trunk position plays an important role in neglect.
36
Shafer-Skelton A, Kupitz CN, Golomb JD. Object-location binding across a saccade: A retinotopic spatial congruency bias. Atten Percept Psychophys 2017; 79:765-781. [PMID: 28070793 PMCID: PMC5354979 DOI: 10.3758/s13414-016-1263-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating - a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both Gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (Gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
Affiliation(s)
- Anna Shafer-Skelton
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Colin N Kupitz
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, OH, 43210, USA
37
Galletti C, Fattori P. The dorsal visual stream revisited: Stable circuits or dynamic pathways? Cortex 2017; 98:203-217. [PMID: 28196647 DOI: 10.1016/j.cortex.2017.01.009] [Citation(s) in RCA: 111] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2016] [Revised: 01/05/2017] [Accepted: 01/05/2017] [Indexed: 11/29/2022]
Abstract
In both the macaque and human brain, information regarding visual motion flows from the extrastriate area V6 along two different paths: a dorsolateral one towards areas MT/V5, MST, and V3A, and a dorsomedial one towards the visuomotor areas of the superior parietal lobule (V6A, MIP, VIP). The dorsolateral visual stream is involved in many aspects of visual motion analysis, including the recognition of object motion and self-motion. The dorsomedial stream uses visual motion information to continuously monitor the spatial location of objects while we are looking and/or moving around, to allow skilled reaching for and grasping of objects in structured, dynamically changing environments. Grasping activity is present in two areas of the dorsal stream, AIP and V6A. Area AIP is more involved than V6A in object recognition, whereas V6A is more involved in encoding vision for action. We suggest that V6A is involved in the fast control of prehension and plays a critical role in biomechanically selecting appropriate postures during reach-to-grasp behaviors. In everyday life, numerous functional networks, often involving the same cortical areas, are continuously in action in the dorsal visual stream, with each network dynamically activated or inhibited according to the context. The dorsolateral and dorsomedial streams represent only two examples of these networks. Many other streams have been described in the literature, but it is worth noting that the same cortical area, and even the same neurons within an area, are not specific for just one functional property, being part of networks that encode multiple functional aspects. Our proposal is to conceive of the cortical streams not as fixed series of interconnected cortical areas, in which each area belongs univocally to one stream and is strictly involved in only one function, but as interconnected neuronal networks, often involving the same neurons, that are involved in a number of functional processes and whose activation changes dynamically according to the context.
Affiliation(s)
- Claudio Galletti
- Department of Pharmacy and Biotechnology, University of Bologna, 40126, Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology, University of Bologna, 40126, Bologna, Italy
38
The reference frame of the tilt aftereffect measured by differential Pavlovian conditioning. Sci Rep 2017; 7:40525. [PMID: 28094321 PMCID: PMC5240094 DOI: 10.1038/srep40525] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Accepted: 12/07/2016] [Indexed: 11/08/2022] Open
Abstract
We used a differential Pavlovian conditioning paradigm to measure tilt aftereffect (TAE) strength. Gabor patches, rotated clockwise and anticlockwise, were used as conditioned stimuli (CSs), one of which (CS+) was followed by the unconditioned stimulus (UCS), whereas the other (CS−) appeared alone. The UCS was an air puff delivered to the left eye. In addition to the CS+ and CS−, the vertical test patch was also presented for the clockwise and anticlockwise adapters. The vertical patch was not followed by the UCS. After participants acquired differential conditioning, eyeblink conditioned responses (CRs) were observed for the vertical patch when it appeared to be tilted in the same direction as the CS+ owing to the TAE. The effect was observed not only when the adapter and test stimuli were presented in the same retinotopic position but also when they were presented in the same spatiotopic position, although spatiotopic TAE was weak—it occurred approximately half as often as the full effect. Furthermore, spatiotopic TAE decayed as the time after saccades increased, but did not decay as the time before saccades increased. These results suggest that the time before the performance of saccadic eye movements is needed to compute the spatiotopic representation.
39
Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016; 117:808-817. [PMID: 27903636 DOI: 10.1152/jn.00508.2016] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2016] [Accepted: 11/29/2016] [Indexed: 11/22/2022] Open
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.
NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy; Neuroscience Institute, National Research Council (CNR), Pisa, Italy
40
Activity in superior parietal cortex during training by observation predicts asymmetric learning levels across hands. Sci Rep 2016; 6:32133. [PMID: 27535179 PMCID: PMC4989445 DOI: 10.1038/srep32133] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 08/03/2016] [Indexed: 11/20/2022] Open
Abstract
A dominant concept in motor cognition associates action observation with motor control. Previous studies have shown that passive action observation can result in significant performance gains in humans. Nevertheless, it is unclear whether the neural mechanism subserving such learning codes abstract aspects of the action (e.g., goal) or low-level aspects such as effector identity. Eighteen healthy subjects learned to perform sequences of finger movements by passively observing a right or left hand performing the same sequences in egocentric view. Using functional magnetic resonance imaging, we show that during passive observation, activity in the superior parietal lobule (SPL) contralateral to the identity of the observed hand (right/left) predicts subsequent performance gains in individual subjects. Behaviorally, left-hand observation resulted in positively correlated performance gains of the two hands. Conversely, right-hand observation yielded a negative correlation: individuals with high performance gains in one hand exhibited low gains in the other. Such behavioral asymmetry is reflected by activity in contralateral SPL during short-term training in the absence of overt physical practice and demonstrates the role of observed hand identity in learning. These results shed new light on the coding level in SPL and have implications for optimizing motor skill learning.
41
Abstract
Lesions of the posterior parietal cortex have long been known to produce visuospatial deficits in both humans and monkeys. Yet there is no known "map" of space in the parietal cortex. The posterior parietal cortex projects to a number of other areas that are involved in specialized spatial functions. In these areas, space is represented at the level of single neurons and, in many of them, there is a topographically organized map of space. These extraparietal areas include the premotor cortex and the putamen, involved in visuomotor space, the frontal eye fields and the superior colliculus, involved in oculomotor space, the hippocampus, involved in environmental space, and the dorsolateral prefrontal cortex, involved in mnemonic space. In many of these areas, space is represented by means of a coordinate system that is fixed to a particular body part. Thus, the processing of space is not unitary but is divided among several brain areas and several coordinate systems, in addition to those in the posterior parietal cortex. The Neuroscientist 1:43-50, 1995
Affiliation(s)
- Charles G. Gross
- Department of Psychology, Princeton University, Princeton, New Jersey
42
Affiliation(s)
- M.S.A. Graziano
- Doctoral candidate and a Professor of Psychology, both at Princeton University
- C. G. Gross
- Doctoral candidate and a Professor of Psychology, both at Princeton University
43
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
44
Abstract
In the macaque, it has been known since the late 1990s that the medial parieto-occipital sulcus (POS) contains two regions, V6 and V6A, important for visual motion and action. While V6 is a retinotopically organized extrastriate area, V6A is a broadly retinotopically organized visuomotor area constituted by a ventral and a dorsal subdivision (V6Av and V6Ad), both containing arm movement-related cells active during spatially directed reaching movements. In humans, these areas have been mapped only in recent years thanks to neuroimaging methods. In a series of brain mapping studies, using a combination of functional magnetic resonance imaging methods such as wide-field retinotopy and task-evoked activity, we mapped human areas V6 (Pitzalis et al., 2006) and V6Av (Pitzalis et al., 2013d) retinotopically and defined human V6Ad functionally as a pointing-selective region situated anteriorly in close proximity to V6Av (Tosoni et al., 2014). As in the macaque, human V6 is a motion area (e.g., Pitzalis et al., 2010, 2012, 2013a, b, c), while V6Av and V6Ad respond to pointing movements (Tosoni et al., 2014). The retinotopic organization (when present), anatomical position, neighbor relations, and functional properties of these three areas closely resemble those reported for macaque V6 (Galletti et al., 1996, 1999a), V6Av, and V6Ad (Galletti et al., 1999b; Gamberini et al., 2011). We suggest that information on objects in depth that are translating in space because of self-motion is processed in V6 and conveyed to V6A for evaluating object distance in a dynamic condition such as that created by self-motion, so as to orchestrate the eye and arm movements necessary to reach or avoid static and moving objects in the environment.
45
Fourtassi M, Rode G, Tilikete C, Pisella L. Spontaneous ocular positioning during visual imagery in patients with hemianopia and/or hemineglect. Neuropsychologia 2016; 86:141-52. [PMID: 27129436 DOI: 10.1016/j.neuropsychologia.2016.04.024] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 04/16/2016] [Accepted: 04/25/2016] [Indexed: 11/28/2022]
Abstract
Spontaneous eye movements during imagery are not random and can be used to study and reveal mental visualization processes (Fourtassi et al., 2013; Johansson et al., 2006). For example, we previously showed that during memory recall of French towns via imagery, healthy individuals look straight ahead when recalling Paris, and their subsequent gaze positions are significantly correlated with the real GPS coordinates of the recalled towns. This correlation suggests that memory retrieval is done via depictive representations, as it is never found when the towns are recalled using verbal fluency. In the present paper we add to this finding by showing that the mental image is spontaneously centered on the head or body midline. In order to investigate the capacities of visual imagery in patients, and by extension the role of primary visual cortex and fronto-parietal cortex in spatial visual imagery, we recorded gaze positions during memory recall of French towns in an imagery task, a non-imagery task (verbal fluency), and a visually-guided task in five patients with left or right hemianopia and in four patients with hemineglect (two with left hemianopia and two without). The correlation between gaze position and real GPS coordinates of the recalled towns was significant in all hemianopic patients, but in patients with hemineglect this was only the case for towns located on the right half of the map of France. This suggests that hemianopic patients can perform spatially consistent mental imagery despite direct or indirect unilateral lesions of the primary visual cortex. In contrast, the left-sided towns recalled by hemineglect patients revealed some spatial inconsistency or representational difficulty. Hemianopic patients positioned and maintained their gaze in their contralesional hemispace, suggesting that their mental map was not centered on their head or body midline. This contralesional gaze positioning appeared to be a general compensation strategy and was not observed in patients with neglect (with or without hemianopia). Instead, neglect patients positioned their gaze in their ipsilesional hemispace, and only when performing the visual imagery task. These findings are discussed in the context of the role of occipital and fronto-parietal cortices in the neuroanatomical model of visual imagery developed by Kosslyn et al. (2006).
Collapse
Affiliation(s)
- Maryam Fourtassi
- INSERM, U1028, CNRS, UMR5292, Lyon Neuroscience Research Center, ImpAct, 16 Avenue du Doyen Lépine, 69676 Bron cedex, France; Université Mohamed Premier, Oujda, Morocco
| | - Gilles Rode
- INSERM, U1028, CNRS, UMR5292, Lyon Neuroscience Research Center, ImpAct, 16 Avenue du Doyen Lépine, 69676 Bron cedex, France; Université Lyon1, Villeurbanne, France; Hospices Civils de Lyon, Hôpital Henry Gabrielle, Mouvement et Handicap, F-69000 Lyon, France
| | - Caroline Tilikete
- INSERM, U1028, CNRS, UMR5292, Lyon Neuroscience Research Center, ImpAct, 16 Avenue du Doyen Lépine, 69676 Bron cedex, France; Université Lyon1, Villeurbanne, France; Hospices Civils de Lyon, Unité de Neuro-ophtalmologie, Hôpital Neurologique, F-69000 Lyon, France
| | - Laure Pisella
- INSERM, U1028, CNRS, UMR5292, Lyon Neuroscience Research Center, ImpAct, 16 Avenue du Doyen Lépine, 69676 Bron cedex, France; Université Lyon1, Villeurbanne, France
| |
Collapse
|
46
|
Abstract
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, raising the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning at different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. SIGNIFICANCE STATEMENT To understand how we successfully navigate our world, it is important to understand which parts of the brain process the cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations on the basis of human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that, unlike other cortical areas, V6 is unlikely to contribute to multisensory integration of heading signals. These findings provide important constraints on the roles of V6 in self-motion perception.
Collapse
|
47
|
Bonaiuto J, Arbib MA. Learning to grasp and extract affordances: the Integrated Learning of Grasps and Affordances (ILGA) model. BIOLOGICAL CYBERNETICS 2015; 109:639-69. [PMID: 26585965 PMCID: PMC4656720 DOI: 10.1007/s00422-015-0666-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Accepted: 10/29/2015] [Indexed: 06/05/2023]
Abstract
The activity of certain parietal neurons has been interpreted as encoding affordances (directly perceivable opportunities) for grasping. Separate computational models have been developed for infant grasp learning and affordance learning, but no single model has yet combined these processes in a neurobiologically plausible way. We present the Integrated Learning of Grasps and Affordances (ILGA) model that simultaneously learns grasp affordances from visual object features and motor parameters for planning grasps using trial-and-error reinforcement learning. As in the Infant Learning to Grasp Model, we model a stage of infant development prior to the onset of sophisticated visual processing of hand-object relations, but we assume that certain premotor neurons activate neural populations in primary motor cortex that synergistically control different combinations of fingers. The ILGA model is able to extract affordance representations from visual object features, learn motor parameters for generating stable grasps, and generalize its learned representations to novel objects.
Collapse
Affiliation(s)
- James Bonaiuto
- Sobell Department of Motor Neuroscience and Movement Disorders, University College London, London, WC1N3BG, UK.
- Neuroscience Program, University of Southern California, Los Angeles, CA, 90089-2520, USA.
- USC Brain Project, University of Southern California, Los Angeles, CA, 90089-2520, USA.
| | - Michael A Arbib
- Neuroscience Program, University of Southern California, Los Angeles, CA, 90089-2520, USA
- USC Brain Project, University of Southern California, Los Angeles, CA, 90089-2520, USA
- Computer Science Department, University of Southern California, Los Angeles, CA, 90089-2520, USA
| |
Collapse
|
48
|
Neural correlates of spatial working memory manipulation in a sequential Vernier discrimination task. Neuroreport 2015; 25:1418-23. [PMID: 25350139 DOI: 10.1097/wnr.0000000000000280] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Visuospatial working memory refers to the short-term storage and manipulation of visuospatial information. To study the neural bases of these processes, 17 participants took part in a modified sequential Vernier task while being scanned using an event-related functional MRI protocol. During each trial, participants retained the spatial position of a line over a delay period in order to later evaluate whether it was aligned with a second line. This design allowed us to test the manipulation of spatial information held in memory. During encoding, there was greater parietal and cingulate activation under the experimental condition, whereas the opposite was true for the occipital cortex. Throughout the delay period of the experimental condition there was significant bilateral activation in the caudal superior frontal sulcus/middle frontal gyrus, as well as in the insular cortex and superior parietal lobes, confirming the findings of previous studies. During manipulation of spatial memory, the analysis showed greater activation in the lingual gyrus. This increase of activity in visual areas during the manipulation phase fits with the hypothesis that information stored in sensory cortices becomes reactivated once it needs to be used.
Collapse
|
49
|
Spatial attention systems in spatial neglect. Neuropsychologia 2015; 75:61-73. [DOI: 10.1016/j.neuropsychologia.2015.05.019] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2015] [Revised: 05/15/2015] [Accepted: 05/20/2015] [Indexed: 11/23/2022]
|
50
|
Mirpour K, Bisley JW. Remapping, Spatial Stability, and Temporal Continuity: From the Pre-Saccadic to Postsaccadic Representation of Visual Space in LIP. Cereb Cortex 2015; 26:3183-95. [PMID: 26142462 DOI: 10.1093/cercor/bhv153] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
As our eyes move, we have a strong percept that the world is stable in space and time; however, the signals in cortex coming from the retina change with each eye movement. It is not known how this changing input produces the visual percept we experience, although the predictive remapping of receptive fields has been described as a likely candidate. To explain how remapping accounts for perceptual stability, we examined responses of neurons in the lateral intraparietal area while animals performed a visual foraging task. When a stimulus was brought into the response field of a neuron that exhibited remapping, the onset of the postsaccadic representation occurred shortly after the saccade ended. Whenever a stimulus was taken out of the response field, the presaccadic representation ended abruptly shortly after the eyes stopped moving. In the 38% (20/52) of neurons that exhibited remapping, there was no more than 30 ms between the end of the presaccadic representation and the start of the postsaccadic representation, and in some neurons, and in the population as a whole, the representation was continuous. We conclude by describing how this seamless shift from a presaccadic to a postsaccadic representation could contribute to spatial stability and temporal continuity.
Collapse
Affiliation(s)
| | - James W Bisley
- Department of Neurobiology, Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA; Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, CA 90095, USA; Center for Interdisciplinary Research (ZiF), Universität Bielefeld, Bielefeld, Germany
| |
Collapse
|