1
Zhu SL, Lakshminarasimhan KJ, Angelaki DE. Computational cross-species views of the hippocampal formation. Hippocampus 2023; 33:586-599. PMID: 37038890; PMCID: PMC10947336; DOI: 10.1002/hipo.23535.
Abstract
The discovery of place cells and head direction cells in the hippocampal formation of freely foraging rodents has led to an emphasis on its role in encoding allocentric spatial relationships. In contrast, studies in head-fixed primates have additionally found representations of spatial views. We review recent experiments in freely moving monkeys that expand upon these findings and show that postural variables such as eye/head movements strongly influence neural activity in the hippocampal formation, suggesting that the function of the hippocampus depends on where the animal looks. We interpret these results in the light of recent studies in humans performing challenging navigation tasks, which suggest that, depending on the context, eye/head movements serve one of two roles: gathering information about the structure of the environment (active sensing) or externalizing the contents of internal beliefs/deliberation (embodied cognition). These findings prompt future experimental investigations into the information carried by signals flowing between the hippocampal formation and the brain regions controlling postural variables, and constitute a basis for updating computational theories of the hippocampal system to accommodate the influence of eye/head movements.
Affiliation(s)
- Seren L Zhu
- Center for Neural Science, New York University, New York, New York, USA
- Kaushik J Lakshminarasimhan
- Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York, USA
- Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, New York, New York, USA
2
Corrigan BW, Gulli RA, Doucet G, Mahmoudian B, Abbass M, Roussy M, Luna R, Sachs AJ, Martinez-Trujillo JC. View cells in the hippocampus and prefrontal cortex of macaques during virtual navigation. Hippocampus 2023; 33:573-585. PMID: 37002559; DOI: 10.1002/hipo.23534.
Abstract
Cells selectively activated by a particular view of an environment have been found in the primate hippocampus (HPC). Whether view cells are present in other brain areas, and how view selectivity interacts with other variables such as object features and place, remain unclear. Here, we explore these issues by recording the responses of neurons in the HPC and the lateral prefrontal cortex (LPFC) of rhesus macaques performing a task in which they learn new context-object associations while navigating a virtual environment using a joystick. We measured neuronal responses at different locations in a virtual maze where animals freely directed their gaze to different regions of the visual scenes. We show that specific views containing task-relevant objects selectively activated a proportion of HPC units, and an even higher proportion of LPFC units. Place selectivity was scarce and generally dependent on view. Many view cells were not affected by changing the object color or the context cue, two task-relevant features. However, a small proportion of view cells showed selectivity for these two features. Our results show that during navigation in a virtual environment with complex and dynamic visual stimuli, view cells are found in both the HPC and the LPFC. View cells may have developed as a multiarea specialization in diurnal primates to encode the complexities and layouts of the environment through gaze exploration, which ultimately enables building cognitive maps of space that guide navigation.
Affiliation(s)
- Benjamin W. Corrigan
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Roberto A. Gulli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Center for Theoretical Neuroscience, Columbia University, New York, New York, USA
- Guillaume Doucet
- The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Realize Medical, Ottawa, Ontario, Canada
- Borna Mahmoudian
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Mohamad Abbass
- Department of Clinical Neurological Sciences, London Health Sciences Centre, Western University, London, Ontario, Canada
- Megan Roussy
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- National Science and Engineering Research Council, Ottawa, Ontario, Canada
- Rogelio Luna
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Facultad de Medicina y Ciencias Biomédicas, Universidad Autónoma de Chihuahua, Chihuahua City, Mexico
- Adam J. Sachs
- The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Julio C. Martinez-Trujillo
- Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Department of Psychiatry, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, Canada
- Western Institute for Neuroscience, University of Western Ontario, London, Ontario, Canada
3
Yang C, Chen H, Naya Y. Allocentric information represented by self-referenced spatial coding in the primate medial temporal lobe. Hippocampus 2023; 33:522-532. PMID: 36728411; DOI: 10.1002/hipo.23501.
Abstract
For living organisms, the ability to acquire information regarding the external space around them is critical for future actions. While this information must be stored in an allocentric frame to facilitate its use in various spatial contexts, each instance of use requires the information to be represented in a particular self-referenced frame. Previous studies have explored the neural substrates responsible for the linkage between self-referenced and allocentric spatial representations based on findings in rodents. However, the behaviors of rodents differ from those of primates in several respects; for example, rodents mainly explore their environments through locomotion, while primates use eye movements. In this review, we discuss the brain mechanisms responsible for this linkage in nonhuman primates. Based on recent physiological studies, we propose that two types of neural substrates link the first-person perspective with allocentric coding. The first is the view-center background signal, which represents an image of the background surrounding the current position of fixation on the retina. This perceptual signal is transmitted from the ventral visual pathway to the hippocampus (HPC) via the perirhinal cortex and parahippocampal cortex. Because images that share the same objective position in the environment tend to appear similar when seen from different self-positions, the view-center background signals are easily associated with one another in the formation of allocentric position coding and storage. The second type of neural substrate is the dynamic activity of HPC neurons, which translates the stored location memory into the first-person perspective depending on the current spatial context.
Affiliation(s)
- Cen Yang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- He Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
4
Abstract
The hippocampus has been extensively implicated in spatial navigation in rodents and, more recently, in bats. Numerous studies have revealed that various kinds of spatial information are encoded across hippocampal regions. In contrast, investigations of spatial behavioral correlates in the primate hippocampus are scarce and have been mostly limited to head-restrained subjects during virtual navigation. However, recent advances made in freely moving primates suggest marked differences in spatial representations from rodents, albeit with some similarities. Here, we review empirical studies examining the neural correlates of spatial navigation in the primate (including human) hippocampus at the levels of local field potentials and single units. Lower-frequency theta oscillations are often intermittent. Single neuron responses are highly mixed and task-dependent. We also discuss neuronal selectivity in eye and head coordinates. Finally, we propose that future studies should focus on investigating both intrinsic and extrinsic population activity and on examining spatial coding properties in large-scale hippocampal-neocortical networks across tasks.
Affiliation(s)
- Dun Mao
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
5
Chen H, Naya Y. Reunification of Object and View-Center Background Information in the Primate Medial Temporal Lobe. Front Behav Neurosci 2021; 15:756801. PMID: 34938164; PMCID: PMC8685287; DOI: 10.3389/fnbeh.2021.756801.
Abstract
Recent work has shown that the medial temporal lobe (MTL), including the hippocampus (HPC) and its surrounding limbic cortices, plays a role in scene perception in addition to episodic memory. The two basic factors of scene perception are the object (“what”) and its location (“where”). In this review, we first briefly summarize the anatomical knowledge related to visual inputs to the MTL and physiological studies examining object-related information processed along the ventral pathway. Thereafter, we discuss space-related information, the processing of which has remained unclear, presumably because of its multiple aspects and the lack of an appropriate task paradigm, in contrast to object-related information. Based on recent electrophysiological studies using non-human primates and the existing literature, we propose the “reunification theory,” which explains the brain mechanisms that construct object-location signals at each gaze. In this reunification theory, the ventral pathway signals a large-scale background image on the retina at each gaze position. This view-center background signal reflects the first-person perspective and specifies the allocentric location in the environment by similarity matching between images. The spatially invariant object signal and the view-center background signal, both of which are derived from the same retinal image, are integrated again (i.e., reunified) along the ventral pathway-MTL stream, particularly in the perirhinal cortex. The conjunctive signal, which represents a particular object at a particular location, may play a role in scene perception in the HPC as a key constituent element of an entire scene.
Affiliation(s)
- He Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Beijing Key Laboratory of Behavioral and Mental Health, Faculty of Science, College of Psychology and Cognitive Sciences, Peking University, Beijing, China
6
Han Z, Sereno A. Modeling the Ventral and Dorsal Cortical Visual Pathways Using Artificial Neural Networks. Neural Comput 2021; 34:138-171. PMID: 34758483; DOI: 10.1162/neco_a_01456.
Abstract
Although in conventional models of cortical processing object recognition and spatial properties are processed separately in the ventral and dorsal cortical visual pathways, respectively, some recent studies have shown that representations associated with both object identity (shape) and space are present in both visual pathways. However, it is still unclear whether the presence of identity and spatial properties in both pathways has functional roles. In our study, we have tried to answer this question through computational modeling. Our simulation results show that a model ventral pathway and a model dorsal pathway, separately trained to do object and spatial recognition, respectively, each actively retained information about both identity and space. In addition, we show that these networks retained different amounts and kinds of identity and spatial information. As a result, our modeling suggests that two separate cortical visual pathways for identity and space (1) actively retain information about both identity and space, (2) retain information about identity and space differently, and (3) this differently retained information in the two pathways may be necessary to accurately and optimally recognize and localize objects. Further, our modeling results suggest that these findings are robust and do not strongly depend on the specific structures of the neural networks.
Affiliation(s)
- Zhixian Han
- Department of Psychological Sciences, Purdue University, West Lafayette, IN 47907, U.S.A.
- Anne Sereno
- Department of Psychological Sciences and Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, U.S.A.
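The retained-information result described in this entry can be illustrated with a toy sketch. This is not the authors' networks or data: the stimulus construction, the fixed random nonlinear feature layer standing in for a trained pathway's intermediate representation, and all sizes are assumptions for illustration only. The point is that one representation can support linear readout of both identity and position.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy setup (all names/sizes are assumptions, not the
# paper's model): 1-D "images" vary in object identity AND position.
n_px, n_hidden, n_samples = 12, 100, 600
ident = rng.integers(0, 2, n_samples)        # two object identities
pos = rng.integers(0, n_px - 4, n_samples)   # 1-D object position

patterns = np.array([[1.0, 1, -1, -1],       # identity-0 pattern
                     [1.0, -1, 1, -1]])      # identity-1 pattern
imgs = np.zeros((n_samples, n_px))
for i in range(n_samples):                   # place the pattern at pos
    imgs[i, pos[i]:pos[i] + 4] = patterns[ident[i]]
imgs += rng.normal(0, 0.05, imgs.shape)      # small pixel noise

# Fixed random nonlinear features stand in for a pathway's hidden layer.
hidden = np.tanh(imgs @ rng.normal(0, 1, (n_px, n_hidden)))

def readout_r2(target):
    """In-sample R^2 of a least-squares linear readout from 'hidden'."""
    H = np.c_[hidden, np.ones(n_samples)]    # add a bias column
    w, *_ = np.linalg.lstsq(H, target, rcond=None)
    resid = H @ w - target
    return 1 - resid @ resid / ((target - target.mean()) ** 2).sum()

# The SAME representation supports decoding of both variables.
r2_identity = readout_r2(ident.astype(float))
r2_position = readout_r2(pos.astype(float))
```

In this toy setting both readouts fit well, echoing the abstract's point that a representation shaped by one task can still carry decodable information about the other variable.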
7
Mao D, Avila E, Caziot B, Laurens J, Dickman JD, Angelaki DE. Spatial modulation of hippocampal activity in freely moving macaques. Neuron 2021; 109:3521-3534.e6. PMID: 34644546; DOI: 10.1016/j.neuron.2021.09.032.
Abstract
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely moving primates with concurrent monitoring of head and gaze stances. We recorded neural activity across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking their head and eye. Theta activity was intermittently present at movement onset and modulated by saccades. Many neurons were phase-locked to theta, with few showing phase precession. Most neurons encoded a mixture of spatial variables beyond place and grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional (3D) space using a multiplexed code, with head orientation and eye movement properties being dominant during free exploration.
Affiliation(s)
- Dun Mao
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Eric Avila
- Center for Neural Science, New York University, New York, NY 10003, USA
- Baptiste Caziot
- Center for Neural Science, New York University, New York, NY 10003, USA
- Jean Laurens
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- J David Dickman
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Tandon School of Engineering, New York University, New York, NY 11201, USA
8
Chen H, Naya Y. Forward Processing of Object-Location Association from the Ventral Stream to Medial Temporal Lobe in Nonhuman Primates. Cereb Cortex 2020; 30:1260-1271. PMID: 31408097; DOI: 10.1093/cercor/bhz164.
Abstract
While the hippocampus (HPC) is a prime candidate for combining object identity and location, owing to its strong connections to the ventral and dorsal pathways via surrounding medial temporal lobe (MTL) areas, recent physiological studies have reported spatial information in the ventral pathway and its downstream target in the MTL. However, it remains unknown whether the object-location association proceeds along the ventral MTL pathway before the HPC. To address this question, we recorded neuronal activity from the MTL and the anterior inferotemporal cortex (area TE) of two macaques gazing at an object to retain its identity and location in each trial. The results showed significant effects of object-location association at the single-unit level in TE, the perirhinal cortex (PRC), and the HPC, but not in the parahippocampal cortex. Notably, a clear area difference emerged in the form of the association: 1) in TE, representations of object identity were added to those of the subjects' viewing location; 2) the PRC signaled both the additive form and the conjunction of the two inputs; and 3) the HPC signaled only the conjunction. These results suggest that object and location signals are combined stepwise in TE and the PRC each time primates view an object, and that the PRC may provide the HPC with the conjunctive signal, which might be used for encoding episodic memory.
Affiliation(s)
- He Chen
- Center for Life Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China.,Academy for Advanced Interdisciplinary Studies, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
| | - Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China.,Center for Life Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China.,IDG/McGovern Institute for Brain Research, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China.,Beijing Key Laboratory of Behavior and Mental Health, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China.,Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University, Hangzhou 310029, China
9
Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. PMID: 31617589; PMCID: PMC7154681; DOI: 10.1111/nyas.14256.
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from humans and nonhuman animals, using behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye-movement-based applications for the detection of neurodegeneration and the delivery of therapeutic interventions for mental health disorders for which the hippocampus is implicated and memory dysfunctions are at the forefront.
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan
10
Morris AP, Krekelberg B. A Stable Visual World in Primate Primary Visual Cortex. Curr Biol 2019; 29:1471-1480.e6. PMID: 31031112; DOI: 10.1016/j.cub.2019.03.069.
Abstract
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina (and propagated throughout the visual cortical hierarchy) is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here, we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded "eye tracker" that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in area V1 of macaque monkeys during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements. After a fast eye movement, the eye-position signal arrived in V1 at approximately the same time at which the new visual information arrived from the retina. Using simulations, we show that this V1 eye-position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable positions in the world.
Affiliation(s)
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, 26 Innovation Walk, Clayton, Victoria 3800, Australia.
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Ave., Newark, New Jersey 07102, USA
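The decoding approach this abstract describes can be sketched in miniature. The simulation below is hypothetical (unit counts, gain-field slopes, and noise levels are invented, and this is not the authors' dataset or decoder implementation): responses are modulated by a small planar gain for each unit, and a ridge-regularized linear decoder maps population activity back to gaze direction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gain-modulated population (all parameters invented).
n_units, n_train, n_test = 80, 500, 100
slopes = rng.normal(0, 0.02, (n_units, 2))   # small planar gain slopes
base = rng.uniform(2, 10, n_units)           # baseline rates, spikes/s

def responses(eye_xy):
    """Gain-modulated population responses with trial-to-trial noise."""
    drive = base * (1.0 + eye_xy @ slopes.T)  # planar gain modulation
    return drive + rng.normal(0, 0.3, drive.shape)

train_eye = rng.uniform(-20, 20, (n_train, 2))   # gaze angle, degrees
test_eye = rng.uniform(-20, 20, (n_test, 2))
X_tr, X_te = responses(train_eye), responses(test_eye)

# Closed-form ridge regression: population activity -> eye position.
lam = 1e-3
A = np.c_[X_tr, np.ones(n_train)]                # add a bias column
W = np.linalg.solve(A.T @ A + lam * np.eye(n_units + 1), A.T @ train_eye)
pred = np.c_[X_te, np.ones(n_test)] @ W

mean_err_deg = np.linalg.norm(pred - test_eye, axis=1).mean()
```

With planar gain fields the mapping from gaze to mean population response is linear, so a linear decoder suffices here; the paper's result is the empirical finding that such a decoded signal tracks the eye through fixation and both fast and slow movements.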
11
Rolls ET, Wirth S. Spatial representations in the primate hippocampus, and their functions in memory and navigation. Prog Neurobiol 2018; 171:90-113. DOI: 10.1016/j.pneurobio.2018.09.004.
12
Abstract
Debate about the function of the hippocampus often pits theories advocating for spatial mapping against those that argue for a central role in memory. This review addresses whether research in the monkey supports the view that processing spatial information is fundamental to the function of the hippocampus. In support of spatial processing theories, neurons in the monkey hippocampal formation have striking spatial tuning, and an intact hippocampus is necessary to effectively utilize allocentric spatial relationships. However, the hippocampus also supports non-spatial processes, as its neurons acutely respond to distinct task events and hippocampal damage disrupts both expedient task acquisition and the monitoring of ongoing events in non-spatial paradigms. The features that are shared between spatial and non-spatial hippocampal-dependent tasks point toward a common mechanism underlying hippocampal function that is independent of processing spatial information. We suggest that spatial information is only one facet of immediate experience represented by the hippocampus. The current data support the idea that the hippocampus tracks many aspects of ongoing experience and the primary role of the hippocampus may be in linking experienced events into unitary episodes.
13
Meister MLR, Buffalo EA. Getting directions from the hippocampus: The neural connection between looking and memory. Neurobiol Learn Mem 2016; 134 Pt A:135-144. PMID: 26743043; PMCID: PMC4927424; DOI: 10.1016/j.nlm.2015.12.004.
Abstract
Investigations into the neural basis of memory in human and non-human primates have focused on the hippocampus and associated medial temporal lobe (MTL) structures. However, how memory signals from the hippocampus affect motor actions is unknown. We propose that approaching this question through eye movements, especially by assessing the changes in looking behavior that occur with experience, is a promising method for exposing neural computations within the hippocampus. Here, we review how looking behavior is guided by memory in several ways, some of which have been shown to depend on the hippocampus, and how hippocampal neural signals are modulated by eye movements. Taken together, these findings highlight the need for future research on how MTL structures interact with the oculomotor system. Probing how the hippocampus reflects and impacts motor output during looking behavior offers a practical path toward advancing our understanding of the hippocampal memory system.
Affiliation(s)
- Miriam L R Meister
- Department of Physiology and Biophysics, University of Washington, USA; Washington National Primate Research Center, USA; University of Washington School of Medicine, USA
- Elizabeth A Buffalo
- Department of Physiology and Biophysics, University of Washington, USA; Washington National Primate Research Center, USA; University of Washington School of Medicine, USA
14
Piccardi L, De Luca M, Nori R, Palermo L, Iachini F, Guariglia C. Navigational Style Influences Eye Movement Pattern during Exploration and Learning of an Environmental Map. Front Behav Neurosci 2016; 10:140. PMID: 27445735; PMCID: PMC4925711; DOI: 10.3389/fnbeh.2016.00140.
Abstract
During navigation, people may adopt three different spatial styles (i.e., Landmark, Route, and Survey). Landmark style (LS) people are able to recall familiar landmarks but cannot combine them with directional information; Route style (RS) people connect landmarks to each other using egocentric information about direction; Survey style (SS) people use a map-like representation of the environment. SS individuals generally navigate better than LS and RS people. Fifty-one college students (20 LS, 17 RS, and 14 SS) took part in the experiment. Spatial cognitive style (SCS) was assessed by means of the SCS test; participants then had to learn a schematic map of a city and, after 5 min, had to recall the path depicted on it. During the learning and delayed recall phases, eye movements were recorded. Our aim was to investigate whether there is a characteristic way of exploring an environmental map that is related to an individual's spatial style. Results support the presence of differences in the strategies used by the three spatial styles for learning the path and for its delayed recall. Specifically, LS individuals produced a greater number of fixations of short duration, while the opposite eye-movement pattern characterized SS individuals. Moreover, SS individuals showed a more widespread and comprehensive exploration pattern over the map, while LS individuals focused their exploration on the path and related targets. RS individuals showed an exploration pattern intermediate between those of LS and SS individuals. We discuss the clinical and anatomical implications of our data.
Affiliation(s)
- Laura Piccardi
- Department of Life, Health and Environmental Science, University of L'Aquila, L'Aquila, Italy; Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Maria De Luca
- Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Raffaella Nori
- Department of Psychology, University of Bologna, Bologna, Italy
- Liana Palermo
- Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Medical and Surgical Science, University Magna Graecia, Catanzaro, Italy
- Fabiana Iachini
- Department of Life, Health and Environmental Science, University of L'Aquila, L'Aquila, Italy
- Cecilia Guariglia
- Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Rome, Italy
| |
15
Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. [PMID: 26834587] [PMCID: PMC4718998] [DOI: 10.3389/fnint.2015.00072]
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well-established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered eye positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found between model AIT and LIP gain fields was that AIT gain fields were more foveally dominated: gain fields in AIT operated on smaller spatial scales and with smaller dispersions than in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
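The reconstruction pipeline this abstract describes (model gain fields → population responses → multidimensional scaling of the response dissimilarities) can be sketched as follows. This is a minimal illustration, not the authors' actual model: it assumes planar gain fields with random gradients, Euclidean response dissimilarity, and classical (Torgerson) MDS; all parameter values are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of eye positions (gaze angles in degrees) to be reconstructed.
eye_pos = np.array([(x, y) for x in (-20, -10, 0, 10, 20)
                           for y in (-20, -10, 0, 10, 20)], dtype=float)

# Model population: each neuron's visual response is scaled by a planar
# eye-position gain field with a random gradient (direction and slope).
n_neurons = 200
gradients = rng.normal(0.0, 0.01, size=(n_neurons, 2))  # gain change per degree
baseline = rng.uniform(5.0, 20.0, size=n_neurons)       # response at central gaze
responses = baseline * (1.0 + eye_pos @ gradients.T)    # shape (25, n_neurons)

# Pairwise dissimilarity between population response vectors.
diff = responses[:, None, :] - responses[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

# Classical (Torgerson) MDS: double-center the squared distances,
# then take the top eigenvectors as the recovered spatial map.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)              # eigenvalues in ascending order
coords = v[:, -2:] * np.sqrt(w[-2:])  # recovered 2-D eye-position map
```

Because planar gain fields embed eye position linearly in response space, the top two MDS dimensions recover the eye-position grid up to rotation, reflection, and shear; curved gain-field shapes (sigmoidal, elliptical, hyperbolic) would instead yield the distorted spatial geometries the paper analyzes.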
Affiliation(s)
- Sidney R Lehky: Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno: Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA

16
Shaikh AG, Ghasia FF. Gaze holding after anterior-inferior temporal lobectomy. Neurol Sci 2014; 35:1749-56. [DOI: 10.1007/s10072-014-1825-2]
17
Durand JB, Trotter Y, Celebrini S. Privileged Processing of the Straight-Ahead Direction in Primate Area V1. Neuron 2010; 66:126-37. [DOI: 10.1016/j.neuron.2010.03.014]
18
Abstract
Visual scene interpretation depends on assumptions based on the statistical regularities of the world. People have some preference for seeing ambiguously oriented objects (Necker cubes) as if tilted down or viewed from above. This bias is a near certainty in the first instant (∼1 s) of viewing and declines over the course of many seconds. In addition, we found that perceived orientation is modulated in a position-dependent way: for example, objects on the left are more likely to be interpreted as viewed from the right. Therefore, there is both a viewed-from-above prior and a scene position-dependent modulation of perceived 3-D orientation. These results are consistent with the idea that ambiguously oriented objects are initially assigned an orientation consistent with our experience of an asymmetric world in which objects most probably sit on surfaces below eye level.
Affiliation(s)
- Allan C Dobbins: Department of Biomedical Engineering & Vision Science Research Center, University of Alabama at Birmingham, Birmingham, Alabama, United States of America

19
Abstract
Background: A key aspect of representations for object recognition and scene analysis in the ventral visual stream is the spatial frame of reference, be it a viewer-centered, object-centered, or scene-based coordinate system. Coordinate transforms from retinocentric space to other reference frames involve combining neural visual responses with extraretinal postural information.
Methodology/Principal Findings: We examined whether such spatial information is available to anterior inferotemporal (AIT) neurons in the macaque monkey by measuring the effect of eye position on responses to a set of simple 2D shapes. We report, for the first time, a significant eye-position effect in over 40% of recorded neurons with small gaze-angle shifts from central fixation. Although eye position modulates responses, it does not change shape selectivity.
Conclusions/Significance: These data demonstrate that spatial information is available in AIT for the representation of objects and scenes within a non-retinocentric frame of reference. More generally, the availability of spatial information in AIT calls into question the classic dichotomy in visual processing that associates object shape processing with ventral structures such as AIT but places spatial processing in a separate anatomical stream projecting to dorsal structures.
Affiliation(s)
- Sidney R. Lehky: Computational Neuroscience Laboratory, The Salk Institute, La Jolla, California, United States of America; Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Xinmiao Peng: Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America
- Carrie J. McAdams: Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
- Anne B. Sereno: Department of Neurobiology and Anatomy, University of Texas Houston Health Science Center, Houston, Texas, United States of America

20
Steinmetz PN. Alternate Task Inhibits Single-neuron Category-selective Responses in the Human Hippocampus while Preserving Selectivity in the Amygdala. J Cogn Neurosci 2008; 21:347-58. [DOI: 10.1162/jocn.2008.21017]
Abstract
One-fifth of neurons in the medial-temporal lobe of human epilepsy patients respond selectively to categories of images, such as faces or cars. Here we show that responses of hippocampal neurons are rapidly modified as subjects alternate (over 60 sec) between two tasks: (1) identifying images from a category, or (2) playing a simple video game superimposed on the same images. Category-selective responses, present when a subject identifies categories, are eliminated when the subject shifts to playing the game for 87% of category-selective hippocampal neurons. By contrast, responses in the amygdala are present during both tasks for 72% of category-selective amygdalar neurons. These results suggest that attention to images is required to evoke selective responses from single neurons in the hippocampus, but is not required by neurons in the amygdala.
21
Abstract
The spatial representation in the human ventral object-related areas (i.e., the lateral occipital complex [LOC]) is currently unknown. It seems plausible, however, that it would diverge from the strict retinotopic mapping (characteristic of V1) to a more invariant coordinate frame, thereby allowing for reliable object recognition in the face of eye, head, or body movement. To study this, we compared the fMRI activation in LOC when object displacement was limited to either the retina or the screen by manipulating eye position and object locations. We found clear adaptation in LOC when the object's screen position was fixed, regardless of the object's retinal position. Furthermore, we found significantly greater activation in LOC in the hemisphere contralateral to the object's screen position, although the visual task was constructed in a way that the objects were present equally often in each of the two retinal hemifields. Together, these results indicate that a sizeable fraction of the neurons in LOC may have head-based receptive fields. Such an extraretinal representation may be useful for maintenance of object coherence across saccadic eye movements, which are an integral part of natural vision.
Affiliation(s)
- Ayelet McKyton: Neurobiology Department, Life Science Institute, Hebrew University, Jerusalem 91904, Israel

22
Abstract
Goal-directed self-motion through space is anything but a trivial task. What we take for granted in everyday life requires the complex interplay of different sensory and motor systems. On the sensory side, most importantly, a target of interest has to be localized relative to one's own position in space. On the motor side, the most critical step in neural processing is to define and perform a movement towards the target while avoiding obstacles. Furthermore, the multisensory (visual, tactile, and auditory) motion signals induced by one's own movement have to be identified and differentiated from the real motion of visual, tactile, or auditory objects in the outside world. In a number of experimental studies performed in recent years, we and others have functionally characterized a subregion within monkey posterior parietal cortex (PPC) that appears to be well suited to contribute to such multisensory encoding of spatial and motion information. In this review I summarize the most important experimental findings on the functional properties of this region of monkey PPC, i.e., the ventral intraparietal area.
Affiliation(s)
- Frank Bremmer: Department of Neurophysics, Philipps-University Marburg, Renthof 7, D-35032 Marburg, Germany

23
Abstract
We investigated the neural mechanisms underlying visual localization in 3-D space in area V1 of behaving monkeys. Three different sources of information, retinal disparity, viewing distance, and gaze direction, that participate in these neural mechanisms are reviewed. The way they interact with each other is studied by combining retinal and extraretinal signals. Interactions between retinal disparity and viewing distance have been shown in foveal V1; we have observed a strong modulation of the spontaneous activity and of the visual response of most V1 cells that was highly correlated with the vergence angle. As a consequence of these gain effects, neural horizontal disparity coding is favoured or refined for particular distances of fixation. Changing the gaze direction in the fronto-parallel plane also produces strong gains in the visual response of half of the cells in foveal V1. Cells tested for horizontal disparity and orientation selectivities show gain effects that occur coherently for the same spatial coordinates of the eyes. Shifts in preferred disparity also occurred in several neurons. Cells tested in calcarine V1 at retinal eccentricities larger than 10 degrees show that horizontal disparity is encoded at least up to 20 degrees around both the horizontal and vertical meridians. At these large retinal eccentricities we found that vertical disparity is also encoded, with tuning profiles similar to those of horizontal disparity coding. Combinations of horizontal and vertical disparity signals show that most cells encode both properties. In fact, the expression of horizontal disparity coding depends on the vertical disparity signals, which produce strong gain effects and frequent changes in peak selectivities. We conclude that the vertical disparity signal and the eye position signal serve to disambiguate the horizontal disparity signal to provide information on 3-D spatial coordinates in terms of distance, gaze direction, and retinal eccentricity. We suggest that the relative weight among these different signals is the determining factor in the neural processing that gives information on 3-D spatial localization.
Affiliation(s)
- Yves Trotter: Faculté de Médecine Rangueil, Centre de Recherche Cerveau & Cognition, CNRS, Université Paul Sabatier, 133 route de Narbonne, 31062 Toulouse Cédex, France

24
Abstract
Visual object perception is usually studied by presenting one object at a time at the fovea. However, the world around us is composed of multiple objects. The way our visual system deals with this complexity has remained controversial in the literature. Some models claim that the ventral pathway, a set of visual cortical areas responsible for object recognition, can process only one or very few objects at a time without ambiguity. Other models argue in favor of a massively parallel processing of objects in a scene. Recent experiments in monkeys have provided important data about this issue. The ventral pathway seems to be able to perform complex analyses on several objects simultaneously, but only during a short time period. Subsequently only one or very few objects are explicitly selected and consciously perceived. Here, we survey the implications of these new findings for our understanding of object processing.
25
Britten KH. Eyes of the world be free; you have nothing to lose but your chains. Nat Neurosci 2000; 3:745-6. [PMID: 10903559] [DOI: 10.1038/77632]