1. Zutshi I, Apostolelli A, Yang W, Zheng ZS, Dohi T, Balzani E, Williams AH, Savin C, Buzsáki G. Hippocampal neuronal activity is aligned with action plans. Nature 2025; 639:153-161. PMID: 39779866. DOI: 10.1038/s41586-024-08397-7.
Abstract
Neurons in the hippocampus are correlated with different variables, including space, time, sensory cues, rewards and actions, with the extent of tuning depending on ongoing task demands1-8. However, it remains uncertain whether such diverse tuning corresponds to distinct functions within the hippocampal network or whether a more generic computation can account for these observations9. Here, to disentangle the contribution of externally driven cues versus internal computation, we developed a task in mice in which space, auditory tones, rewards and context were juxtaposed with changing relevance. High-density electrophysiological recordings revealed that neurons were tuned to each of these modalities. By comparing movement paths and action sequences, we observed that external variables had limited direct influence on hippocampal firing. Instead, spiking was influenced by online action plans and modulated by goal uncertainty. Our results suggest that internally generated cell assembly sequences are selected and updated by action plans towards deliberate goals. The apparent tuning of hippocampal neuronal spiking to different sensory modalities may emerge from alignment with the afforded action progression within a task rather than from representation of external cues.
Affiliation(s)
- Ipshita Zutshi
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Athina Apostolelli
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Wannan Yang
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
- Zheyang Sam Zheng
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
- Tora Dohi
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Edoardo Balzani
  - Center for Neural Science, New York University, New York, NY, USA
  - Center for Data Science, New York University, New York, NY, USA
- Alex H Williams
  - Center for Neural Science, New York University, New York, NY, USA
  - Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Cristina Savin
  - Center for Neural Science, New York University, New York, NY, USA
  - Center for Data Science, New York University, New York, NY, USA
- György Buzsáki
  - Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
2. Ray S, Yona I, Elami N, Palgi S, Latimer KW, Jacobsen B, Witter MP, Las L, Ulanovsky N. Hippocampal coding of identity, sex, hierarchy, and affiliation in a social group of wild fruit bats. Science 2025; 387:eadk9385. PMID: 39883756. DOI: 10.1126/science.adk9385.
Abstract
Social animals live in groups and interact volitionally in complex ways. However, little is known about neural responses under such natural conditions. Here, we investigated hippocampal CA1 neurons in a mixed-sex group of five to ten freely behaving wild Egyptian fruit bats that lived continuously in a laboratory-based cave and formed a stable social network. In flight, most hippocampal place cells were socially modulated and represented the identity and sex of conspecifics. Upon social interactions, neurons represented specific interaction types. During active observation, neurons encoded the bat's own position and head direction, together with the position, direction, and identity of multiple conspecifics. Identity-coding neurons encoded the same bat across contexts. The strength of identity coding was modulated by sex, hierarchy, and social affiliation. Thus, hippocampal neurons form a multidimensional sociospatial representation of the natural world.
Affiliation(s)
- Saikat Ray
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Itay Yona
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Nadav Elami
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Shaked Palgi
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Bente Jacobsen
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
  - Faculty of Medicine and Health Science, Kavli Institute for Systems Neuroscience, NTNU Norwegian University of Science and Technology, Trondheim, Norway
- Menno P Witter
  - Faculty of Medicine and Health Science, Kavli Institute for Systems Neuroscience, NTNU Norwegian University of Science and Technology, Trondheim, Norway
- Liora Las
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Nachum Ulanovsky
  - Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
3. Noel JP, Zhang R, Pitkow X, Angelaki DE. Dorsolateral prefrontal cortex drives strategic aborting by optimizing long-run policy extraction. bioRxiv [Preprint] 2024:2024.11.28.625897. PMID: 39651243. PMCID: PMC11623693. DOI: 10.1101/2024.11.28.625897.
Abstract
Real-world choices often involve balancing decisions that are optimized for the short versus the long term. Here, we reason that apparently sub-optimal single-trial decisions in macaques may in fact reflect long-term, strategic planning. We demonstrate that macaques freely navigating in VR for sequentially presented targets will strategically abort offers, forgoing more immediate rewards on individual trials to maximize session-long returns. This behavior is highly specific to the individual, demonstrating that macaques reason about their own long-run performance. Reinforcement-learning (RL) models suggest this behavior is algorithmically supported by modular actor-critic networks with a policy module not only optimizing long-term value functions but also informed of specific state-action values, allowing for rapid policy optimization. The behavior of artificial networks suggests that changes in policy for a matched offer ought to be evident as soon as offers are made, even if the aborting behavior occurs much later. We confirm this prediction by demonstrating that single units and population dynamics in macaque dorsolateral prefrontal cortex (dlPFC), but not parietal area 7a or the dorsomedial superior temporal area (MSTd), reflect the upcoming reward-maximizing aborting behavior upon offer presentation. These results cast dlPFC as a specialized policy module, and stand in contrast to recent work demonstrating the distributed and recurrent nature of belief-networks.
4. Dupin L, Gerardin E, Térémetz M, Hamdoun S, Turc G, Maier MA, Baron JC, Lindberg PG. Alterations of tactile and anatomical spatial representations of the hand after stroke. Cortex 2024; 177:68-83. PMID: 38838560. DOI: 10.1016/j.cortex.2024.04.015.
Abstract
Stroke often causes long-term motor and somatosensory impairments. Motor planning and tactile perception rely on spatial body representations. However, the link between altered spatial body representations, motor deficit and tactile spatial coding remains unclear. This study investigates the relationship between motor deficits and alterations of anatomical (body) and tactile spatial representations of the hand in 20 post-stroke patients with upper limb hemiparesis. Anatomical and tactile spatial representations were assessed from 10 targets (nails and knuckles), cued verbally by their anatomical names or by tactile stimulation, respectively. Two distance metrics (hand width and finger length) and two structural measures (relative organization of target positions and angular deviation of fingers from their physical posture) were computed and compared to clinical assessments, normative data and lesion sites. Over half of the patients had altered anatomical and/or tactile spatial representations. Metrics of tactile and anatomical representations showed common variations, where a wider hand representation was linked to more severe motor deficits. In contrast, alterations in structural measures were not concomitantly observed in tactile and anatomical representations and did not correlate with clinical assessments. Finally, a preliminary analysis showed that specific alterations in tactile structural measures were associated with dorsolateral prefrontal stroke lesions. This study reveals shared and distinct characteristics of anatomical and tactile hand spatial representations, reflecting different mechanisms that can be affected differently after stroke: the metrics and locations of tactile and anatomical representations were partially shared, while their structural measures had distinct characteristics.
Affiliation(s)
- Lucile Dupin
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
  - Université Paris Cité, INCC UMR 8002, CNRS, F-75006 Paris, France
- Eloïse Gerardin
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
- Maxime Térémetz
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
- Sonia Hamdoun
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
  - Service de Médecine Physique et de Réadaptation, GHU-Paris Psychiatrie et Neurosciences, Hôpital Sainte Anne, F-75014 Paris, France
- Guillaume Turc
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
  - Department of Neurology, GHU-Paris Psychiatrie et Neurosciences, FHU Neurovasc, Paris, France
- Marc A Maier
  - Université Paris Cité, INCC UMR 8002, CNRS, F-75006 Paris, France
- Jean-Claude Baron
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
  - Department of Neurology, GHU-Paris Psychiatrie et Neurosciences, FHU Neurovasc, Paris, France
- Påvel G Lindberg
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, F-75014 Paris, France
5. Zhang R, Pitkow X, Angelaki DE. Inductive biases of neural network modularity in spatial navigation. Sci Adv 2024; 10:eadk1256. PMID: 39028809. PMCID: PMC11259174. DOI: 10.1126/sciadv.adk1256.
Abstract
The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture enables better learning and generalization than architectures with less specialized modules. To test this, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the modular agent, with an architecture that segregates computations of state representation, value, and action into specialized modules, achieved better learning and generalization. Its learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to recursive Bayesian estimation. This agent's behavior also resembles macaques' behavior more closely. Our results shed light on the possible rationale for the brain's modularity and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.
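The abstract's "prediction and observation, weighted by their relative uncertainty, akin to recursive Bayesian estimation" is the standard Gaussian fusion step of a Kalman-style filter. A minimal sketch of that update, with all function and variable names our own rather than from the paper:

```python
def bayes_update(pred_mean, pred_var, obs_mean, obs_var):
    """Fuse a Gaussian prediction with a Gaussian observation,
    weighting each by its relative uncertainty (inverse variance)."""
    # Gain: trust the observation more when the prediction is uncertain
    k = pred_var / (pred_var + obs_var)
    post_mean = pred_mean + k * (obs_mean - pred_mean)
    post_var = (1.0 - k) * pred_var
    return post_mean, post_var

# Equal uncertainty: the posterior mean is the simple average
m, v = bayes_update(0.0, 1.0, 2.0, 1.0)  # → (1.0, 0.5)
```

Applied recursively (each posterior becomes the next prediction), this yields the state estimate that the modular agent's learned representation is said to resemble.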
Affiliation(s)
- Ruiyi Zhang
  - Tandon School of Engineering, New York University, New York, NY, USA
- Xaq Pitkow
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
  - Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA
  - Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
  - Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
  - Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Dora E. Angelaki
  - Tandon School of Engineering, New York University, New York, NY, USA
  - Center for Neural Science, New York University, New York, NY, USA
6. Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques. Nat Commun 2024; 15:5738. PMID: 38982106. PMCID: PMC11233555. DOI: 10.1038/s41467-024-50203-5.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorsolateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between neurons in prefrontal cortex maintains a stable population code and context-invariant beliefs during naturalistic behavior.
Affiliation(s)
- Jean-Paul Noel
  - Center for Neural Science, New York University, New York City, NY, USA
  - Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
- Edoardo Balzani
  - Center for Neural Science, New York University, New York City, NY, USA
  - Flatiron Institute, Simons Foundation, New York, NY, USA
- Cristina Savin
  - Center for Neural Science, New York University, New York City, NY, USA
- Dora E Angelaki
  - Center for Neural Science, New York University, New York City, NY, USA
7. Bufacchi RJ, Battaglia-Mayer A, Iannetti GD, Caminiti R. Cortico-spinal modularity in the parieto-frontal system: A new perspective on action control. Prog Neurobiol 2023; 231:102537. PMID: 37832714. DOI: 10.1016/j.pneurobio.2023.102537.
Abstract
Classical neurophysiology suggests that the motor cortex (MI) has a unique role in action control. In contrast, this review presents evidence for multiple parieto-frontal spinal command modules that can bypass MI. Five observations support this modular perspective: (i) the statistics of cortical connectivity demonstrate functionally related clusters of cortical areas, defining functional modules in the premotor, cingulate, and parietal cortices; (ii) different corticospinal pathways originate from the above areas, each with a distinct range of conduction velocities; (iii) the activation time of each module varies depending on task, and different modules can be activated simultaneously; (iv) a modular architecture with direct motor output is faster and less metabolically expensive than an architecture that relies on MI, given the slow connections between MI and other cortical areas; (v) lesions of the areas composing parieto-frontal modules have different effects from lesions of MI. Here we provide examples of six cortico-spinal modules and the functions they subserve: module 1) arm reaching, tool use and object construction; module 2) spatial navigation and locomotion; module 3) grasping and observation of hand and mouth actions; module 4) action initiation, motor sequences, time encoding; module 5) conditional motor association and learning, action plan switching and action inhibition; module 6) planning defensive actions. These modules can serve as a library of tools to be recombined when faced with novel tasks, and MI might serve as a recombinatory hub. In conclusion, the availability of locally stored information and multiple outflow paths supports the physiological plausibility of the proposed modular perspective.
Affiliation(s)
- R J Bufacchi
  - Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy
  - International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai, China
- A Battaglia-Mayer
  - Department of Physiology and Pharmacology, University of Rome, Sapienza, Italy
- G D Iannetti
  - Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy
  - Department of Neuroscience, Physiology and Pharmacology, University College London (UCL), London, UK
- R Caminiti
  - Neuroscience and Behaviour Laboratory, Istituto Italiano di Tecnologia, Rome, Italy
8. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. PMID: 37545301. PMCID: PMC10404932. DOI: 10.1098/rstb.2022.0333.
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty-i.e. the degree of confidence in a multisensory decision-are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
  - Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
  - Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
  - Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
  - Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
9. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. PMID: 37545300. PMCID: PMC10404925. DOI: 10.1098/rstb.2022.0344.
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
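The first inference in this abstract (was retinal motion caused by the target moving or by self-motion alone?) is a two-hypothesis Bayesian causal-inference problem. A toy version of that posterior, where every name and every prior/variance value is an illustrative assumption, not the authors' fitted model:

```python
from math import exp, pi, sqrt

def gauss(x, mu, var):
    """Gaussian probability density."""
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def p_object_moving(retinal_motion, p_prior=0.5, var_self=1.0, var_obj=4.0):
    """Posterior probability that retinal motion was caused by object motion
    (H1) rather than self-motion alone (H0). Under H0 retinal motion is
    centered on 0 with variance var_self; under H1 object motion contributes
    additional variance var_obj, making large retinal motion more likely."""
    like_h0 = gauss(retinal_motion, 0.0, var_self)
    like_h1 = gauss(retinal_motion, 0.0, var_self + var_obj)
    return like_h1 * p_prior / (like_h1 * p_prior + like_h0 * (1 - p_prior))
```

With these assumed parameters, small retinal motion is attributed to the self (posterior below 0.5) and large retinal motion to the object; the paper's misattribution findings correspond to this posterior being biased toward H0 when the observer is moving.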
Affiliation(s)
- Jean-Paul Noel
  - Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
  - Department of Neurobiology, Harvard University, Boston, MA 02115, USA
  - Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
  - Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
  - Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
  - Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
  - Center for Neural Science, New York University, New York, NY 10003, USA
  - Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
  - Department of Neurobiology, Harvard University, Boston, MA 02115, USA
  - Center for Brain Science, Harvard University, Boston, MA 02115, USA
10. Stavropoulos A, Lakshminarasimhan KJ, Angelaki DE. Belief embodiment through eye movements facilitates memory-guided navigation. bioRxiv [Preprint] 2023:2023.08.21.554107. PMID: 37662309. PMCID: PMC10473632. DOI: 10.1101/2023.08.21.554107.
Abstract
Neural network models optimized for task performance often excel at predicting neural activity but do not explain other properties, such as the distributed representation across functionally distinct areas. Distributed representations may arise from animals' strategies for resource utilization; however, fixation-based paradigms deprive animals of a vital resource: eye movements. During a naturalistic task in which humans use a joystick to steer and catch flashing fireflies in a virtual environment lacking position cues, subjects physically track the latent task variable with their gaze. We show that this strategy also holds during an inertial version of the task in the absence of optic flow, and demonstrate that these task-relevant eye movements reflect an embodiment of the subjects' dynamically evolving internal beliefs about the goal. A neural network model with tuned recurrent connectivity between oculomotor and evidence-integrating frontoparietal circuits accounted for this behavioral strategy. Critically, this model better explained neural data from monkeys' posterior parietal cortex than task-optimized models unconstrained by such an oculomotor-based cognitive strategy. These results highlight the importance of unconstrained movement in working memory computations and establish a functional significance of oculomotor signals for evidence-integration and navigation computations via embodied cognition.
Affiliation(s)
- Dora E. Angelaki
  - Center for Neural Science, New York University, New York, NY, USA
  - Tandon School of Engineering, New York University, New York, NY, USA
11. Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex. bioRxiv [Preprint] 2023:2023.07.30.551169. PMID: 37577498. PMCID: PMC10418097. DOI: 10.1101/2023.07.30.551169.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorsolateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.
Affiliation(s)
- Jean-Paul Noel
  - Center for Neural Science, New York University, New York City, NY, USA
- Edoardo Balzani
  - Center for Neural Science, New York University, New York City, NY, USA
- Cristina Savin
  - Center for Neural Science, New York University, New York City, NY, USA
- Dora E. Angelaki
  - Center for Neural Science, New York University, New York City, NY, USA
12. Zhu SL, Lakshminarasimhan KJ, Angelaki DE. Computational cross-species views of the hippocampal formation. Hippocampus 2023; 33:586-599. PMID: 37038890. PMCID: PMC10947336. DOI: 10.1002/hipo.23535.
Abstract
The discovery of place cells and head direction cells in the hippocampal formation of freely foraging rodents has led to an emphasis on its role in encoding allocentric spatial relationships. In contrast, studies in head-fixed primates have additionally found representations of spatial views. We review recent experiments in freely moving monkeys that expand upon these findings and show that postural variables such as eye/head movements strongly influence neural activity in the hippocampal formation, suggesting that the function of the hippocampus depends on where the animal looks. We interpret these results in the light of recent studies of humans performing challenging navigation tasks, which suggest that, depending on the context, eye/head movements serve one of two roles: gathering information about the structure of the environment (active sensing) or externalizing the contents of internal beliefs/deliberation (embodied cognition). These findings prompt future experimental investigations into the information carried by signals flowing between the hippocampal formation and the brain regions controlling postural variables, and constitute a basis for updating computational theories of the hippocampal system to accommodate the influence of eye/head movements.
Affiliation(s)
- Seren L Zhu
- Center for Neural Science, New York University, New York, New York, USA
| | - Kaushik J Lakshminarasimhan
- Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, New York, USA
- Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, New York, New York, USA
13
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.27.525974. [PMID: 36778376 PMCID: PMC9915492 DOI: 10.1101/2023.01.27.525974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analyses of eye movements show that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit is modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
14
Maisson DJN, Wikenheiser A, Noel JPG, Keinath AT. Making Sense of the Multiplicity and Dynamics of Navigational Codes in the Brain. J Neurosci 2022; 42:8450-8459. [PMID: 36351831 PMCID: PMC9665915 DOI: 10.1523/jneurosci.1124-22.2022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 08/18/2022] [Accepted: 08/23/2022] [Indexed: 11/17/2022] Open
Abstract
Since the discovery of conspicuously spatially tuned neurons in the hippocampal formation over 50 years ago, characterizing which, where, and how neurons encode navigationally relevant variables has been a major thrust of navigational neuroscience. While much of this effort has centered on the hippocampal formation and functionally-adjacent structures, recent work suggests that spatial codes, in some form or another, can be found throughout the brain, even in areas traditionally associated with sensation, movement, and executive function. In this review, we highlight these unexpected results, draw insights from comparison of these codes across contexts, regions, and species, and finally suggest an avenue for future work to make sense of these diverse and dynamic navigational codes.
Affiliation(s)
- David J-N Maisson
- Department of Neuroscience, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew Wikenheiser
- Department of Psychology, University of California, Los Angeles, California 90024
- Jean-Paul G Noel
- Center for Neural Science, New York University, New York, New York 10003
- Alexandra T Keinath
- Department of Psychiatry, Douglas Hospital Research Centre, McGill University, Verdun, Quebec H3A 0G4, Canada
- Department of Psychology, University of Illinois Chicago, Chicago, Illinois 60607
15
Noel JP, Balzani E, Avila E, Lakshminarasimhan KJ, Bruni S, Alefantis P, Savin C, Angelaki DE. Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 2022; 11:e80280. [PMID: 36282071 PMCID: PMC9668339 DOI: 10.7554/elife.80280] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Accepted: 10/24/2022] [Indexed: 11/13/2022] Open
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than of these areas and 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained functional subnetworks may be dynamically established to subserve (embodied) task strategies.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, United States
- Eric Avila
- Center for Neural Science, New York University, New York City, United States
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York City, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Stefania Bruni
- Center for Neural Science, New York University, New York City, United States
- Panos Alefantis
- Center for Neural Science, New York University, New York City, United States
- Cristina Savin
- Center for Neural Science, New York University, New York City, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States