1
Rebecca RG, Ascoli GA, Sutton NM, Dannenberg H. Spatial periodicity in grid cell firing is explained by a neural sequence code of 2-D trajectories. eLife 2025; 13:RP96627. [PMID: 40396463 DOI: 10.7554/elife.96627]
Abstract
Spatial periodicity in grid cell firing has been interpreted as a neural metric for space providing animals with a coordinate system in navigating physical and mental spaces. However, the specific computational problem being solved by grid cells has remained elusive. Here, we provide mathematical proof that spatial periodicity in grid cell firing is the only possible solution to a neural sequence code of 2-D trajectories and that the hexagonal firing pattern of grid cells is the most parsimonious solution to such a sequence code. We thereby provide a likely teleological cause for the existence of grid cells and reveal the underlying nature of the global geometric organization in grid maps as a direct consequence of a simple local sequence code. A sequence code by grid cells provides intuitive explanations for many previously puzzling experimental observations and may transform our thinking about grid cells.
Affiliation(s)
- R.G. Rebecca
- Department of Mathematical Sciences, George Mason University, Fairfax, United States
- Giorgio A Ascoli
- Department of Bioengineering, George Mason University, Fairfax, United States
- Nate M Sutton
- Department of Bioengineering, George Mason University, Fairfax, United States
- Holger Dannenberg
- Department of Bioengineering, George Mason University, Fairfax, United States
2
Huber DE. A memory model of rodent spatial navigation in which place cells are memories arranged in a grid and grid cells are non-spatial. eLife 2025; 13:RP95733. [PMID: 40388324 PMCID: PMC12088679 DOI: 10.7554/elife.95733]
Abstract
A theory and neurocomputational model are presented that explain grid cell responses as the byproduct of equally dissimilar hippocampal memories. On this account, place and grid cells are best understood as the natural consequence of memory encoding and retrieval; a precise hexagonal grid is the exception rather than the rule, emerging when the animal explores a large surface that is devoid of landmarks and objects. In the proposed memory model, place cells represent memories that are conjunctions of both spatial and non-spatial attributes, and grid cells primarily represent the non-spatial attributes (e.g. sounds, surface texture, etc.) found throughout the two-dimensional recording enclosure. Place cells support memories of the locations where non-spatial attributes can be found (e.g. positions with a particular sound), which are arranged in a hexagonal lattice owing to memory encoding and consolidation processes (pattern separation) as applied to situations in which the non-spatial attributes are found at all locations of a two-dimensional surface. Grid cells exhibit their spatial firing pattern owing to feedback from hippocampal place cells (i.e. a hexagonal pattern of remembered locations for the non-spatial attribute represented by a grid cell). Model simulations explain a wide variety of results in the rodent spatial navigation literature.
Affiliation(s)
- David E Huber
- Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, United States
3
Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. [PMID: 39756119 DOI: 10.1016/j.neunet.2024.107079]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
4
Haga T, Oseki Y, Fukai T. A unified neural representation model for spatial and conceptual computations. Proc Natl Acad Sci U S A 2025; 122:e2413449122. [PMID: 40063809 PMCID: PMC11929392 DOI: 10.1073/pnas.2413449122]
Abstract
The hippocampus and entorhinal cortex encode spaces by spatially local and hexagonal grid activity patterns (place cells and grid cells), respectively. In addition, the same brain regions also contain neural representations of nonspatial, semantic concepts (concept cells). These observations suggest that neurocomputational mechanisms for spatial knowledge and semantic concepts are related in the brain. However, the exact relationship remains to be understood. Here, we show a mathematical correspondence between a value function for goal-directed spatial navigation and an information measure for word embedding models in natural language processing. Based on this relationship, we integrate spatial and semantic computations into a neural representation model called "disentangled successor information" (DSI). DSI generates biologically plausible neural representations: spatial representations like place cells and grid cells, and concept-specific word representations that resemble concept cells. Furthermore, with DSI representations, we can perform inferences of spatial contexts and words through a common computational framework based on simple arithmetic operations. This computation can be biologically interpreted as partial modulations of cell assemblies of nongrid cells and concept cells. Our model offers a theoretical connection between spatial and semantic computations and suggests possible computational roles of hippocampal and entorhinal neural representations.
Affiliation(s)
- Tatsuya Haga
- Neural Computation and Brain Coding Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa 1919-1, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita-shi, Osaka 565-0871, Japan
- Yohei Oseki
- Department of Language and Information Sciences, University of Tokyo, Meguro-ku, Tokyo 153-8902, Japan
- Tomoki Fukai
- Neural Computation and Brain Coding Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa 1919-1, Japan
5
Kim CM, Chow CC, Averbeck BB. Neural dynamics of reversal learning in the prefrontal cortex and recurrent neural networks. bioRxiv 2025:2024.09.14.613033. [PMID: 39372802 PMCID: PMC11451584 DOI: 10.1101/2024.09.14.613033]
Abstract
In probabilistic reversal learning, the choice option yielding reward with higher probability switches at a random trial. To perform optimally in this task, one has to accumulate evidence across trials to infer the probability that a reversal has occurred. We investigated how this reversal probability is represented in cortical neurons by analyzing neural activity in the prefrontal cortex of monkeys and in recurrent neural networks trained on the task. We found that, in a neural subspace encoding reversal probability, activity represented the integration of reward outcomes as in a line attractor model. The reversal probability activity at the start of a trial was stationary, stable, and consistent with attractor dynamics. During the trial, however, the activity was associated with task-related behavior and became non-stationary, deviating from the line attractor. Fitting a predictive model to neural data showed that the stationary state at the trial start serves as an initial condition for launching the non-stationary activity, suggesting an extension of the line attractor model with behavior-induced non-stationary dynamics. The non-stationary trajectories were separable, indicating that they can represent distinct probability values. Perturbing the reversal probability activity in the recurrent neural networks biased choice outcomes, demonstrating its functional significance. In sum, our results show that cortical networks encode reversal probability in a stable stationary state at the start of a trial and use it to initiate non-stationary dynamics that accommodate task-related behavior while maintaining the reversal information.
6
Hasselmo ME, LaChance PA, Robinson JC, Malmberg SL, Patel M, Gross E, Everett DE, Sankaranarayanan S, Fang J. How does the response of egocentric boundary cells depend upon the coordinate system of environmental features? bioRxiv 2025:2025.02.03.636060. [PMID: 39974880 PMCID: PMC11838506 DOI: 10.1101/2025.02.03.636060]
Abstract
Neurons in the retrosplenial cortex (RSC) (Alexander et al., 2020a; LaChance and Hasselmo, 2024) and postrhinal cortex (POR) respond to environmental boundaries and configurations in egocentric coordinates relative to an animal's current position. Neurons in these and adjacent structures also respond to spatial dimensions of self-motion such as running velocity (Carstensen et al., 2021; Robinson et al., 2024). Data and modeling suggest that these responses could be essential for guiding behaviors such as barrier avoidance and goal finding (Erdem and Hasselmo, 2012; 2014). However, these findings leave a question unanswered: what are the features, and what are the coordinate systems of those features, that drive these egocentric neural responses? Here we present models of the potential circuit mechanisms generating egocentric responses in RSC. These responses can be generated from internal representations of barriers coded in head-centered coordinates of distance and angle, transformed on the basis of current running velocity for trajectory planning and obstacle avoidance. This hypothesis is compared with an alternate, potentially complementary hypothesis that neurons in the same regions respond to the retinotopic position of features at the top, bottom, or edges of walls as a precursor to head-centered coordinates. Further alternatives include the forward scanning of trajectories (ray tracing) to test for collision with barriers, or the comparison of optic flow on different sides of the animal. These hypotheses generate complementary modeling predictions, presented here, about how changes in environmental parameters could alter the neural responses of egocentric boundary cells.
7
Rebecca R, Ascoli GA, Sutton NM, Dannenberg H. Spatial periodicity in grid cell firing is explained by a neural sequence code of 2-D trajectories. bioRxiv 2025:2023.05.30.542747. [PMID: 37398455 PMCID: PMC10312530 DOI: 10.1101/2023.05.30.542747]
Abstract
Spatial periodicity in grid cell firing has been interpreted as a neural metric for space providing animals with a coordinate system in navigating physical and mental spaces. However, the specific computational problem being solved by grid cells has remained elusive. Here, we provide mathematical proof that spatial periodicity in grid cell firing is the only possible solution to a neural sequence code of 2-D trajectories and that the hexagonal firing pattern of grid cells is the most parsimonious solution to such a sequence code. We thereby provide a likely teleological cause for the existence of grid cells and reveal the underlying nature of the global geometric organization in grid maps as a direct consequence of a simple local sequence code. A sequence code by grid cells provides intuitive explanations for many previously puzzling experimental observations and may transform our thinking about grid cells.
Affiliation(s)
- R.G. Rebecca
- Department of Mathematical Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030
- Giorgio A. Ascoli
- Department of Bioengineering, George Mason University, 4400 University Dr., Fairfax, VA 22030
- Nate M. Sutton
- Department of Bioengineering, George Mason University, 4400 University Dr., Fairfax, VA 22030
- Holger Dannenberg
- Department of Bioengineering, George Mason University, 4400 University Dr., Fairfax, VA 22030
8
Redman WT, Acosta-Mendoza S, Wei XX, Goard MJ. Robust variability of grid cell properties within individual grid modules enhances encoding of local space. bioRxiv 2024:2024.02.27.582373. [PMID: 38915504 PMCID: PMC11195105 DOI: 10.1101/2024.02.27.582373]
Abstract
Although grid cells are one of the most well-studied functional classes of neurons in the mammalian brain, whether there is a single orientation and spacing value per grid module has not been carefully tested. We analyze a recent large-scale recording of medial entorhinal cortex to characterize the presence and degree of heterogeneity of grid properties within individual modules. We find evidence for small, but robust, variability and hypothesize that this property of the grid code could enhance the encoding of local spatial information. Performing analysis on synthetic populations of grid cells, where we have complete control over the amount of heterogeneity in grid properties, we demonstrate that grid property variability of a magnitude similar to that in the analyzed data leads to significantly decreased decoding error. This holds even when the analysis is restricted to activity from a single module. Our results highlight how heterogeneity of neural response properties may benefit coding and open new directions for theoretical and experimental analysis of grid cells.
Affiliation(s)
- William T Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Intelligent Systems Center, Johns Hopkins University Applied Physics Lab
- Santiago Acosta-Mendoza
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Xue-Xin Wei
- Department of Neuroscience, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Michael J Goard
- Department of Psychological and Brain Sciences, University of California, Santa Barbara
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara
- Neuroscience Research Institute, University of California, Santa Barbara
9
Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024; 187:5814-5832. [PMID: 39423801 PMCID: PMC11637322 DOI: 10.1016/j.cell.2024.08.051]
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
Affiliation(s)
- Mackenzie Weygandt Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland.
- Adriana Perez Rotondo
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang
- Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford BioX, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
10
Dong LL, Fiete IR. Grid Cells in Cognition: Mechanisms and Function. Annu Rev Neurosci 2024; 47:345-368. [PMID: 38684081 DOI: 10.1146/annurev-neuro-101323-112047]
Abstract
The activity patterns of grid cells form distinctively regular triangular lattices over the explored spatial environment and are largely invariant to visual stimuli, animal movement, and environment geometry. These neurons present numerous fascinating challenges to the curious (neuro)scientist: What are the circuit mechanisms responsible for creating spatially periodic activity patterns from the monotonic input-output responses of single neurons? How and why does the brain encode a local, nonperiodic variable (the allocentric position of the animal) with a periodic, nonlocal code? And are grid cells truly specialized for spatial computations? If not, what is their role in cognition more broadly? We review efforts in uncovering the mechanisms and functional properties of grid cells, highlighting recent progress in the experimental validation of mechanistic grid cell models, and discuss the coding properties and functional advantages of the grid code as suggested by continuous attractor network models of grid cells.
Affiliation(s)
- Ling L Dong
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Ila R Fiete
- McGovern Institute and K. Lisa Yang Integrative Computational Neuroscience Center, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
11
Mondal SS, Frankland S, Webb TW, Cohen JD. Determinantal point process attention over grid cell code supports out of distribution generalization. eLife 2024; 12:RP89911. [PMID: 39088258 PMCID: PMC11293867 DOI: 10.7554/elife.89911]
Abstract
Deep neural networks have made tremendous gains in emulating human-like intelligence and are increasingly used to understand how the brain may solve the complex computational problems on which such intelligence relies. However, they still fall short of the strong forms of generalization of which humans are capable, and therefore fail to provide insight into how the brain supports them. One such case is out-of-distribution (OOD) generalization: successful performance on test examples that lie outside the distribution of the training set. Here, we identify properties of processing in the brain that may contribute to this ability. We describe a two-part algorithm that draws on specific features of neural computation to achieve OOD generalization, and provide a proof of concept by evaluating performance on two challenging cognitive tasks. First, we draw on the fact that the mammalian brain represents metric spaces using the grid cell code (e.g., in the entorhinal cortex): abstract representations of relational structure, organized in recurring motifs that cover the representational space. Second, we propose an attentional mechanism that operates over the grid cell code using a determinantal point process (DPP), which we call DPP attention (DPP-A): a transformation that ensures maximum sparseness in the coverage of that space. We show that a loss function combining standard task-optimized error with DPP-A can exploit the recurring motifs in the grid cell code, and can be integrated with common architectures to achieve strong OOD generalization performance on analogy and arithmetic tasks. This provides both an interpretation of how the grid cell code in the mammalian brain may contribute to generalization performance and a potential means of improving such capabilities in artificial neural networks.
Affiliation(s)
- Shanka Subhra Mondal
- Department of Electrical and Computer Engineering, Princeton University, Princeton, United States
- Steven Frankland
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Taylor W Webb
- Department of Psychology, University of California, Los Angeles, Los Angeles, United States
- Jonathan D Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
12
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. [PMID: 38553340 DOI: 10.1016/j.tics.2024.03.003]
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, École Normale Supérieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
13
Qu Y, Wei C, Du P, Che W, Zhang C, Ouyang W, Bian Y, Xu F, Hu B, Du K, Wu H, Liu J, Liu Q. Integration of cognitive tasks into artificial general intelligence test for large models. iScience 2024; 27:109550. [PMID: 38595796 PMCID: PMC11001637 DOI: 10.1016/j.isci.2024.109550]
Abstract
During the evolution of large models, performance evaluation is necessary for assessing their capabilities. However, current model evaluations mainly rely on specific tasks and datasets, lacking a unified framework for assessing the multidimensional intelligence of large models. In this perspective, we advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests, covering crystallized, fluid, social, and embodied intelligence. The AGI tests consist of well-designed cognitive tests adapted from human intelligence tests, which are then naturally encapsulated in an immersive virtual community. We propose increasing the complexity of AGI testing tasks commensurate with advancements in large models, and emphasize the necessity of interpreting test results to avoid false negatives and false positives. We believe that cognitive science-inspired AGI tests will effectively guide the targeted improvement of large models along specific dimensions of intelligence and accelerate the integration of large models into human society.
Affiliation(s)
- Youzhi Qu
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Chen Wei
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Penghui Du
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Wenxin Che
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Chi Zhang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Feiyang Xu
- iFLYTEK AI Research, Hefei 230088, China
- Bin Hu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Kai Du
- Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- Haiyan Wu
- Centre for Cognitive and Brain Sciences and Department of Psychology, University of Macau, Macau 999078, China
- Jia Liu
- Department of Psychology, Tsinghua University, Beijing 100084, China
- Quanying Liu
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
14
McNamee DC. The generative neural microdynamics of cognitive processing. Curr Opin Neurobiol 2024; 85:102855. [PMID: 38428170 DOI: 10.1016/j.conb.2024.102855]
Abstract
The entorhinal cortex and hippocampus form a recurrent network that informs many cognitive processes, including memory, planning, navigation, and imagination. Neural recordings from these regions reveal spatially organized population codes corresponding to external environments and abstract spaces. Aligning the former cognitive functionalities with the latter neural phenomena is a central challenge in understanding the entorhinal-hippocampal circuit (EHC). Disparate experiments demonstrate a surprising level of complexity and apparent disorder in the intricate spatiotemporal dynamics of sequential non-local hippocampal reactivations, which occur particularly, though not exclusively, during immobile pauses and rest. We review these phenomena with a particular focus on their apparent lack of physical simulative realism. These observations are then integrated within a theoretical framework and proposed neural circuit mechanisms that normatively characterize this neural complexity by conceiving different regimes of hippocampal microdynamics as neuromarkers of diverse cognitive computations.
15
Fernandez-Leon JA, Sarramone L. The grid-cell normative model: Unifying 'principles'. Biosystems 2024; 235:105091. [PMID: 38040283 DOI: 10.1016/j.biosystems.2023.105091]
Abstract
A normative model for the emergence of entorhinal grid cells in the brain's navigational system has been proposed (Sorscher et al., 2023. Neuron 111, 121-137). Using computational modeling of place-to-grid cell interactions, the authors characterized the fundamental nature of grid cells through information processing. However, the normative model does not consider certain discoveries that complement or contradict the conditions for such emergence. By briefly reviewing current evidence, we draw implications regarding the interplay between place cell replay sequences and intrinsic grid cell oscillations in the hippocampal-entorhinal navigation system that can extend the normative model.
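The hexagonal firing pattern that such normative models aim to explain is conventionally idealized as a sum of three cosine plane waves 60° apart. A minimal sketch of that idealization (illustrative only; the spacing and phase values are arbitrary assumptions, not parameters of the reviewed model):

```python
import numpy as np

def grid_rate(x, y, spacing=0.4, phase=(0.0, 0.0)):
    """Idealized hexagonal grid-cell rate map: three cosine plane waves 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number giving the desired grid spacing
    rate = sum(
        np.cos(k * np.cos(theta) * (x - phase[0]) + k * np.sin(theta) * (y - phase[1]))
        for theta in (0.0, np.pi / 3, 2 * np.pi / 3)
    )
    return (rate + 1.5) / 4.5  # rescale from [-1.5, 3] to [0, 1]
```

Translating the position by one lattice vector, e.g. `spacing * (sqrt(3)/2, 1/2)`, leaves the rate unchanged, which is the spatial periodicity at issue in these models.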
Affiliation(s)
- Jose A Fernandez-Leon: Universidad Nacional del Centro de la Provincia de Buenos Aires (UNCPBA), Fac. Cs. Exactas, INTIA, Tandil, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina; CIFICEN, UNCPBA-CICPBA-CONICET, Tandil, Argentina
- Luca Sarramone: Universidad Nacional del Centro de la Provincia de Buenos Aires (UNCPBA), Fac. Cs. Exactas, INTIA, Tandil, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Buenos Aires, Argentina

16
Brown LS, Cho JR, Bolkan SS, Nieh EH, Schottdorf M, Tank DW, Brody CD, Witten IB, Goldman MS. Neural circuit models for evidence accumulation through choice-selective sequences. bioRxiv 2023:2023.09.01.555612. [PMID: 38234715] [PMCID: PMC10793437] [DOI: 10.1101/2023.09.01.555612]
Abstract
Decision making is traditionally thought to be mediated by populations of neurons whose firing rates persistently accumulate evidence across time. However, recent decision-making experiments in rodents have observed neurons across the brain that fire sequentially as a function of spatial position or time, rather than persistently, with the subset of neurons in the sequence depending on the animal's choice. We develop two new candidate circuit models, in which evidence is encoded either in the relative firing rates of two competing chains of neurons or in the network location of a stereotyped pattern ("bump") of neural activity. Encoded evidence is then faithfully transferred between neuronal populations representing different positions or times. Neural recordings from four different brain regions during a decision-making task showed that, during the evidence accumulation period, different brain regions displayed tuning curves consistent with different candidate models for evidence accumulation. This work provides mechanistic models and potential neural substrates for how graded-value information may be precisely accumulated within and transferred between neural populations, a set of computations fundamental to many cognitive operations.
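A hedged illustration of the first candidate model described above (a toy, not the authors' code; the unit counts, baseline rate, and gain are arbitrary assumptions): evidence is carried as the rate difference between two chains whose active unit steps forward with position.

```python
import numpy as np

def competing_chains(pulses, n_pos=10, gain=0.1, base=1.0):
    """Toy evidence accumulator: two competing chains of position-tuned units.

    pulses: sequence of +1 (left cue) / -1 (right cue), one per position bin.
    The active unit advances with position; accumulated evidence is carried
    as the left-minus-right rate difference, handed forward along the chains.
    """
    left, right = np.zeros(n_pos), np.zeros(n_pos)
    evidence = 0.0
    for i, p in enumerate(pulses[:n_pos]):
        evidence += gain * p       # integrate the new cue
        left[i] = base + evidence  # currently active left-chain unit
        right[i] = base - evidence # competing right-chain unit
    return left, right, evidence
```

Each unit fires only at its own position bin (sequential, not persistent, activity), yet the graded evidence value is transferred faithfully from one pair of units to the next.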
17
Farrell M, Recanatesi S, Shea-Brown E. From lazy to rich to exclusive task representations in neural networks and neural codes. Curr Opin Neurobiol 2023; 83:102780. [PMID: 37757585] [DOI: 10.1016/j.conb.2023.102780]
Abstract
Neural circuits, both in the brain and in "artificial" neural network models, learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models for brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recently developed theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.
Affiliation(s)
- Matthew Farrell: John A. Paulson School of Engineering and Applied Sciences, Harvard University and Center for Brain Science, Harvard University, United States
- Stefano Recanatesi: Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States
- Eric Shea-Brown: Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States

18
Schøyen V, Pettersen MB, Holzhausen K, Fyhn M, Malthe-Sørenssen A, Lepperød ME. Coherently remapping toroidal cells but not grid cells are responsible for path integration in virtual agents. iScience 2023; 26:108102. [PMID: 37867941] [PMCID: PMC10589895] [DOI: 10.1016/j.isci.2023.108102]
Abstract
It is widely believed that grid cells provide cues for path integration, with place cells encoding an animal's location and environmental identity. When entering a new environment, these cells remap concurrently, sparking debates about their causal relationship. Using a continuous attractor recurrent neural network, we study spatial cell dynamics in multiple environments. We investigate grid cell remapping as a function of global remapping in place-like units through random resampling of place cell centers. Dimensionality reduction techniques reveal that a subset of cells manifests a persistent torus across environments. Unexpectedly, these toroidal cells resemble band-like cells rather than high-grid-score units. Subsequent pruning studies reveal that toroidal cells are crucial for path integration while grid cells are not. As we extend the model to operate across many environments, we delineate its generalization boundaries, revealing the challenges current models face in representing many environments.
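What "path integration on a torus" amounts to can be sketched with a toy dead-reckoning model (not the paper's network; the spatial period and noise levels are assumed for illustration): position is carried as two phases that integrate velocity and wrap, so the code recovers location only modulo the spatial period, with slow drift from velocity noise.

```python
import numpy as np

rng = np.random.default_rng(0)
period = 0.5                  # assumed spatial period of the toroidal code
true_pos = np.zeros(2)
phase = np.zeros(2)           # position encoded as two angles on a torus

for _ in range(1000):
    v = rng.normal(0.0, 0.01, size=2)              # self-motion step
    true_pos += v
    noisy_v = v + rng.normal(0.0, 0.001, size=2)   # imperfect velocity input
    phase = (phase + 2 * np.pi * noisy_v / period) % (2 * np.pi)

# the toroidal code tracks position only up to the period
target = (2 * np.pi * true_pos / period) % (2 * np.pi)
err = (phase - target + np.pi) % (2 * np.pi) - np.pi  # circular error
```

The wraparound is the point: a single toroidal module is ambiguous across periods, which is one reason combining modules (or place-like readouts) matters for unambiguous localization.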
Affiliation(s)
- Vemund Schøyen: Department of Biosciences, University of Oslo, Oslo 0313, Norway
- Marianne Fyhn: Department of Biosciences, University of Oslo, Oslo 0313, Norway; Simula Research Laboratory, Norway
- Anders Malthe-Sørenssen: Department of Physics, University of Oslo, Oslo 0313, Norway; Simula Research Laboratory, Norway
- Mikkel Elle Lepperød: Department of Physics, University of Oslo, Oslo 0313, Norway; Department of Biosciences, University of Oslo, Oslo 0313, Norway; Simula Research Laboratory, Norway

19
19
|
Applegate MC, Gutnichenko KS, Mackevicius EL, Aronov D. An entorhinal-like region in food-caching birds. Curr Biol 2023; 33:2465-2477.e7. [PMID: 37295426] [PMCID: PMC10329498] [DOI: 10.1016/j.cub.2023.05.031]
Abstract
The mammalian entorhinal cortex routes inputs from diverse sources into the hippocampus. This information is mixed and expressed in the activity of many specialized entorhinal cell types, which are considered indispensable for hippocampal function. However, functionally similar hippocampi exist even in non-mammals that lack an obvious entorhinal cortex or, generally, any layered cortex. To address this dilemma, we mapped extrinsic hippocampal connections in chickadees, whose hippocampi are used for remembering numerous food caches. We found a well-delineated structure in these birds that is topologically similar to the entorhinal cortex and interfaces between the hippocampus and other pallial regions. Recordings of this structure revealed entorhinal-like activity, including border and multi-field grid-like cells. These cells were localized to the subregion predicted by anatomical mapping to match the dorsomedial entorhinal cortex. Our findings uncover an anatomical and physiological equivalence of vastly different brains, suggesting a fundamental nature of entorhinal-like computations for hippocampal function.
Affiliation(s)
- Marissa C Applegate: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Konstantin S Gutnichenko: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Emily L Mackevicius: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Dmitriy Aronov: Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA

20
Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. [PMID: 37055616] [PMCID: PMC11058347] [DOI: 10.1038/s41583-023-00693-x]
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds (low-dimensional representations of behavioural signals in neural population activity) and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
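The low-rank idea above can be made concrete with a toy linear network (a sketch under simplifying assumptions, linear dynamics and rank-1 weights, not a model from the review): after one recurrent update, activity lies exactly in the one-dimensional subspace spanned by the connectivity's output vector, i.e. on the manifold determined by the circuit structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
m = rng.normal(size=n)      # output (column) pattern of the connectivity
w = rng.normal(size=n)      # input (row) pattern
W = np.outer(m, w) / n      # rank-1 recurrent weight matrix

x = rng.normal(size=n)      # arbitrary initial activity
for _ in range(3):
    x = W @ x               # linear recurrent update

# activity is confined to span(m): projecting out m leaves ~nothing
residual = x - m * (m @ x) / (m @ m)
```

Here the heterogeneous-looking vector `m` is the "activity pattern on the manifold", and its overlap with `w` sets the dynamics along that manifold; higher-rank `W` (a sum of such outer products) gives higher-dimensional manifolds.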
Affiliation(s)
- Christopher Langdon: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin: Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA

21
Frey M, Mathis MW, Mathis A. NeuroAI: If grid cells are the answer, is path integration the question? Curr Biol 2023; 33:R190-R192. [PMID: 36917942] [DOI: 10.1016/j.cub.2023.01.031]
Abstract
Spatially modulated neurons known as grid cells are thought to play an important role in spatial cognition. A new study has found that units with grid-cell-like properties can emerge within artificial neural networks trained to path integrate, and has developed a unifying theory of how these cells form, showing which circuit constraints are necessary and how learned systems carry out path integration.
Affiliation(s)
- Markus Frey: École Polytechnique Fédérale de Lausanne (EPFL), Brain Mind Institute & Neuro-X Institute, Geneva, Switzerland
- Mackenzie W Mathis: École Polytechnique Fédérale de Lausanne (EPFL), Brain Mind Institute & Neuro-X Institute, Geneva, Switzerland
- Alexander Mathis: École Polytechnique Fédérale de Lausanne (EPFL), Brain Mind Institute & Neuro-X Institute, Geneva, Switzerland

22
Kanwisher N, Khosla M, Dobs K. Using artificial neural networks to ask 'why' questions of minds and brains. Trends Neurosci 2023; 46:240-254. [PMID: 36658072] [DOI: 10.1016/j.tins.2022.12.008]
Abstract
Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when the properties of networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.
Affiliation(s)
- Nancy Kanwisher: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Meenakshi Khosla: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Katharina Dobs: Department of Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany

23
Kang YHR, Wolpert DM, Lengyel M. Spatial uncertainty and environmental geometry in navigation. bioRxiv 2023:2023.01.30.526278. [PMID: 36778354] [PMCID: PMC9915518] [DOI: 10.1101/2023.01.30.526278]
Abstract
Variations in the geometry of the environment, such as the shape and size of an enclosure, have profound effects on navigational behavior and its neural underpinning. Here, we show that these effects arise as a consequence of a single, unifying principle: to navigate efficiently, the brain must maintain and update the uncertainty about one's location. We developed an image-computable Bayesian ideal observer model of navigation, continually combining noisy visual and self-motion inputs, and a neural encoding model optimized to represent the location uncertainty computed by the ideal observer. Through mathematical analysis and numerical simulations, we show that the ideal observer accounts for a diverse range of sometimes paradoxical distortions of human homing behavior in anisotropic and deformed environments, including 'boundary tethering', and its neural encoding accounts for distortions of rodent grid cell responses under identical environmental manipulations. Our results demonstrate that spatial uncertainty plays a key role in navigation.
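The core computation this abstract describes, maintaining location uncertainty by fusing self-motion with noisy visual input, can be sketched as a 1-D Kalman filter (an illustrative stand-in, not the paper's image-computable model; all noise values here are assumed):

```python
def navigate_step(mu, var, v, z, q=0.05**2, r=0.2**2):
    """One belief update over position.

    Predict: integrate the self-motion estimate v; uncertainty grows by q.
    Update:  fuse the noisy visual position fix z; uncertainty shrinks.
    """
    mu_pred, var_pred = mu + v, var + q
    k = var_pred / (var_pred + r)          # Kalman gain
    mu_post = mu_pred + k * (z - mu_pred)  # belief pulled toward the visual fix
    var_post = (1.0 - k) * var_pred
    return mu_post, var_post
```

Without visual fixes the variance grows linearly with travel (pure path integration); each landmark observation pulls it back down. It is this trade-off that, in the paper's account, produces the distortions of homing behavior in anisotropic and deformed environments.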
Affiliation(s)
- Yul HR Kang: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Biological and Experimental Psychology, Queen Mary University of London, London, UK
- Daniel M Wolpert: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA
- Máté Lengyel: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary

24
Applegate MC, Gutnichenko KS, Mackevicius EL, Aronov D. An entorhinal-like region in food-caching birds. bioRxiv 2023:2023.01.05.522940. [PMID: 36711539] [PMCID: PMC9881956] [DOI: 10.1101/2023.01.05.522940]
Abstract
The mammalian entorhinal cortex routes inputs from diverse sources into the hippocampus. This information is mixed and expressed in the activity of many specialized entorhinal cell types, which are considered indispensable for hippocampal function. However, functionally similar hippocampi exist even in non-mammals that lack an obvious entorhinal cortex, or generally any layered cortex. To address this dilemma, we mapped extrinsic hippocampal connections in chickadees, whose hippocampi are used for remembering numerous food caches. We found a well-delineated structure in these birds that is topologically similar to the entorhinal cortex and interfaces between the hippocampus and other pallial regions. Recordings of this structure revealed entorhinal-like activity, including border and multi-field grid-like cells. These cells were localized to the subregion predicted by anatomical mapping to match the dorsomedial entorhinal cortex. Our findings uncover an anatomical and physiological equivalence of vastly different brains, suggesting a fundamental nature of entorhinal-like computations for hippocampal function.
Affiliation(s)
- Dmitriy Aronov: Zuckerman Mind Brain Behavior Institute, Columbia University

25
Zhang X, Long X, Zhang SJ, Chen ZS. Excitatory-inhibitory recurrent dynamics produce robust visual grids and stable attractors. Cell Rep 2022; 41:111777. [PMID: 36516752] [PMCID: PMC9805366] [DOI: 10.1016/j.celrep.2022.111777]
Abstract
Spatially modulated grid cells have recently been found in the rat secondary visual cortex (V2) during active navigation. However, the computational mechanism and functional significance of V2 grid cells remain unknown. To address this knowledge gap, we train a biologically inspired excitatory-inhibitory recurrent neural network (RNN) to perform a two-dimensional spatial navigation task with multisensory input. We find grid-like responses in both excitatory and inhibitory RNN units, which are robust with respect to spatial cues, the dimensionality of visual input, and the activation function. Population responses reveal a low-dimensional, torus-like manifold and attractor. We find a link between functional grid clusters with similar receptive fields and structured excitatory-to-excitatory connections. Additionally, multistable torus-like attractors emerge with increasing sparsity in inter- and intra-subnetwork connectivity. Finally, irregular grid patterns are found in RNN units during a visual sequence recognition task. Together, our results suggest common computational mechanisms of V2 grid cells for spatial and non-spatial tasks.
Affiliation(s)
- Xiaohan Zhang: Department of Psychiatry, New York University Grossman School of Medicine, New York, NY, USA
- Xiaoyang Long: Department of Neurosurgery, Xinqiao Hospital, Chongqing, China
- Sheng-Jia Zhang: Department of Neurosurgery, Xinqiao Hospital, Chongqing, China
- Zhe Sage Chen: Department of Psychiatry, New York University Grossman School of Medicine, New York, NY, USA; Department of Neurosurgery, Xinqiao Hospital, Chongqing, China; Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA