1. Mininni CJ, Zanutto BS. Constructing neural networks with pre-specified dynamics. Sci Rep 2024;14:18860. PMID: 39143351; PMCID: PMC11324765; DOI: 10.1038/s41598-024-69747-z.
Abstract
A main goal in neuroscience is to understand the computations carried out by the neural populations that give animals their cognitive skills. Neural network models make it possible to formulate explicit hypotheses about the algorithms instantiated in the dynamics of a neural population, its firing statistics, and the underlying connectivity. Neural networks can be defined by a small set of parameters, carefully chosen to procure specific capabilities, or by a large set of free parameters fitted with optimization algorithms that minimize a given loss function. In this work we propose an alternative: a method for making a detailed adjustment of the network dynamics and firing statistics, better suited to questions that link dynamics, structure, and function. Our algorithm, termed generalised Firing-to-Parameter (gFTP), provides a way to construct binary recurrent neural networks whose dynamics strictly follow a user-specified transition graph detailing the transitions between population firing states triggered by stimulus presentations. Our main contribution is a procedure that detects when a transition graph is not realisable as a neural network and modifies it to obtain a new transition graph that is realisable and preserves all the information encoded in the transitions of the original graph. Given a realisable transition graph, gFTP assigns values to the network firing states associated with each node and finds the synaptic weight matrices by solving a set of linear separation problems. We test gFTP by constructing networks with random dynamics, continuous attractor-like dynamics that encode position in 2-dimensional space, and discrete attractor dynamics. We then show how gFTP can be employed as a tool to explore the link between structure, function, and the algorithms instantiated in the network dynamics.
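The final construction step described in the abstract, solving one linear separation problem per neuron to obtain the weight matrix, can be illustrated with a toy perceptron fit. This is a generic stand-in with made-up data, not the authors' gFTP code: each neuron's incoming weights must linearly separate the population states that should drive it to fire from those that should not.

```python
import numpy as np

def fit_neuron_weights(states, targets, lr=0.1, epochs=200):
    """Perceptron fit of one neuron's incoming weights so that its
    next-step binary output matches `targets` for each population state.
    Minimal sketch of the 'linear separation' step, not gFTP itself."""
    n_samples, n_inputs = states.shape
    w = np.zeros(n_inputs)
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(states, targets):
            y = 1 if x @ w + b > 0 else 0
            w += lr * (t - y) * x  # standard perceptron update
            b += lr * (t - y)
    return w, b

# Toy transition table (hypothetical): rows are current binary population
# states; targets are the desired next-step firing of a single neuron.
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 1, 1, 1])  # linearly separable (OR-like)
w, b = fit_neuron_weights(states, targets)
pred = (states @ w + b > 0).astype(int)
```

Repeating this fit for every neuron, against the state transitions prescribed by the graph, yields a full weight matrix; gFTP's realisability check guarantees each such problem is in fact separable.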
Affiliation(s)
- Camilo J Mininni
- Instituto de Biología y Medicina Experimental, Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina.
- B Silvano Zanutto
- Instituto de Biología y Medicina Experimental, Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Instituto de Ingeniería Biomédica, Universidad de Buenos Aires, Buenos Aires, Argentina
2. Payne HL, Raymond JL, Goldman MS. Interactions between circuit architecture and plasticity in a closed-loop cerebellar system. eLife 2024;13:e84770. PMID: 38451856; PMCID: PMC10919899; DOI: 10.7554/elife.84770.
Abstract
Determining the sites and directions of plasticity underlying changes in neural activity and behavior is critical for understanding mechanisms of learning. Identifying such plasticity from neural recording data can be challenging due to feedback pathways that impede reasoning about cause and effect. We studied interactions between feedback, neural activity, and plasticity in the context of a closed-loop motor learning task for which there is disagreement about the loci and directions of plasticity: vestibulo-ocular reflex learning. We constructed a set of circuit models that differed in the strength of their recurrent feedback, from no feedback to very strong feedback. Despite these differences, each model successfully fit a large set of neural and behavioral data. However, the patterns of plasticity predicted by the models fundamentally differed, with the direction of plasticity at a key site changing from depression to potentiation as feedback strength increased. Guided by our analysis, we suggest how such models can be experimentally disambiguated. Our results address a long-standing debate regarding cerebellum-dependent motor learning, suggesting a reconciliation in which learning-related changes in the strength of synaptic inputs to Purkinje cells are compatible with seemingly oppositely directed changes in Purkinje cell spiking activity. More broadly, these results demonstrate how changes in neural activity over learning can appear to contradict the sign of the underlying plasticity when either internal feedback or feedback through the environment is present.
Affiliation(s)
- Hannah L Payne
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Mark S Goldman
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, United States
- Department of Ophthalmology and Vision Science, University of California, Davis, Davis, United States
3. Nardin M, Csicsvari J, Tkačik G, Savin C. The Structure of Hippocampal CA1 Interactions Optimizes Spatial Coding across Experience. J Neurosci 2023;43:8140-8156. PMID: 37758476; PMCID: PMC10697404; DOI: 10.1523/jneurosci.0194-23.2023.
Abstract
Although much is known about how single neurons in the hippocampus represent an animal's position, how circuit interactions contribute to spatial coding is less well understood. Using a novel statistical estimator and theoretical modeling, both developed in the framework of maximum entropy models, we reveal highly structured CA1 cell-cell interactions in male rats during open field exploration. The statistics of these interactions depend on whether the animal is in a familiar or novel environment. In both conditions the circuit interactions optimize the encoding of spatial information, but for regimes that differ in the informativeness of their spatial inputs. This structure facilitates linear decodability, making the information easy to read out by downstream circuits. Overall, our findings suggest that the efficient coding hypothesis is not only applicable to individual neuron properties in the sensory periphery, but also to neural interactions in the central brain.

Significance Statement: Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here, we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in naturalistic settings.
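The maximum entropy framework the abstract relies on can be made concrete with a toy pairwise (Ising-like) model over binary spike patterns, P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), where the couplings J_ij play the role of the cell-cell interactions the paper estimates. All parameter values below are hand-picked for illustration, not fit to any data:

```python
import numpy as np
from itertools import product

# Hand-picked fields h_i and pairwise couplings J_ij for 3 neurons.
h = np.array([-0.5, 0.2, 0.0])
J = np.array([[0.0, 1.0, -0.5],
              [0.0, 0.0, 0.3],
              [0.0, 0.0, 0.0]])  # upper-triangular couplings (i < j)

# Enumerate all 2^3 binary spike patterns and their (negative) energies.
patterns = np.array(list(product([0, 1], repeat=3)))
energies = patterns @ h + np.einsum('ki,ij,kj->k', patterns, J, patterns)
P = np.exp(energies)
P /= P.sum()  # normalize into a probability distribution
```

A positive J_01 makes neurons 0 and 1 co-fire more often than their fields alone would predict; fitting h and J to measured firing rates and pairwise correlations is exactly the estimation problem such studies solve (here with brute-force enumeration, which only works for small populations).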
Affiliation(s)
- Michele Nardin
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Cristina Savin
- Center for Neural Science, New York University, New York, New York 10003
- Center for Data Science, New York University, New York, New York 10011
4. Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023;24:363-377. PMID: 37055616; PMCID: PMC11058347; DOI: 10.1038/s41583-023-00693-x.
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
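The low-rank connectivity idea at the end of the abstract can be sketched in a few lines: a rank-one weight matrix confines linear network dynamics to a one-dimensional manifold (the line spanned by a vector m), however many neurons participate. This is a generic illustration with arbitrary numbers, not a model taken from the review:

```python
import numpy as np

# Rank-one connectivity embeds a line manifold in an N-dimensional network:
# with W = m m^T / (m.m), the dynamics tau dr/dt = -r + W r leave the
# direction m invariant while all orthogonal activity decays with the
# membrane time constant. Arbitrary illustrative numbers throughout.
rng = np.random.default_rng(1)
N, tau, dt = 200, 0.02, 0.001
m = rng.standard_normal(N)
W = np.outer(m, m) / (m @ m)  # rank-1 weights, unit eigenvalue along m
r = rng.standard_normal(N)    # random initial population state
for _ in range(2000):         # 2 s of Euler integration
    r = r + (dt / tau) * (-r + W @ r)
# Surviving activity is confined to the manifold spanned by m:
overlap = abs(r @ m) / (np.linalg.norm(r) * np.linalg.norm(m))
```

Here the manifold (the line along m) and the circuit (the rank-one matrix W) are two descriptions of the same object, which is the correspondence the review argues for.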
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
5. Borst A, Leibold C. Connecting Connectomes to Physiology. J Neurosci 2023;43:3599-3610. PMID: 37197984; PMCID: PMC10198452; DOI: 10.1523/jneurosci.2208-22.2023.
Abstract
With the advent of volumetric EM techniques, large connectomic datasets are being created, providing neuroscience researchers with knowledge of the full connectivity of the neural circuits under study. This allows numerical simulation of detailed biophysical models of each neuron participating in the circuit. However, these models typically include a large number of parameters, and insight into which of these are essential for circuit function is not readily obtained. Here, we review two mathematical strategies for gaining insight into connectomics data: linear dynamical systems analysis and matrix reordering techniques. Such analytical treatment allows us to make predictions about time constants of information processing and functional subunits in large networks.

Significance Statement: This viewpoint provides a concise overview of how to extract important insights from connectomics data with mathematical methods. First, it explains how new dynamics and new time constants can arise simply through connectivity between neurons. These new time constants can be far longer than the intrinsic membrane time constants of the individual neurons. Second, it summarizes how structural motifs in the circuit can be discovered. Specifically, there are tools to decide whether a circuit is strictly feedforward or whether feedback connections exist. Only by reordering connectivity matrices can such motifs be made visible.
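The first strategy, linear dynamical systems analysis, rests on a fact a short sketch makes concrete: in a linear rate network, each eigenmode of the connectivity matrix relaxes with an effective time constant tau / (1 - Re(lambda)), so strong recurrence stretches timescales far beyond the membrane time constant. The numbers below are illustrative, not from the paper:

```python
import numpy as np

# Linear rate network: tau * dr/dt = -r + W @ r. Each eigenmode of W
# relaxes with effective time constant tau_eff = tau / (1 - Re(lambda)),
# so connectivity alone can generate timescales much longer than the
# membrane time constant.
tau = 0.02  # 20 ms membrane time constant
W = np.array([[0.0, 0.9],
              [0.9, 0.0]])  # two mutually exciting neurons
lam = np.linalg.eigvals(W).real  # eigenvalues: +0.9 and -0.9
tau_eff = tau / (1 - lam)        # per-mode effective time constants
```

With coupling 0.9, the symmetric mode relaxes ten times more slowly than an isolated neuron; the matrix reordering techniques the review also covers address the complementary question of circuit structure rather than timescales.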
Affiliation(s)
- Alexander Borst
- Max-Planck Institute for Biological Intelligence, Department Circuits-Computation-Models, Martinsried, Germany
- Christian Leibold
- Fakultät für Biologie & Bernstein Center Freiburg, Albert-Ludwigs-Universität Freiburg, D-79104 Freiburg, Germany
6. Fang C, Aronov D, Abbott LF, Mackevicius EL. Neural learning rules for generating flexible predictions and computing the successor representation. eLife 2023;12:e80680. PMID: 36928104; PMCID: PMC10019889; DOI: 10.7554/elife.80680.
Abstract
The predictive nature of the hippocampus is thought to be useful for memory-guided cognitive behaviors. Inspired by the reinforcement learning literature, this notion has been formalized as a predictive map called the successor representation (SR). The SR captures a number of observations about hippocampal activity. However, the algorithm does not provide a neural mechanism for how such representations arise. Here, we show that the dynamics of a recurrent neural network naturally calculate the SR when the synaptic weights match the transition probability matrix. Interestingly, the predictive horizon can be flexibly modulated simply by changing the network gain. We derive simple, biologically plausible learning rules that learn the SR in a recurrent network. We test our model with realistic inputs and match hippocampal data recorded during random foraging. Taken together, our results suggest that the SR is more accessible in neural circuits than previously thought and can support a broad range of cognitive functions.
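The computation described in the abstract has a compact closed form: the SR is the discounted sum of expected future state occupancies, M = sum_k (gamma*T)^k = inv(I - gamma*T), with T the state-transition probability matrix and the discount gamma playing the role of the network gain. The sketch below evaluates this closed form on a toy environment; it is not the authors' network implementation or learning rules:

```python
import numpy as np

def successor_representation(T, gamma):
    """SR as the closed-form geometric series M = inv(I - gamma * T)."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Symmetric random walk on a 3-state ring (hypothetical environment).
T = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
M_short = successor_representation(T, gamma=0.5)   # low gain, short horizon
M_long = successor_representation(T, gamma=0.95)   # high gain, long horizon
```

Raising the gain lengthens the predictive horizon: since T is row-stochastic, each row of M sums to 1/(1 - gamma), so expected future occupancy grows as gamma approaches 1.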
Affiliation(s)
- Ching Fang
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Dmitriy Aronov
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- LF Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Emily L Mackevicius
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Basis Research Institute, New York, United States
7. Biswas T, Fitzgerald JE. Geometric framework to predict structure from function in neural networks. Phys Rev Res 2022;4:023255. PMID: 37635906; PMCID: PMC10456994; DOI: 10.1103/physrevresearch.4.023255.
Abstract
Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
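The paper's starting observation, that specified steady-state responses impose only linear constraints on a threshold-linear neuron's incoming weights, can be sketched directly: with fewer specified conditions than synapses, the constraints underdetermine the weights, leaving a whole solution space. The code below picks one member of that space (the minimum-norm one) using made-up numbers; it is not the paper's geometric framework itself:

```python
import numpy as np

# Threshold-linear network at steady state: r = max(0, W r + B x).
# For a neuron active in every specified condition the rectifier is
# linear, so each condition imposes one linear constraint on that
# neuron's incoming weights: r_i = w_i . r + b_i . x. With more synapses
# than conditions, (w_i, b_i) is underdetermined, leaving the affine
# solution space the paper characterizes. Made-up numbers throughout.
rng = np.random.default_rng(0)
n_neurons, n_inputs, n_conditions = 4, 3, 2
R = rng.uniform(0.5, 1.5, size=(n_neurons, n_conditions))  # specified rates
X = rng.uniform(0.5, 1.5, size=(n_inputs, n_conditions))   # network inputs
A = np.vstack([R, X])  # rows: presynaptic rates and inputs; cols: conditions
# Minimum-norm member of the solution space for neuron 0's weights
# (first n_neurons entries recurrent, last n_inputs entries feedforward).
w, *_ = np.linalg.lstsq(A.T, R[0], rcond=None)
```

The paper's contribution is to characterize the full solution set rather than a single member, and to derive conditions under which every member must contain a nonzero synapse between a given pair of neurons.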
Affiliation(s)
- Tirthabir Biswas
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Department of Physics, Loyola University, New Orleans, Louisiana 70118, USA
- James E. Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
8. Confavreux B, Vogels TP. A familiar thought: Machines that replace us? Neuron 2022;110:361-362. PMID: 35114107; DOI: 10.1016/j.neuron.2022.01.014.
Abstract
In this issue of Neuron, Tyulmankov et al. (2022) propose a model for familiarity detection whose parameters, including those guiding plasticity, are fully machine-tuned.
Affiliation(s)
- Tim P Vogels
- Institute of Science and Technology Austria, 3400 Klosterneuburg, Austria
9. van Albada SJ, Morales-Gregorio A, Dickscheid T, Goulas A, Bakker R, Bludau S, Palm G, Hilgetag CC, Diesmann M. Bringing Anatomical Information into Neuronal Network Models. Adv Exp Med Biol 2022;1359:201-234. DOI: 10.1007/978-3-030-89439-9_9.
10. Clemens J, Schöneich S, Kostarakos K, Hennig RM, Hedwig B. A small, computationally flexible network produces the phenotypic diversity of song recognition in crickets. eLife 2021;10:e61475. PMID: 34761750; PMCID: PMC8635984; DOI: 10.7554/elife.61475.
Abstract
How neural networks evolved to generate the diversity of species-specific communication signals is unknown. For receivers of these signals, one hypothesis is that novel recognition phenotypes arise from parameter variation in computationally flexible feature-detection networks. We test this hypothesis in crickets, where males generate and females recognize mating songs with a species-specific pulse pattern, by investigating whether the song recognition network in the cricket brain has the computational flexibility to recognize different temporal features. Using electrophysiological recordings from the network that recognizes crucial properties of the pulse pattern on the short timescale in the cricket Gryllus bimaculatus, we built a computational model that reproduces the neuronal and behavioral tuning of that species. An analysis of the model's parameter space reveals that the network can produce all recognition phenotypes for pulse duration and pause known in crickets, and even in other insects. Phenotypic diversity in the model is consistent with known preference types in crickets and other insects, and arises from computations that likely evolved to increase the energy efficiency and robustness of pattern recognition. The model's parameter-to-phenotype mapping is degenerate: different network parameters can create similar changes in the phenotype, which likely supports evolutionary plasticity. Our study suggests that computationally flexible networks underlie the diverse pattern recognition phenotypes, and we reveal network properties that constrain and support behavioral diversity.
Affiliation(s)
- Jan Clemens
- European Neuroscience Institute Göttingen – A Joint Initiative of the University Medical Center Göttingen and the Max-Planck Society, Göttingen, Germany
- BCCN Göttingen, Göttingen, Germany
- Stefan Schöneich
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom
- Friedrich-Schiller-University Jena, Institute for Zoology and Evolutionary Research, Jena, Germany
- Konstantinos Kostarakos
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom
- Institute of Biology, University of Graz, Universitätsplatz, Austria
- R Matthias Hennig
- Humboldt-Universität zu Berlin, Department of Biology, Philippstrasse, Germany
- Berthold Hedwig
- University of Cambridge, Department of Zoology, Cambridge, United Kingdom