1
Brown LS, Cho JR, Bolkan SS, Nieh EH, Schottdorf M, Tank DW, Brody CD, Witten IB, Goldman MS. Neural circuit models for evidence accumulation through choice-selective sequences. bioRxiv 2023:2023.09.01.555612. [PMID: 38234715; PMCID: PMC10793437; DOI: 10.1101/2023.09.01.555612]
Abstract
Decision making is traditionally thought to be mediated by populations of neurons whose firing rates persistently accumulate evidence across time. However, recent decision-making experiments in rodents have observed neurons across the brain that fire sequentially as a function of spatial position or time, rather than persistently, with the subset of neurons in the sequence depending on the animal's choice. We develop two new candidate circuit models, in which evidence is encoded either in the relative firing rates of two competing chains of neurons or in the network location of a stereotyped pattern ("bump") of neural activity. Encoded evidence is then faithfully transferred between neuronal populations representing different positions or times. Neural recordings from four different brain regions during a decision-making task showed that, during the evidence accumulation period, different brain regions displayed tuning curves consistent with different candidate models for evidence accumulation. This work provides mechanistic models and potential neural substrates for how graded-value information may be precisely accumulated within and transferred between neural populations, a set of computations fundamental to many cognitive operations.
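The two candidate mechanisms lend themselves to a toy simulation. The sketch below (illustrative only; the chain length, gain, and the identity handoff between positions are assumptions, not taken from the paper) encodes accumulated evidence in the rate difference between two competing chains:

```python
def competing_chains(evidence, n_positions=10, gain=0.1):
    """Toy discrete-time sketch of the competing-chains model: a +1/-1
    evidence pulse may arrive at each position step, and the running
    count is carried in the difference between the amplitudes of two
    feedforward chains."""
    left, right = 0.5, 0.5              # baseline amplitude of each chain
    for step in range(n_positions):
        pulse = evidence[step] if step < len(evidence) else 0
        left += gain * max(pulse, 0)    # rightward pulse boosts one chain
        right += gain * max(-pulse, 0)  # leftward pulse boosts the other
        # the handoff to the next link in each chain is modeled here as a
        # faithful (identity) transfer of both amplitudes
    return left - right                 # decoded evidence; sign = choice

# net rightward evidence yields a positive readout, net leftward a negative one
assert competing_chains([1, 1, -1, 1]) > 0
assert competing_chains([-1, -1, -1]) < 0
```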
2
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780; PMCID: PMC9700774; DOI: 10.1038/s41386-022-01374-6]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity (defined broadly as the adaptive control of plasticity) forges connections across scales by governing what groups of synapses can and cannot learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia, and Parkinson's disease.
Affiliation(s)
- Daniel N Scott
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
- Michael J Frank
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
3
Parmelee C, Alvarez JL, Curto C, Morrison K. Sequential Attractors in Combinatorial Threshold-Linear Networks. SIAM J Appl Dyn Syst 2022; 21:1597-1630. [PMID: 37485069; PMCID: PMC10362966; DOI: 10.1137/21m1445120]
Abstract
Sequences of neural activity arise in many brain areas, including cortex, hippocampus, and central pattern generator circuits that underlie rhythmic behaviors like locomotion. While network architectures supporting sequence generation vary considerably, a common feature is an abundance of inhibition. In this work, we focus on architectures that support sequential activity in recurrently connected networks with inhibition-dominated dynamics. Specifically, we study emergent sequences in a special family of threshold-linear networks, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. Such networks naturally give rise to an abundance of sequences whose dynamics are tightly connected to the underlying graph. We find that architectures based on generalizations of cycle graphs produce limit cycle attractors that can be activated to generate transient or persistent (repeating) sequences. Each architecture type gives rise to an infinite family of graphs that can be built from arbitrary component subgraphs. Moreover, we prove a number of graph rules for the corresponding CTLNs in each family. The graph rules allow us to strongly constrain, and in some cases fully determine, the fixed points of the network in terms of the fixed points of the component subnetworks. Finally, we also show how the structure of certain architectures gives insight into the sequential dynamics of the corresponding attractor.
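The CTLN construction is concrete enough to sketch. Below is a minimal simulation (eps = 0.25, delta = 0.5, theta = 1 are the standard CTLN defaults; the Euler step and run length are my choices) in which a 3-cycle graph yields cyclic sequential activity rather than a stable fixed point:

```python
import numpy as np

def ctln_weights(edges, n, eps=0.25, delta=0.5):
    """CTLN rule: W[i, j] = -1 + eps if j -> i is an edge of the graph,
    -1 - delta otherwise, and 0 on the diagonal."""
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for j, i in edges:                  # edge j -> i
        W[i, j] = -1.0 + eps
    return W

def simulate(W, theta=1.0, steps=5000, dt=0.01):
    """Euler integration of dx/dt = -x + [W x + theta]_+ ."""
    n = W.shape[0]
    x = np.zeros((steps, n))
    x[0] = np.random.default_rng(0).uniform(0.0, 0.1, n)
    for t in range(1, steps):
        drive = np.maximum(0.0, W @ x[t - 1] + theta)   # threshold-linear
        x[t] = x[t - 1] + dt * (-x[t - 1] + drive)
    return x

# the 3-cycle 0 -> 1 -> 2 -> 0 produces a sequential limit cycle: every
# neuron keeps oscillating instead of settling to the unstable fixed point
x = simulate(ctln_weights([(0, 1), (1, 2), (2, 0)], 3))
```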
Affiliation(s)
- Carina Curto
- Pennsylvania State University, University Park, PA 16802 USA
4
Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. Proc Natl Acad Sci U S A 2020; 117:29948-29958. [PMID: 33177232; PMCID: PMC7703604; DOI: 10.1073/pnas.1918674117]
Abstract
Sequential activity is a prominent feature of many neural systems, observed across species, neural structures, and behaviors, and it has been hypothesized that sequences arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity into sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates; after learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. In the case of the simplest (bilinear) rule, we use mean-field theory to derive a low-dimensional description of the network dynamics, characterize extensively the regions of parameter space that allow sequence retrieval, and compute analytically the storage capacity of the network. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations, and we find that nonlinearities in the learning rule can control the degree of sparseness of the recalled sequences. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified by noise or by storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
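A minimal stand-in for the bilinear rule (using a sign-activation network rather than the paper's rate model; N, P, and the seed are arbitrary choices) shows how temporally asymmetric Hebbian updates store and recall a sequence of random patterns:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 10
xi = rng.choice([-1, 1], size=(P, N))        # random binary patterns

# temporally asymmetric bilinear Hebbian rule: each update associates
# pattern mu with its successor mu + 1
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

x = xi[0].astype(float)
overlaps = []
for m in range(1, P):
    x = np.sign(W @ x)                          # one synchronous update
    overlaps.append(float(np.mean(x * xi[m])))  # correlation with pattern m

# at low load (P << N) the stored sequence is recalled almost perfectly
assert min(overlaps) > 0.9
```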
5
Memory replay in balanced recurrent networks. PLoS Comput Biol 2017; 13:e1005359. [PMID: 28135266; PMCID: PMC5305273; DOI: 10.1371/journal.pcbi.1005359]
Abstract
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global (potentially neuromodulatory) alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.
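The amplification argument reduces, in a linear caricature (my simplification, not the paper's spiking model), to the steady-state gain of a recurrently connected assembly:

```python
def assembly_gain(w_rec):
    """Steady state of a linear rate assembly, r = w_rec * r + input,
    gives r = input / (1 - w_rec): recurrent connections within the
    assembly amplify feedforward drive by a factor 1 / (1 - w_rec)."""
    assert 0.0 <= w_rec < 1.0, "gain is finite only below the instability"
    return 1.0 / (1.0 - w_rec)

# with w_rec = 0.8 each inter-assembly synapse is effectively amplified
# ~5x, so far fewer feedforward connections are needed to trigger replay
assert abs(assembly_gain(0.8) - 5.0) < 1e-9
assert assembly_gain(0.0) == 1.0
```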
6
Raghavan M, Amrutur B, Narayanan R, Sikdar SK. Synconset waves and chains: spiking onsets in synchronous populations predict and are predicted by network structure. PLoS One 2013; 8:e74910. [PMID: 24116018; PMCID: PMC3792941; DOI: 10.1371/journal.pone.0074910]
Abstract
Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be an effective quantification of network activity with clear relations to network structure, their utility is largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, defining a 'synconset wave' as a cascade of first spikes within a synchronisation event. Synconset waves occur in 'synconset chains', which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulations of single-compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we develop a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network, and demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks where only a small subset of neurons was accessible, thus showing its utility in experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest applications of our framework to several aspects of network physiology, including cell assemblies, population codes, and oscillatory synchrony.
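The definition of a synconset wave translates directly into code. A small sketch (hypothetical data; `spikes` maps neuron id to sorted spike times, a representation I am assuming) extracts the cascade of first spikes within a synchronisation event:

```python
def synconset_wave(spikes, t_start, t_end):
    """Return neuron ids ordered by each neuron's *first* spike inside
    the synchronisation event window [t_start, t_end); neurons that do
    not fire in the window are excluded."""
    onsets = []
    for nid, times in spikes.items():
        first = next((t for t in times if t_start <= t < t_end), None)
        if first is not None:
            onsets.append((first, nid))
    return [nid for _, nid in sorted(onsets)]

# hypothetical spike trains: neuron 3 fires outside the event window
spikes = {0: [1.0, 3.2], 1: [1.5, 1.6], 2: [0.4, 2.9], 3: [9.0]}
assert synconset_wave(spikes, 0.0, 4.0) == [2, 0, 1]   # onset cascade
```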
Affiliation(s)
- Mohan Raghavan
- Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka, India
- Bharadwaj Amrutur
- Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, Karnataka, India
- Rishikesh Narayanan
- Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka, India
- Sujit Kumar Sikdar
- Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka, India
7
Abstract
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
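The stated correspondence between ANN weights and group parameters is a single product, sketched below (the function name and example numbers are mine, for illustration):

```python
def effective_weight(p_connect, group_size, psp_height):
    """ANN weight implied by a pair of synchronous groups: the expected
    summed postsynaptic potential a downstream neuron receives when the
    whole presynaptic group fires together is p * n_pre * h, i.e. the
    product of interconnection density, presynaptic group size, and
    postsynaptic potential height."""
    return p_connect * group_size * psp_height

# the same ANN weight can be realised by different biological trade-offs:
# sparser connections with a larger group, or denser ones with a smaller
w1 = effective_weight(0.1, 100, 0.2)
w2 = effective_weight(0.2, 50, 0.2)
assert abs(w1 - w2) < 1e-9
```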
8
Baruchi I, Ben-Jacob E. Towards neuro-memory-chip: imprinting multiple memories in cultured neural networks. Phys Rev E Stat Nonlin Soft Matter Phys 2007; 75:050901. [PMID: 17677014; DOI: 10.1103/physreve.75.050901]
Abstract
We show that using local chemical stimulations it is possible to imprint persisting (days) multiple memories (collective modes of neuron firing) in the activity of cultured neural networks. Microdroplets of inhibitory antagonist are injected at a location selected based on real-time analysis of the recorded activity. The neurons at the stimulated locations turn into a focus for initiating synchronized bursting events (the collective modes) each with its own specific spatiotemporal pattern of neuron firing.
Affiliation(s)
- Itay Baruchi
- School of Physics and Astronomy, Raymond & Beverly Sackler Faculty of Exact Sciences, Tel-Aviv University, Tel-Aviv, Israel
9
Raichelgauz I, Odinaev K, Zeevi YY. Natural signal classification by neural cliques and phase-locked attractors. Conf Proc IEEE Eng Med Biol Soc 2006; Suppl:6693-6697. [PMID: 17959488; DOI: 10.1109/iembs.2006.260923]
Abstract
Cortical neural networks are responsible for identification, recognition and classification of natural signals mediated by various sensory channels. These tasks are still too complex to be accomplished by state-of-the-art engineering systems. There is, therefore, a great deal of interest in the development of suitable biologically motivated architectures based on a realistic model of generic neural ensembles. We present a computational architecture for classification of natural signals, such as physiological signals, based on the emergence of instant neural cliques and phase-locked attractors in liquid architectures. The emergence of instant neural cliques enables mapping of complex classes of signals onto specific spatio-temporal firing patterns. The convergence of neural cliques onto attractors, along phase-locked pathways, reveals a new type of dynamic behavior of neural ensembles, which lends itself to simple discrete-output computational systems.
10
Sevush S. Single-neuron theory of consciousness. J Theor Biol 2005; 238:704-25. [PMID: 16083912; DOI: 10.1016/j.jtbi.2005.06.018]
Abstract
By most accounts, the mind arises from the integrated activity of large populations of neurons distributed across multiple brain regions. A contrasting model is presented here that places the mind/brain interface not at the whole-brain level but at the level of single neurons. Specifically, it is proposed that each neuron in the nervous system is independently conscious, with conscious content corresponding to the spatial pattern of a portion of that neuron's dendritic electrical activity. For most neurons, such as those in the hypothalamus or posterior sensory cortices, the conscious activity would be assumed to be simple and unable to directly affect the organism's macroscopic conscious behavior. For a subpopulation of layer 5 pyramidal neurons in the lateral prefrontal cortices, however, an arrangement is proposed to be present such that, at any given moment: (i) the spatial pattern of electrical activity in a portion of the dendritic tree of each neuron in the subpopulation individually manifests a complexity and diversity sufficient to account for the complexity and diversity of conscious experience; (ii) the dendritic trees of the neurons in the subpopulation all contain similar spatial electrical patterns; (iii) the spatial electrical pattern in the dendritic tree of each neuron interacts non-linearly with the remaining ambient dendritic electrical activity to determine the neuron's overall axonal response; (iv) the dendritic spatial pattern is reexpressed at the population level by the spatial pattern exhibited by a synchronously firing subgroup of the conscious neurons, thereby providing a mechanism by which conscious activity at the neuronal level can influence overall behavior. The resulting scheme is one in which conscious behavior appears to be the product of a single macroscopic mind, but is actually the integrated output of a chorus of minds, each associated with a different neuron.
Affiliation(s)
- Steven Sevush
- Department of Psychiatry, University of Miami School of Medicine, 1400 NW 10 Ave, Suite 702, Miami, FL 33136, USA.
11
Moradi F. Information coding and oscillatory activity in synfire neural networks with and without inhibitory coupling. Biol Cybern 2004; 91:283-294. [PMID: 15452717; DOI: 10.1007/s00422-004-0499-x]
Abstract
When a population spike (pulse packet) propagates through a feedforward network with random excitatory connections, it either evolves to a sustained stable level of synchronous activity or fades away (Diesmann et al., Nature 402:529-533, 1999; Cateau and Fukai, Neural Netw 14:675-685, 2001). Here I demonstrate that in the presence of noise, the probability of survival of the pulse packet (or, equivalently, the firing rate of output neurons) reflects the intensity of the input. Furthermore, inhibitory coupling between layers can result in quasiperiodic alternation between several levels of firing activity. These results are obtained by analyzing the evolution of pulse-packet activity as a Markov chain. For the Markov chain analysis, the output of the chain is a linear mapping of the input into a lower-dimensional space, and the eigenvalues and eigenvectors of the transition matrix determine the dynamics of the evolution. Synchronous propagation of firing activity in successive pools of neurons is simulated in networks of integrate-and-fire and compartmental model neurons, and, consistent with the discrete Markov process, the activation of each pool is observed to be predominantly dependent upon the number of cells that fired in the previous pool. Simulation results agree with the numerical solutions of the Markov model. When inhibitory coupling between layers is included in the Markov model, some eigenvalues become complex, implying oscillatory dynamics. The quasiperiodic dynamics is validated with simulations of leaky integrate-and-fire neurons. The networks demonstrate different modes of quasiperiodic activity as the inhibition or excitation parameters of the network are varied.
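The Markov-chain view of pulse-packet propagation can be sketched with a binomial transition matrix (the sigmoid activation probability and the pool size below are my assumptions, not the paper's fitted values):

```python
import numpy as np
from math import comb

def transition_matrix(n, p_fire):
    """Markov model of pulse-packet propagation: the state k is the
    number of cells that fired in the previous pool, and each of the n
    cells in the next pool fires independently with probability
    p_fire(k), giving binomial transition probabilities in each column."""
    T = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        p = p_fire(k)
        for k2 in range(n + 1):
            T[k2, k] = comb(n, k2) * p**k2 * (1 - p)**(n - k2)
    return T

n = 20
p_fire = lambda k: 1.0 / (1.0 + np.exp(-(k - 8) / 1.5))  # assumed sigmoid
T = transition_matrix(n, p_fire)

dist = np.zeros(n + 1)
dist[15] = 1.0                  # strong initial packet: 15 of 20 cells fired
for _ in range(10):             # propagate the distribution through 10 layers
    dist = T @ dist
survival = dist[n // 2:].sum()  # probability the packet is still alive
assert survival > 0.9           # a strong packet propagates stably
```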
Affiliation(s)
- Farshad Moradi
- School of Intelligent Systems, Institute for Studies in Theoretical Physics and Mathematics, Tehran, Iran.
12
Rodríguez FB, Huerta R. Analysis of perfect mappings of the stimuli through neural temporal sequences. Neural Netw 2004; 17:963-73. [PMID: 15312839; DOI: 10.1016/j.neunet.2003.12.003]
Abstract
The analysis of an optimal neural system that maps stimuli into unique sequences of activations of fundamental atoms, or functional clusters (FCs), is carried out. We call the mapping perfect because the system maps every stimulus with an injective function in minimum time with the least number of FCs, such that every FC is activated only once. The neural system can sustain several sequences in parallel. In this framework, we study the capacity achievable by the system, the minimal completion time, and the complexity in terms of the number of parallel sequences. We show that the maximum capacity of the system is achieved without using parallel sequences, at the expense of long completion times. However, when the capacity value is fixed, the largest possible number of parallel sequences is optimal because it requires short completion times. The complexity measure makes two important points: (i) the largest complexity of the system is achieved without parallel sequences, and (ii) the capacity estimate is a good estimate of the complexity of the system.
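A toy count makes the capacity/completion-time trade-off concrete (this is an illustrative permutation count under the stated "each FC activated only once" constraint, not the paper's exact capacity measure):

```python
from math import perm

def serial_capacity(n_fc, length):
    """Number of distinct stimuli an injective serial mapping can encode
    as ordered sequences of `length` distinct FCs drawn from n_fc,
    each FC activated at most once: n_fc! / (n_fc - length)!."""
    return perm(n_fc, length)

# longer sequences buy capacity at the cost of longer completion time
assert serial_capacity(6, 3) == 120   # 6 * 5 * 4
assert serial_capacity(6, 6) == 720   # 6!
```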
Affiliation(s)
- Francisco B Rodríguez
- GNB, Escuela Técnica Superior de Informática, Ingeniería Informática, Universidad Autónoma de Madrid, Ctra. de Colmenar Viejo, km 15, 28049 Madrid, Spain.
13
|
Tetzlaff T, Morrison A, Geisel T, Diesmann M. Consequences of realistic network size on the stability of embedded synfire chains. Neurocomputing 2004. [DOI: 10.1016/j.neucom.2004.01.031]
14
|
Melamed O, Gerstner W, Maass W, Tsodyks M, Markram H. Coding and learning of behavioral sequences. Trends Neurosci 2004; 27:11-4; discussion 14-5. [PMID: 14698603; DOI: 10.1016/j.tins.2003.10.014]
Affiliation(s)
- Ofer Melamed
- Brain Mind Institute, EPFL, 1015 Lausanne, Switzerland