1. Mobille Z, Sikandar UB, Sponberg S, Choi H. Temporal resolution of spike coding in feedforward networks with signal convergence and divergence. PLoS Comput Biol 2025; 21:e1012971. PMID: 40258062; PMCID: PMC12021431; DOI: 10.1371/journal.pcbi.1012971.
Abstract
Convergent and divergent structures in the networks that make up biological brains are found across many species and brain regions at various spatial scales. Neurons in these networks fire action potentials, or "spikes," whose precise timing is increasingly appreciated as a rich source of information about both sensory input and motor output. In this work, we investigate the extent to which feedforward convergent/divergent network structure is related to the gain in information of spike timing representations over spike count representations. While previous theories on coding in convergent and divergent networks have largely neglected the role of precise spike timing, our model and analyses place this aspect at the forefront. For a suite of stimuli with different timescales, we demonstrate that structural bottlenecks (small groups of neurons post-synaptic to network convergence) have a stronger preference for spike timing codes than expansion layers created by structural divergence. We further show that this relationship generalizes across different spike-generating models and measures of coding capacity, implying a potentially fundamental link between network structure and coding strategy using spikes. Additionally, we found that a simple network model based on convergence and divergence ratios of a hawkmoth (Manduca sexta) nervous system can reproduce the relative contribution of spike timing information in its motor output, providing testable predictions on optimal temporal resolutions of spike coding across the moth sensory-motor pathway at both the single-neuron and population levels.
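
Illustrative sketch (not the paper's code; the two-stimulus toy, bin widths, and plug-in estimator below are our own assumptions): a spike timing code can carry information that a spike count code discards, and binning responses at progressively coarser temporal resolution makes that loss visible.

```python
# A minimal sketch, assuming a toy experiment with two stimuli that evoke the
# same spike count but different spike timing. All names and parameters are
# hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
T = 100.0  # trial duration in ms

def simulate_trial(stim):
    """One spike near 25 ms for stimulus 0, near 75 ms for stimulus 1."""
    center = 25.0 if stim == 0 else 75.0
    return np.clip(rng.normal(center, 5.0, size=1), 0.0, T - 1e-6)

def bin_response(spike_times, dt):
    """Binary spike word at temporal resolution dt (ms)."""
    word = np.zeros(int(np.ceil(T / dt)), dtype=int)
    word[(spike_times / dt).astype(int)] = 1
    return tuple(word)

def plugin_mi(stims, words):
    """Plug-in mutual information (bits) between stimulus label and spike word."""
    n = len(stims)
    joint = {}
    for s, w in zip(stims, words):
        joint[(s, w)] = joint.get((s, w), 0) + 1
    p_s = {s: stims.count(s) / n for s in set(stims)}
    p_w = {}
    for (_, w), c in joint.items():
        p_w[w] = p_w.get(w, 0) + c / n
    return sum((c / n) * np.log2((c / n) / (p_s[s] * p_w[w]))
               for (s, w), c in joint.items())

stims = [int(rng.integers(2)) for _ in range(2000)]
spikes = [simulate_trial(s) for s in stims]
for dt in (5.0, 25.0, 100.0):  # fine to coarse temporal resolution
    words = [bin_response(sp, dt) for sp in spikes]
    print(f"dt = {dt:5.1f} ms -> I(stimulus; response) ~ {plugin_mi(stims, words):.2f} bits")
# At dt = 100 ms the word collapses to a spike count, identical for both
# stimuli, so the timing information is gone.
```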
Affiliation(s)
- Zach Mobille
  - School of Mathematics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
  - Interdisciplinary Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Usama Bin Sikandar
  - School of Physics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
  - School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Simon Sponberg
  - Interdisciplinary Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
  - School of Physics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
  - School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Hannah Choi
  - School of Mathematics, Georgia Institute of Technology, Atlanta, Georgia, United States of America
  - Interdisciplinary Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America

2. Neri M, Brovelli A, Castro S, Fraisopi F, Gatica M, Herzog R, Mediano PAM, Mindlin I, Petri G, Bor D, Rosas FE, Tramacere A, Estarellas M. A Taxonomy of Neuroscientific Strategies Based on Interaction Orders. Eur J Neurosci 2025; 61:e16676. PMID: 39906974; DOI: 10.1111/ejn.16676.
Abstract
In recent decades, neuroscience has advanced with increasingly sophisticated strategies for recording and analysing brain activity, enabling detailed investigations into the roles of functional units, such as individual neurons, brain regions and their interactions. Recently, new strategies for investigating cognitive functions have turned to the study of higher-order interactions, that is, interactions involving more than two brain regions or neurons. Although methods focusing on individual units and their interactions at various levels offer valuable and often complementary insights, each approach comes with its own set of limitations. In this context, a conceptual map to categorize and locate diverse strategies could be crucial to orient researchers and guide future research directions. To this end, we define the spectrum of orders of interaction, namely, a framework that categorizes the interactions among neurons or brain regions based on the number of elements involved in these interactions. We use a simulation of a toy model and a few case studies to demonstrate the utility and the challenges of exploring this spectrum. We conclude by proposing future research directions aimed at enhancing our understanding of brain function and cognition through a more nuanced methodological framework.
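
Illustrative sketch (not from the paper; the Gaussian-entropy estimator and signal model are our own assumptions): one way to move along the spectrum of interaction orders is to compare a second-order quantity (pairwise mutual information) with a third-order one (co-information) on the same three signals.

```python
# A minimal sketch, assuming jointly Gaussian signals so entropies can be
# computed from covariance log-determinants. Variable names are ours.
import numpy as np

def gaussian_entropy(x):
    """Differential entropy (nats) of the columns of x, assumed jointly Gaussian."""
    cov = np.atleast_2d(np.cov(x, rowvar=False))
    k = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(cov))

def mutual_info(x, y):
    """Second-order (pairwise) interaction."""
    return gaussian_entropy(x) + gaussian_entropy(y) - gaussian_entropy(np.column_stack([x, y]))

def co_information(x, y, z):
    """Third-order interaction: positive ~ redundancy, negative ~ synergy."""
    h = gaussian_entropy
    return (h(x) + h(y) + h(z)
            - h(np.column_stack([x, y])) - h(np.column_stack([x, z])) - h(np.column_stack([y, z]))
            + h(np.column_stack([x, y, z])))

rng = np.random.default_rng(1)
n = 20000
common = rng.normal(size=n)                 # a shared driver makes the triplet redundant
x = common + 0.5 * rng.normal(size=n)
y = common + 0.5 * rng.normal(size=n)
z = common + 0.5 * rng.normal(size=n)
print("pairwise I(x;y):       ", round(mutual_info(x, y), 3), "nats")
print("triplet co-information:", round(co_information(x, y, z), 3), "nats")
```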
Affiliation(s)
- Matteo Neri
  - Institut de Neurosciences de la Timone, Aix-Marseille Université, UMR 7289 CNRS, Marseille, France
- Andrea Brovelli
  - Institut de Neurosciences de la Timone, Aix-Marseille Université, UMR 7289 CNRS, Marseille, France
- Samy Castro
  - Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), UMR 7364, Strasbourg, France
  - Institut de Neurosciences des Systèmes (INS), Aix-Marseille Université, UMR 1106, Marseille, France
- Fausto Fraisopi
  - Institute for Advanced Study, Aix-Marseille University, Marseille, France
- Marilyn Gatica
  - NPLab, Network Science Institute, Northeastern University London, London, UK
- Ruben Herzog
  - DreamTeam, Paris Brain Institute (ICM), Paris, France
- Pedro A M Mediano
  - Department of Computing, Imperial College London, London, UK
  - Division of Psychology and Language Sciences, University College London, London, UK
- Ivan Mindlin
  - DreamTeam, Paris Brain Institute (ICM), Paris, France
  - PICNIC lab, Paris Brain Institute (ICM), Paris, France
- Giovanni Petri
  - NPLab, Network Science Institute, Northeastern University London, London, UK
  - Department of Physics, Northeastern University, Boston, Massachusetts, USA
  - NPLab, CENTAI Institute, Turin, Italy
- Daniel Bor
  - Department of Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
  - Department of Psychology, University of Cambridge, Cambridge, UK
- Fernando E Rosas
  - Sussex Centre for Consciousness Science and Sussex AI, Department of Informatics, University of Sussex, Brighton, UK
  - Center for Psychedelic Research and Centre for Complexity Science, Department of Brain Science, Imperial College London, London, UK
  - Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
  - Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS), Prague, Czechia
- Antonella Tramacere
  - Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy
- Mar Estarellas
  - Department of Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
  - Department of Psychology, University of Cambridge, Cambridge, UK

3. Mobille Z, Sikandar UB, Sponberg S, Choi H. Temporal resolution of spike coding in feedforward networks with signal convergence and divergence. bioRxiv [Preprint] 2024:2024.07.08.602598. PMID: 39026834; PMCID: PMC11257569; DOI: 10.1101/2024.07.08.602598.
Abstract
Convergent and divergent structures in the networks that make up biological brains are found across many species and brain regions at various spatial scales. Neurons in these networks fire action potentials, or "spikes," whose precise timing is increasingly appreciated as a rich source of information about both sensory input and motor output. While previous theories on coding in convergent and divergent networks have largely neglected the role of precise spike timing, our model and analyses place this aspect at the forefront. For a suite of stimuli with different timescales, we demonstrate that structural bottlenecks (small groups of neurons post-synaptic to network convergence) have a stronger preference for spike timing codes than expansion layers created by structural divergence. Additionally, we found that a simple network model based on convergence and divergence ratios of a hawkmoth (Manduca sexta) nervous system can reproduce the relative contribution of spike timing information in its motor output, providing testable predictions on optimal temporal resolutions of spike coding across the moth sensory-motor pathway at both the single-neuron and population levels. Our simulations and analyses suggest a relationship between the level of convergent/divergent structure present in a feedforward network and the loss of stimulus information encoded by its population spike trains as their temporal resolution decreases, which could be tested experimentally across diverse neural systems in future studies. We further show that this relationship can be generalized across different spike-generating models and measures of coding capacity, implying a potentially fundamental link between network structure and coding strategy using spikes.
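
Illustrative sketch (not the paper's model; layer sizes, fan-in values, and thresholds are arbitrary): wiring a three-layer feedforward network with convergence into a small bottleneck layer and divergence into a larger expansion layer, the kind of structure the abstract refers to.

```python
# A minimal sketch, assuming arbitrary layer sizes, fan-in values, and firing
# thresholds; none of these numbers come from the paper.
import numpy as np

rng = np.random.default_rng(0)

def feedforward_weights(n_pre, n_post, fan_in, weight=1.0):
    """Each postsynaptic cell receives exactly `fan_in` randomly chosen inputs."""
    W = np.zeros((n_post, n_pre))
    for j in range(n_post):
        W[j, rng.choice(n_pre, size=fan_in, replace=False)] = weight
    return W

n_input, n_bottleneck, n_expansion = 100, 10, 400   # convergence into 10 cells, divergence into 400
W_conv = feedforward_weights(n_input, n_bottleneck, fan_in=30)
W_div = feedforward_weights(n_bottleneck, n_expansion, fan_in=3)

# Push one binary input pattern through a simple threshold nonlinearity.
x = (rng.random(n_input) < 0.2).astype(float)
bottleneck = (W_conv @ x > 0.2 * W_conv.sum(axis=1)).astype(float)
expansion = (W_div @ bottleneck > 0.5).astype(float)
print("active fraction per layer:", x.mean(), bottleneck.mean(), expansion.mean())
```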
Affiliation(s)
- Zach Mobille
  - School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332
  - Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332
- Usama Bin Sikandar
  - School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
  - School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Simon Sponberg
  - Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332
  - School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
  - School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Hannah Choi
  - School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332
  - Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332

4. Dragoi G. The generative grammar of the brain: a critique of internally generated representations. Nat Rev Neurosci 2024; 25:60-75. PMID: 38036709; PMCID: PMC11878217; DOI: 10.1038/s41583-023-00763-0.
Abstract
The past decade of progress in neurobiology has uncovered important organizational principles for network preconfiguration and neuronal selection that suggest a generative grammar exists in the brain. In this Perspective, I discuss the competence of the hippocampal neural network to generically express temporally compressed sequences of neuronal firing that represent novel experiences, which is envisioned as a form of generative neural syntax supporting a neurobiological perspective on brain function. I compare this neural competence with the hippocampal network performance that represents specific experiences with higher fidelity after new learning during replay, which is envisioned as a form of neural semantic that supports a complementary neuropsychological perspective. I also demonstrate how the syntax of network competence emerges a priori during early postnatal life and is followed by the later development of network performance that enables rapid encoding and memory consolidation. Thus, I propose that this generative grammar of the brain is essential for internally generated representations, which are crucial for the cognitive processes underlying learning and memory, prospection, and inference, which ultimately underlie our reason and representation of the world.
Affiliation(s)
- George Dragoi
  - Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
  - Department of Neuroscience, Yale University School of Medicine, New Haven, CT, USA
  - Wu Tsai Institute, Yale University, New Haven, CT, USA

5. Sundiang M, Hatsopoulos NG, MacLean JN. Dynamic structure of motor cortical neuron coactivity carries behaviorally relevant information. Netw Neurosci 2023; 7:661-678. PMID: 37397877; PMCID: PMC10312288; DOI: 10.1162/netn_a_00298.
Abstract
Skillful, voluntary movements are underpinned by computations performed by networks of interconnected neurons in the primary motor cortex (M1). Computations are reflected by patterns of coactivity between neurons. Using pairwise spike time statistics, coactivity can be summarized as a functional network (FN). Here, we show that the structure of FNs constructed from an instructed-delay reach task in nonhuman primates is behaviorally specific: Low-dimensional embedding and graph alignment scores show that FNs constructed from closer target reach directions are also closer in network space. Using short intervals across a trial, we constructed temporal FNs and found that temporal FNs traverse a low-dimensional subspace in a reach-specific trajectory. Alignment scores show that FNs become separable and correspondingly decodable shortly after the Instruction cue. Finally, we observe that reciprocal connections in FNs transiently decrease following the Instruction cue, consistent with the hypothesis that information external to the recorded population temporarily alters the structure of the network at this moment.
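
Illustrative sketch (not the paper's pipeline; the spike-dependency rule, lag window, and thresholds are our own choices): building a functional network (FN) from pairwise spike-time statistics and comparing two FNs by edge overlap, a crude stand-in for the alignment scores described above.

```python
# A minimal sketch, assuming a crude spike-dependency rule (co-firing within a
# short lag) and arbitrary thresholds; not the paper's FN construction.
import numpy as np
import networkx as nx

def functional_network(spikes, lag=1, threshold=3):
    """spikes: (n_neurons, n_bins) binary array. Add edge i -> j when spikes of
    i are followed by spikes of j within `lag` bins at least `threshold` times."""
    n, T = spikes.shape
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            count = 0
            for d in range(1, lag + 1):
                count += int(np.sum(spikes[i, :T - d] * spikes[j, d:]))
            if count >= threshold:
                G.add_edge(i, j, weight=count)
    return G

def edge_overlap(G1, G2):
    """Jaccard overlap of edge sets, a crude similarity between two FNs."""
    e1, e2 = set(G1.edges()), set(G2.edges())
    return len(e1 & e2) / max(1, len(e1 | e2))

rng = np.random.default_rng(0)
spk_a = (rng.random((30, 500)) < 0.05).astype(int)   # e.g., one reach direction
spk_b = (rng.random((30, 500)) < 0.05).astype(int)   # e.g., another direction
FN_a, FN_b = functional_network(spk_a), functional_network(spk_b)
print("edges:", FN_a.number_of_edges(), FN_b.number_of_edges(),
      "| edge overlap:", round(edge_overlap(FN_a, FN_b), 3))
```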
Affiliation(s)
- Marina Sundiang
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Nicholas G. Hatsopoulos
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
  - University of Chicago Neuroscience Institute, Chicago, IL, USA
  - Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
- Jason N. MacLean
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
  - University of Chicago Neuroscience Institute, Chicago, IL, USA
  - Department of Neurobiology, University of Chicago, Chicago, IL, USA

6.

Abstract
Networks are fundamental for our understanding of complex systems. The study of networks has uncovered common principles that underlie the behavior of vastly different fields of study, including physics, biology, sociology, and engineering. One of these common principles is the existence of network motifs-small recurrent patterns that can provide certain features that are important for the specific network. However, it remains unclear how network motifs are joined in real networks to make larger circuits and what properties emerge from interactions between network motifs. Here, we develop a framework to explore the mesoscale-level behavior of complex networks. Considering network motifs as hypernodes, we define the rules for their interaction at the network's next level of organization. We develop a method to infer the favorable arrangements of interactions between network motifs into hypermotifs from real evolved and designed network data. We mathematically explore the emergent properties of these higher-order circuits and their relations to the properties of the individual minimal circuit components they combine. We apply this framework to biological, neuronal, social, linguistic, and electronic networks and find that network motifs are not randomly distributed in real networks but are combined in a way that both maintains autonomy and generates emergent properties. This framework provides a basis for exploring the mesoscale structure and behavior of complex systems where it can be used to reveal intermediate patterns in complex networks and to identify specific nodes and links in the network that are the key drivers of the network's emergent properties.
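
Illustrative sketch (not from the paper; the planted-motif toy and the density-matched null are our own choices): counting three-node motifs in a directed network and comparing against a random null is the first step before asking, as the abstract does, how motifs are joined into larger hypermotifs.

```python
# A minimal sketch, assuming a toy directed graph with planted feedforward
# triangles and a density-matched random null; nothing here is from the paper.
import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(60, 0.04, directed=True, seed=0)
# Plant extra feedforward triangles (a -> b, a -> c, b -> c) on random triples.
for _ in range(40):
    a, b, c = random.sample(list(G.nodes()), 3)
    G.add_edges_from([(a, b), (a, c), (b, c)])

null = nx.gnp_random_graph(60, nx.density(G), directed=True, seed=1)
census, census_null = nx.triadic_census(G), nx.triadic_census(null)
print("feedforward triads (030T):", census["030T"], "vs null:", census_null["030T"])
print("cyclic triads      (030C):", census["030C"], "vs null:", census_null["030C"])
```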

7. Miller DR, Guenther DT, Maurer AP, Hansen CA, Zalesky A, Khoshbouei H. Dopamine Transporter Is a Master Regulator of Dopaminergic Neural Network Connectivity. J Neurosci 2021; 41:5453-5470. PMID: 33980544; PMCID: PMC8221606; DOI: 10.1523/jneurosci.0223-21.2021.
Abstract
Dopaminergic neurons of the substantia nigra pars compacta (SNC) and ventral tegmental area (VTA) exhibit spontaneous firing activity. The dopaminergic neurons in these regions have been shown to exhibit differential sensitivity to neuronal loss and psychostimulants targeting the dopamine transporter. However, it remains unclear whether these regional differences scale beyond individual neuronal activity to regional neuronal networks. Here, we used live-cell calcium imaging to show that network connectivity greatly differs between SNC and VTA regions, with a higher incidence of hub-like neurons in the VTA. Specifically, the frequency of hub-like neurons was significantly lower in SNC than in the adjacent VTA, consistent with the interpretation of a lower network resilience to SNC neuronal loss. We tested this hypothesis in DAT-cre/loxP-GCaMP6f mice of either sex by suppressing the activity of individual dopaminergic neurons, via whole-cell patch-clamp electrophysiology, in either SNC or VTA networks. Neuronal loss in the SNC increased network clustering, whereas the larger number of hub neurons in the VTA overcompensated by decreasing network clustering in the VTA. We further show that network properties are regulated via a dopamine transporter-dependent, but not D2 receptor-dependent, mechanism. Our results demonstrate novel regulatory mechanisms of functional network topology in dopaminergic brain regions.

SIGNIFICANCE STATEMENT: In this work, we begin to untangle the differences in complex network properties between the substantia nigra pars compacta (SNC) and VTA that may underlie their differential sensitivity. The methods and analysis employed provide a springboard for investigations of network topology in multiple deep brain structures and disorders.
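
Illustrative sketch (not the paper's analysis; the correlation threshold and the mean + 2 SD hub criterion are arbitrary): quantifying hub incidence and clustering in a functional network built from thresholded pairwise correlations of calcium traces.

```python
# A minimal sketch, assuming toy calcium traces, an arbitrary correlation
# threshold (0.2), and an arbitrary hub criterion (degree > mean + 2 SD).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_cells, n_frames = 80, 2000
traces = rng.normal(size=(n_cells, n_frames))
traces[:40] += 0.6 * rng.normal(size=n_frames)   # one co-active group
traces[40:] += 0.6 * rng.normal(size=n_frames)   # a second co-active group

corr = np.corrcoef(traces)
np.fill_diagonal(corr, 0.0)
G = nx.from_numpy_array((corr > 0.2).astype(int))

degrees = np.array([d for _, d in G.degree()])
hubs = np.flatnonzero(degrees > degrees.mean() + 2 * degrees.std())
print("hub fraction:", len(hubs) / n_cells)
print("mean clustering coefficient:", round(nx.average_clustering(G), 3))
```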
Affiliation(s)
- Douglas R Miller
  - Department of Neuroscience, University of Florida, Gainesville, Florida
- Dylan T Guenther
  - Department of Neuroscience, University of Florida, Gainesville, Florida
- Andrew P Maurer
  - Department of Neuroscience, University of Florida, Gainesville, Florida
- Carissa A Hansen
  - Department of Neuroscience, University of Florida, Gainesville, Florida
- Andrew Zalesky
  - Melbourne Neuropsychiatry Centre, The University of Melbourne and Melbourne Health, Melbourne, Victoria 3010, Australia
  - Department of Biomedical Engineering, Melbourne School of Engineering, The University of Melbourne, Melbourne, Victoria 3010, Australia

8. Bojanek K, Zhu Y, MacLean J. Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLoS Comput Biol 2020; 16:e1007409. PMID: 32997658; PMCID: PMC7549833; DOI: 10.1371/journal.pcbi.1007409.
Abstract
A basic, yet nontrivial, function which neocortical circuitry must satisfy is the ability to maintain stable spiking activity over time. Stable neocortical activity is asynchronous, critical, and low rate, and these features of spiking dynamics contribute to efficient computation and optimal information propagation. However, it remains unclear how neocortex maintains this asynchronous spiking regime. Here we algorithmically construct spiking neural network models, each composed of 5000 neurons. Network construction synthesized topological statistics from neocortex with a set of objective functions identifying naturalistic low-rate, asynchronous, and critical activity. We find that simulations run on the same topology exhibit sustained asynchronous activity under certain sets of initial membrane voltages but truncated activity under others. Synchrony, rate, and criticality do not provide a full explanation of this dichotomy. Consequently, in order to achieve mechanistic understanding of sustained asynchronous activity, we summarized activity as functional graphs where edges between units are defined by pairwise spike dependencies. We then analyzed the intersection between functional edges and synaptic connectivity, i.e., recruitment networks. Higher-order patterns, such as triplet or triangle motifs, have been tied to cooperativity and integration. We find, over time in each sustained simulation, low-variance periodic transitions between isomorphic triangle motifs in the recruitment networks. We quantify the phenomenon as a Markov process and discover that if the network fails to engage this stereotyped regime of motif dominance “cycling”, spiking activity truncates early. Cycling of motif dominance generalized across manipulations of synaptic weights and topologies, demonstrating the robustness of this regime for maintenance of network activity. Our results point to the crucial role of excitatory higher-order patterns in sustaining asynchronous activity in sparse recurrent networks. They also provide a possible explanation why such connectivity and activity patterns have been prominently reported in neocortex.

Neocortical spiking activity tends to be low-rate and non-rhythmic, and to operate near the critical point of a phase transition. It remains unclear how this kind of spiking activity can be maintained within a neuronal network. Neurons are leaky and individual synaptic connections are sparse and weak, making the maintenance of an asynchronous regime a nontrivial problem. Higher order patterns involving more than two units abound in neocortex, and several lines of evidence suggest that they may be instrumental for brain function. For example, stable activity in vivo displays elevated clustering dominated by specific three-node (triplet) motifs. In this study we demonstrate a link between the maintenance of asynchronous activity and triplet motifs. We algorithmically build spiking neural network models to mimic the topology of neocortex and the spiking statistics that characterize wakefulness. We show that higher order coordination of synapses is always present during sustained asynchronous activity. Coordination takes the form of transitions in time between specific triangle motifs. These motifs summarize the way spikes traverse the underlying synaptic topology. The results of our model are consistent with numerous experimental observations, and their generalizability to other weakly and sparsely connected networks is predicted.
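
Illustrative sketch (not the paper's statistics; the three motif labels and the toy state sequence are invented): once each time window is labeled by its dominant triangle motif, the cycling described above can be summarized as a Markov transition matrix estimated from the label sequence.

```python
# A minimal sketch, assuming three arbitrary motif labels (0, 1, 2) and a toy
# label sequence that cycles with some noise; not the paper's data.
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized count matrix of state -> next-state transitions."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

rng = np.random.default_rng(0)
states = [0]
for _ in range(500):   # dominant-motif label per time window
    nxt = (states[-1] + 1) % 3 if rng.random() < 0.8 else rng.integers(3)
    states.append(int(nxt))

print(np.round(transition_matrix(states, 3), 2))
# Near-cyclic "motif dominance cycling" appears as large off-diagonal entries.
```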
Affiliation(s)
- Kyle Bojanek
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Yuqing Zhu
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Jason MacLean
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
  - Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
  - Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America

9. Levy M, Sporns O, MacLean JN. Network Analysis of Murine Cortical Dynamics Implicates Untuned Neurons in Visual Stimulus Coding. Cell Rep 2020; 31:107483. PMID: 32294431; PMCID: PMC7218481; DOI: 10.1016/j.celrep.2020.03.047.
Abstract
Unbiased and dense sampling of large populations of layer 2/3 pyramidal neurons in mouse primary visual cortex (V1) reveals two functional sub-populations: neurons tuned and untuned to drifting gratings. Whether functional interactions between these two groups contribute to the representation of visual stimuli is unclear. To examine these interactions, we summarize the population partial pairwise correlation structure as a directed and weighted graph. We find that tuned and untuned neurons have distinct topological properties, with untuned neurons occupying central positions in functional networks (FNs). Implementation of a decoder that utilizes the topology of these FNs yields accurate decoding of visual stimuli. We further show that decoding performance degrades comparably following manipulations of either tuned or untuned neurons. Our results demonstrate that untuned neurons are an integral component of V1 FNs and suggest that network interactions contain information about the stimulus that is accessible to downstream elements.
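
Illustrative sketch (the paper uses an asymmetric, directed partial-correlation measure; the symmetric precision-matrix version and the group labels below are simplifications of our own): estimating partial correlations, building a weighted functional network, and comparing the centrality of two labeled subgroups.

```python
# A minimal sketch, assuming a symmetric precision-matrix estimate of partial
# correlations (the paper's measure is asymmetric) and invented group labels.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_cells, n_frames = 60, 3000
activity = rng.normal(size=(n_cells, n_frames))
activity[:20] += 0.5 * rng.normal(size=n_frames)   # a correlated subgroup

precision = np.linalg.inv(np.cov(activity))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

G = nx.from_numpy_array(np.abs(partial_corr))        # weighted, undirected here
strength = np.array([s for _, s in G.degree(weight="weight")])
print("mean strength, correlated subgroup:", round(strength[:20].mean(), 2))
print("mean strength, remaining cells:    ", round(strength[20:].mean(), 2))
```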
Affiliation(s)
- Maayan Levy
  - Committee on Computational Neuroscience, The University of Chicago, Chicago, IL 60637, USA
- Olaf Sporns
  - Indiana University Network Science Institute, Indiana University, Bloomington, IN 47405, USA
  - Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Jason N MacLean
  - Committee on Computational Neuroscience, The University of Chicago, Chicago, IL 60637, USA
  - Department of Neurobiology, The University of Chicago, Chicago, IL 60637, USA
  - Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior

10. Kotekal S, MacLean JN. Recurrent interactions can explain the variance in single trial responses. PLoS Comput Biol 2020; 16:e1007591. PMID: 31999693; PMCID: PMC7012453; DOI: 10.1371/journal.pcbi.1007591.
Abstract
To develop a complete description of sensory encoding, it is necessary to account for trial-to-trial variability in cortical neurons. Using a linear model with terms corresponding to the visual stimulus, mouse running speed, and experimentally measured neuronal correlations, we modeled short term dynamics of L2/3 murine visual cortical neurons to evaluate the relative importance of each factor to neuronal variability within single trials. We find single trial predictions improve most when conditioning on the experimentally measured local correlations in comparison to predictions based on the stimulus or running speed. Specifically, accurate predictions are driven by positively co-varying and synchronously active functional groups of neurons. Including functional groups in the model enhances decoding accuracy of sensory information compared to a model that assumes neuronal independence. Functional groups, in encoding and decoding frameworks, provide an operational definition of Hebbian assemblies in which local correlations largely explain neuronal responses on individual trials.
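
Illustrative sketch (not the paper's model; the toy data and coefficient values are invented): a linear model with stimulus, running-speed, and coupling terms, showing how a coupling term built from the rest of the population can absorb shared single-trial variability.

```python
# A minimal sketch, assuming invented single-trial data in which a shared
# population fluctuation drives most of the target neuron's variability.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000
stimulus = rng.integers(0, 2, size=n_trials).astype(float)    # binary stimulus
running = rng.random(n_trials)                                 # running speed
shared = rng.normal(size=n_trials)                             # shared fluctuation
population = shared[:, None] + rng.normal(size=(n_trials, 30))
coupling = population.mean(axis=1)                             # coactivity term

target = 0.5 * stimulus + 0.2 * running + 1.0 * shared + 0.3 * rng.normal(size=n_trials)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("stimulus + running only, R^2:", round(r_squared(np.column_stack([stimulus, running]), target), 2))
print("with coupling term,      R^2:", round(r_squared(np.column_stack([stimulus, running, coupling]), target), 2))
```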
Affiliation(s)
- Subhodh Kotekal
  - Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Jason N. MacLean
  - Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
  - Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America

11. Miller DR, Lebowitz JJ, Guenther DT, Refowich AJ, Hansen C, Maurer AP, Khoshbouei H. Methamphetamine regulation of activity and topology of ventral midbrain networks. PLoS One 2019; 14:e0222957. PMID: 31536584; PMCID: PMC6752877; DOI: 10.1371/journal.pone.0222957.
Abstract
The ventral midbrain supports a variety of functions through the heterogeneity of its neurons. Dopaminergic and GABA neurons within this region are particularly susceptible targets of amphetamine-class psychostimulants such as methamphetamine. While this has been evidenced through single-neuron methods, it remains unclear whether and to what extent the local neuronal network is affected and, if so, by which mechanisms. Both GABAergic and dopaminergic neurons were prominently represented within the primary ventral midbrain network model system. Using spontaneous calcium activity, our data suggest that methamphetamine decreased total network output in a D2 receptor-dependent manner. Over culture duration, functional connectivity between neurons decreased significantly but was unaffected by methamphetamine. However, across culture duration, exposure to methamphetamine significantly altered changes in network assortativity. Here we have established primary ventral midbrain network cultures as a viable model system that reveals specific changes in network activity, connectivity, and topology modulated by methamphetamine. This network culture system enables control over the type and number of neurons that comprise a network and facilitates detection of emergent properties that arise from the specific organization. Thus, the multidimensional properties of methamphetamine can be unraveled, leading to a better understanding of its impact on the local network structure and function.
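
Illustrative sketch (not the paper's analysis; the toy network and perturbation are our own): degree assortativity of a network can be tracked before and after a perturbation, analogous to the assortativity changes reported above.

```python
# A minimal sketch, assuming a hub-heavy toy graph and an invented perturbation
# that removes edges from the highest-degree nodes; not the paper's data.
import networkx as nx

G = nx.barabasi_albert_graph(100, 3, seed=0)
print("baseline degree assortativity:", round(nx.degree_assortativity_coefficient(G), 3))

# Perturbation: drop half of the edges attached to the five highest-degree nodes.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
for node, _ in hubs:
    edges = list(G.edges(node))
    G.remove_edges_from(edges[: len(edges) // 2])
print("assortativity after perturbation:", round(nx.degree_assortativity_coefficient(G), 3))
```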
Affiliation(s)
- Douglas R. Miller
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
- Joseph J. Lebowitz
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
- Dylan T. Guenther
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
- Alexander J. Refowich
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
- Carissa Hansen
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
- Andrew P. Maurer
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America
  - McKnight Brain Institute, University of Florida, Gainesville, FL, United States of America
  - Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States of America
  - Department of Civil and Coastal Engineering, University of Florida, Gainesville, FL, United States of America
- Habibeh Khoshbouei
  - Department of Neuroscience, University of Florida, Gainesville, FL, United States of America

12. Curto C, Morrison K. Relating network connectivity to dynamics: opportunities and challenges for theoretical neuroscience. Curr Opin Neurobiol 2019; 58:11-20. PMID: 31319287; DOI: 10.1016/j.conb.2019.06.003.
Abstract
We review recent work relating network connectivity to the dynamics of neural activity. While concepts stemming from network science provide a valuable starting point, the interpretation of graph-theoretic structures and measures can be highly dependent on the dynamics associated to the network. Properties that are quite meaningful for linear dynamics, such as random walk and network flow models, may be of limited relevance in the neuroscience setting. Theoretical and computational neuroscience are playing a vital role in understanding the relationship between network connectivity and the nonlinear dynamics associated to neural networks.
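
Illustrative sketch of the review's central caution (our own toy, not from the article): the same connectivity matrix can relate to activity very differently under linear versus threshold-linear node dynamics, so a graph measure such as weighted in-degree need not predict activity equally well in both cases.

```python
# A minimal sketch, assuming a sparse random weight matrix and simple rate
# dynamics; the models and parameters are ours, not the review's.
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = (rng.random((n, n)) < 0.1) * rng.normal(0, 0.3, size=(n, n))
np.fill_diagonal(W, 0.0)
x0 = 0.1 * rng.normal(size=n)

def simulate(W, x0, nonlinearity, steps=400, dt=0.1, drive=0.1):
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + W @ nonlinearity(x) + drive)
    return nonlinearity(x)   # report output rates

linear = simulate(W, x0, lambda v: v)
relu = simulate(W, x0, lambda v: np.maximum(v, 0.0))
in_strength = W.sum(axis=1)
print("corr(in-strength, rate), linear dynamics:", round(float(np.corrcoef(in_strength, linear)[0, 1]), 2))
print("corr(in-strength, rate), ReLU dynamics:  ", round(float(np.corrcoef(in_strength, relu)[0, 1]), 2))
```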
Affiliation(s)
- Carina Curto
  - The Pennsylvania State University, PA 16802, United States
- Katherine Morrison
  - School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, USA

13. Fan X, Markram H. A Brief History of Simulation Neuroscience. Front Neuroinform 2019; 13:32. PMID: 31133838; PMCID: PMC6513977; DOI: 10.3389/fninf.2019.00032.
Abstract
Our knowledge of the brain has evolved over millennia in philosophical, experimental and theoretical phases. We suggest that the next phase is simulation neuroscience. The main drivers of simulation neuroscience are big data generated at multiple levels of brain organization and the need to integrate these data to trace the causal chain of interactions within and across all these levels. Simulation neuroscience is currently the only methodology for systematically approaching the multiscale brain. In this review, we attempt to reconstruct the deep historical paths leading to simulation neuroscience, from the first observations of the nerve cell to modern efforts to digitally reconstruct and simulate the brain. Neuroscience began with the identification of the neuron as the fundamental unit of brain structure and function and has evolved towards understanding the role of each cell type in the brain, how brain cells are connected to each other, and how the seemingly infinite networks they form give rise to the vast diversity of brain functions. Neuronal mapping is evolving from subjective descriptions of cell types towards objective classes, subclasses and types. Connectivity mapping is evolving from loose topographic maps between brain regions towards dense anatomical and physiological maps of connections between individual genetically distinct neurons. Functional mapping is evolving from psychological and behavioral stereotypes towards a map of behaviors emerging from structural and functional connectomes. We show how industrialization of neuroscience and the resulting large disconnected datasets are generating demand for integrative neuroscience, how the scale of neuronal and connectivity maps is driving digital atlasing and digital reconstruction to piece together the multiple levels of brain organization, and how the complexity of the interactions between molecules, neurons, microcircuits and brain regions is driving brain simulation to understand the interactions in the multiscale brain.
Affiliation(s)
- Xue Fan
  - Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland

14. Chambers B, Levy M, Dechery JB, MacLean JN. Ensemble stacking mitigates biases in inference of synaptic connectivity. Netw Neurosci 2018; 2:60-85. PMID: 29911678; PMCID: PMC5989998; DOI: 10.1162/netn_a_00032.
Abstract
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
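
Illustrative sketch (not the paper's pipeline; the two synthetic inference scores and the simulated ground truth are invented): stacking several imperfect connectivity-inference scores with a simple logistic-regression combiner trained on simulated ground truth.

```python
# A minimal sketch, assuming two synthetic inference scores with different
# noise and a simulated ground-truth connectivity; not the paper's methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs = 5000
truth = rng.random(n_pairs) < 0.1                  # ground-truth synapses (10% of pairs)
score_a = truth + 0.8 * rng.normal(size=n_pairs)   # "method A" score
score_b = truth + 0.8 * rng.laplace(size=n_pairs)  # "method B" score, different errors

X = np.column_stack([score_a, score_b])
train = np.arange(n_pairs) < n_pairs // 2
test = ~train
stacker = LogisticRegression().fit(X[train], truth[train])
ensemble = stacker.predict_proba(X[test])[:, 1]

print("AUC, method A alone:  ", round(roc_auc_score(truth[test], score_a[test]), 3))
print("AUC, method B alone:  ", round(roc_auc_score(truth[test], score_b[test]), 3))
print("AUC, stacked ensemble:", round(roc_auc_score(truth[test], ensemble), 3))
```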
Affiliation(s)
- Brendan Chambers
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Maayan Levy
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Joseph B Dechery
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Jason N MacLean
  - Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
  - Department of Neurobiology, University of Chicago, Chicago, IL, USA

15. Dechery JB, MacLean JN. Functional triplet motifs underlie accurate predictions of single-trial responses in populations of tuned and untuned V1 neurons. PLoS Comput Biol 2018; 14:e1006153. PMID: 29727448; PMCID: PMC5955581; DOI: 10.1371/journal.pcbi.1006153.
Abstract
Visual stimuli evoke activity in visual cortical neuronal populations. Neuronal activity can be selectively modulated by particular visual stimulus parameters, such as the direction of a moving bar of light, resulting in well-defined trial averaged tuning properties. However, given any single stimulus parameter, a large number of neurons in visual cortex remain unmodulated, and the role of this untuned population is not well understood. Here, we use two-photon calcium imaging to record, in an unbiased manner, from large populations of layer 2/3 excitatory neurons in mouse primary visual cortex to describe co-varying activity on single trials in neuronal populations consisting of both tuned and untuned neurons. Specifically, we summarize pairwise covariability with an asymmetric partial correlation coefficient, allowing us to analyze the resultant population correlation structure, or functional network, with graph theory. Using the graph neighbors of a neuron, we find that the local population, including both tuned and untuned neurons, are able to predict individual neuron activity on a moment to moment basis, while also recapitulating tuning properties of tuned neurons. Variance explained in total population activity scales with the number of neurons imaged, demonstrating larger sample sizes are required to fully capture local network interactions. We also find that a specific functional triplet motif in the graph results in the best predictions, suggesting a signature of informative correlations in these populations. In summary, we show that unbiased sampling of the local population can explain single trial response variability as well as trial-averaged tuning properties in V1, and the ability to predict responses is tied to the occurrence of a functional triplet motif.
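
Illustrative sketch (not the paper's estimator; the toy tuning curve and shared-fluctuation model are invented): predicting a target neuron's single-trial activity from its functional-graph neighbors in addition to its trial-averaged tuning.

```python
# A minimal sketch, assuming an invented tuning curve and a shared fluctuation
# carried by the target's functional-graph neighbors; not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neighbors = 800, 15
stim = rng.integers(0, 4, size=n_trials)                  # four stimulus directions
tuning = np.array([1.0, 0.4, 0.1, 0.4])                   # target's trial-averaged tuning
trial_noise = rng.normal(size=n_trials)                   # shared single-trial fluctuation
target = tuning[stim] + 0.8 * trial_noise + 0.3 * rng.normal(size=n_trials)
neighbors = trial_noise[:, None] + rng.normal(size=(n_trials, n_neighbors))

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("tuning-only prediction, R^2:      ", round(r_squared(tuning[stim][:, None], target), 2))
print("tuning + neighbor prediction, R^2:",
      round(r_squared(np.column_stack([tuning[stim], neighbors.mean(axis=1)]), target), 2))
```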
Affiliation(s)
- Joseph B. Dechery
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Jason N. MacLean
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
  - Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America

16. Imbalanced amplification: A mechanism of amplification and suppression from local imbalance of excitation and inhibition in cortical circuits. PLoS Comput Biol 2018; 14:e1006048. PMID: 29543827; PMCID: PMC5871018; DOI: 10.1371/journal.pcbi.1006048.
Abstract
Understanding the relationship between external stimuli and the spiking activity of cortical populations is a central problem in neuroscience. Dense recurrent connectivity in local cortical circuits can lead to counterintuitive response properties, raising the question of whether there are simple arithmetical rules for relating circuits’ connectivity structure to their response properties. One such arithmetic is provided by the mean field theory of balanced networks, which is derived in a limit where excitatory and inhibitory synaptic currents precisely balance on average. However, balanced network theory is not applicable to some biologically relevant connectivity structures. We show that cortical circuits with such structure are susceptible to an amplification mechanism arising when excitatory-inhibitory balance is broken at the level of local subpopulations, but maintained at a global level. This amplification, which can be quantified by a linear correction to the classical mean field theory of balanced networks, explains several response properties observed in cortical recordings and provides fundamental insights into the relationship between connectivity structure and neural responses in cortical circuits.

Understanding how the brain represents and processes stimuli requires a quantitative understanding of how signals propagate through networks of neurons. Developing such an understanding is made difficult by the dense interconnectivity of neurons, especially in the cerebral cortex. One approach to quantifying neural processing in the cortex is derived from observations that excitatory (positive) and inhibitory (negative) interactions between neurons tend to balance each other in many brain areas. This balance is achieved under a class of computational models called “balanced networks.” However, previous approaches to the mathematical analysis of balanced network models are not applicable under some biologically relevant connectivity structures. We show that, under these structures, balance between excitation and inhibition is necessarily broken and the resulting imbalance causes some stimulus features to be amplified. This “imbalanced amplification” of stimuli can explain several observations from recordings in mouse somatosensory and visual cortical circuits and provides fundamental insights into the relationship between connectivity structure and neural responses in cortical circuits.
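
Illustrative sketch of the classical balanced-network mean-field relation that the paper corrects (the two-population weights and inputs below are arbitrary, and this is not the paper's derivation): in the large-network limit the rates satisfy W r + X = 0, and the formal solution signals broken balance when it would require negative rates.

```python
# A minimal sketch of the textbook balanced-network mean-field relation, with
# arbitrary two-population weights and inputs; not the paper's correction.
import numpy as np

W = np.array([[1.0, -2.0],    # onto E: from E, from I
              [1.2, -1.8]])   # onto I: from E, from I
X = np.array([1.0, 0.8])      # external drive to E and I

# Balanced limit: W r + X = 0, i.e., r = -W^{-1} X.
print("balanced-limit rates (E, I):", np.round(np.linalg.solve(W, -X), 3))

# With much stronger drive to the inhibitory population, the formal solution
# requires a negative excitatory rate: precise balance cannot hold locally,
# which is the regime where corrections to the classical theory are needed.
X_uneven = np.array([1.0, 1.2])
print("formal solution, uneven drive:", np.round(np.linalg.solve(W, -X_uneven), 3))
```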

17. Dechery JB, MacLean JN. Emergent cortical circuit dynamics contain dense, interwoven ensembles of spike sequences. J Neurophysiol 2017; 118:1914-1925. PMID: 28724786; DOI: 10.1152/jn.00394.2017.
Abstract
Temporal codes are theoretically powerful encoding schemes, but their precise form in the neocortex remains unknown in part because of the large number of possible codes and the difficulty in disambiguating informative spikes from statistical noise. A biologically plausible and computationally powerful temporal coding scheme is the Hebbian assembly phase sequence (APS), which predicts reliable propagation of spikes between functionally related assemblies of neurons. Here, we sought to measure the inherent capacity of neocortical networks to produce reliable sequences of spikes, as would be predicted by an APS code. To record microcircuit activity, the scale at which computation is implemented, we used two-photon calcium imaging to densely sample spontaneous activity in murine neocortical networks ex vivo. We show that the population spike histogram is sufficient to produce a spatiotemporal progression of activity across the population. To more comprehensively evaluate the capacity for sequential spiking that cannot be explained by the overall population spiking, we identify statistically significant spike sequences. We found a large repertoire of sequence spikes that collectively comprise the majority of spiking in the circuit. Sequences manifest probabilistically and share neuron membership, resulting in unique ensembles of interwoven sequences characterizing individual spatiotemporal progressions of activity. Distillation of population dynamics into its constituent sequences provides a way to capture trial-to-trial variability and may prove to be a powerful decoding substrate in vivo. Informed by these data, we suggest that the Hebbian APS be reformulated as interwoven sequences with flexible assembly membership due to shared overlapping neurons.

NEW & NOTEWORTHY: Neocortical computation occurs largely within microcircuits comprised of individual neurons and their connections within small volumes (<500 μm³). We found evidence for a long-postulated temporal code, the Hebbian assembly phase sequence, by identifying repeated and co-occurring sequences of spikes. Variance in population activity across trials was explained in part by the ensemble of active sequences. The presence of interwoven sequences suggests that neuronal assembly structure can be variable and is determined by previous activity.
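
Illustrative sketch (far simpler than the paper's sequence detection; the latent firing order, jitter, and shuffle control are invented): a quick check of whether the order in which neurons first fire is preserved across trials, using rank-order correlation.

```python
# A minimal sketch, assuming an invented latent firing order with Gaussian
# jitter and a shuffle control; far simpler than the paper's sequence search.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_neurons, n_trials = 20, 30
template = rng.random(n_neurons) * 100.0                 # latent first-spike times (ms)
first_spikes = template + 5.0 * rng.normal(size=(n_trials, n_neurons))

def mean_rank_corr(latencies):
    """Mean Spearman correlation of each trial's firing order with trial 0."""
    rhos = []
    for t in range(1, latencies.shape[0]):
        rho, _ = spearmanr(latencies[0], latencies[t])
        rhos.append(rho)
    return float(np.mean(rhos))

shuffled = np.array([rng.permutation(row) for row in first_spikes])
print("rank-order consistency, data:   ", round(mean_rank_corr(first_spikes), 2))
print("rank-order consistency, shuffle:", round(mean_rank_corr(shuffled), 2))
```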
Affiliation(s)
- Joseph B Dechery
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Jason N MacLean
  - Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
  - Department of Neurobiology, University of Chicago, Chicago, Illinois

18. Reimann MW, Nolte M, Scolamiero M, Turner K, Perin R, Chindemi G, Dłotko P, Levi R, Hess K, Markram H. Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function. Front Comput Neurosci 2017; 11:48. PMID: 28659782; PMCID: PMC5467434; DOI: 10.3389/fncom.2017.00048.
Abstract
The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.
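
Illustrative sketch (a tiny fraction of the paper's topological analysis; the random toy graph is our own): counting directed 3-cliques, the 2-simplices of a directed flag complex, where a fully connected triple admits a source-to-sink ordering.

```python
# A minimal sketch, assuming a small random directed graph; counts directed
# 3-cliques (2-simplices of the directed flag complex), nothing more.
import itertools
import networkx as nx

def directed_3_cliques(G):
    """Count vertex orderings (x, y, z) with edges x->y, x->z, y->z.
    Reciprocally connected triples can contribute several orderings,
    matching the directed flag complex convention."""
    count = 0
    for triple in itertools.combinations(G.nodes(), 3):
        for x, y, z in itertools.permutations(triple):
            if G.has_edge(x, y) and G.has_edge(x, z) and G.has_edge(y, z):
                count += 1
    return count

G = nx.gnp_random_graph(40, 0.15, directed=True, seed=0)
print("directed 3-cliques:", directed_3_cliques(G))
print("reciprocal edges:  ", sum(1 for u, v in G.edges() if G.has_edge(v, u)) // 2)
```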
Affiliation(s)
- Michael W Reimann
  - Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Max Nolte
  - Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Martina Scolamiero
  - Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Katharine Turner
  - Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Rodrigo Perin
  - Laboratory of Neural Microcircuitry, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Giuseppe Chindemi
  - Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Ran Levi
  - Institute of Mathematics, University of Aberdeen, Aberdeen, United Kingdom
- Kathryn Hess
  - Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Henry Markram
  - Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
  - Laboratory of Neural Microcircuitry, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland