1
Wang D, Yeop Lee K, Lee D, Kagan ZB, Bradley K. 10 kHz spinal cord stimulation improves metrics of spinal sensory processing in a male STZ rat model of diabetes. Neurosci Lett 2024; 842:137990. PMID: 39278460. DOI: 10.1016/j.neulet.2024.137990.
Abstract
To explore why clinical 10 kHz spinal cord stimulation (10 kHz SCS) might improve neurological function in a model of painful diabetic neuropathy (PDN), the short-term behavioral, electrophysiological, and histological effects of 10 kHz SCS were studied in adult male streptozotocin (STZ)-induced diabetic Sprague-Dawley rats. Four testing groups were established: Naïve controls (N = 8), STZ controls (N = 7), STZ+Sham SCS (N = 9), and STZ+10 kHz SCS (N = 11). After intraperitoneal injection of STZ (60 mg/kg) rendered the rats hyperglycemic, SCS electrodes were implanted in the dorsal epidural space over the L5-L6 spinal segments in the STZ+Sham SCS and STZ+10 kHz SCS groups, and stimulation was applied for 14 days. The von Frey filament paw withdrawal threshold was measured weekly. At termination, animals were anesthetized and the electrophysiologic responses of dorsal horn neurons (receptive field size, vibration, radiant warmth) of the ipsilateral foot were measured. Tissue from the plantar paw surface was obtained after euthanasia for intraepidermal nerve fiber density measurements. Compared with the control groups, 10 kHz SCS had no significant effect on peripheral intraepidermal nerve fiber density, but it 'normalized' the central neural responses to vibration, receptive field size, and paw withdrawal threshold, and elevated the neural response during tissue recovery from warm stimuli. These results suggest that short-term, low-intensity 10 kHz SCS acts in the spinal cord to ameliorate compromised sensory processing and may compensate for reduced peripheral sensory function caused by chronic hyperglycemia, thereby treating a broader spectrum of the sensory symptoms of diabetic neuropathy.
Affiliation(s)
- Dong Wang
- Nevro Corp, 1800 Bridge Pkwy, Redwood City, CA 94065, USA
- Kwan Yeop Lee
- Nevro Corp, 1800 Bridge Pkwy, Redwood City, CA 94065, USA
- Dongchul Lee
- Nevro Corp, 1800 Bridge Pkwy, Redwood City, CA 94065, USA
- Kerry Bradley
- Nevro Corp, 1800 Bridge Pkwy, Redwood City, CA 94065, USA
2
Barzan R, Bozkurt B, Nejad MM, Süß ST, Surdin T, Böke H, Spoida K, Azimi Z, Grömmke M, Eickelbeck D, Mark MD, Rohr L, Siveke I, Cheng S, Herlitze S, Jancke D. Gain control of sensory input across polysynaptic circuitries in mouse visual cortex by a single G protein-coupled receptor type (5-HT2A). Nat Commun 2024; 15:8078. PMID: 39277631. PMCID: PMC11401874. DOI: 10.1038/s41467-024-51861-1.
Abstract
Response gain is a crucial means by which modulatory systems control the impact of sensory input. In the visual cortex, the serotonergic 5-HT2A receptor is key in such modulation. However, because the receptor is expressed across different cell types and methods for its specific activation have been lacking, the underlying network mechanisms remain unresolved. Here we optogenetically activate endogenous G protein-coupled receptor (GPCR) signaling of a single receptor subtype in distinct mouse neocortical subpopulations in vivo. We show that photoactivation of the 5-HT2A receptor pathway in pyramidal neurons enhances firing of both excitatory neurons and interneurons, whereas 5-HT2A photoactivation in parvalbumin interneurons produces bidirectional effects. Combined photoactivation in both cell types, together with cortical network modelling, demonstrates a conductance-driven polysynaptic mechanism that controls the gain of visual input without affecting ongoing baseline levels. Our study opens avenues to explore GPCR neuromodulation and its impact on sensory-driven activity and ongoing neuronal dynamics.
Affiliation(s)
- Ruxandra Barzan
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- MEDICE Arzneimittel Pütter GmbH & Co. KG, Iserlohn, Germany
- Beyza Bozkurt
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Mohammadreza M Nejad
- Computational Neuroscience, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Sandra T Süß
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Tatjana Surdin
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Hanna Böke
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Katharina Spoida
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Zohre Azimi
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Michelle Grömmke
- Behavioral Neuroscience, Ruhr University Bochum, Bochum, Germany
- Dennis Eickelbeck
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Melanie D Mark
- Behavioral Neuroscience, Ruhr University Bochum, Bochum, Germany
- Lennard Rohr
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Ida Siveke
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Computational Neuroscience, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Stefan Herlitze
- Department of Zoology and Neurobiology, Ruhr University Bochum, Bochum, Germany
- Dirk Jancke
- Optical Imaging Group, Institut für Neuroinformatik, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
3
Zhu V, Rosenbaum R. Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model. Neural Comput 2024; 36:1568-1600. PMID: 39028956. DOI: 10.1162/neco_a_01681.
Abstract
In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
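The training problem described above starts from computing the fixed point itself. As a minimal sketch (not the paper's reparameterized method, and using an arbitrary toy network), a fixed point of a rate model r = f(Wr + x) can be found by forward iteration when the dynamics are contracting, and a loss can then be evaluated on it:

```python
import numpy as np

def find_fixed_point(W, x, f=np.tanh, iters=500):
    """Iterate r <- f(W r + x); converges when the map is a contraction."""
    r = np.zeros(W.shape[0])
    for _ in range(iters):
        r = f(W @ r + x)
    return r

rng = np.random.default_rng(0)
n = 20
W = 0.1 * rng.standard_normal((n, n))   # weak coupling -> contraction, fixed point exists
x = rng.standard_normal(n)
r_star = find_fixed_point(W, x)

# Loss evaluated on the fixed point, e.g. squared error to a target rate vector;
# training the weights W to minimize this loss is the problem the paper studies.
target = np.zeros(n)
loss = np.mean((r_star - target) ** 2)

# r_star satisfies the fixed-point equation up to numerical tolerance
residual = np.max(np.abs(r_star - np.tanh(W @ r_star + x)))
print(residual < 1e-10)
```

Differentiating such a loss with respect to W (by backpropagating through the iteration or by implicit differentiation) is where the singularities the paper analyzes can arise under naive Euclidean gradient descent.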
Affiliation(s)
- Vicky Zhu
- Babson College, Mathematics, Analytics, Science, and Technology Division, Wellesley, MA 02481, USA
- Robert Rosenbaum
- University of Notre Dame, Department of Applied and Computational Mathematics and Statistics, Notre Dame, IN 46556, USA
4
Evaluating the extent to which homeostatic plasticity learns to compute prediction errors in unstructured neuronal networks. J Comput Neurosci 2022; 50:357-373. PMID: 35657570. DOI: 10.1007/s10827-022-00820-0.
Abstract
The brain is believed to operate in part by making predictions about sensory stimuli and encoding deviations from these predictions in the activity of "prediction error neurons." This principle defines the widely influential theory of predictive coding. The precise circuitry and plasticity mechanisms through which animals learn to compute and update their predictions are unknown. Homeostatic inhibitory synaptic plasticity is a promising mechanism for training neuronal networks to perform predictive coding. Homeostatic plasticity causes neurons to maintain a steady, baseline firing rate in response to inputs that closely match the inputs on which a network was trained, but firing rates can deviate from this baseline in response to stimuli that are mismatched from training. We systematically combine computer simulations and mathematical analysis to test the extent to which randomly connected, unstructured networks compute prediction errors after training with homeostatic inhibitory synaptic plasticity. We find that homeostatic plasticity alone is sufficient for computing prediction errors for trivial, time-constant stimuli, but not for more realistic time-varying stimuli. We use a mean-field theory of plastic networks to explain our findings and characterize the assumptions under which they apply.
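A minimal rate-based sketch of the kind of homeostatic inhibitory rule the abstract refers to (the specific update and all numbers here are illustrative assumptions, not the paper's model): the inhibitory weight is nudged in proportion to the presynaptic rate times the deviation of the postsynaptic rate from its target, so after training the neuron sits at baseline for the trained input and deviates for mismatched inputs:

```python
def train_homeostatic(x_e, x_i, target=1.0, eta=0.01, steps=5000):
    """Homeostatic inhibitory rule (rate-based sketch): the inhibitory
    weight grows when the postsynaptic rate exceeds its target and
    shrinks when the rate falls below it."""
    w_e, w_i = 2.0, 0.0
    for _ in range(steps):
        r = max(w_e * x_e - w_i * x_i, 0.0)   # rectified-linear rate
        w_i += eta * x_i * (r - target)        # homeostatic update
        w_i = max(w_i, 0.0)                    # inhibitory weights stay non-negative
    return w_i, max(w_e * x_e - w_i * x_i, 0.0)

w_i, r = train_homeostatic(x_e=1.5, x_i=1.0)
print(w_i, r)  # rate settles near the target of 1.0
```

A "prediction error" in this picture is the deviation of the rate from baseline when the input no longer matches the trained statistics.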
5
Sanzeni A, Histed MH, Brunel N. Emergence of Irregular Activity in Networks of Strongly Coupled Conductance-Based Neurons. Phys Rev X 2022; 12:011044. PMID: 35923858. PMCID: PMC9344604. DOI: 10.1103/physrevx.12.011044.
Abstract
Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e., if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of order 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine-tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.
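The scaling argument in the abstract can be illustrated numerically. Under the current-based 1/√K efficacy (a toy calculation, not the paper's conductance-based analysis), the mean input from either the excitatory or the inhibitory stream alone grows like √K, while the E-I difference and the input variance stay O(1), so fluctuations drive firing:

```python
import numpy as np

def input_stats(K, rate=1.0):
    """Mean and variance of summed synaptic input under 1/sqrt(K) scaling."""
    j = 1.0 / np.sqrt(K)           # current-based balanced-state efficacy
    mean_e = K * j * rate          # mean E input alone: grows like sqrt(K)
    mean_i = K * j * rate          # mean I input alone: grows like sqrt(K)
    net_mean = mean_e - mean_i     # E and I cancel: O(1)
    var = 2 * K * j**2 * rate      # Poisson-like input variance: O(1)
    return mean_e, net_mean, var

for K in (100, 10_000):
    mean_e, net, var = input_stats(K)
    print(f"K={K:>6}: one stream={mean_e:6.1f}, net mean={net:.1f}, variance={var:.1f}")
```

The paper's point is that this argument breaks down for conductance-based synapses, where irregular activity instead requires efficacies of order 1/log(K).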
Affiliation(s)
- A. Sanzeni
- Center for Theoretical Neuroscience, Columbia University, New York, New York, USA
- Department of Neurobiology, Duke University, Durham, North Carolina, USA
- National Institute of Mental Health Intramural Program, NIH, Bethesda, Maryland, USA
- M. H. Histed
- National Institute of Mental Health Intramural Program, NIH, Bethesda, Maryland, USA
- N. Brunel
- Department of Neurobiology, Duke University, Durham, North Carolina, USA
- Department of Physics, Duke University, Durham, North Carolina, USA
6
Akil AE, Rosenbaum R, Josić K. Balanced networks under spike-time dependent plasticity. PLoS Comput Biol 2021; 17:e1008958. PMID: 33979336. PMCID: PMC8143429. DOI: 10.1371/journal.pcbi.1008958.
Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance and, in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
Animals are able to learn complex tasks through changes in individual synapses between cells. Such changes lead to the coevolution of neural activity patterns and the structure of neural connectivity, but the consequences of these interactions are not fully understood. We consider plasticity in model neural networks which achieve an average balance between the excitatory and inhibitory synaptic inputs to different cells, and display cortical-like, irregular activity. We extend the theory of balanced networks to account for synaptic plasticity and show which rules can maintain balance, and which will drive the network into a different state. This theory of plasticity can provide insights into the relationship between stimuli, network dynamics, and synaptic circuitry.
Affiliation(s)
- Alan Eric Akil
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, United States of America
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, United States of America
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
7
Wyrick D, Mazzucato L. State-Dependent Regulation of Cortical Processing Speed via Gain Modulation. J Neurosci 2021; 41:3988-4005. PMID: 33858943. PMCID: PMC8176754. DOI: 10.1523/jneurosci.1895-20.2021.
Abstract
To thrive in dynamic environments, animals must be capable of rapidly and flexibly adapting behavioral responses to a changing context and internal state. Examples of behavioral flexibility include faster stimulus responses when attentive and slower responses when distracted. Contextual or state-dependent modulations may occur early in the cortical hierarchy and may be implemented via top-down projections from corticocortical or neuromodulatory pathways. However, the computational mechanisms mediating the effects of such projections are not known. Here, we introduce a theoretical framework to classify the effects of cell type-specific top-down perturbations on the information processing speed of cortical circuits. Our theory demonstrates that perturbation effects on stimulus processing can be predicted by intrinsic gain modulation, which controls the timescale of the circuit dynamics. Our theory leads to counterintuitive effects, such as improved performance with increased input variance. We tested the model predictions using large-scale electrophysiological recordings from the visual hierarchy in freely running mice, where we found that a decrease in single-cell intrinsic gain during locomotion led to an acceleration of visual processing. Our results establish a novel theory of cell type-specific perturbations, applicable to top-down modulation as well as optogenetic and pharmacological manipulations. Our theory links connectivity, dynamics, and information processing via gain modulation.
SIGNIFICANCE STATEMENT: To thrive in dynamic environments, animals adapt their behavior to changing circumstances and different internal states. Examples of behavioral flexibility include faster responses to sensory stimuli when attentive and slower responses when distracted. Previous work suggested that contextual modulations may be implemented via top-down inputs to sensory cortex coming from higher brain areas or neuromodulatory pathways. Here, we introduce a theory explaining how the speed at which sensory cortex processes incoming information is adjusted by changes in these top-down projections, which control the timescale of neural activity. We tested our model predictions in freely running mice, revealing that locomotion accelerates visual processing. Our theory is applicable to internal modulation as well as optogenetic and pharmacological manipulations and links circuit connectivity, dynamics, and information processing.
Affiliation(s)
- David Wyrick
- Department of Biology and Institute of Neuroscience
- Luca Mazzucato
- Department of Biology and Institute of Neuroscience
- Departments of Mathematics and Physics, University of Oregon, Eugene, Oregon 97403
8
Baker C, Zhu V, Rosenbaum R. Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance. PLoS Comput Biol 2020; 16:e1008192. PMID: 32946433. PMCID: PMC7526938. DOI: 10.1371/journal.pcbi.1008192.
Abstract
Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state and these breaks in balance push the network into a "semi-balanced state" characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
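The linear relationship mentioned above comes from the balanced mean-field condition W r + X ≈ 0, which makes rates r = -W⁻¹X linear in the stimulus X. A toy two-population sketch (the weights and stimuli are illustrative assumptions, not the paper's values) shows one stimulus with a valid balanced solution and one that breaks balance by driving a rate negative, which is where the semi-balanced nonlinearity enters:

```python
import numpy as np

# Mean-field weights for an excitatory and an inhibitory population:
# columns are E and I sources, negative entries are inhibitory.
W = np.array([[1.0, -2.0],
              [1.0, -1.5]])

def balanced_rates(X):
    """Linear balanced-state solution of W r + X = 0."""
    return -np.linalg.solve(W, X)

r_ok = balanced_rates(np.array([2.0, 1.0]))
print(r_ok)       # both rates positive: balanced, linear regime

r_broken = balanced_rates(np.array([1.2, 1.0]))
print(r_broken)   # one negative rate: balance breaks for that population
```

In the semi-balanced state the negative rate is pinned at zero (that population receives excess inhibition), so the realized rates become a rectified, nonlinear function of the stimulus.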
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Vicky Zhu
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, IN, USA
9
Herstel LJ, Wierenga CJ. Network control through coordinated inhibition. Curr Opin Neurobiol 2020; 67:34-41. PMID: 32853970. DOI: 10.1016/j.conb.2020.08.001.
Abstract
Coordinated excitatory and inhibitory activity is required for proper brain functioning. Recent computational and experimental studies have demonstrated that activity patterns in recurrent cortical networks are dominated by inhibition. Whereas previous studies have suggested that inhibitory plasticity is important for homeostatic control, this new framework puts inhibition in the driver's seat. Complex neuronal networks in the brain comprise many configurations in parallel, controlled by external and internal 'switches'. Context-dependent modulation and plasticity of inhibitory connections play a key role in memory and learning. It is therefore important to realize that synaptic plasticity is often multisynaptic and that a proper balance between excitation and inhibition is not fixed, but depends on context and activity level.
Affiliation(s)
- Lotte J Herstel
- Cell Biology, Neurobiology and Biophysics, Biology Department, Faculty of Science, Utrecht University, The Netherlands
- Corette J Wierenga
- Cell Biology, Neurobiology and Biophysics, Biology Department, Faculty of Science, Utrecht University, The Netherlands
10
Ebsch C, Rosenbaum R. Spatially extended balanced networks without translationally invariant connectivity. J Math Neurosci 2020; 10:8. PMID: 32405723. PMCID: PMC7221049. DOI: 10.1186/s13408-020-00085-w.
Abstract
Networks of neurons in the cerebral cortex exhibit a balance between excitation (positive input current) and inhibition (negative input current). Balanced network theory provides a parsimonious mathematical model of this excitatory-inhibitory balance using randomly connected networks of model neurons in which balance is realized as a stable fixed point of network dynamics in the limit of large network size. Balanced network theory reproduces many salient features of cortical network dynamics such as asynchronous-irregular spiking activity. Early studies of balanced networks did not account for the spatial topology of cortical networks. Later works introduced spatial connectivity structure, but were restricted to networks with translationally invariant connectivity structure in which connection probability depends on distance alone and boundaries are assumed to be periodic. Spatial connectivity structure in cortical networks does not always satisfy these assumptions. We use the mathematical theory of integral equations to extend the mean-field theory of balanced networks to account for more general dependence of connection probability on the spatial locations of pre- and postsynaptic neurons. We compare our mathematical derivations to simulations of large networks of recurrently connected spiking neuron models.
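As a sketch of the generalization the abstract describes (the kernel and parameters are illustrative assumptions, not the paper's), connection probability can be made an arbitrary function of both the presynaptic and postsynaptic positions, rather than of their distance alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 1.0, n)     # neuron positions on [0, 1]

def p_conn(x_post, x_pre, width=0.1):
    """Connection probability: a distance-dependent Gaussian falloff times a
    position-dependent modulation (higher for presynaptic neurons near x = 1),
    so the kernel is NOT translation invariant."""
    return 0.5 * np.exp(-(x_post - x_pre) ** 2 / (2 * width ** 2)) * (0.5 + 0.5 * x_pre)

P = p_conn(x[:, None], x[None, :])   # n x n probability matrix, broadcast over positions
A = rng.random((n, n)) < P           # sampled boolean adjacency matrix

print(P.shape, A.mean())             # connection density reflects the kernel
```

The paper's mean-field theory characterizes balance for this kind of general kernel via integral equations, where translation invariance would otherwise allow a simpler Fourier treatment.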
Affiliation(s)
- Christopher Ebsch
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, USA
11
Inference of synaptic connectivity and external variability in neural microcircuits. J Comput Neurosci 2020; 48:123-147. PMID: 32080777. DOI: 10.1007/s10827-020-00739-4.
Abstract
A major goal in neuroscience is to estimate neural connectivity from large-scale extracellular recordings of neural activity in vivo. This is challenging in part because any such activity is modulated by the unmeasured external synaptic input to the network, known as the common input problem. Many different measures of functional connectivity have been proposed in the literature, but their direct relationship to synaptic connectivity is often assumed or ignored. For in vivo data, measurements of this relationship would require knowledge of ground-truth connectivity, which is nearly always unavailable. Instead, many studies use in silico simulations as benchmarks for investigation, but such approaches necessarily rely upon a variety of simplifying assumptions about the simulated network and can depend on numerous simulation parameters. We combine neuronal network simulations, mathematical analysis, and calcium imaging data to address the question of when and how functional connectivity, synaptic connectivity, and latent external input variability can be untangled. We show numerically and analytically that, even though the precision matrix of recorded spiking activity does not uniquely determine synaptic connectivity, it is in practice often closely related to synaptic connectivity. This relation becomes more pronounced when the spatial structure of neuronal variability is jointly considered.
12
Cohen BP, Chow CC, Vattikuti S. Dynamical modeling of multi-scale variability in neuronal competition. Commun Biol 2019; 2:319. PMID: 31453383. PMCID: PMC6707190. DOI: 10.1038/s42003-019-0555-7.
Abstract
Variability is observed at multiple scales in the brain and is ubiquitous in perception. However, the nature of perceptual variability is an open question. We focus on variability during perceptual rivalry, a form of neuronal competition. Rivalry provides a window into neural processing since activity in many brain areas is correlated with the alternating perception rather than a constant ambiguous stimulus. It exhibits robust properties at multiple scales, from conscious awareness down to neuron dynamics. The prevalent theory for spiking variability is the balanced state; by contrast, the source of perceptual variability is unknown. Here we show that a single biophysical circuit model, satisfying certain mutual inhibition architectures, can explain both spiking and perceptual variability during rivalry. These models adhere to a broad set of strict experimental constraints at multiple scales. As we show, the models predict how spiking and perceptual variability change with stimulus conditions.
Affiliation(s)
- Benjamin P. Cohen
- Mathematical Biology Section, Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Carson C. Chow
- Mathematical Biology Section, Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Shashaank Vattikuti
- Mathematical Biology Section, Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
13
Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Phys Rev E 2019; 99:052414. PMID: 31212573. DOI: 10.1103/physreve.99.052414.
Abstract
Understanding the magnitude and structure of interneuronal correlations and their relationship to synaptic connectivity structure is an important and difficult problem in computational neuroscience. Early studies showed that neuronal network models with excitatory-inhibitory balance naturally create very weak spike train correlations, defining the "asynchronous state." Later work showed that, under some connectivity structures, balanced networks can produce larger correlations between some neuron pairs, even when the average correlation is very small. All of these previous studies assumed that the local network receives feedforward synaptic input from a population of uncorrelated spike trains. We show that when the spike trains providing feedforward input are correlated, the downstream recurrent network produces much larger correlations. We provide an in-depth analysis of the resulting "correlated state" in balanced networks and show that, unlike the asynchronous state, it produces a tight excitatory-inhibitory balance consistent with in vivo cortical recordings.
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Christopher Ebsch
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Ilan Lampl
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 7610001, Israel
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA
14
Reaction Time Improvements by Neural Bistability. Behav Sci (Basel) 2019; 9:bs9030028. PMID: 30889937. PMCID: PMC6466602. DOI: 10.3390/bs9030028.
Abstract
The often-reported reduction of Reaction Time (RT) by vision training is successfully replicated in 81 athletes across sports, yielding a mean reduction in the athletes' eye-hand coordination RTs of more than 10%, with high statistical significance. Via a proof of principle, we explain how this effect of sensorimotor plasticity, which reduces RT, can persist in subjects for multiple days and even weeks: the underlying mathematical neural model can be forced out of a previously stable (but long) RT state into a new, again stable, state with reduced eye-hand coordination RT.