1. Ramaswamy S. Data-driven multiscale computational models of cortical and subcortical regions. Curr Opin Neurobiol 2024; 85:102842. PMID: 38320453. DOI: 10.1016/j.conb.2024.102842.
Abstract
Data-driven computational models of neurons, synapses, microcircuits, and mesocircuits have become essential tools in modern brain research. The goal of these multiscale models is to integrate and synthesize information from different levels of brain organization, from cellular properties, dendritic excitability, and synaptic dynamics to microcircuits, mesocircuits, and ultimately behavior. This article surveys recent advances in the genesis of data-driven computational models of mammalian neural networks in cortical and subcortical areas. I discuss the challenges and opportunities in developing data-driven multiscale models, including the need for interdisciplinary collaborations, the importance of model validation and comparison, and the potential impact on basic and translational neuroscience research. Finally, I highlight future directions and emerging technologies that will enable more comprehensive and predictive data-driven models of brain function and dysfunction.
Affiliation(s)
- Srikanth Ramaswamy
- Neural Circuits Laboratory, Biosciences Institute, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom.
2. Bugnon T, Mayner WGP, Cirelli C, Tononi G. Sleep and wake in a model of the thalamocortical system with Martinotti cells. Eur J Neurosci 2024; 59:703-736. PMID: 36215116. PMCID: PMC10083195. DOI: 10.1111/ejn.15836.
Abstract
The mechanisms leading to the alternation between active (UP) and silent (DOWN) states during sleep slow waves (SWs) remain poorly understood. Previous models have explained the transition to the DOWN state by a progressive failure of excitation because of the build-up of adaptation currents or synaptic depression. However, these models are at odds with recent studies suggesting a role for presynaptic inhibition by Martinotti cells (MaCs) in generating SWs. Here, we update a classical large-scale model of sleep SWs to include MaCs and propose a different mechanism for the generation of SWs. In the wake mode, the network exhibits irregular and selective activity with low firing rates (FRs). Following an increase in the strength of background inputs and a modulation of synaptic strength and potassium leak potential mimicking the reduced effect of acetylcholine during sleep, the network enters a sleep-like regime in which local increases of network activity trigger bursts of MaC activity, resulting in strong disfacilitation of the local network via presynaptic GABAB1a-type inhibition. This model replicates findings on slow wave activity (SWA) during sleep that challenge previous models, including low and skewed FRs that are comparable between the wake and sleep modes, higher synchrony of transitions to DOWN states than to UP states, the possibility of triggering SWs by optogenetic stimulation of MaCs, and the local dependence of SWA on synaptic strength. Overall, this work points to a role for presynaptic inhibition by MaCs in the generation of DOWN states during sleep.
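The adaptation-based account that this model argues is incomplete can be sketched in a few lines: a single excitatory population with a slow activity-dependent adaptation variable already alternates between UP and DOWN states. The sketch below is a minimal illustration with invented parameters, not the paper's MaC-based disfacilitation mechanism:

```python
import numpy as np

def simulate_up_down(t_max=4000.0, dt=0.5):
    """Rate model with slow adaptation that alternates between UP and
    DOWN states (the classical mechanism; parameters are illustrative)."""
    tau_e, tau_a = 10.0, 300.0   # ms; fast rate vs. slow adaptation
    w, g, i0 = 6.0, 8.0, 3.5     # recurrence, adaptation gain, drive
    e, a = 0.0, 0.0
    rates = []
    for _ in range(int(t_max / dt)):
        drive = w * e - g * a + i0
        f = 1.0 / (1.0 + np.exp(-(drive - 3.0) / 0.3))  # steep sigmoid
        e += dt * (-e + f) / tau_e   # fast population rate
        a += dt * (-a + e) / tau_a   # adaptation builds up during UP states
        rates.append(e)
    return np.array(rates)
```

During an UP state adaptation accumulates until recurrent excitation can no longer sustain firing (DOWN transition); during a DOWN state adaptation decays until background drive re-ignites activity.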
Affiliation(s)
- Tom Bugnon
- Department of Psychiatry, University of Wisconsin-Madison, 6001 Research Park Blvd, Madison, WI 53719 USA
- Neuroscience Training Program, University of Wisconsin, Madison
- William G. P. Mayner
- Department of Psychiatry, University of Wisconsin-Madison, 6001 Research Park Blvd, Madison, WI 53719 USA
- Neuroscience Training Program, University of Wisconsin, Madison
- Chiara Cirelli
- Department of Psychiatry, University of Wisconsin-Madison, 6001 Research Park Blvd, Madison, WI 53719 USA
- Giulio Tononi
- Department of Psychiatry, University of Wisconsin-Madison, 6001 Research Park Blvd, Madison, WI 53719 USA
3. Dorkenwald S, Li PH, Januszewski M, Berger DR, Maitin-Shepard J, Bodor AL, Collman F, Schneider-Mizell CM, da Costa NM, Lichtman JW, Jain V. Multi-layered maps of neuropil with segmentation-guided contrastive learning. Nat Methods 2023; 20:2011-2020. PMID: 37985712. PMCID: PMC10703674. DOI: 10.1038/s41592-023-02059-8.
Abstract
Maps of the nervous system that identify individual cells along with their type, subcellular components and connectivity have the potential to elucidate fundamental organizational principles of neural circuits. Nanometer-resolution imaging of brain tissue provides the necessary raw data, but inferring cellular and subcellular annotation layers is challenging. We present segmentation-guided contrastive learning of representations (SegCLR), a self-supervised machine learning technique that produces representations of cells directly from 3D imagery and segmentations. When applied to volumes of human and mouse cortex, SegCLR enables accurate classification of cellular subcompartments and achieves performance equivalent to a supervised approach while requiring 400-fold fewer labeled examples. SegCLR also enables inference of cell types from fragments as small as 10 μm, which enhances the utility of volumes in which many neurites are truncated at boundaries. Finally, SegCLR enables exploration of layer 5 pyramidal cell subtypes and automated large-scale analysis of synaptic partners in mouse visual cortex.
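SegCLR's exact objective is described in the paper; the general contrastive idea it builds on can be illustrated with a SimCLR-style NT-Xent loss over paired embeddings. This is a generic sketch of contrastive representation learning, not SegCLR's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.1):
    """Normalized temperature-scaled cross-entropy loss.
    z1[i] and z2[i] are embeddings of two views of the same object;
    every other row in the batch serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2])                # (2N, d)
    sim = z @ z.T / tau                         # cosine similarity / temperature
    np.fill_diagonal(sim, -np.inf)              # a view is not its own negative
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    m = sim.max(axis=1, keepdims=True)          # numerically stable log-sum-exp
    lse = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(lse - sim[np.arange(2 * n), pos]))
```

The loss is low when the two views of each object land close together in embedding space relative to all other objects in the batch.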
Affiliation(s)
- Sven Dorkenwald
- Google Research, Mountain View, CA, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Computer Science Department, Princeton University, Princeton, NJ, USA
- Daniel R Berger
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard, Cambridge, MA, USA
- Jeff W Lichtman
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard, Cambridge, MA, USA
- Viren Jain
- Google Research, Mountain View, CA, USA.
4. Zarei Eskikand P, Soto-Breceda A, Cook MJ, Burkitt AN, Grayden DB. Inhibitory stabilized network behaviour in a balanced neural mass model of a cortical column. Neural Netw 2023; 166:296-312. PMID: 37541162. DOI: 10.1016/j.neunet.2023.07.020.
Abstract
Strong inhibitory recurrent connections can reduce the tendency for a neural network to become unstable. This is known as inhibitory stabilization; networks that are unstable in the absence of strong inhibitory feedback because of their unstable excitatory recurrent connections are known as Inhibition Stabilized Networks (ISNs). One of the characteristics of ISNs is their "paradoxical response", where perturbing the inhibitory neurons with additional excitatory input results in a decrease in their activity after a temporal delay instead of increasing their activity. Here, we develop a model of populations of neurons across different layers of cortex. Within each layer, there is one population of inhibitory neurons and one population of excitatory neurons. The connectivity weights across different populations in the model are derived from a synaptic physiology database provided by the Allen Institute. The model shows a gradient of excitation-inhibition balance across different layers in the cortex, where superficial layers are more inhibition-dominated than deeper layers. To investigate the presence of ISNs across different layers, we measured the membrane potentials of neural populations in the model after perturbing inhibitory populations. The results show that layer 2/3 in the model does not operate in the ISN regime but layers 4 and 5 do operate in the ISN regime. These results accord with neurophysiological findings that explored the presence of ISNs across different layers in the cortex. The results show that there may be a systematic macroscopic gradient of inhibitory stabilization across different layers in the cortex that depends on the level of excitation-inhibition balance, and that the strength of the paradoxical response increases as the model moves closer to bifurcation points.
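A minimal way to see the paradoxical response is a two-population threshold-linear rate model with strong recurrent excitation (w_ee > 1, so the excitatory subnetwork is unstable on its own). The weights and inputs below are illustrative, not taken from the paper's neural mass model:

```python
def ei_steady_state(g_e, g_i, steps=20000, dt=0.01):
    """Threshold-linear E-I rate model integrated to steady state.
    With w_ee > 1 the network is inhibition-stabilized (illustrative values)."""
    w_ee, w_ei, w_ie, w_ii = 2.0, 2.5, 2.5, 1.0
    e = i = 0.0
    for _ in range(steps):
        e += dt * (-e + max(w_ee * e - w_ei * i + g_e, 0.0))
        i += dt * (-i + max(w_ie * e - w_ii * i + g_i, 0.0))
    return e, i

e0, i0 = ei_steady_state(g_e=5.0, g_i=1.0)
e1, i1 = ei_steady_state(g_e=5.0, g_i=2.0)  # extra excitatory drive to I
```

With these weights the steady state is stable, and adding excitatory drive to the inhibitory population paradoxically lowers the steady-state rates of both populations.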
Affiliation(s)
- Parvin Zarei Eskikand
- Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia.
- Artemio Soto-Breceda
- Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia
- Mark J Cook
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Victoria, Australia; Department of Medicine, St Vincent's Hospital, Melbourne, Victoria, Australia
- Anthony N Burkitt
- Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia
- David B Grayden
- Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia; Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Victoria, Australia; Department of Medicine, St Vincent's Hospital, Melbourne, Victoria, Australia
5. Milstein AD, Tran S, Ng G, Soltesz I. Offline memory replay in recurrent neuronal networks emerges from constraints on online dynamics. J Physiol 2023; 601:3241-3264. PMID: 35907087. PMCID: PMC9885000. DOI: 10.1113/jp283216.
Abstract
During spatial exploration, neural circuits in the hippocampus store memories of sequences of sensory events encountered in the environment. When sensory information is absent during 'offline' resting periods, brief neuronal population bursts can 'replay' sequences of activity that resemble bouts of sensory experience. These sequences can occur in either forward or reverse order, and can even include spatial trajectories that have not been experienced, but are consistent with the topology of the environment. The neural circuit mechanisms underlying this variable and flexible sequence generation are unknown. Here we demonstrate in a recurrent spiking network model of hippocampal area CA3 that experimental constraints on network dynamics such as population sparsity, stimulus selectivity, rhythmicity and spike rate adaptation, as well as associative synaptic connectivity, enable additional emergent properties, including variable offline memory replay. In an online stimulus-driven state, we observed the emergence of neuronal sequences that swept from representations of past to future stimuli on the timescale of the theta rhythm. In an offline state driven only by noise, the network generated both forward and reverse neuronal sequences, and recapitulated the experimental observation that offline memory replay events tend to include salient locations like the site of a reward. These results demonstrate that biological constraints on the dynamics of recurrent neural circuits are sufficient to enable memories of sensory events stored in the strengths of synaptic connections to be flexibly read out during rest and sleep, which is thought to be important for memory consolidation and planning of future behaviour.
KEY POINTS:
- A recurrent spiking network model of hippocampal area CA3 was optimized to recapitulate experimentally observed network dynamics during simulated spatial exploration.
- During simulated offline rest, the network exhibited the emergent property of generating flexible forward, reverse and mixed-direction memory replay events.
- Network perturbations and analysis of model diversity and degeneracy identified associative synaptic connectivity and key features of network dynamics as important for offline sequence generation.
- Network simulations demonstrate that population over-representation of salient positions like the site of reward results in biased memory replay.
Affiliation(s)
- Aaron D. Milstein
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, NJ
- Sarah Tran
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Grace Ng
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Ivan Soltesz
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
6. Bernáez Timón L, Ekelmans P, Kraynyukova N, Rose T, Busse L, Tchumatchenko T. How to incorporate biological insights into network models and why it matters. J Physiol 2023; 601:3037-3053. PMID: 36069408. DOI: 10.1113/jp282755.
Abstract
Due to the staggering complexity of the brain and its neural circuitry, neuroscientists rely on the analysis of mathematical models to elucidate its function. From Hodgkin and Huxley's detailed description of the action potential in 1952 to today, new theories and increasing computational power have opened up novel avenues to study how neural circuits implement the computations that underlie behaviour. Computational neuroscientists have developed many models of neural circuits that differ in complexity, biological realism or emergent network properties. With recent advances in experimental techniques for detailed anatomical reconstructions or large-scale activity recordings, rich biological data have become more available. The challenge when building network models is to reflect experimental results, either through a high level of detail or by finding an appropriate level of abstraction. Meanwhile, machine learning has facilitated the development of artificial neural networks, which are trained to perform specific tasks. While they have proven successful at achieving task-oriented behaviour, they are often abstract constructs that differ in many features from the physiology of brain circuits. Thus, it is unclear whether the mechanisms underlying computation in biological circuits can be investigated by analysing artificial networks that accomplish the same function but differ in their mechanisms. Here, we argue that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits and behaviour. More specifically, we advocate for network models that consider the connectivity structure and the recorded activity dynamics while evaluating task performance.
Affiliation(s)
- Laura Bernáez Timón
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Pierre Ekelmans
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Nataliya Kraynyukova
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Tobias Rose
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Tatjana Tchumatchenko
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
7. Rimehaug AE, Stasik AJ, Hagen E, Billeh YN, Siegle JH, Dai K, Olsen SR, Koch C, Einevoll GT, Arkhipov A. Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex. eLife 2023; 12:e87169. PMID: 37486105. PMCID: PMC10393295. DOI: 10.7554/elife.87169.
Abstract
Local field potential (LFP) recordings reflect the dynamics of the current source density (CSD) in brain tissue. The synaptic, cellular, and circuit contributions to current sinks and sources are ill-understood. We investigated these in mouse primary visual cortex using public Neuropixels recordings and a detailed circuit model based on simulating the Hodgkin-Huxley dynamics of >50,000 neurons belonging to 17 cell types. The model simultaneously captured spiking and CSD responses and demonstrated a two-way dissociation: firing rates are altered with minor effects on the CSD pattern by adjusting synaptic weights, and CSD is altered with minor effects on firing rates by adjusting synaptic placement on the dendrites. We describe how thalamocortical inputs and recurrent connections sculpt specific sinks and sources early in the visual response, whereas cortical feedback crucially alters them in later stages. These results establish quantitative links between macroscopic brain measurements (LFP/CSD) and microscopic biophysics-based understanding of neuron dynamics and show that CSD analysis provides powerful constraints for modeling beyond those from considering spikes.
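The LFP-to-CSD relationship mentioned here is commonly estimated with the second spatial difference of the potential across equally spaced electrode contacts. A minimal sketch of that standard estimator (not the study's exact analysis pipeline):

```python
import numpy as np

def csd_estimate(lfp, dz, sigma=0.3):
    """CSD = -sigma * d^2(phi)/dz^2, via the second spatial difference.
    lfp: (channels, time) potentials at equally spaced depths (V);
    dz: contact spacing (m); sigma: tissue conductivity (S/m).
    Returns an estimate at the interior channels, shape (channels-2, time)."""
    d2 = lfp[:-2, :] - 2.0 * lfp[1:-1, :] + lfp[2:, :]
    return -sigma * d2 / dz ** 2
```

For a quadratic depth profile phi(z) = z^2 the estimator returns the constant CSD -2*sigma, which is a quick sanity check on signs and units.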
Affiliation(s)
- Espen Hagen
- Department of Physics, University of Oslo, Oslo, Norway
- Department of Data Science, Norwegian University of Life Sciences, Ås, Norway
- Josh H Siegle
- MindScope Program, Allen Institute, Seattle, United States
- Kael Dai
- MindScope Program, Allen Institute, Seattle, United States
- Shawn R Olsen
- MindScope Program, Allen Institute, Seattle, United States
- Christof Koch
- MindScope Program, Allen Institute, Seattle, United States
- Gaute T Einevoll
- Department of Physics, University of Oslo, Oslo, Norway
- Department of Physics, Norwegian University of Life Sciences, Ås, Norway
8. Schneider M, Tzanou A, Uran C, Vinck M. Cell-type-specific propagation of visual flicker. Cell Rep 2023; 42:112492. PMID: 37195864. DOI: 10.1016/j.celrep.2023.112492.
Abstract
Rhythmic flicker stimulation has gained interest as a treatment for neurodegenerative diseases and as a method for frequency tagging neural activity. Yet, little is known about the way in which flicker-induced synchronization propagates across cortical levels and impacts different cell types. Here, we use Neuropixels to record from the lateral geniculate nucleus (LGN), the primary visual cortex (V1), and CA1 in mice while presenting visual flicker stimuli. LGN neurons show strong phase locking up to 40 Hz, whereas phase locking is substantially weaker in V1 and is absent in CA1. Laminar analyses reveal an attenuation of phase locking at 40 Hz for each processing stage. Gamma-rhythmic flicker predominantly entrains fast-spiking interneurons. Optotagging experiments show that these neurons correspond to either parvalbumin (PV+) or narrow-waveform somatostatin (Sst+) neurons. A computational model can explain the observed differences based on the neurons' capacitative low-pass filtering properties. In summary, the propagation of synchronized activity and its effect on distinct cell types strongly depend on its frequency.
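The capacitive low-pass filtering argument can be made concrete with the gain of a single-pole RC membrane. The time constants in the usage below are illustrative values, not measurements from the study:

```python
import numpy as np

def membrane_gain(freq_hz, tau_s):
    """Gain of a passive RC membrane (single-pole low-pass filter)
    at stimulation frequency freq_hz, with membrane time constant tau_s."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * freq_hz * tau_s) ** 2)
```

A neuron with a shorter time constant (e.g. a fast-spiking interneuron) attenuates a 40 Hz input less than one with a longer time constant, consistent with the abstract's account of why gamma-rhythmic flicker predominantly entrains fast-spiking cells.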
Affiliation(s)
- Marius Schneider
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands.
- Athanasia Tzanou
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands.
9. Haufler D, Ito S, Koch C, Arkhipov A. Simulations of cortical networks using spatially extended conductance-based neuronal models. J Physiol 2022. PMID: 36567262. PMCID: PMC10290729. DOI: 10.1113/jp284030.
Abstract
The Hodgkin-Huxley model of action potential generation and propagation, published in the Journal of Physiology in 1952, initiated the field of biophysically detailed computational modelling in neuroscience, which has expanded to encompass a variety of species and components of the nervous system. Here we review the developments in this area with a focus on efforts in the community towards modelling the mammalian neocortex using spatially extended conductance-based neuronal models. The Hodgkin-Huxley formalism and related foundational contributions, such as Rall's cable theory, remain widely used in these efforts to the current day. We argue that at present the field is undergoing a qualitative change due to new very rich datasets describing the composition, connectivity and functional activity of cortical circuits, which are being integrated systematically into large-scale network models. This trend, combined with the accelerating development of convenient software tools supporting such complex modelling projects, is giving rise to highly detailed models of the cortex that are extensively constrained by the data, enabling computational investigation of a multitude of questions about cortical structure and function.
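The Hodgkin-Huxley formalism this review starts from can be reproduced in a few dozen lines. Below is a single-compartment simulation with the standard squid-axon parameters, integrated with forward Euler for illustration:

```python
import numpy as np

def simulate_hh(i_amp=10.0, t_max=50.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron (standard squid-axon
    parameters, shifted to a ~-65 mV rest). i_amp in uA/cm^2, time in ms."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3       # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387   # mV
    c_m = 1.0                               # uF/cm^2
    v, m, h, n = -65.0, 0.053, 0.596, 0.317  # resting steady state
    trace = []
    for _ in range(int(t_max / dt)):
        # Rate functions for gating variables m, h, n
        a_m = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
        b_m = 4.0 * np.exp(-(v + 65) / 18)
        a_h = 0.07 * np.exp(-(v + 65) / 20)
        b_h = 1.0 / (1 + np.exp(-(v + 35) / 10))
        a_n = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
        b_n = 0.125 * np.exp(-(v + 65) / 80)
        i_ion = (g_na * m ** 3 * h * (v - e_na)
                 + g_k * n ** 4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_amp - i_ion) / c_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        trace.append(v)
    return np.array(trace)
```

With a suprathreshold current step the model fires repetitive action potentials that overshoot 0 mV; with no input it stays near rest.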
Affiliation(s)
- Darrell Haufler
- Mindscope Program, Allen Institute, Seattle, Washington, USA
- Shinya Ito
- Mindscope Program, Allen Institute, Seattle, Washington, USA
- Christof Koch
- Mindscope Program, Allen Institute, Seattle, Washington, USA
- Anton Arkhipov
- Mindscope Program, Allen Institute, Seattle, Washington, USA
10. Oláh VJ, Pedersen NP, Rowan MJM. Ultrafast simulation of large-scale neocortical microcircuitry with biophysically realistic neurons. eLife 2022; 11:e79535. PMID: 36341568. PMCID: PMC9640191. DOI: 10.7554/elife.79535.
Abstract
Understanding the activity of the mammalian brain requires an integrative knowledge of circuits at distinct scales, ranging from ion channel gating to circuit connectomics. Computational models are regularly employed to understand how multiple parameters contribute synergistically to circuit behavior. However, traditional models of anatomically and biophysically realistic neurons are computationally demanding, especially when scaled to model local circuits. To overcome this limitation, we trained several artificial neural network (ANN) architectures to model the activity of realistic multicompartmental cortical neurons. We identified an ANN architecture that accurately predicted subthreshold activity and action potential firing. The ANN could correctly generalize to previously unobserved synaptic input, including in models containing nonlinear dendritic properties. When scaled, processing times were orders of magnitude faster compared with traditional approaches, allowing for rapid parameter-space mapping in a circuit model of Rett syndrome. Thus, we present a novel ANN approach allowing for rapid, detailed network experiments using inexpensive and commonly available computational resources.
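The surrogate-modelling idea — replace an expensive biophysical model with a trained ANN — can be shown in miniature. The sketch below fits a tiny numpy MLP to a static, invented input-output curve; the paper's networks approximate full multicompartmental voltage dynamics, which this toy does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "ground truth": a saturating response to summed input
def neuron_response(x):
    return np.tanh(2.0 * x)

x = rng.uniform(-1.0, 1.0, size=(256, 1))
y = neuron_response(x)

# One-hidden-layer MLP surrogate trained by full-batch gradient descent
w1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(2000):
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    g_pred = 2.0 * err / len(x)              # d(mse)/d(pred)
    g_w2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ w2.T) * (1.0 - h ** 2)   # backprop through tanh
    g_w1 = x.T @ g_h
    g_b1 = g_h.sum(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2
```

Once trained, evaluating the surrogate is a pair of matrix multiplications, which is the source of the speed-up the paper exploits at circuit scale.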
Affiliation(s)
- Viktor J Oláh
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
- Nigel P Pedersen
- Department of Neurology, Emory University School of Medicine, Atlanta, United States
- Matthew JM Rowan
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
11. Borges FS, Moreira JVS, Takarabe LM, Lytton WW, Dura-Bernal S. Large-scale biophysically detailed model of somatosensory thalamocortical circuits in NetPyNE. Front Neuroinform 2022; 16:884245. PMID: 36213546. PMCID: PMC9536213. DOI: 10.3389/fninf.2022.884245.
Abstract
The primary somatosensory cortex (S1) of mammals is critically important in the perception of touch and related sensorimotor behaviors. In 2015, the Blue Brain Project (BBP) developed a groundbreaking rat S1 microcircuit simulation comprising over 31,000 neurons of 207 morpho-electrical types and 37 million synapses, incorporating anatomical and physiological information from a wide range of experimental studies. We have implemented this highly detailed and complex S1 model in NetPyNE, using the data available in the Neocortical Microcircuit Collaboration Portal. NetPyNE provides a Python high-level interface to NEURON and allows defining complicated multiscale models using an intuitive declarative standardized language. It also facilitates running parallel simulations, automates the optimization and exploration of parameters using supercomputers, and provides a wide range of built-in analysis functions. This will make the S1 model more accessible and simpler to scale, modify and extend in order to explore research questions or interconnect to other existing models. Despite some implementation differences, the NetPyNE model preserved the original cell morphologies, electrophysiological responses and spatial distribution for all 207 cell types, and the connectivity properties of all 1941 pathways, including synaptic dynamics and short-term plasticity (STP). The NetPyNE S1 simulations produced reasonable physiological firing rates and activity patterns across all populations. When STP was included, the network generated a 1 Hz oscillation comparable to the original model in the in vitro-like state. By then reducing the extracellular calcium concentration, the model reproduced the original S1 in vivo-like states with asynchronous activity. These results validate the original study using a new modeling tool. Simulated local field potentials (LFPs) exhibited realistic oscillatory patterns and features, including distance- and frequency-dependent attenuation.
The model was extended by adding thalamic circuits, including 6 distinct thalamic populations with intrathalamic, thalamocortical (TC) and corticothalamic connectivity derived from experimental data. The thalamic model reproduced known single-cell and circuit-level dynamics, including burst and tonic firing modes and oscillatory patterns, providing a more realistic input to cortex and enabling study of TC interactions. Overall, our work provides a widely accessible, data-driven and biophysically detailed model of the somatosensory TC circuits that can be employed as a community tool for researchers to study neural dynamics, function and disease.
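NetPyNE's declarative style — populations and connectivity rules stated as parameter dictionaries, then expanded into an explicit network — can be illustrated without NEURON installed. The sketch below uses plain dicts and invented parameters; it is not the actual NetPyNE API (real models use netpyne.specs.NetParams and run under NEURON):

```python
import numpy as np

# Declarative network description in the spirit of NetPyNE's NetParams
net_spec = {
    'pops': {'E': {'numCells': 80}, 'I': {'numCells': 20}},
    'conns': {
        'E->I': {'pre': 'E', 'post': 'I', 'probability': 0.1, 'weight': 0.5},
        'I->E': {'pre': 'I', 'post': 'E', 'probability': 0.2, 'weight': -1.0},
    },
}

def instantiate(spec, seed=0):
    """Expand the declarative description into an explicit connection list."""
    rng = np.random.default_rng(seed)
    gids, start = {}, 0
    for name, pop in spec['pops'].items():   # assign global cell IDs per population
        gids[name] = range(start, start + pop['numCells'])
        start += pop['numCells']
    conns = []
    for rule in spec['conns'].values():      # sample probabilistic connectivity
        for pre in gids[rule['pre']]:
            for post in gids[rule['post']]:
                if rng.random() < rule['probability']:
                    conns.append((pre, post, rule['weight']))
    return gids, conns
```

Separating the specification from its instantiation is what makes such models easy to scale, modify, and interconnect, as the abstract emphasizes.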
Affiliation(s)
- Fernando S. Borges
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, Brooklyn, NY, United States
- Center for Mathematics, Computation, and Cognition, Federal University of ABC, São Paulo, Brazil
- Joao V. S. Moreira
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, Brooklyn, NY, United States
- Lavinia M. Takarabe
- Center for Mathematics, Computation, and Cognition, Federal University of ABC, São Paulo, Brazil
- William W. Lytton
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, Brooklyn, NY, United States
- Department of Neurology, Kings County Hospital Center, Brooklyn, NY, United States
- Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network, Chevy Chase, MD, United States
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York Downstate Health Sciences University, Brooklyn, NY, United States
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
12. Beaubois R, Khoyratee F, Branchereau P, Ikeuchi Y, Levi T. From real-time single to multicompartmental Hodgkin-Huxley neurons on FPGA for bio-hybrid systems. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1602-1606. PMID: 36083914. DOI: 10.1109/embc48229.2022.9871176.
Abstract
Modeling biological neural networks has opened the way to major advances in our understanding of the mechanisms governing brain function in normal and pathological conditions. The emergence of real-time neuromorphic platforms has given rising significance to bio-hybrid experiments as part of the development of neuromorphic biomedical devices such as neuroprostheses. To provide a new tool for characterizing neurological disorders, we design real-time single- and multicompartmental Hodgkin-Huxley neurons on FPGA. These neurons enable the emulation of biological neural networks with improved accuracy through compartmental modeling, and their real-time dynamics allow integration into bio-hybrid systems.
13. Schürmann F, Courcol JD, Ramaswamy S. Computational Concepts for Reconstructing and Simulating Brain Tissue. Adv Exp Med Biol 2022; 1359:237-259. PMID: 35471542. DOI: 10.1007/978-3-030-89439-9_10.
Abstract
It has previously been shown that it is possible to derive a new class of biophysically detailed brain tissue models when one computationally analyzes and exploits the interdependencies of the multi-modal and multi-scale organization of the brain. These reconstructions, sometimes referred to as digital twins, enable a spectrum of scientific investigations. Building such models has become possible because of the increase in quantitative data as well as advances in computational capabilities and algorithmic and methodological innovations. This chapter presents the computational science concepts that provide the foundation for the data-driven approach to reconstructing and simulating brain tissue as developed by the EPFL Blue Brain Project, which was originally applied to neocortical microcircuitry and extended to other brain regions. Accordingly, the chapter covers aspects such as a knowledge graph-based data organization and the importance of the concept of a dataset release. We illustrate algorithmic advances in finding suitable parameters for electrical models of neurons or how spatial constraints can be exploited for predicting synaptic connections. Furthermore, we explain how in silico experimentation with such models necessitates specific addressing schemes and strategies for efficient simulation. The entire data-driven approach relies on the systematic validation of the model. We conclude by discussing complementary strategies that not only enable judging the fidelity of the model but also form the basis for its systematic refinements.
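The idea of exploiting spatial constraints to predict synaptic connections can be sketched with a simple apposition rule: treat axon-dendrite point pairs closer than a touch distance as candidate synapses. This is a toy version of touch detection; real pipelines operate on full morphologies and prune candidates against biological constraints:

```python
import numpy as np

def candidate_synapses(axon_pts, dend_pts, touch_dist=2.0):
    """Return (axon_index, dendrite_index) pairs closer than touch_dist (um).
    axon_pts, dend_pts: arrays of 3D points sampled along the neurites."""
    d = np.linalg.norm(axon_pts[:, None, :] - dend_pts[None, :, :], axis=-1)
    return np.argwhere(d < touch_dist)
```

The full pairwise distance matrix is fine for a sketch; at tissue scale the same rule is applied with spatial indexing so only nearby segment pairs are tested.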
Collapse
Affiliation(s)
- Felix Schürmann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Geneva, Switzerland.
| | - Jean-Denis Courcol
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Srikanth Ramaswamy
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Geneva, Switzerland
| |
Collapse
|
14
|
Li HY, Cheng GM, Ching ESC. Heterogeneous Responses to Changes in Inhibitory Synaptic Strength in Networks of Spiking Neurons. Front Cell Neurosci 2022; 16:785207. [PMID: 35281294 PMCID: PMC8908097 DOI: 10.3389/fncel.2022.785207] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 01/18/2022] [Indexed: 12/25/2022] Open
Abstract
How do the dynamics of neurons in a network respond to changes in synaptic weights? An answer to this question is important for a full understanding of synaptic plasticity. In this article, we report our numerical study of the effects of changes in inhibitory synaptic weights on the spontaneous activity of networks of spiking neurons with conductance-based synapses. Networks with biologically realistic features, which were reconstructed from multi-electrode array recordings taken in a cortical neuronal culture, and their modifications were used in the simulations. The magnitudes of the synaptic weights of all the inhibitory connections are decreased by a uniform amount, subject to the condition that inhibitory connections are not turned into excitatory ones. Our simulation results reveal that the responses of the neurons are heterogeneous: while the firing rate of some neurons increases as expected, the firing rate of other neurons decreases or remains unchanged. The same results show that heterogeneous responses also occur for an enhancement of inhibition. This heterogeneity in the responses of neurons to changes in inhibitory synaptic strength suggests that activity-induced modification of synaptic strength does not necessarily generate a positive feedback loop on the dynamics of neurons connected in a network. Our results could be used to understand the effects of bicuculline on spiking and bursting activities of neuronal cultures. Using reconstructed networks with biologically realistic features enables us to identify a long-tailed distribution of average synaptic weights for outgoing links as a crucial feature in giving rise to bursting in neuronal networks and in determining the overall response of the whole network to changes in synaptic strength.
For networks whose average synaptic weights for outgoing links have a long-tailed distribution, bursting is observed and the average firing rate of the whole network increases upon inhibition suppression or decreases upon inhibition enhancement. For networks whose average synaptic weights for outgoing links are approximately normally distributed, bursting is not found and the average firing rate of the whole network remains approximately constant upon changes in inhibitory synaptic strength.
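The weight manipulation described, a uniform decrease of inhibitory magnitudes that never flips a connection's sign, can be sketched as follows (hypothetical helper, not the authors' code; inhibitory weights represented as negative values):

```python
import numpy as np

def weaken_inhibition(w, delta):
    """Uniformly reduce the magnitude of inhibitory (negative) weights by delta,
    subject to the constraint that no inhibitory connection becomes excitatory."""
    w = np.asarray(w, dtype=float).copy()
    inhib = w < 0
    # shrink toward zero, clipping at zero so signs never flip
    w[inhib] = np.minimum(w[inhib] + delta, 0.0)
    return w
```

Applying this to a reconstructed weight matrix and re-simulating gives the per-neuron firing-rate changes whose heterogeneity the study reports.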
Collapse
|
15
|
Accelerating Allen Brain Institute’s Large-Scale Computational Model of Mice Primary Visual Cortex. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20503-3_57] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
16
|
Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022; 20:1013-1039. [PMID: 35486347 PMCID: PMC9588483 DOI: 10.1007/s12021-022-09576-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/19/2022] [Indexed: 12/31/2022]
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond the brute-force modulation of the neural activity of a small group of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the somatosensory cortex's granular and supragranular layers after reconstructing the barrel cortex in soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and whisker deprivation-induced changes in synaptic strength observed in vivo. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a post-synaptic action potential; this spatial filtering might be critical for synaptic integration of top-down and bottom-up information.
Collapse
Affiliation(s)
- Chao Huang
- Department of Biology, University of Leipzig, Leipzig, Germany
| | - Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Tansu Celikel
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands; School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
| |
Collapse
|
17
|
Narrow and Broad γ Bands Process Complementary Visual Information in Mouse Primary Visual Cortex. eNeuro 2021; 8:ENEURO.0106-21.2021. [PMID: 34663617 PMCID: PMC8570688 DOI: 10.1523/eneuro.0106-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 06/03/2021] [Accepted: 06/22/2021] [Indexed: 11/21/2022] Open
Abstract
The γ band plays a key role in the encoding of visual features in the primary visual cortex (V1). In rodent V1, two ranges within the γ band are sensitive to contrast: a broad γ band (BB) increasing with contrast, and a narrow γ band (NB), peaking at ∼60 Hz, decreasing with contrast. The functional roles of the two bands and the neural circuits that originate them are not yet completely clear. Here, we show, combining experimental and simulated data, that in mouse V1 (1) BB carries information about high contrast and NB about low contrast; (2) BB modulation depends on excitatory-inhibitory interplay in the cortex, while NB modulation is due to entrainment to the thalamic drive. In awake mice presented with alternating gratings, NB power progressively decreased from low to intermediate levels of contrast, where it reached a plateau. Conversely, BB power was constant across low levels of contrast, but it progressively increased from intermediate to high levels of contrast. Furthermore, BB response was stronger immediately after contrast reversal, while the opposite held for NB. These complementary modulations were reproduced by a recurrent excitatory-inhibitory leaky integrate-and-fire network provided that the thalamic inputs were composed of a sustained and a periodic component having complementary sensitivity ranges. These results show that in rodents the thalamic-driven NB plays a specific key role in encoding visual contrast. Moreover, we propose a simple and effective network model of response to visual stimuli in rodents that might help in investigating network dysfunctions of pathologic visual information processing.
Collapse
|
18
|
Sinha M, Narayanan R. Active Dendrites and Local Field Potentials: Biophysical Mechanisms and Computational Explorations. Neuroscience 2021; 489:111-142. [PMID: 34506834 PMCID: PMC7612676 DOI: 10.1016/j.neuroscience.2021.08.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 08/30/2021] [Accepted: 08/31/2021] [Indexed: 10/27/2022]
Abstract
Neurons and glial cells are endowed with membranes that express a rich repertoire of ion channels, transporters, and receptors. The constant flux of ions across the neuronal and glial membranes results in voltage fluctuations that can be recorded from the extracellular matrix. The high frequency components of this voltage signal contain information about the spiking activity, reflecting the output from the neurons surrounding the recording location. The low frequency components of the signal, referred to as the local field potential (LFP), have been traditionally thought to provide information about the synaptic inputs that impinge on the large dendritic trees of various neurons. In this review, we discuss recent computational and experimental studies pointing to a critical role of several active dendritic mechanisms that can influence the genesis and the location-dependent spectro-temporal dynamics of LFPs, spanning different brain regions. We strongly emphasize the need to account for the several fast and slow dendritic events and associated active mechanisms - including gradients in their expression profiles, inter- and intra-cellular spatio-temporal interactions spanning neurons and glia, heterogeneities and degeneracy across scales, neuromodulatory influences, and activity-dependent plasticity - towards gaining important insights about the origins of LFP under different behavioral states in health and disease. We provide simple but essential guidelines on how to model LFPs taking into account these dendritic mechanisms, with detailed methodology on how to account for various heterogeneities and electrophysiological properties of neurons and synapses while studying LFPs.
Collapse
Affiliation(s)
- Manisha Sinha
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka 560012, India
| | - Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka 560012, India.
| |
Collapse
|
19
|
Chizhov AV, Graham LJ. A strategy for mapping biophysical to abstract neuronal network models applied to primary visual cortex. PLoS Comput Biol 2021; 17:e1009007. [PMID: 34398895 PMCID: PMC8389851 DOI: 10.1371/journal.pcbi.1009007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 08/26/2021] [Accepted: 07/27/2021] [Indexed: 11/18/2022] Open
Abstract
A fundamental challenge for the theoretical study of neuronal networks is to make the link between complex biophysical models based directly on experimental data, to progressively simpler mathematical models that allow the derivation of general operating principles. We present a strategy that successively maps a relatively detailed biophysical population model, comprising conductance-based Hodgkin-Huxley type neuron models with connectivity rules derived from anatomical data, to various representations with fewer parameters, finishing with a firing rate network model that permits analysis. We apply this methodology to primary visual cortex of higher mammals, focusing on the functional property of stimulus orientation selectivity of receptive fields of individual neurons. The mapping produces compact expressions for the parameters of the abstract model that clearly identify the impact of specific electrophysiological and anatomical parameters on the analytical results, in particular as manifested by specific functional signatures of visual cortex, including input-output sharpening, conductance invariance, virtual rotation and the tilt aftereffect. Importantly, qualitative differences between model behaviours point out consequences of various simplifications. The strategy may be applied to other neuronal systems with appropriate modifications. A hierarchy of theoretical approaches to study a neuronal network depends on a tradeoff between biological fidelity and mathematical tractability. Biophysically-detailed models consider cellular mechanisms and anatomically defined synaptic circuits, but are often too complex to reveal insights into fundamental principles. In contrast, increasingly abstract reduced models facilitate analytical insights.
To better ground the latter to the underlying biology, we describe a systematic procedure to move across the model hierarchy that allows understanding how changes in biological parameters—physiological, pathophysiological, or because of new data—impact the behaviour of the network. We apply this approach to mammalian primary visual cortex, and examine how the different models in the hierarchy reproduce functional signatures of this area, in particular the tuning of neurons to the orientation of a visual stimulus. Our work provides a guide for navigating the complex parameter space of neural network models faithful to biology, and highlights how simplifications made for mathematical convenience can fundamentally change model behaviour.
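A minimal instance of the kind of abstract firing-rate model such a mapping targets is the threshold-linear ring model of orientation selectivity. The sketch below uses generic textbook parameters, not the paper's fitted ones:

```python
import numpy as np

def ring_model(theta0=0.3, n=64, j0=-1.0, j2=1.0, eps=0.5, c=10.0,
               dt=0.5, tau=10.0, steps=500):
    """Threshold-linear firing-rate ring model of orientation selectivity.
    Returns preferred angles and steady-state rates for a stimulus at theta0."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n, endpoint=False)
    # recurrent connectivity: uniform term plus orientation-tuned term
    jmat = (j0 + j2 * np.cos(2.0 * (theta[:, None] - theta[None, :]))) / n
    # tuned feedforward input centered on the stimulus orientation
    h = c * (1.0 - eps + eps * np.cos(2.0 * (theta - theta0)))
    r = np.zeros(n)
    for _ in range(steps):
        # tau dr/dt = -r + [h + J r]_+
        r += (dt / tau) * (-r + np.maximum(h + jmat @ r, 0.0))
    return theta, r
```

The steady-state population response peaks at the stimulus orientation and is sharpened relative to the input; the mapping strategy in the paper derives such effective parameters (here j0, j2) from the biophysical model.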
Collapse
Affiliation(s)
- Anton V. Chizhov
- Computational Physics Laboratory, Ioffe Institute, Saint Petersburg, Russia
- Laboratory of Molecular Mechanisms of Neural Interactions, Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia
| | - Lyle J. Graham
- Centre Giovanni Borelli - CNRS UMR9010, Université de Paris, France
| |
Collapse
|
20
|
Pietras B, Gallice N, Schwalger T. Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys Rev E 2021; 102:022407. [PMID: 32942450 DOI: 10.1103/physreve.102.022407] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 06/29/2020] [Indexed: 11/07/2022]
Abstract
The macroscopic dynamics of large populations of neurons can be mathematically analyzed using low-dimensional firing-rate or neural-mass models. However, these models fail to capture spike synchronization effects and nonstationary responses of the population activity to rapidly changing stimuli. Here we derive low-dimensional firing-rate models for homogeneous populations of neurons modeled as time-dependent renewal processes. The class of renewal neurons includes integrate-and-fire models driven by white noise and has been frequently used to model neuronal refractoriness and spike synchronization dynamics. The derivation is based on an eigenmode expansion of the associated refractory density equation, which generalizes previous spectral methods for Fokker-Planck equations to arbitrary renewal models. We find a simple relation between the eigenvalues characterizing the timescales of the firing rate dynamics and the Laplace transform of the interspike interval density, for which explicit expressions are available for many renewal models. Retaining only the first eigenmode already yields a reliable low-dimensional approximation of the firing-rate dynamics that captures spike synchronization effects and fast transient dynamics at stimulus onset. We explicitly demonstrate the validity of our model for a large homogeneous population of Poisson neurons with absolute refractoriness and other renewal models that admit an explicit analytical calculation of the eigenvalues. The eigenmode expansion presented here provides a systematic framework for alternative firing-rate models in computational neuroscience based on spiking neuron dynamics with refractoriness.
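For the simplest renewal model mentioned, Poisson firing with absolute refractoriness, the interspike-interval density and stationary rate are available in closed form, which a short Monte-Carlo check illustrates (an illustrative sketch, not the paper's eigenmode machinery):

```python
import numpy as np

def refractory_poisson_rate(lam=100.0, delta=5e-3, n_isi=200_000, seed=0):
    """Monte-Carlo stationary rate of a Poisson neuron with absolute refractory
    period delta: each ISI is delta plus an Exp(lam) waiting time."""
    rng = np.random.default_rng(seed)
    isi = delta + rng.exponential(1.0 / lam, size=n_isi)
    return 1.0 / isi.mean()

# renewal theory predicts r = lam / (1 + lam * delta),
# matching the Laplace transform f(s) = lam * exp(-s * delta) / (s + lam)
```

With lam = 100 Hz and delta = 5 ms the predicted stationary rate is 100 / 1.5 ≈ 66.7 Hz; the simulated estimate converges to this value.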
Collapse
Affiliation(s)
- Bastian Pietras
- Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany.,Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
| | - Noé Gallice
- Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Station 15, CH-1015 Lausanne, Switzerland
| | - Tilo Schwalger
- Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany.,Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
| |
Collapse
|
21
|
Groblewski PA, Sullivan D, Lecoq J, de Vries SEJ, Caldejon S, L'Heureux Q, Keenan T, Roll K, Slaughterback C, Williford A, Farrell C. A standardized head-fixation system for performing large-scale, in vivo physiological recordings in mice. J Neurosci Methods 2020; 346:108922. [PMID: 32946912 DOI: 10.1016/j.jneumeth.2020.108922] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 08/24/2020] [Accepted: 08/24/2020] [Indexed: 11/25/2022]
Abstract
BACKGROUND The Allen Institute recently built a set of high-throughput experimental pipelines to collect comprehensive in vivo surveys of physiological activity in the visual cortex of awake, head-fixed mice. Developing these large-scale, industrial-like pipelines posed many scientific, operational, and engineering challenges. NEW METHOD Our strategies for creating a cross-platform reference space to which all pipeline datasets were mapped required development of 1) a robust headframe, 2) a reproducible clamping system, and 3) data-collection systems that are built, and maintained, around precise alignment with a reference artifact. RESULTS When paired with our pipeline clamping system, our headframe exceeded deflection and reproducibility requirements. By leveraging our headframe and clamping system we were able to create a cross-platform reference space to which multi-modal imaging datasets could be mapped. COMPARISON WITH EXISTING METHODS Together, the Allen Brain Observatory headframe, surgical tooling, clamping system, and system registration strategy create a unique system for collecting large amounts of standardized in vivo datasets over long periods of time. Moreover, the integrated approach to cross-platform registration allows for multi-modal datasets to be collected within a shared reference space. CONCLUSIONS Here we report the engineering strategies that we implemented when creating the Allen Brain Observatory physiology pipelines. All of the documentation related to headframe, surgical tooling, and clamp design has been made freely available and can be readily manufactured or procured. The engineering strategy, or components of the strategy, described in this report can be tailored and applied by external researchers to improve data standardization and stability.
Collapse
Affiliation(s)
- P A Groblewski
- Allen Institute for Brain Science, Seattle, WA, 98109, USA.
| | - D Sullivan
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - J Lecoq
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - S E J de Vries
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - S Caldejon
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - Q L'Heureux
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - T Keenan
- Amazon Logistics, Bellevue, WA, 98004, USA
| | - K Roll
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | | | - A Williford
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| | - C Farrell
- Allen Institute for Brain Science, Seattle, WA, 98109, USA
| |
Collapse
|
22
|
Dai K, Gratiy SL, Billeh YN, Xu R, Cai B, Cain N, Rimehaug AE, Stasik AJ, Einevoll GT, Mihalas S, Koch C, Arkhipov A. Brain Modeling ToolKit: An open source software suite for multiscale modeling of brain circuits. PLoS Comput Biol 2020; 16:e1008386. [PMID: 33253147 PMCID: PMC7728187 DOI: 10.1371/journal.pcbi.1008386] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 12/10/2020] [Accepted: 09/16/2020] [Indexed: 11/26/2022] Open
Abstract
Experimental studies in neuroscience are producing data at a rapidly increasing rate, providing exciting opportunities and formidable challenges to existing theoretical and modeling approaches. To turn massive datasets into predictive quantitative frameworks, the field needs software solutions for systematic integration of data into realistic, multiscale models. Here we describe the Brain Modeling ToolKit (BMTK), a software suite for building models and performing simulations at multiple levels of resolution, from biophysically detailed multi-compartmental, to point-neuron, to population-statistical approaches. Leveraging the SONATA file format and existing software such as NEURON, NEST, and others, BMTK offers a consistent user experience across multiple levels of resolution. It permits highly sophisticated simulations to be set up with little coding required, thus lowering entry barriers to new users. We illustrate successful applications of BMTK to large-scale simulations of a cortical area. BMTK is an open-source package provided as a resource supporting modeling-based discovery in the community.
Collapse
Affiliation(s)
- Kael Dai
- Allen Institute, Seattle, Washington, United States of America
| | | | - Yazan N. Billeh
- Allen Institute, Seattle, Washington, United States of America
| | - Richard Xu
- Allen Institute, Seattle, Washington, United States of America
| | - Binghuang Cai
- Allen Institute, Seattle, Washington, United States of America
| | - Nicholas Cain
- Allen Institute, Seattle, Washington, United States of America
| | - Atle E. Rimehaug
- Norwegian University of Life Sciences & University of Oslo, Oslo, Norway
| | | | - Gaute T. Einevoll
- Norwegian University of Life Sciences & University of Oslo, Oslo, Norway
| | - Stefan Mihalas
- Allen Institute, Seattle, Washington, United States of America
| | - Christof Koch
- Allen Institute, Seattle, Washington, United States of America
| | - Anton Arkhipov
- Allen Institute, Seattle, Washington, United States of America
| |
Collapse
|
23
|
Næss S, Halnes G, Hagen E, Hagler DJ, Dale AM, Einevoll GT, Ness TV. Biophysically detailed forward modeling of the neural origin of EEG and MEG signals. Neuroimage 2020; 225:117467. [PMID: 33075556 DOI: 10.1016/j.neuroimage.2020.117467] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Revised: 09/28/2020] [Accepted: 10/12/2020] [Indexed: 12/22/2022] Open
Abstract
Electroencephalography (EEG) and magnetoencephalography (MEG) are among the most important techniques for non-invasively studying cognition and disease in the human brain. These signals are known to originate from cortical neural activity, typically described in terms of current dipoles. While the link between cortical current dipoles and EEG/MEG signals is relatively well understood, surprisingly little is known about the link between different kinds of neural activity and the current dipoles themselves. Detailed biophysical modeling has played an important role in exploring the neural origin of intracranial electric signals, like extracellular spikes and local field potentials. However, this approach has not yet been taken full advantage of in the context of exploring the neural origin of the cortical current dipoles that are causing EEG/MEG signals. Here, we present a method for reducing arbitrary simulated neural activity to single current dipoles. We find that the method is applicable for calculating extracranial signals, but less suited for calculating intracranial electrocorticography (ECoG) signals. We demonstrate that this approach can serve as a powerful tool for investigating the neural origin of EEG/MEG signals. This is done through example studies of the single-neuron EEG contribution, the putative EEG contribution from calcium spikes, and from calculating EEG signals from large-scale neural network simulations. We also demonstrate how the simulated current dipoles can be used directly in combination with detailed head models, allowing for simulated EEG signals with an unprecedented level of biophysical details. In conclusion, this paper presents a framework for biophysically detailed modeling of EEG and MEG signals, which can be used to better our understanding of non-invasively measured neural activity in humans.
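The reduction of distributed transmembrane currents to a single current dipole, and that dipole's extracranial potential, can be sketched as follows. This uses the point-dipole approximation in an infinite homogeneous medium (arbitrary consistent units); the detailed head models the paper combines with the dipoles are omitted:

```python
import numpy as np

def current_dipole_moment(i_trans, pos):
    """Dipole moment p = sum_n I_n * r_n from transmembrane currents i_trans
    at compartment positions pos; well defined because the currents sum to zero."""
    i_trans = np.asarray(i_trans, dtype=float)
    pos = np.asarray(pos, dtype=float)
    assert abs(i_trans.sum()) < 1e-9, "Kirchhoff: transmembrane currents must sum to 0"
    return i_trans @ pos

def dipole_potential(p, r_vec, sigma=0.3):
    """Far-field potential of a point dipole in an infinite homogeneous medium
    of conductivity sigma: phi = (p . r_hat) / (4 pi sigma r^2)."""
    r = np.linalg.norm(r_vec)
    return p @ r_vec / (4.0 * np.pi * sigma * r**3)
```

The potential falls off as 1/r^2 and reverses sign across the dipole's equatorial plane, which is why distant EEG/MEG electrodes see only the summed dipole, whereas nearby ECoG electrodes resolve details the single-dipole reduction discards.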
Collapse
Affiliation(s)
- Solveig Næss
- Department of Informatics, University of Oslo, Oslo 0316, Norway
| | - Geir Halnes
- Faculty of Science and Technology, Norwegian University of Life Sciences, 1432 Ås, Norway
| | - Espen Hagen
- Department of Physics, University of Oslo, Oslo 0316, Norway
| | - Donald J Hagler
- Department of Radiology, University of California, La Jolla, CA 92093, USA
| | - Anders M Dale
- Department of Radiology, University of California, La Jolla, CA 92093, USA; Department of Neurosciences, University of California, La Jolla, CA 92093, USA
| | - Gaute T Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, 1432 Ås, Norway; Department of Physics, University of Oslo, Oslo 0316, Norway.
| | - Torbjørn V Ness
- Faculty of Science and Technology, Norwegian University of Life Sciences, 1432 Ås, Norway.
| |
Collapse
|
24
|
Chizhov A, Merkulyeva N. Refractory density model of cortical direction selectivity: Lagged-nonlagged, transient-sustained, and On-Off thalamic neuron-based mechanisms and intracortical amplification. PLoS Comput Biol 2020; 16:e1008333. [PMID: 33052899 PMCID: PMC7605712 DOI: 10.1371/journal.pcbi.1008333] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 11/02/2020] [Accepted: 09/12/2020] [Indexed: 11/18/2022] Open
Abstract
A biophysically detailed description of the mechanisms of primary vision is still being developed. We have incorporated a simplified, filter-based description of retino-thalamic visual signal processing into the detailed, conductance-based refractory density description of the neuronal population activity of the primary visual cortex. We compared four mechanisms of direction selectivity (DS), three of them being based on asymmetrical projections of different types of thalamic neurons to the cortex, distinguishing between (i) lagged and nonlagged, (ii) transient and sustained, and (iii) On and Off neurons. The fourth mechanism implies a lack of subcortical bias and is an epiphenomenon of intracortical interactions between orientation columns. The simulations of the cortical response to moving gratings have verified that the first three mechanisms provide DS to an extent comparable with experimental data and that the biophysical model realistically reproduces characteristics of the visual cortex activity, such as membrane potential, firing rate, and synaptic conductances. The proposed model reveals the difference between the mechanisms in both the intact and the silenced cortex, favoring the second mechanism. In the fourth case, DS is weaker but significant; it completely vanishes in the silenced cortex. DS in the On-Off mechanism derives from the nonlinear interactions within the orientation map. Results of simulations can help to identify a prevailing mechanism of DS in V1. This is a step towards a comprehensive biophysical modeling of the primary visual system in the framework of the population rate coding concept. A major mechanism that underlies tuning of cortical neurons to the direction of a moving stimulus is still debated.
Considering the visual cortex structured with orientation-selective columns, we have realized and compared in our biophysically detailed mathematical model four hypothetical mechanisms of direction selectivity (DS) known from experiments. The present model extends our previous model, which was tuned to experimental data on excitability in slices and reproduces orientation tuning effects in vivo. In simulations, we have found that the convergence of inputs from so-called transient and sustained (or lagged and nonlagged) thalamic neurons in the cortex provides an initial bias for DS, whereas cortical interactions amplify the tuning. In the absence of any bias, DS emerges as an epiphenomenon of the orientation map. In the case of a biased convergence of On- and Off-thalamic inputs, DS also emerges with the help of the intracortical interactions on the orientation map. Thus, we have proposed a comprehensive description of primary vision and revealed characteristic features of different mechanisms of DS in the visual cortex with columnar structure.
Collapse
Affiliation(s)
- Anton Chizhov
- Ioffe Institute, St.-Petersburg, Russia
- Sechenov Institute of Evolutionary Physiology and Biochemistry of RAS, St.-Petersburg, Russia
| | | |
Collapse
|
25
|
Salehi S, A Dehaqani MR, Noudoost B, Esteky H. Distinct mechanisms of face representation by enhancive and suppressive neurons of the inferior temporal cortex. J Neurophysiol 2020; 124:1216-1228. [PMID: 32902342 DOI: 10.1152/jn.00203.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Face-selective neurons in the inferior temporal (IT) cortex respond to faces by either increasing (ENH) or decreasing (SUP) their spiking activities compared with their baseline. Although nearly half of IT face neurons are selectively suppressed by face stimulation, their role in face representation is not clear. To address this issue, we recorded the spiking activities and local field potential (LFP) from IT cortex of three monkeys while they viewed a large set of visual stimuli. LFP high-gamma (HG-LFP) power indicated the presence of both ENH and SUP face-selective neural clusters in IT cortex. The magnitude of HG-LFP power of the recording sites was correlated with the magnitude of change in the evoked spiking activities of its constituent neurons for both ENH and SUP face clusters. Spatial distribution of the ENH and SUP face clusters suggests the presence of a complex and heterogeneous face hypercluster organization in IT cortex. Importantly, ENH neurons conveyed more face category and SUP neurons conveyed more face identity information at both the single-unit and neuronal population levels. Onset and peak of suppressive single-unit, neuronal population, and HG-LFP power activities lagged those of the ENH ones. These results demonstrate that the IT neuronal code for face representation is optimized by increasing sparseness through selective suppression of a subset of face neurons. We suggest that IT cortex contains spatial clusters of both ENH and SUP face neurons with distinct specialized functional roles in face representation.NEW & NOTEWORTHY Electrophysiological and imaging studies have suggested that face information is encoded by a network of clusters of enhancive face-selective neurons in the visual cortex of man and monkey. We show that nearly half of face-selective neurons are suppressed by face stimulation. The suppressive neurons form spatial clusters and convey more face identity information than the enhancive face neurons.
Our results suggest the presence of two neuronal subsystems for coarse and fine face information processing.
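Population sparseness, which the authors argue is increased by selective suppression, is commonly quantified with the Treves-Rolls measure (a standard metric, not necessarily the exact one used in this study):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness of a population response: (mean r)^2 / mean(r^2).
    Equals 1/n for a one-hot response and 1.0 for a perfectly uniform response,
    so lower values indicate a sparser code."""
    r = np.asarray(rates, dtype=float)
    return r.mean() ** 2 / np.mean(r ** 2)
```

Suppressing a subset of responses drives this value down, consistent with the interpretation that SUP neurons sharpen the population code for face identity.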
Collapse
Affiliation(s)
- Sina Salehi
- Shiraz Neuroscience Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
| | - Mohammad Reza A Dehaqani
- Cognitive Systems Laboratory, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
| | - Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, Utah
| | - Hossein Esteky
- Research Group for Brain and Cognitive Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
Collapse
|
26
|
Sivagnanam S, Gorman W, Doherty D, Neymotin SA, Fang S, Hovhannisyan H, Lytton WW, Dura-Bernal S. Simulating Large-scale Models of Brain Neuronal Circuits using Google Cloud Platform. PEARC20: Practice and Experience in Advanced Research Computing 2020: Catch the Wave (July 27-31, 2020, Portland, OR; virtual conference) 2020; 2020:505-509. [PMID: 35098264 DOI: 10.1145/3311790.3399621] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Biophysically detailed modeling provides an unmatched method to integrate data from many disparate experimental studies, and to manipulate and explore with high precision the resulting brain circuit simulation. We developed a detailed model of brain motor cortex circuits, simulating over 10,000 biophysically detailed neurons and 30 million synaptic connections. Optimization and evaluation of the cortical model parameters and responses were achieved via parameter exploration using grid-search parameter sweeps and evolutionary algorithms. This involves running tens of thousands of simulations, requiring significant computational resources. This paper describes our experience in setting up and using Google Cloud Platform (GCP) with Slurm to run these large-scale simulations. We describe best practices and solutions to the issues that arose during the process, and present preliminary results from running simulations on GCP.
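The grid-search sweeps mentioned above amount to expanding a grid of candidate parameter values into one job per combination, which can then be dispatched as cluster tasks. A minimal sketch in Python; the parameter names are hypothetical, not taken from the paper:

```python
from itertools import product

def grid_sweep(param_grid):
    """Expand a dict mapping parameter names to candidate values into one
    flat dict per simulation job, as used in grid-search parameter sweeps."""
    names = sorted(param_grid)
    return [dict(zip(names, values))
            for values in product(*(param_grid[n] for n in names))]

# Hypothetical sweep axes; each job dict would be rendered into one
# Slurm array task's command line on the cloud cluster.
grid = {"exc_weight": [0.5, 1.0], "input_rate_hz": [5, 10, 20]}
jobs = grid_sweep(grid)
print(len(jobs))  # 2 x 3 = 6 simulations
```

Evolutionary optimization would replace the exhaustive product with iterative selection, but the job-expansion and dispatch machinery is the same.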
Collapse
Affiliation(s)
- Subhashini Sivagnanam
- State University of New York DMC, Brooklyn NY; San Diego Supercomputer Center / University California San Diego, La Jolla CA
| | - William W Lytton
- State University of New York DMC, Brooklyn NY; King's County Hospital, Brooklyn NY
| | - Salvador Dura-Bernal
- State University of New York DMC, Brooklyn NY; Nathan Kline Institute for Psychiatric Research, Orangeburg NY USA
| |
Collapse
|
27
|
Poirazi P, Papoutsi A. Illuminating dendritic function with computational models. Nat Rev Neurosci 2020; 21:303-321. [PMID: 32393820 DOI: 10.1038/s41583-020-0301-7] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/25/2020] [Indexed: 02/06/2023]
Abstract
Dendrites have always fascinated researchers: from the artistic drawings by Ramon y Cajal to the beautiful recordings of today, neuroscientists have been striving to unravel the mysteries of these structures. Theoretical work in the 1960s predicted important dendritic effects on neuronal processing, establishing computational modelling as a powerful technique for their investigation. Since then, modelling of dendrites has been instrumental in driving neuroscience research in a targeted manner, providing experimentally testable predictions that range from the subcellular level to the systems level, and their relevance extends to fields beyond neuroscience, such as machine learning and artificial intelligence. Validation of modelling predictions often requires - and drives - new technological advances, thus closing the loop with theory-driven experimentation that moves the field forward. This Review features the most important, to our understanding, contributions of modelling of dendritic computations, including those pending experimental verification, and highlights studies of successful interactions between the modelling and experimental neuroscience communities.
Collapse
Affiliation(s)
- Panayiota Poirazi
- Institute of Molecular Biology & Biotechnology, Foundation for Research & Technology - Hellas, Heraklion, Crete, Greece.
| | - Athanasia Papoutsi
- Institute of Molecular Biology & Biotechnology, Foundation for Research & Technology - Hellas, Heraklion, Crete, Greece
| |
Collapse
|
28
|
Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 2020; 106:388-403.e18. [DOI: 10.1016/j.neuron.2020.01.040] [Citation(s) in RCA: 90] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2019] [Revised: 10/17/2019] [Accepted: 01/27/2020] [Indexed: 01/08/2023]
|
29
|
An electrodiffusive, ion conserving Pinsky-Rinzel model with homeostatic mechanisms. PLoS Comput Biol 2020; 16:e1007661. [PMID: 32348299 PMCID: PMC7213750 DOI: 10.1371/journal.pcbi.1007661] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 05/11/2020] [Accepted: 04/07/2020] [Indexed: 02/05/2023] Open
Abstract
In most neuronal models, ion concentrations are assumed to be constant, and effects of concentration variations on ionic reversal potentials, or of ionic diffusion on electrical potentials, are not accounted for. Here, we present the electrodiffusive Pinsky-Rinzel (edPR) model, which we believe is the first multicompartmental neuron model that accounts for electrodiffusive ion concentration dynamics in a way that ensures a biophysically consistent relationship between ion concentrations, electrical charge, and electrical potentials in both the intra- and extracellular space. The edPR model is an expanded version of the two-compartment Pinsky-Rinzel (PR) model of a hippocampal CA3 neuron. Unlike the PR model, the edPR model includes homeostatic mechanisms and ion-specific leakage currents, and keeps track of all ion concentrations (Na+, K+, Ca2+, and Cl−), electrical potentials, and electrical conductivities in the intra- and extracellular space. The edPR model reproduces the membrane potential dynamics of the PR model for moderate firing activity. For higher activity levels, or when homeostatic mechanisms are impaired, the homeostatic mechanisms fail to maintain ion concentrations close to baseline, and the edPR model diverges from the PR model as it accounts for effects of concentration changes on neuronal firing. We envision that the edPR model will be useful for the field in three main ways. Firstly, as it relaxes commonly made modeling assumptions, the edPR model can be used to test the validity of these assumptions under various firing conditions, as we show here for a few selected cases. Secondly, the edPR model should supplement the PR model when simulating scenarios where ion concentrations are expected to vary over time. Thirdly, being applicable to conditions with failed homeostasis, the edPR model opens the door to simulating a range of pathological conditions, such as spreading depression or epilepsy.
Neurons generate their electrical signals by letting ions pass through their membranes. Despite this fact, most models of neurons apply the simplifying assumption that ion concentrations remain effectively constant during neural activity. This assumption is often quite good, as neurons contain a set of homeostatic mechanisms that make sure that ion concentrations vary quite little under normal circumstances. However, under some conditions, these mechanisms can fail, and ion concentrations can vary quite dramatically. Standard models are thus not able to simulate such conditions. Here, we present what to our knowledge is the first multicompartmental neuron model that accounts for ion concentration variations in a way that ensures complete and consistent ion concentration and charge conservation. In this work, we use the model to explore under which activity conditions the ion concentration variations become important for predicting the neurodynamics. We expect the model to be of great value for the field of neuroscience, as it can be used to simulate a range of pathological conditions, such as spreading depression or epilepsy, which are associated with large changes in extracellular ion concentrations.
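The effect the edPR model captures, and that fixed-concentration models miss by construction, follows directly from the Nernst equation: ionic reversal potentials shift when concentrations change. A minimal sketch with illustrative textbook K+ concentrations (not values from the paper):

```python
from math import log

def nernst_mV(z, c_out_mM, c_in_mM, temp_K=310.0):
    """Nernst reversal potential (mV) for an ion of valence z; this is the
    quantity that drifts when ion concentrations vary, which models with
    fixed concentrations cannot capture."""
    R, F = 8.314, 96485.0  # gas constant J/(mol*K), Faraday constant C/mol
    return 1000.0 * R * temp_K / (z * F) * log(c_out_mM / c_in_mM)

# Resting K+ gradient vs. elevated extracellular K+ of the kind seen
# during spreading depression (illustrative textbook concentrations).
e_k_rest = nernst_mV(+1, 4.0, 140.0)
e_k_high = nernst_mV(+1, 10.0, 140.0)
print(round(e_k_rest), round(e_k_high))  # E_K depolarizes as [K+]out rises
```

A shift of this size in the K+ reversal potential changes leak and repolarizing currents enough to alter firing, which is why the edPR model diverges from the PR model once homeostasis fails.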
Collapse
|
30
|
Chou ZZ, Yu GJ, Berger TW. Generation of Granule Cell Dendritic Morphologies by Estimating the Spatial Heterogeneity of Dendritic Branching. Front Comput Neurosci 2020; 14:23. [PMID: 32327990 PMCID: PMC7160759 DOI: 10.3389/fncom.2020.00023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Accepted: 03/13/2020] [Indexed: 11/13/2022] Open
Abstract
Biological realism of dendritic morphologies is important for simulating electrical stimulation of brain tissue. By adding point process modeling and conditional sampling to existing generation strategies, we provide a novel means of reproducing the nuanced branching behavior that occurs in different layers of granule cell dendritic morphologies. In this study, a heterogeneous Poisson point process was used to simulate branching events. Conditional distributions were then used to select branch angles depending on the orthogonal distance to the somatic plane. The proposed method was compared to an existing generation tool and a control version of the proposed method that used a homogeneous Poisson point process. Morphologies were generated with each method and then compared to a set of digitally reconstructed neurons. The introduction of a conditionally dependent branching rate resulted in the generation of morphologies that more accurately reproduced the emergent properties of dendritic material per layer, Sholl intersections, and proximal passive current flow. Conditional dependence was critically important for the generation of realistic granule cell dendritic morphologies.
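A heterogeneous (inhomogeneous) Poisson point process of the kind described above can be sampled by standard thinning of a homogeneous process. A minimal sketch with a hypothetical depth-dependent branching rate, not the rates fitted in the study:

```python
import random

def branch_points(rate_fn, rate_max, depth_um, seed=0):
    """Sample branch-event positions on [0, depth_um] from an inhomogeneous
    Poisson process by thinning a homogeneous process of rate rate_max;
    rate_fn(x) must never exceed rate_max."""
    rng = random.Random(seed)
    events, x = [], 0.0
    while True:
        x += rng.expovariate(rate_max)            # candidate gap, homogeneous
        if x > depth_um:
            return events
        if rng.random() < rate_fn(x) / rate_max:  # keep with ratio of rates
            events.append(x)

# Hypothetical rate (events per um): highest near the soma, decaying
# linearly toward the distal end of the layer.
rate = lambda x: 0.05 * (1.0 - x / 300.0)
pts = branch_points(rate, 0.05, 300.0)
print(len(pts), all(0.0 < p <= 300.0 for p in pts))
```

Making the rate depend on depth (and branch angles conditional on distance to the somatic plane, as in the paper) is what distinguishes this from the homogeneous control condition.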
Collapse
Affiliation(s)
- Zane Z Chou
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| | - Gene J Yu
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| | - Theodore W Berger
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| |
Collapse
|
31
|
Skaar JEW, Stasik AJ, Hagen E, Ness TV, Einevoll GT. Estimation of neural network model parameters from local field potentials (LFPs). PLoS Comput Biol 2020; 16:e1007725. [PMID: 32155141 PMCID: PMC7083334 DOI: 10.1371/journal.pcbi.1007725] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Revised: 03/20/2020] [Accepted: 02/12/2020] [Indexed: 11/20/2022] Open
Abstract
Most modelling in systems neuroscience has been descriptive: neural representations, such as ‘receptive fields’, have been found by statistically correlating neural activity with sensory input. In the traditional physics approach to modelling, hypotheses are represented by mechanistic models based on the underlying building blocks of the system, and candidate models are validated by comparison with experiments. Until now, validation of mechanistic cortical network models has been based on comparison with neuronal spikes, found from the high-frequency part of extracellular electrical potentials. In this computational study we investigated to what extent the low-frequency part of the signal, the local field potential (LFP), can be used to validate and infer properties of mechanistic cortical network models. In particular, we asked whether the LFP can be used to accurately estimate synaptic connection weights in the underlying network. We considered the thoroughly analysed Brunel network, comprising an excitatory and an inhibitory population of recurrently connected leaky integrate-and-fire (LIF) neurons. This model exhibits a high diversity of spiking network dynamics depending on the values of only three network parameters. The LFP generated by the network was computed using a hybrid scheme in which spikes computed from the point-neuron network were replayed on biophysically detailed multicompartmental neurons. We assessed how accurately the three model parameters could be estimated from power spectra of stationary ‘background’ LFP signals by application of convolutional neural nets (CNNs). All network parameters could be very accurately estimated, suggesting that LFPs indeed can be used for network model validation. Most of what we have learned about brain networks in vivo has come from the measurement of spikes (action potentials) recorded by extracellular electrodes.
The low-frequency part of these signals, the local field potential (LFP), contains unique information about how dendrites in neuronal populations integrate synaptic inputs, but has so far played a lesser role. To investigate whether the LFP can be used to validate network models, we computed LFP signals for a recurrent network model (the Brunel network) for which the ground-truth parameters are known. By application of convolutional neural nets (CNNs) we found that the synaptic weights indeed could be accurately estimated from ‘background’ LFP signals, suggesting a future key role for LFP in development of network models.
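The point-neuron dynamics underlying the Brunel network can be sketched with a single leaky integrate-and-fire unit; units (mV, ms) and parameter values below follow common Brunel-network conventions but are illustrative, not the paper's exact settings:

```python
def simulate_lif(i_ext, t_stop_ms, dt=0.1, tau_ms=20.0, v_rest=0.0,
                 v_th=20.0, v_reset=10.0, t_ref_ms=2.0):
    """Forward-Euler simulation of one leaky integrate-and-fire neuron
    driven by a constant input i_ext (expressed in mV, i.e. current times
    membrane resistance); returns the list of spike times in ms."""
    v, t, last_spike, spikes = v_rest, 0.0, float("-inf"), []
    while t < t_stop_ms:
        if t - last_spike >= t_ref_ms:              # outside refractory period
            v += dt * (-(v - v_rest) + i_ext) / tau_ms
            if v >= v_th:                           # threshold crossing: spike
                spikes.append(t)
                v, last_spike = v_reset, t
        t += dt
    return spikes

print(len(simulate_lif(25.0, 200.0)))  # suprathreshold drive -> tonic firing
```

In the hybrid scheme, spike trains from such point neurons are replayed onto morphologically detailed neurons to generate the LFP whose power spectrum the CNN then inverts.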
Collapse
Affiliation(s)
- Jan-Eirik W. Skaar
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | | | - Espen Hagen
- Department of Physics, University of Oslo, Oslo, Norway
| | - Torbjørn V. Ness
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
| | - Gaute T. Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
| |
Collapse
|
32
|
Dai K, Hernando J, Billeh YN, Gratiy SL, Planas J, Davison AP, Dura-Bernal S, Gleeson P, Devresse A, Dichter BK, Gevaert M, King JG, Van Geit WAH, Povolotsky AV, Muller E, Courcol JD, Arkhipov A. The SONATA data format for efficient description of large-scale network models. PLoS Comput Biol 2020; 16:e1007696. [PMID: 32092054 PMCID: PMC7058350 DOI: 10.1371/journal.pcbi.1007696] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Revised: 03/05/2020] [Accepted: 01/28/2020] [Indexed: 12/04/2022] Open
Abstract
Increasing availability of comprehensive experimental datasets and of high-performance computing resources is driving rapid growth in scale, complexity, and biological realism of computational models in neuroscience. To support construction and simulation, as well as sharing of such large-scale models, a broadly applicable, flexible, and high-performance data format is necessary. To address this need, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. It is designed for memory and computational efficiency and works across multiple platforms. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is used in multiple modeling and visualization tools, and we also provide reference Application Programming Interfaces and model examples to catalyze further adoption. The SONATA format is free and open for the community to use and build upon with the goal of enabling efficient model building, sharing, and reproducibility. Neuroscience is experiencing a rapid growth of data streams characterizing composition, connectivity, and activity of brain networks in ever increasing detail. Data-driven modeling will be essential to integrate these multimodal and complex data into predictive simulations to advance our understanding of brain function and mechanisms. To enable efficient development and sharing of such large-scale models utilizing diverse data types, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is already supported by several popular tools for model building, simulations, and visualization.
It is free and open for everyone to use and build upon and will enable increased efficiency, reproducibility, and scientific exchange in the community.
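As a rough sketch of the layout idea: SONATA stores node populations in HDF5, with per-node attributes reached through a group indirection. Here a nested dict stands in for the HDF5 hierarchy so no h5py dependency is needed; the dataset names follow our reading of the published format description and should be checked against the SONATA specification:

```python
# A nested dict standing in for a SONATA nodes HDF5 file. Dataset names
# (node_id, node_type_id, node_group_id, node_group_index) are from our
# reading of the format description; the population name "v1" is made up.
nodes_file = {
    "nodes": {
        "v1": {
            "node_id":          [0, 1, 2],
            "node_type_id":     [100, 100, 200],  # keys into a node-types table
            "node_group_id":    [0, 0, 0],        # which attribute group
            "node_group_index": [0, 1, 2],        # row within that group
            "0": {                                # group 0: per-node attributes
                "x": [10.0, 20.0, 30.0],
                "y": [0.0, 0.0, 5.0],
            },
        }
    }
}

def attr(pop, node, name):
    """Look up a per-node attribute through the group indirection."""
    group = str(pop["node_group_id"][node])
    return pop[group][name][pop["node_group_index"][node]]

print(attr(nodes_file["nodes"]["v1"], 2, "x"))
```

The indirection lets nodes of different model types carry different attribute sets within one population, which is part of what makes the format both flexible and memory-efficient.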
Collapse
Affiliation(s)
- Kael Dai
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Juan Hernando
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Yazan N. Billeh
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Sergey L. Gratiy
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Judit Planas
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Andrew P. Davison
- Paris-Saclay Institute of Neuroscience UMR, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
| | - Salvador Dura-Bernal
- State University of New York Downstate Medical Center, Brooklyn, New York, United States of America
- Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
| | - Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
| | - Adrien Devresse
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Benjamin K. Dichter
- Department of Neurosurgery, Stanford University, Stanford, California, United States of America
- Biological Systems and Engineering, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
| | - Michael Gevaert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - James G. King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Werner A. H. Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Arseny V. Povolotsky
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Eilif Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Jean-Denis Courcol
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
| | - Anton Arkhipov
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| |
Collapse
|
33
|
Amsalem O, Eyal G, Rogozinski N, Gevaert M, Kumbhar P, Schürmann F, Segev I. An efficient analytical reduction of detailed nonlinear neuron models. Nat Commun 2020; 11:288. [PMID: 31941884 PMCID: PMC6962154 DOI: 10.1038/s41467-019-13932-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 12/09/2019] [Indexed: 12/31/2022] Open
Abstract
Detailed conductance-based nonlinear neuron models consisting of thousands of synapses are key for understanding the computational properties of single neurons and large neuronal networks, and for interpreting experimental results. Simulations of these models are computationally expensive, considerably curtailing their utility. Neuron_Reduce is a new analytical approach to reduce the morphological complexity and computational time of nonlinear neuron models. Synapses and active membrane channels are mapped to the reduced model while preserving their transfer impedance to the soma; synapses with identical transfer impedance are merged into one NEURON process that still retains their individual activation times. Neuron_Reduce accelerates the simulations by 40-250-fold for a variety of cell types and realistic numbers (10,000-100,000) of synapses while closely replicating voltage dynamics and specific dendritic computations. The reduced neuron models will enable realistic simulations of neural networks at unprecedented scale, including networks emerging from micro-connectomics efforts and biologically-inspired "deep networks". Neuron_Reduce is publicly available and is straightforward to implement.
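The merging rule described above, one process per transfer-impedance class while every original activation time is kept, can be sketched as a simple binning step. This is an illustrative simplification, not Neuron_Reduce's actual algorithm; names and the tolerance are made up:

```python
from collections import defaultdict

def merge_synapses(synapses, z_tol_mohm=1.0):
    """Group synapses whose somatic transfer impedance agrees to within
    z_tol_mohm (MOhm). Each group becomes a single point process driven
    by all of its members' activation times, so individual event timing
    is preserved even though the processes are merged."""
    bins = defaultdict(list)
    for syn_id, z_mohm, t_on_ms in synapses:
        bins[round(z_mohm / z_tol_mohm)].append((t_on_ms, syn_id))
    return [sorted(events) for events in bins.values()]

# (id, transfer impedance in MOhm, activation time in ms) -- toy values.
syns = [("a", 10.1, 5.0), ("b", 10.3, 12.0), ("c", 55.0, 7.0)]
merged = merge_synapses(syns)
print(len(merged))  # "a" and "b" share an impedance class; "c" stays alone
```

The speedup comes from simulating one process per impedance class instead of one per synapse, while the preserved event times keep the somatic voltage response close to the full model's.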
Collapse
Affiliation(s)
- Oren Amsalem
- Department of Neurobiology, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel.
| | - Guy Eyal
- Department of Neurobiology, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
| | - Noa Rogozinski
- Department of Neurobiology, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
| | - Michael Gevaert
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
| | - Pramod Kumbhar
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
| | - Felix Schürmann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202, Geneva, Switzerland
| | - Idan Segev
- Department of Neurobiology, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, 9190401, Jerusalem, Israel
| |
Collapse
|
34
|
An Optimizing Multi-platform Source-to-source Compiler Framework for the NEURON MODeling Language. Lecture Notes in Computer Science 2020. [PMCID: PMC7302241 DOI: 10.1007/978-3-030-50371-0_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Domain-specific languages (DSLs) play an increasingly important role in the generation of high performing software. They allow the user to exploit domain knowledge for the generation of more efficient code on target architectures. Here, we describe a new code generation framework (NMODL) for an existing DSL in the NEURON framework, a widely used software for massively parallel simulation of biophysically detailed brain tissue models. Existing NMODL DSL transpilers lack either essential features to generate optimized code or the capability to parse the diversity of existing models in the user community. Our NMODL framework has been tested against a large number of previously published user models and offers high-level domain-specific optimizations and symbolic algebraic simplifications before target code generation. NMODL implements multiple SIMD and SPMD targets optimized for modern hardware. When comparing NMODL-generated kernels with NEURON we observe a speedup of up to 20x, resulting in overall speedups of two different production simulations by ~7x. When compared to SIMD-optimized kernels that heavily relied on auto-vectorization by the compiler, a speedup of up to ~2x is still observed.
Collapse
|
35
|
Crone JC, Vindiola MM, Yu AB, Boothe DL, Beeman D, Oie KS, Franaszczuk PJ. Enabling Large-Scale Simulations With the GENESIS Neuronal Simulator. Front Neuroinform 2019; 13:69. [PMID: 31803040 PMCID: PMC6873326 DOI: 10.3389/fninf.2019.00069] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 10/30/2019] [Indexed: 11/13/2022] Open
Abstract
In this paper, we evaluate the computational performance of the GEneral NEural SImulation System (GENESIS) for large-scale simulations of neural networks. While many benchmark studies have been performed for large-scale simulations with leaky integrate-and-fire neurons or neuronal models with only a few compartments, this work focuses on higher-fidelity neuronal models represented by 50–74 compartments per neuron. After making some modifications to the source code for GENESIS and its parallel implementation, PGENESIS, particularly to improve memory usage, we find that PGENESIS is able to efficiently scale on supercomputing resources to network sizes as large as 9 × 10^6 neurons with 18 × 10^9 synapses and 2.2 × 10^6 neurons with 45 × 10^9 synapses. The modifications to GENESIS that enabled these large-scale simulations have been incorporated into the May 2019 Official Release of PGENESIS 2.4, available for download from the GENESIS web site (genesis-sim.org).
Collapse
Affiliation(s)
- Joshua C Crone
- Computational and Information Sciences Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Manuel M Vindiola
- Computational and Information Sciences Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Alfred B Yu
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - David L Boothe
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - David Beeman
- Department of Electrical, Computer, and Energy Engineering, University of Colorado, Boulder, CO, United States
| | - Kelvin S Oie
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States
| | - Piotr J Franaszczuk
- Human Research and Engineering Directorate, Army Research Laboratory, Aberdeen Proving Ground, MD, United States.,Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
| |
Collapse
|
36
|
Phenomenological models of NaV1.5. A side by side, procedural, hands-on comparison between Hodgkin-Huxley and kinetic formalisms. Sci Rep 2019; 9:17493. [PMID: 31767896 PMCID: PMC6877610 DOI: 10.1038/s41598-019-53662-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Accepted: 10/31/2019] [Indexed: 11/08/2022] Open
Abstract
Computational models of ion channels represent the building blocks of conductance-based, biologically inspired models of neurons and neural networks. Ion channels are still widely modelled by means of the formalism developed in the seminal work of Hodgkin and Huxley (HH), although the electrophysiological features of the channels are now known to be better fitted by kinetic Markov-type models. The present study aims to show why simplified Markov-type kinetic models are more suitable for ion channel modelling than HH ones, and how a manual optimization process can be rationally carried out for both. Previously published experimental data for an illustrative ion channel (NaV1.5) are exploited to develop a step-by-step optimization of the two models in close comparison. A conflicting practical limitation is recognized for the HH model, which supplies only one parameter to model two distinct electrophysiological behaviours. In addition, a step-by-step procedure is provided to correctly optimize the kinetic Markov-type model. Simplified Markov-type kinetic models are currently the best option to closely approximate the known complexity of the macroscopic currents of ion channels. Their optimization can be achieved through a rationally guided procedure, and yields models with a computational burden comparable to that of HH models.
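The relationship between the two formalisms compared above is easy to see in code: a single HH gating variable obeys exactly the ODE of a two-state Markov scheme C⇌O, so the HH gate is the simplest possible kinetic model, and richer Markov models differ by adding states (e.g., inactivated). A minimal sketch with illustrative rate constants:

```python
def hh_gate_step(m, alpha, beta, dt):
    """One Euler step of the HH gating ODE: dm/dt = alpha*(1-m) - beta*m."""
    return m + dt * (alpha * (1.0 - m) - beta * m)

def markov_co_step(p_open, alpha, beta, dt):
    """Two-state Markov scheme C --alpha--> O, O --beta--> C: the open
    probability follows the identical ODE as a single HH gate."""
    p_closed = 1.0 - p_open
    return p_open + dt * (alpha * p_closed - beta * p_open)

m = p = 0.0
for _ in range(1000):                       # 10 ms of simulated time
    m = hh_gate_step(m, alpha=2.0, beta=1.0, dt=0.01)
    p = markov_co_step(p, alpha=2.0, beta=1.0, dt=0.01)
print(abs(m - p) < 1e-12, round(m, 3))      # both settle at alpha/(alpha+beta)
```

The practical limitation noted in the abstract arises because independent HH gates constrain how activation and inactivation can interact, whereas extra Markov states decouple them.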
Collapse
|
37
|
Kumbhar P, Hines M, Fouriaux J, Ovcharenko A, King J, Delalondre F, Schürmann F. CoreNEURON : An Optimized Compute Engine for the NEURON Simulator. Front Neuroinform 2019; 13:63. [PMID: 31616273 PMCID: PMC6763692 DOI: 10.3389/fninf.2019.00063] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Accepted: 09/04/2019] [Indexed: 12/01/2022] Open
Abstract
The NEURON simulator has been developed over the past three decades and is widely used by neuroscientists to model the electrical activity of neuronal networks. Large network simulation projects using NEURON have supercomputer allocations that individually measure in the millions of core hours. Supercomputer centers are transitioning to next-generation architectures, and the work accomplished per core hour for these simulations could be improved by an order of magnitude if NEURON were able to better utilize those new hardware capabilities. In order to adapt NEURON to evolving computer architectures, the compute engine of the NEURON simulator has been extracted and optimized as a library called CoreNEURON. This paper presents the design, implementation, and optimizations of CoreNEURON. We describe how CoreNEURON can be used as a library with NEURON and then compare the performance of different network models on multiple architectures including IBM BlueGene/Q, Intel Skylake, Intel MIC and NVIDIA GPU. We show how CoreNEURON can simulate existing NEURON network models with 4-7x less memory usage and 2-7x less execution time while maintaining binary result compatibility with NEURON.
Collapse
Affiliation(s)
- Pramod Kumbhar
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Michael Hines
- Department of Neuroscience, Yale University, New Haven, CT, United States
| | - Jeremy Fouriaux
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Aleksandr Ovcharenko
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - James King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Fabien Delalondre
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| | - Felix Schürmann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
| |
Collapse
|
38
|
Abstract
Advances in integrated circuitry from the 1950s to the present day have enabled a revolution in technology across the world. However, fundamental limits of circuitry make further improvements through historically successful methods increasingly challenging. It is becoming clear that to address new challenges and applications, new methods of computation will be required. One promising field is neuromorphic engineering, a broad field that applies biologically inspired principles to create alternative computational architectures and methods. We address why neuromorphic engineering is one of the most promising fields within emerging computational technology, elaborating on its common principles and models, and discussing its current state and future challenges.
Affiliation(s)
- Wilkie Olin-Ammentorp, Nathaniel Cady: Colleges of Nanoscale Science and Engineering, SUNY Polytechnic Institute, Albany, NY, USA
|
39
|
Gouwens NW, Sorensen SA, Berg J, Lee C, Jarsky T, Ting J, Sunkin SM, Feng D, Anastassiou CA, Barkan E, Bickley K, Blesie N, Braun T, Brouner K, Budzillo A, Caldejon S, Casper T, Castelli D, Chong P, Crichton K, Cuhaciyan C, Daigle TL, Dalley R, Dee N, Desta T, Ding SL, Dingman S, Doperalski A, Dotson N, Egdorf T, Fisher M, de Frates RA, Garren E, Garwood M, Gary A, Gaudreault N, Godfrey K, Gorham M, Gu H, Habel C, Hadley K, Harrington J, Harris JA, Henry A, Hill D, Josephsen S, Kebede S, Kim L, Kroll M, Lee B, Lemon T, Link KE, Liu X, Long B, Mann R, McGraw M, Mihalas S, Mukora A, Murphy GJ, Ng L, Ngo K, Nguyen TN, Nicovich PR, Oldre A, Park D, Parry S, Perkins J, Potekhina L, Reid D, Robertson M, Sandman D, Schroedter M, Slaughterbeck C, Soler-Llavina G, Sulc J, Szafer A, Tasic B, Taskin N, Teeter C, Thatra N, Tung H, Wakeman W, Williams G, Young R, Zhou Z, Farrell C, Peng H, Hawrylycz MJ, Lein E, Ng L, Arkhipov A, Bernard A, Phillips JW, Zeng H, Koch C. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nat Neurosci 2019; 22:1182-1195. [PMID: 31209381 PMCID: PMC8078853 DOI: 10.1038/s41593-019-0417-0] [Citation(s) in RCA: 219] [Impact Index Per Article: 43.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 04/25/2019] [Indexed: 12/21/2022]
Abstract
Understanding the diversity of cell types in the brain has been an enduring challenge and requires detailed characterization of individual neurons in multiple dimensions. To systematically profile morpho-electric properties of mammalian neurons, we established a single-cell characterization pipeline using standardized patch-clamp recordings in brain slices and biocytin-based neuronal reconstructions. We built a publicly accessible online database, the Allen Cell Types Database, to display these datasets. Intrinsic physiological properties were measured from 1,938 neurons from the adult laboratory mouse visual cortex, morphological properties were measured from 461 reconstructed neurons, and 452 neurons had both measurements available. Quantitative features were used to classify neurons into distinct types using unsupervised methods. We established a taxonomy of morphologically and electrophysiologically defined cell types for this region of the cortex, with 17 electrophysiological types, 38 morphological types and 46 morpho-electric types. There was good correspondence with previously defined transcriptomic cell types and subclasses using the same transgenic mouse lines.
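To make the classification step concrete, the sketch below clusters a handful of invented, z-scored morpho-electric feature vectors with a minimal k-means loop. This is illustrative only and is not the authors' pipeline, which used more features, more cells, and more sophisticated unsupervised methods; the feature names and values are hypothetical:

```python
# Minimal sketch (not the published pipeline): unsupervised grouping of
# quantitative morpho-electric features with a tiny k-means loop.
import math

def kmeans(points, centroids, iters=20):
    """Plain k-means on small feature vectors with fixed initial centroids."""
    labels = []
    for _ in range(iters):
        # Assign each cell to its nearest centroid.
        labels = [min(range(len(centroids)),
                      key=lambda k: math.dist(p, centroids[k]))
                  for p in points]
        # Recompute each centroid as the mean of its assigned cells.
        for k in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centroids

# Hypothetical z-scored features: (input resistance, total dendritic length).
cells = [(-1.0, -0.9), (-1.2, -1.1), (-0.8, -1.0),   # putative type A
         (1.1, 0.9), (0.9, 1.2), (1.0, 1.0)]          # putative type B
labels, cents = kmeans(cells, centroids=[[-1.0, -1.0], [1.0, 1.0]])
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```

Each resulting label corresponds to a candidate cell type; in practice the number of clusters and their robustness must themselves be validated, which is part of what makes the taxonomy in this study nontrivial.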
Affiliation(s)
- All listed authors: Allen Institute for Brain Science, Seattle, Washington, USA
|
40
|
Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, Migliore M, Ness TV, Plesser HE, Schürmann F. The Scientific Case for Brain Simulations. Neuron 2019; 102:735-744. [DOI: 10.1016/j.neuron.2019.03.027] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Revised: 02/06/2019] [Accepted: 03/18/2019] [Indexed: 01/30/2023]
|