1
Schmitt O. Relationships and representations of brain structures, connectivity, dynamics and functions. Prog Neuropsychopharmacol Biol Psychiatry 2025; 138:111332. PMID: 40147809. DOI: 10.1016/j.pnpbp.2025.111332.
Abstract
The review explores the complex interplay between brain structures and their associated functions, presenting a diversity of hierarchical models that enhances our understanding of these relationships. Central to this approach are structure-function flow diagrams, which offer a visual representation of how specific neuroanatomical structures are linked to their functional roles. These diagrams are instrumental in mapping the intricate connections between different brain regions, providing a clearer understanding of how functions emerge from the underlying neural architecture. The study details innovative attempts to develop new functional hierarchies that integrate structural and functional data. These efforts leverage recent advancements in neuroimaging techniques such as fMRI, EEG, MEG, and PET, as well as computational models that simulate neural dynamics. By combining these approaches, the study seeks to create a more refined and dynamic hierarchy that can accommodate the brain's complexity, including its capacity for plasticity and adaptation. A significant focus is placed on the overlap of structures and functions within the brain. The manuscript acknowledges that many brain regions are multifunctional, contributing to different cognitive and behavioral processes depending on the context. This overlap highlights the need for a flexible, non-linear hierarchy that can capture the brain's intricate functional landscape. Moreover, the study examines the interdependence of these functions, emphasizing how the loss or impairment of one function can impact others. Another crucial aspect discussed is the brain's ability to compensate for functional deficits following neurological diseases or injuries. The investigation explores how the brain reorganizes itself, often through the recruitment of alternative neural pathways or the enhancement of existing ones, to maintain functionality despite structural damage. 
This compensatory mechanism underscores the brain's remarkable plasticity, demonstrating its ability to adapt and reconfigure itself in response to injury, thereby ensuring the continuation of essential functions. In conclusion, the study presents a system of brain functions that integrates structural, functional, and dynamic perspectives. It offers a robust framework for understanding how the brain's complex network of structures supports a wide range of cognitive and behavioral functions, with significant implications for both basic neuroscience and clinical applications.
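The compensation mechanism described above lends itself to a small worked example. The sketch below is a toy structure-function map, not the review's actual hierarchy; every region and function name is an illustrative placeholder:

```python
# Toy bipartite structure-function map: regions are multifunctional, and a
# lesioned region's functions can be taken over by intact regions that also
# support them. All names are simplified placeholders for illustration.

STRUCTURE_FUNCTIONS = {
    "hippocampus": {"episodic memory", "spatial navigation"},
    "prefrontal_cortex": {"working memory", "planning"},
    "parietal_cortex": {"spatial navigation", "attention"},
    "basal_ganglia": {"planning", "habit learning"},
}

def supported_functions(lesioned):
    """Functions still supported after removing a set of structures."""
    return set().union(*(funcs for s, funcs in STRUCTURE_FUNCTIONS.items()
                         if s not in lesioned))

def compensating_structures(function, lesioned):
    """Intact structures that can take over a given function."""
    return {s for s, funcs in STRUCTURE_FUNCTIONS.items()
            if function in funcs and s not in lesioned}

# Example: a hippocampal lesion loses episodic memory in this toy map, but
# spatial navigation persists because parietal cortex also supports it.
remaining = supported_functions({"hippocampus"})
```

In this toy map a function is lost only when every structure supporting it is damaged, which mirrors the review's point that overlap of structures and functions underlies compensation.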
Affiliation(s)
- Oliver Schmitt
- Medical School Hamburg - University of Applied Sciences and Medical University - Institute for Systems Medicine, Am Kaiserkai 1, Hamburg 20457, Germany; University of Rostock, Department of Anatomy, Gertrudenstr. 9, 18055 Rostock, Germany.
2
Xiao ZC, Lin KK, Young LS. Efficient models of cortical activity via local dynamic equilibria and coarse-grained interactions. Proc Natl Acad Sci U S A 2024; 121:e2320454121. PMID: 38923983. PMCID: PMC11228477. DOI: 10.1073/pnas.2320454121.
Abstract
Biologically detailed models of brain circuitry are challenging to build and simulate due to the large number of neurons, their complex interactions, and the many unknown physiological parameters. Simplified mathematical models are more tractable, but harder to evaluate when too far removed from neuroanatomy/physiology. We propose that a multiscale model, coarse-grained (CG) while preserving local biological details, offers the best balance between biological realism and computability. This paper presents such a model. Generally, CG models focus on the interaction between groups of neurons, here termed "pixels", rather than individual cells. In our case, dynamics are alternately updated at intra- and interpixel scales, with one informing the other, until convergence to equilibrium is achieved on both scales. An innovation is how we exploit the underlying biology: taking advantage of the similarity in local anatomical structures across large regions of the cortex, we model intrapixel dynamics as a single dynamical system driven by "external" inputs. These inputs vary with events external to the pixel, but their ranges can be estimated a priori. Precomputing and tabulating all potential local responses speeds up the updating procedure significantly compared to direct multiscale simulation. We illustrate our methodology using a model of the primate visual cortex. Except for local neuron-to-neuron variability (necessarily lost in any CG approximation), our model reproduces various features of large-scale network models at a tiny fraction of the computational cost. These include neuronal responses as a consequence of their orientation selectivity, a primary function of visual neurons.
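The precompute-and-tabulate scheme in this abstract can be illustrated with a toy sketch. Everything below (the saturating response function, the coupling constants, the convergence tolerance) is an assumption for illustration, not the paper's actual model:

```python
# Sketch of the tabulation idea: intrapixel dynamics are reduced to a
# response function of external drive, precomputed over its a-priori range,
# then looked up during alternating intra-/interpixel updates.

def local_response(drive):
    """Toy intrapixel equilibrium: firing rate saturating in the drive."""
    return drive / (1.0 + drive)

# Precompute local responses over the anticipated range of external drive.
N_BINS = 1000
MAX_DRIVE = 10.0
TABLE = [local_response(i * MAX_DRIVE / N_BINS) for i in range(N_BINS + 1)]

def lookup(drive):
    """Nearest-bin table lookup replacing direct intrapixel simulation."""
    i = min(N_BINS, max(0, round(drive / MAX_DRIVE * N_BINS)))
    return TABLE[i]

def relax(n_pixels=10, coupling=0.5, feedforward=1.0, tol=1e-9):
    """Alternate intra-/interpixel updates until both scales converge."""
    rates = [0.0] * n_pixels
    while True:
        new = []
        for i, _ in enumerate(rates):
            # Interpixel scale: drive = feedforward input + neighbor input.
            left, right = rates[i - 1], rates[(i + 1) % n_pixels]
            drive = feedforward + coupling * (left + right)
            # Intrapixel scale: equilibrium response read from the table.
            new.append(lookup(drive))
        if max(abs(a - b) for a, b in zip(new, rates)) < tol:
            return new
        rates = new

rates = relax()
```

The table is built once; afterwards each update is a lookup rather than a simulation, which is the source of the speedup the authors describe.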
Affiliation(s)
- Zhuo-Cheng Xiao
- New York University - East China Normal University Institute of Mathematical Sciences, New York University, Shanghai 200124, China
- Institute of Brain and Cognitive Science, New York University - East China Normal University, New York University, Shanghai 200124, China
- College of Art and Sciences, New York University, Shanghai 200124, China
- Kevin K. Lin
- Department of Mathematics, University of Arizona, Tucson, AZ 85721
- Lai-Sang Young
- Department of Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012
3
Monaco JD, Hwang GM. Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022; 16:1-13. PMID: 39129840. PMCID: PMC11306504. DOI: 10.1007/s12559-022-10081-9.
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies, properly conceived as reentrant dynamical flows and not merely as identified groups of neurons, may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
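The perceptual-control-theory loop the authors invoke can be sketched in a few lines: a system that acts to keep its perception at an internal reference, canceling a disturbance it never explicitly models. The gains and disturbance value below are illustrative assumptions:

```python
# Minimal perceptual control loop: output is varied to drive perceptual
# error to zero, so perception tracks the internal reference even though
# the controller never computes a model of the disturbance.

def control_loop(reference=1.0, disturbance=0.5, gain=5.0, steps=200):
    """Integrate a simple negative-feedback controller; return perception."""
    action = 0.0
    for _ in range(steps):
        perception = action + disturbance     # environment sums both
        error = reference - perception        # compare to internal goal
        action += 0.1 * gain * error          # leaky integral control
    return perception

p = control_loop()
```

Whatever constant disturbance is applied, the loop settles with perception at the reference, which is the "control of perception, not of output" point the synthesis leans on.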
Affiliation(s)
- Joseph D. Monaco
- Dept of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Grace M. Hwang
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
4
Tschirhart P, Segall K. BrainFreeze: Expanding the Capabilities of Neuromorphic Systems Using Mixed-Signal Superconducting Electronics. Front Neurosci 2021; 15:750748. PMID: 34992515. PMCID: PMC8724521. DOI: 10.3389/fnins.2021.750748.
Abstract
Superconducting electronics (SCE) is uniquely suited to implement neuromorphic systems. As a result, SCE has the potential to enable a new generation of neuromorphic architectures that can simultaneously provide scalability, programmability, biological fidelity, on-line learning support, efficiency and speed. Supporting all of these capabilities simultaneously has thus far proven to be difficult using existing semiconductor technologies. However, as the fields of computational neuroscience and artificial intelligence (AI) continue to advance, the need for architectures that can provide combinations of these capabilities will grow. In this paper, we will explain how superconducting electronics could be used to address this need by combining analog and digital SCE circuits to build large scale neuromorphic systems. In particular, we will show through detailed analysis that the available SCE technology is suitable for near term neuromorphic demonstrations. Furthermore, this analysis will establish that neuromorphic architectures built using SCE will have the potential to be significantly faster and more efficient than current approaches, all while supporting capabilities such as biologically suggestive neuron models and on-line learning. In the future, SCE-based neuromorphic systems could serve as experimental platforms supporting investigations that are not feasible with current approaches. Ultimately, these systems and the experiments that they support would enable the advancement of neuroscience and the development of more sophisticated AI.
Affiliation(s)
- Paul Tschirhart
- Advanced Technology Laboratory, Northrop Grumman, Linthicum, MD, United States
- Ken Segall
- Advanced Technology Laboratory, Northrop Grumman, Linthicum, MD, United States
- Department of Physics and Astronomy, Colgate University, Hamilton, NY, United States
5
Silvernagel MP, Ling AS, Nuyujukian P. A markerless platform for ambulatory systems neuroscience. Sci Robot 2021; 6:eabj7045. PMID: 34516749. DOI: 10.1126/scirobotics.abj7045.
Abstract
Motor systems neuroscience seeks to understand how the brain controls movement. To minimize confounding variables, large-animal studies typically constrain body movement from areas not under observation, ensuring consistent, repeatable behaviors. Such studies have fueled decades of research, but they may be artificially limiting the richness of neural data observed, preventing generalization to more natural movements and settings. Neuroscience studies of unconstrained movement would capture a greater range of behavior and a more complete view of neuronal activity, but instrumenting an experimental rig suitable for large animals presents substantial engineering challenges. Here, we present a markerless, full-body motion tracking and synchronized wireless neural electrophysiology platform for large, ambulatory animals. Composed of four depth (RGB-D) cameras that provide a 360° view of a 4.5-square-meter enclosed area, this system is designed to record a diverse range of neuroethologically relevant behaviors. This platform also allows for the simultaneous acquisition of hundreds of wireless neural recording channels in multiple brain regions. As behavioral and neuronal data are generated at rates below 200 megabytes per second, a single desktop can facilitate hours of continuous recording. This setup is designed for systems neuroscience and neuroengineering research, where synchronized kinematic behavior and neural data are the foundation for investigation. By enabling the study of previously unexplored movement tasks, this system can generate insights into the functioning of the mammalian motor system and provide a platform to develop brain-machine interfaces for unconstrained applications.
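The quoted data budget invites a quick back-of-envelope check; the disk size below is an assumed figure for illustration, not from the paper:

```python
# Back-of-envelope check of the stated data budget: at just under
# 200 MB/s, how much storage does an hour of recording consume, and how
# many hours fit on an assumed desktop disk?

MB_PER_S = 200
seconds_per_hour = 3600
gb_per_hour = MB_PER_S * seconds_per_hour / 1000   # decimal GB

disk_tb = 8                                        # assumed desktop array
hours_on_disk = disk_tb * 1000 / gb_per_hour

# → roughly 720 GB per hour, so an 8 TB disk holds about 11 hours,
# consistent with "hours of continuous recording" on a single desktop.
```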
Affiliation(s)
- Alissa S Ling
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Paul Nuyujukian
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA
6
Medaglia JD, Erickson B, Zimmerman J, Kelkar A. Personalizing neuromodulation. Int J Psychophysiol 2020; 154:101-110. PMID: 30685229. PMCID: PMC6824943. DOI: 10.1016/j.ijpsycho.2019.01.002.
Abstract
In the era of "big data", we are gaining rich person-specific information about neuroanatomy, neural function, and cognitive functions. However, the optimal ways to create precise approaches to optimize individuals' mental functions in health and disease are unclear. Multimodal analysis and modeling approaches can guide neuromodulation by combining anatomical networks, functional signal analysis, and cognitive neuroscience paradigms in single subjects. Our progress could be improved by moving from statistical fits to mechanistic models. Using transcranial magnetic stimulation as an example, we discuss how integrating methods with a focus on mechanisms could improve our predictions of TMS effects within individuals, refine our models of health and disease, and improve our treatments.
Affiliation(s)
- John D Medaglia
- Department of Psychology, Drexel University, Philadelphia, PA 19104, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Neurology, Drexel University, Philadelphia, PA, 19104, USA.
- Brian Erickson
- Department of Psychology, Drexel University, Philadelphia, PA 19104, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jared Zimmerman
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Apoorva Kelkar
- Department of Psychology, Drexel University, Philadelphia, PA 19104, USA
7
Cohen U, Chung S, Lee DD, Sompolinsky H. Separability and geometry of object manifolds in deep neural networks. Nat Commun 2020; 11:746. PMID: 32029727. PMCID: PMC7005295. DOI: 10.1038/s41467-020-14578-5.
Abstract
Stimuli are represented in the brain by the collective population responses of sensory neurons, and an object presented under varying conditions gives rise to a collection of neural population responses called an ‘object manifold’. Changes in the object representation along a hierarchical sensory system are associated with changes in the geometry of those manifolds, and recent theoretical progress connects this geometry with ‘classification capacity’, a quantitative measure of the ability to support object classification. Deep neural networks trained on object classification tasks are a natural testbed for the applicability of this relation. We show how classification capacity improves along the hierarchies of deep neural networks with different architectures. We demonstrate that changes in the geometry of the associated object manifolds underlie this improved capacity, and shed light on the functional roles different levels in the hierarchy play to achieve it, through orchestrated reduction of manifolds’ radius, dimensionality and inter-manifold correlations. Neural activity space or manifold that represents object information changes across the layers of a deep neural network. Here the authors present a theoretical account of the relationship between the geometry of the manifolds and the classification capacity of the neural networks.
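The geometric claim, that shrinking manifold radius relative to inter-center distance yields linear separability, can be illustrated with a toy example. This is not the paper's mean-field theory of classification capacity; all numbers and the 2-D setting are illustrative assumptions:

```python
# Toy illustration: each "object manifold" is a cloud of responses to one
# object under varying conditions. Shrinking its radius relative to the
# distance between manifold centers makes the classes linearly separable,
# mirroring the radius reduction reported along network hierarchies.

import random

def make_manifold(center, radius, n=200, rng=random.Random(0)):
    """Points for one object under varying conditions: a box of responses."""
    return [(center[0] + rng.uniform(-radius, radius),
             center[1] + rng.uniform(-radius, radius)) for _ in range(n)]

def linearly_separable(a, b):
    """Separable by the hyperplane bisecting the two manifold centroids?"""
    ca = (sum(p[0] for p in a) / len(a), sum(p[1] for p in a) / len(a))
    cb = (sum(p[0] for p in b) / len(b), sum(p[1] for p in b) / len(b))
    w = (cb[0] - ca[0], cb[1] - ca[1])                # normal direction
    mid = ((ca[0] + cb[0]) / 2, (ca[1] + cb[1]) / 2)  # bisecting point
    side = lambda p: (p[0] - mid[0]) * w[0] + (p[1] - mid[1]) * w[1]
    return all(side(p) < 0 for p in a) and all(side(p) > 0 for p in b)

early = (make_manifold((0, 0), 2.0), make_manifold((1, 0), 2.0))  # "early layer"
late = (make_manifold((0, 0), 0.2), make_manifold((1, 0), 0.2))   # "late layer"
```

With large radii the manifolds overlap and no hyperplane separates them; with small radii the same centers become separable, which is the radius-capacity link in miniature.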
Affiliation(s)
- Uri Cohen
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- SueYeon Chung
- Center for Brain Science, Harvard University, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Daniel D Lee
- Department of Electrical and Computer Engineering, Cornell Tech, New York, NY, USA
- Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; Center for Brain Science, Harvard University, Cambridge, MA, USA
8
Weisenburger S, Tejera F, Demas J, Chen B, Manley J, Sparks FT, Martínez Traub F, Daigle T, Zeng H, Losonczy A, Vaziri A. Volumetric Ca2+ Imaging in the Mouse Brain Using Hybrid Multiplexed Sculpted Light Microscopy. Cell 2019; 177:1050-1066.e14. PMID: 30982596. DOI: 10.1016/j.cell.2019.03.011.
Abstract
Calcium imaging using two-photon scanning microscopy has become an essential tool in neuroscience. However, in its typical implementation, the tradeoffs between fields of view, acquisition speeds, and depth restrictions in scattering brain tissue pose severe limitations. Here, using an integrated systems-wide optimization approach combined with multiple technical innovations, we introduce a new design paradigm for optical microscopy based on maximizing biological information while maintaining the fidelity of obtained neuron signals. Our modular design utilizes hybrid multi-photon acquisition and allows volumetric recording of neuroactivity at single-cell resolution within up to 1 × 1 × 1.22 mm volumes at up to 17 Hz in awake behaving mice. We establish the capabilities and potential of the different configurations of our imaging system at depth and across brain regions by applying it to in vivo recording of up to 12,000 neurons in mouse auditory cortex, posterior parietal cortex, and hippocampus.
Affiliation(s)
- Siegfried Weisenburger
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Frank Tejera
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Brandon Chen
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Fraser T Sparks
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Tanya Daigle
- Allen Institute for Brain Science, Seattle, WA, USA
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Attila Losonczy
- Department of Neuroscience, Columbia University, New York, NY, USA; The Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA; Research Institute of Molecular Pathology, Vienna, Austria; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA.
9
Henderson JA, Gong P. Functional mechanisms underlie the emergence of a diverse range of plasticity phenomena. PLoS Comput Biol 2018; 14:e1006590. PMID: 30419014. PMCID: PMC6258383. DOI: 10.1371/journal.pcbi.1006590.
Abstract
Diverse plasticity mechanisms are orchestrated to shape the spatiotemporal dynamics underlying brain functions. However, why these plasticity rules emerge and how their dynamics interact with neural activity to give rise to complex neural circuit dynamics remains largely unknown. Here we show that both Hebbian and homeostatic plasticity rules emerge from a functional perspective of neuronal dynamics whereby each neuron learns to encode its own activity in the population activity, so that the activity of the presynaptic neuron can be decoded from the activity of its postsynaptic neurons. We explain how a range of experimentally observed plasticity phenomena with widely separated time scales emerge from learning this encoding function, including STDP and its frequency dependence, and metaplasticity. We show that when implemented in neural circuits, these plasticity rules naturally give rise to essential neural response properties, including variable neural dynamics with balanced excitation and inhibition, and approximately log-normal distributions of synaptic strengths, while simultaneously encoding a complex real-world visual stimulus. These findings establish a novel function-based account of diverse plasticity mechanisms, providing a unifying framework relating plasticity, dynamics and neural computation. Many experiments have documented a variety of ways in which the connectivity strengths between neurons change in response to the activity of neurons. These changes are an important part of learning. However, it is not understood how such a diverse range of observations can be understood as consequences of an underlying algorithm used by brains for learning. In order to understand such a learning algorithm it is also necessary to understand the neural computation that is being learned, that is, how the functions of the brain are encoded in the activity of its neurons and its connectivity. 
In this work we propose a simple way in which information can be encoded and decoded in a network of neurons for operating on real-world stimuli, and how this can be learned using two fundamental plasticity rules that change the strength of connections between neurons in response to neural activity. Surprisingly, many experimental observations result as consequences of this approach, indicating that studying the learning of function provides a novel framework for unifying plasticity, dynamics, and neural computation.
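The two rule families discussed here, Hebbian spike-timing-dependent plasticity and homeostatic scaling, can be sketched in conventional textbook form. The time constants and amplitudes below are generic values for illustration, not fitted to this paper's learning rules:

```python
# Pairwise exponential STDP window (Hebbian) plus a homeostatic rescaling
# that pulls a neuron's summed input weight back to a fixed target, the
# two widely separated time scales the abstract refers to.

import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre."""
    if dt_ms > 0:    # pre before post: potentiate
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:            # post before pre: depress
        return -a_minus * math.exp(dt_ms / tau_ms)

def homeostatic_rescale(weights, target_sum=1.0):
    """Multiplicative synaptic scaling toward a fixed total input."""
    s = sum(weights)
    return [w * target_sum / s for w in weights]

dw_causal = stdp_dw(10.0)      # pre fires 10 ms before post
dw_acausal = stdp_dw(-10.0)    # post fires 10 ms before pre
scaled = homeostatic_rescale([0.5, 1.0, 0.5], target_sum=1.0)
```

The fast, sign-asymmetric STDP term and the slow, sign-preserving rescaling act together, which is the combination the authors derive from their encoding objective.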
Affiliation(s)
- James A. Henderson
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- ARC Centre of Excellence for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
- Pulin Gong
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- ARC Centre of Excellence for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
10
11
França TFA, Monserrat JM. How the Hippocampus Represents Memories: Making Sense of Memory Allocation Studies. Bioessays 2018; 40:e1800068. PMID: 30176065. DOI: 10.1002/bies.201800068.
Abstract
In recent years there has been a wealth of studies investigating how memories are allocated in the hippocampus. Some of those studies showed that it is possible to manipulate the identity of neurons recruited to represent a given memory without affecting the memory's behavioral expression. Those findings raised questions about how the hippocampus represents memories, with some researchers arguing that hippocampal neurons do not represent fixed stimuli. Herein, an alternative hypothesis is argued. Neurons in high-order brain regions can be tuned to multiple dimensions, forming complex, abstract representations. It is argued that such complex receptive fields allow those neurons to show some flexibility in their responses while still representing relatively fixed sets of stimuli. Moreover, it is pointed out that changes induced by artificial manipulation of cell assemblies are not completely redundant: the observed behavioral redundancy does not imply cognitive redundancy, as different, but similar, memories may induce the same behavior.
Affiliation(s)
- Thiago F A França
- Programa de Pós-graduação em Ciências Fisiológicas, Universidade Federal do Rio Grande-FURG, Rio Grande, Rio Grande do Sul, Brazil
- José M Monserrat
- Programa de Pós-graduação em Ciências Fisiológicas, Universidade Federal do Rio Grande-FURG, Rio Grande, Rio Grande do Sul, Brazil; Instituto de Ciências Biológicas, Universidade Federal do Rio Grande (FURG), Rio Grande, Rio Grande do Sul, Brazil
12
Neftci EO. Data and Power Efficient Intelligence with Neuromorphic Learning Machines. iScience 2018; 5:52-68. PMID: 30240646. PMCID: PMC6123858. DOI: 10.1016/j.isci.2018.06.010.
Abstract
The success of deep networks and recent industry involvement in brain-inspired computing is igniting a widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data.
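Point (1), approximate gradient descent with binary activations, can be illustrated with a straight-through-estimator toy. The task and every hyperparameter below are illustrative assumptions, not taken from the review:

```python
# Training through a binary (spike-like) nonlinearity: the forward pass
# uses a hard threshold, while the backward pass substitutes a boxcar
# surrogate for the true (zero almost everywhere) gradient.

def binarize(x):
    """Forward pass: hard threshold, as binary/spiking hardware computes."""
    return 1.0 if x >= 0 else 0.0

def surrogate_grad(x, width=1.0):
    """Backward pass: boxcar surrogate in place of the true zero gradient."""
    return 1.0 if abs(x) < width else 0.0

# Learn OR with one binary unit: y = binarize(w1*x1 + w2*x2 + b).
w = [0.0, 0.0]
b = -0.1
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
for _ in range(100):
    for (x1, x2), target in data:
        pre = w[0] * x1 + w[1] * x2 + b
        err = binarize(pre) - target
        g = err * surrogate_grad(pre)        # approximate gradient
        w[0] -= 0.1 * g * x1
        w[1] -= 0.1 * g * x2
        b -= 0.1 * g

preds = [binarize(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
```

Only thresholded activations and local error-modulated updates are used, which is why this family of rules maps comparatively well onto neuromorphic substrates.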
Affiliation(s)
- Emre O Neftci
- Department of Cognitive Sciences, UC Irvine, Irvine, CA 92697-5100, USA; Department of Computer Science, UC Irvine, Irvine, CA 92697-5100, USA.
13
Antic SD, Hines M, Lytton WW. Embedded ensemble encoding hypothesis: The role of the "Prepared" cell. J Neurosci Res 2018; 96:1543-1559. PMID: 29633330. DOI: 10.1002/jnr.24240.
Abstract
We here reconsider current theories of neural ensembles in the context of recent discoveries about neuronal dendritic physiology. The key physiological observation is that the dendritic plateau potential produces sustained depolarization of the cell body (amplitude 10-20 mV, duration 200-500 ms). Our central hypothesis is that synaptically-evoked dendritic plateau potentials lead to a prepared state of a neuron that favors spike generation. The plateau both depolarizes the cell toward spike threshold, and provides faster response to inputs through a shortened membrane time constant. As a result, the speed of synaptic-to-action potential (AP) transfer is faster during the plateau phase. Our hypothesis relates the changes from "resting" to "depolarized" neuronal state to changes in ensemble dynamics and in network information flow. The plateau provides the Prepared state (sustained depolarization of the cell body) with a time window of 200-500 ms. During this time, a neuron can tune into ongoing network activity and synchronize spiking with other neurons to provide a coordinated Active state (robust firing of somatic APs), which would permit "binding" of signals through coordination of neural activity across a population. The transient Active ensemble of neurons is embedded in the longer-lasting Prepared ensemble of neurons. We hypothesize that "embedded ensemble encoding" may be an important organizing principle in networks of neurons.
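The latency argument can be caricatured with a leaky integrate-and-fire unit: the same synaptic drive crosses threshold sooner when the baseline is depolarized and the membrane time constant is shortened. All values below are illustrative, chosen within the ranges quoted in the abstract:

```python
# Leaky integrate-and-fire caricature of the Prepared-state hypothesis:
# a plateau depolarizes the baseline by ~15 mV and shortens the membrane
# time constant, so a step of synaptic drive reaches threshold sooner.

def latency_to_spike(v_rest, tau_ms, v_thresh=-50.0, drive_mv=25.0,
                     dt=0.01, t_max=100.0):
    """Time (ms) for V to cross threshold under a step of synaptic drive."""
    v, t = v_rest, 0.0
    while v < v_thresh and t < t_max:
        v += dt / tau_ms * (v_rest + drive_mv - v)   # leaky relaxation
        t += dt
    return t

resting = latency_to_spike(v_rest=-70.0, tau_ms=20.0)   # resting state
prepared = latency_to_spike(v_rest=-55.0, tau_ms=8.0)   # Prepared (plateau)
```

The Prepared unit spikes in a couple of milliseconds versus tens of milliseconds at rest, the faster synaptic-to-AP transfer the hypothesis attributes to the plateau phase.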
Affiliation(s)
- Srdjan D Antic
- Department of Neuroscience, Institute for Systems Genomics, Stem Cell Institute, UConn Health, Farmington, Connecticut
- Michael Hines
- Department of Neuroscience, Yale School of Medicine, New Haven, Connecticut
- William W Lytton
- Physiology and Pharmacology, Neurology, Biomedical Engineering, SUNY Downstate Medical Center, Brooklyn, New York; Department of Neurology, Kings County Hospital, Brooklyn, New York
14
Manninen T, Havela R, Linne ML. Computational Models for Calcium-Mediated Astrocyte Functions. Front Comput Neurosci 2018; 12:14. PMID: 29670517. PMCID: PMC5893839. DOI: 10.3389/fncom.2018.00014.
Abstract
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes, which truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
15
van Gerven M. Computational Foundations of Natural Intelligence. Front Comput Neurosci 2017; 11:112. [PMID: 29375355 PMCID: PMC5770642 DOI: 10.3389/fncom.2017.00112] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2017] [Accepted: 11/22/2017] [Indexed: 01/14/2023] Open
Abstract
New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight about how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence.
Affiliation(s)
- Marcel van Gerven
- Computational Cognitive Neuroscience Lab, Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
16
de Vasconcelos NAP, Soares-Cunha C, Rodrigues AJ, Ribeiro S, Sousa N. Coupled variability in primary sensory areas and the hippocampus during spontaneous activity. Sci Rep 2017; 7:46077. [PMID: 28393914 PMCID: PMC5385523 DOI: 10.1038/srep46077] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Accepted: 03/10/2017] [Indexed: 12/25/2022] Open
Abstract
The cerebral cortex is an anatomically divided and functionally specialized structure. It includes distinct areas, which work on different states over time. The structural features of spiking activity in sensory cortices have been characterized during spontaneous and evoked activity. However, the coordination among cortical and sub-cortical neurons during spontaneous activity across different states remains poorly characterized. We addressed this issue by studying the temporal coupling of spiking variability recorded from primary sensory cortices and hippocampus of anesthetized or freely behaving rats. During spontaneous activity, spiking variability was highly correlated across primary cortical sensory areas at both small and large spatial scales, whereas the cortico-hippocampal correlation was modest. This general pattern of spiking variability was observed under urethane anesthesia, as well as during waking, slow-wave sleep and rapid-eye-movement sleep, and was unchanged by novel stimulation. These results support the notion that primary sensory areas are strongly coupled during spontaneous activity.
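The quantity at the heart of this study, temporally coupled spiking variability, can be illustrated with binned spike counts. A minimal pure-Python sketch follows; the bin width, rates, and shared slow drive are invented for the demo and are not the paper's recording parameters:

```python
import random

def bin_counts(spike_times, t_max, width):
    """Bin spike times into spike counts per time bin."""
    counts = [0] * int(t_max / width)
    for t in spike_times:
        counts[min(int(t / width), len(counts) - 1)] += 1
    return counts

def pearson(x, y):
    """Pearson correlation between two equal-length count series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def toy_train(drive, width, noise=2):
    """Spike train whose per-bin count follows a shared drive plus private noise."""
    spikes = []
    for i, d in enumerate(drive):
        n_spk = int(1 + 6 * d) + random.randrange(noise)
        spikes += [(i + random.random()) * width for _ in range(n_spk)]
    return spikes

random.seed(0)
t_max, width = 20.0, 0.5
drive = [random.random() for _ in range(int(t_max / width))]  # shared slow modulation
a = bin_counts(toy_train(drive, width), t_max, width)
b = bin_counts(toy_train(drive, width), t_max, width)
r = pearson(a, b)  # high: both trains follow the same drive
```

Two trains sharing the slow drive yield a large count correlation, the signature of the coupled variability reported between primary sensory areas; two trains with independent drives would yield `r` near zero.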
Affiliation(s)
- Nivaldo A. P. de Vasconcelos
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, 4710-057, Portugal
- ICVS/3B’s - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Carina Soares-Cunha
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, 4710-057, Portugal
- ICVS/3B’s - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Ana João Rodrigues
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, 4710-057, Portugal
- ICVS/3B’s - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Sidarta Ribeiro
- Brain Institute, Federal University of Rio Grande do Norte (UFRN), Natal, RN, 59056-450, Brazil
- Nuno Sousa
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, 4710-057, Portugal
- ICVS/3B’s - PT Government Associate Laboratory, Braga/Guimarães, Portugal
17
Naumann EA, Fitzgerald JE, Dunn TW, Rihel J, Sompolinsky H, Engert F. From Whole-Brain Data to Functional Circuit Models: The Zebrafish Optomotor Response. Cell 2017; 167:947-960.e20. [PMID: 27814522 DOI: 10.1016/j.cell.2016.10.019] [Citation(s) in RCA: 161] [Impact Index Per Article: 20.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Revised: 05/24/2016] [Accepted: 10/11/2016] [Indexed: 02/06/2023]
Abstract
Detailed descriptions of brain-scale sensorimotor circuits underlying vertebrate behavior remain elusive. Recent advances in zebrafish neuroscience offer new opportunities to dissect such circuits via whole-brain imaging, behavioral analysis, functional perturbations, and network modeling. Here, we harness these tools to generate a brain-scale circuit model of the optomotor response, an orienting behavior evoked by visual motion. We show that such motion is processed by diverse neural response types distributed across multiple brain regions. To transform sensory input into action, these regions sequentially integrate eye- and direction-specific sensory streams, refine representations via interhemispheric inhibition, and demix locomotor instructions to independently drive turning and forward swimming. While experiments revealed many neural response types throughout the brain, modeling identified the dimensions of functional connectivity most critical for the behavior. We thus reveal how distributed neurons collaborate to generate behavior and illustrate a paradigm for distilling functional circuit models from whole-brain data.
Affiliation(s)
- Eva A Naumann
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Cell and Developmental Biology, University College London, London WC1E 6BT, UK
- Timothy W Dunn
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Jason Rihel
- Department of Cell and Developmental Biology, University College London, London WC1E 6BT, UK
- Haim Sompolinsky
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Racah Institute of Physics and the Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Florian Engert
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
18
A new neuroinformatics approach to personalized medicine in neurology: The Virtual Brain. Curr Opin Neurol 2016; 29:429-36. [PMID: 27224088 DOI: 10.1097/wco.0000000000000344] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
PURPOSE OF REVIEW An exciting advance in the field of neuroimaging is the acquisition and processing of very large data sets (so called 'big data'), permitting large-scale inferences that foster a greater understanding of brain function in health and disease. Yet what we are clearly lacking are quantitative integrative tools to translate this understanding to the individual level to lay the basis for personalized medicine. RECENT FINDINGS Here we address this challenge through a review on how the relatively new field of neuroinformatics modeling has the capacity to track brain network function at different levels of inquiry, from microscopic to macroscopic and from the localized to the distributed. In this context, we introduce a new and unique multiscale approach, The Virtual Brain (TVB), that effectively models individualized brain activity, linking large-scale (macroscopic) brain dynamics with biophysical parameters at the microscopic level. We also show how TVB modeling provides unique biological interpretable data in epilepsy and stroke. SUMMARY These results establish the basis for a deliberate integration of computational biology and neuroscience into clinical approaches for elucidating cellular mechanisms of disease. In the future, this can provide the means to create a collection of disease-specific models that can be applied on the individual level to personalize therapeutic interventions. VIDEO ABSTRACT.
19
Nair SS, Paré D, Vicentic A. Biologically based neural circuit modelling for the study of fear learning and extinction. NPJ SCIENCE OF LEARNING 2016; 1:16015. [PMID: 29541482 PMCID: PMC5846682 DOI: 10.1038/npjscilearn.2016.15] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2016] [Revised: 09/09/2016] [Accepted: 09/19/2016] [Indexed: 05/25/2023]
Abstract
The neuronal systems that promote protective defensive behaviours have been studied extensively using Pavlovian conditioning. In this paradigm, an initially neutral-conditioned stimulus is paired with an aversive unconditioned stimulus leading the subjects to display behavioural signs of fear. Decades of research into the neural bases of this simple behavioural paradigm uncovered that the amygdala, a complex structure comprised of several interconnected nuclei, is an essential part of the neural circuits required for the acquisition, consolidation and expression of fear memory. However, emerging evidence from the confluence of electrophysiological, tract tracing, imaging, molecular, optogenetic and chemogenetic methodologies, reveals that fear learning is mediated by multiple connections between several amygdala nuclei and their distributed targets, dynamical changes in plasticity in local circuit elements as well as neuromodulatory mechanisms that promote synaptic plasticity. To uncover these complex relations and analyse multi-modal data sets acquired from these studies, we argue that biologically realistic computational modelling, in conjunction with experiments, offers an opportunity to advance our understanding of the neural circuit mechanisms of fear learning and to address how their dysfunction may lead to maladaptive fear responses in mental disorders.
Affiliation(s)
- Satish S Nair
- Department of Electrical and Computer Engineering, University of Missouri, Columbia, MO, USA
- Denis Paré
- Center for Molecular and Behavioral Neuroscience, Rutgers University—Newark, Newark, NJ, USA
- Aleksandra Vicentic
- Division of Neuroscience and Basic Behavioral Science, National Institute of Mental Health, Rockville, MD, USA
20
Prevedel R, Verhoef AJ, Pernía-Andrade AJ, Weisenburger S, Huang BS, Nöbauer T, Fernández A, Delcour JE, Golshani P, Baltuska A, Vaziri A. Fast volumetric calcium imaging across multiple cortical layers using sculpted light. Nat Methods 2016; 13:1021-1028. [PMID: 27798612 DOI: 10.1038/nmeth.4040] [Citation(s) in RCA: 104] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2016] [Accepted: 09/29/2016] [Indexed: 01/17/2023]
Abstract
Although whole-organism calcium imaging in small and semi-transparent animals has been demonstrated, capturing the functional dynamics of large-scale neuronal circuits in awake behaving mammals at high speed and resolution has remained one of the main frontiers in systems neuroscience. Here we present a method based on light sculpting that enables unbiased single- and dual-plane high-speed (up to 160 Hz) calcium imaging as well as in vivo volumetric calcium imaging of a mouse cortical column (0.5 mm × 0.5 mm × 0.5 mm) at single-cell resolution and fast volume rates (3-6 Hz). We achieved this by tailoring the point-spread function of our microscope to the structures of interest while maximizing the signal-to-noise ratio using a home-built fiber laser amplifier with pulses that are synchronized to the imaging voxel speed. This enabled in vivo recording of calcium dynamics of several thousand neurons across cortical layers and in the hippocampus of awake behaving mice.
Affiliation(s)
- Robert Prevedel
- Research Institute of Molecular Pathology, Vienna, Austria; Max F. Perutz Laboratories Support GmbH, University of Vienna, Vienna, Austria; Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS), University of Vienna, Vienna, Austria; European Molecular Biology Laboratory, Heidelberg, Germany
- Aart J Verhoef
- Photonics Institute, TU Wien, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Siegfried Weisenburger
- Research Institute of Molecular Pathology, Vienna, Austria; The Rockefeller University, New York, New York, USA
- Ben S Huang
- Department of Neurology and Psychiatry, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA
- Tobias Nöbauer
- Research Institute of Molecular Pathology, Vienna, Austria; The Rockefeller University, New York, New York, USA
- Alma Fernández
- Photonics Institute, TU Wien, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Peyman Golshani
- Department of Neurology and Psychiatry, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA; West Los Angeles Virginia Medical Center, Los Angeles, California, USA
- Alipasha Vaziri
- Research Institute of Molecular Pathology, Vienna, Austria; Max F. Perutz Laboratories Support GmbH, University of Vienna, Vienna, Austria; Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS), University of Vienna, Vienna, Austria; The Rockefeller University, New York, New York, USA
21
Szilágyi A, Zachar I, Fedor A, de Vladar HP, Szathmáry E. Breeding novel solutions in the brain: a model of Darwinian neurodynamics. F1000Res 2016; 5:2416. [PMID: 27990266 PMCID: PMC5130073 DOI: 10.12688/f1000research.9630.2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 06/21/2017] [Indexed: 01/03/2023] Open
Abstract
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
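The Darwinian loop described here, replication of candidate solutions with noisy copying, occasional recombination, and selection, can be sketched in a few lines. This toy replaces the attractor networks with bit-string "activity patterns" and uses a simple top-half selection as a stand-in for the winners-share-all action selection loop; the target pattern, mutation rate, and population size are all invented for the demo:

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # hypothetical "problem" the search must solve

def fitness(p):
    """Number of bits matching the target pattern."""
    return sum(a == b for a, b in zip(p, TARGET))

def noisy_copy(p, err=0.05):
    """Replication with transmission noise (variation source (ii) in the abstract)."""
    return [b ^ 1 if random.random() < err else b for b in p]

def recombine(p, q):
    """Recombinant pattern occasionally produced during replication."""
    cut = random.randrange(1, len(p))
    return p[:cut] + q[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    winners = pop[:10]                      # crude stand-in for winners-share-all selection
    offspring = []
    for _ in range(10):
        if random.random() < 0.2:
            offspring.append(recombine(random.choice(winners), random.choice(winners)))
        else:
            offspring.append(noisy_copy(random.choice(winners)))
    pop = winners + offspring               # stored solutions persist; variants compete

best = max(pop, key=fitness)
```

Because the winners are retained each generation, the best stored solution never degrades, while noisy copies and recombinants supply the hereditary variation on which selection acts, the two processes (selection of stored solutions, search for novel ones) documented in the paper.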
Affiliation(s)
- András Szilágyi
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- István Zachar
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Anna Fedor
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Harold P de Vladar
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Eörs Szathmáry
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary; Evolutionary Systems Research Group, MTA Ecological Research Centre, Tihany, Hungary
22
Szilágyi A, Zachar I, Fedor A, de Vladar HP, Szathmáry E. Breeding novel solutions in the brain: a model of Darwinian neurodynamics. F1000Res 2016; 5:2416. [PMID: 27990266 DOI: 10.12688/f1000research.9630.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/20/2016] [Indexed: 01/15/2023] Open
Abstract
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain - recurrent neural networks (acting as attractors), the action selection loop and implicit working memory - to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
Affiliation(s)
- András Szilágyi
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- István Zachar
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Anna Fedor
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Harold P de Vladar
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Eörs Szathmáry
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary; Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary; Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany; Institute of Advanced Studies, Kőszeg, H-9730, Hungary; Evolutionary Systems Research Group, MTA Ecological Research Centre, Tihany, Hungary
23
Gaiteri C, Mostafavi S, Honey CJ, De Jager PL, Bennett DA. Genetic variants in Alzheimer disease - molecular and brain network approaches. Nat Rev Neurol 2016; 12:413-27. [PMID: 27282653 PMCID: PMC5017598 DOI: 10.1038/nrneurol.2016.84] [Citation(s) in RCA: 69] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Genetic studies in late-onset Alzheimer disease (LOAD) are aimed at identifying core disease mechanisms and providing potential biomarkers and drug candidates to improve clinical care of AD. However, owing to the complexity of LOAD, including pathological heterogeneity and disease polygenicity, extraction of actionable guidance from LOAD genetics has been challenging. Past attempts to summarize the effects of LOAD-associated genetic variants have used pathway analysis and collections of small-scale experiments to hypothesize functional convergence across several variants. In this Review, we discuss how the study of molecular, cellular and brain networks provides additional information on the effects of LOAD-associated genetic variants. We then discuss emerging combinations of these omic data sets into multiscale models, which provide a more comprehensive representation of the effects of LOAD-associated genetic variants at multiple biophysical scales. Furthermore, we highlight the clinical potential of mechanistically coupling genetic variants and disease phenotypes with multiscale brain models.
Affiliation(s)
- Chris Gaiteri
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
- Sara Mostafavi
- Departments of Statistics and Medical Genetics; Centre for Molecular Medicine and Therapeutics, University of British Columbia, 950 West 28th Avenue, Vancouver, British Columbia V5Z 4H4, Canada
- Christopher J Honey
- Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor Sidney Smith Hall, Toronto, Ontario M5S 3G3, Canada
- Philip L De Jager
- Program in Translational NeuroPsychiatric Genomics, Institute for the Neurosciences, Departments of Neurology and Psychiatry, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA
- David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
24
Analysis of complex neural circuits with nonlinear multidimensional hidden state models. Proc Natl Acad Sci U S A 2016; 113:6538-43. [PMID: 27222584 DOI: 10.1073/pnas.1606280113] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
A universal need in understanding complex networks is the identification of individual information channels and their mutual interactions under different conditions. In neuroscience, our premier example, networks made up of billions of nodes dynamically interact to bring about thought and action. Granger causality is a powerful tool for identifying linear interactions, but handling nonlinear interactions remains an unmet challenge. We present a nonlinear multidimensional hidden state (NMHS) approach that achieves interaction strength analysis and decoding of networks with nonlinear interactions by including latent state variables for each node in the network. We compare NMHS to Granger causality in analyzing neural circuit recordings and simulations, improvised music, and sociodemographic data. We conclude that NMHS significantly extends the scope of analyses of multidimensional, nonlinear networks, notably in coping with the complexity of the brain.
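The Granger-causality baseline that NMHS is compared against can be sketched as a variance-ratio test on lag-1 linear autoregressions: the past of node y "Granger-causes" node x if adding it to x's autoregression shrinks the prediction error. A minimal pure-Python sketch; the coupled toy series, its coefficients (0.5, 0.8), and the single lag are invented for the demo and are not the paper's data or full method:

```python
import math
import random

def ols_residual_var(X, y):
    """Least-squares fit of y on X (with intercept) via normal equations; returns residual variance."""
    n, k = len(y), len(X[0]) + 1
    A = [[1.0] + row for row in X]                      # design matrix with intercept column
    M = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(A[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                                # Gaussian elimination, fine for tiny k
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(M[r][c] * beta[c] for c in range(r + 1, k))) / M[r][r]
    resid = [y[i] - sum(beta[j] * A[i][j] for j in range(k)) for i in range(n)]
    return sum(e * e for e in resid) / n

def granger_lag1(x, y):
    """Log variance ratio: how much the past of y improves lag-1 prediction of x."""
    tgt = x[1:]
    restricted = ols_residual_var([[x[i]] for i in range(len(x) - 1)], tgt)
    full = ols_residual_var([[x[i], y[i]] for i in range(len(x) - 1)], tgt)
    return math.log(restricted / full)

random.seed(2)
n = 2000
y = [random.gauss(0, 1)]
x = [random.gauss(0, 1)]
for _ in range(n - 1):                                  # y drives x with a one-step delay
    y_prev = y[-1]
    y.append(0.5 * y_prev + random.gauss(0, 1))
    x.append(0.8 * y_prev + random.gauss(0, 0.5))

gxy = granger_lag1(x, y)  # large: past y predicts x
gyx = granger_lag1(y, x)  # near zero: past x adds nothing about y
```

The asymmetry `gxy >> gyx` recovers the simulated one-way coupling; the NMHS approach extends this idea past the linear case by attaching latent state variables to each node.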
25
Aćimović J, Mäki-Marttunen T, Linne ML. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model. Front Neuroanat 2015; 9:76. [PMID: 26113811 PMCID: PMC4461825 DOI: 10.3389/fnana.2015.00076] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2014] [Accepted: 05/18/2015] [Indexed: 11/13/2022] Open
Abstract
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape the large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in the small neuronal networks to large scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size the obtained networks share similarities with the uniform random networks as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
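Two of the graph measures named here, the clustering coefficient and the minimal (shortest) path length whose ratio underlies the small-world coefficient, can be computed directly on an adjacency structure. A pure-Python sketch on a small ring-with-shortcut toy graph (the graph itself is invented for illustration):

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def mean_path_length(adj):
    """Average shortest-path length over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

# 5-node ring (0-1-2-3-4-0) plus one shortcut edge 0-2
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
C = sum(clustering_coefficient(adj, v) for v in adj) / len(adj)  # mean clustering
L = mean_path_length(adj)                                        # mean shortest path
```

A small-world network has high `C` relative to a degree-matched random graph while keeping `L` comparably small; in the paper these quantities are obtained analytically from the neurite statistics rather than from explicit graphs.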
Affiliation(s)
- Jugoslava Aćimović
- Computational Neuroscience Group, Department of Signal Processing, Tampere University of Technology, Tampere, Finland
- Tuomo Mäki-Marttunen
- Computational Neuroscience Group, Department of Signal Processing, Tampere University of Technology, Tampere, Finland; Psychosis Research Centre, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Marja-Leena Linne
- Computational Neuroscience Group, Department of Signal Processing, Tampere University of Technology, Tampere, Finland
26
Ferguson KA, Huh CYL, Amilhon B, Williams S, Skinner FK. Simple, biologically-constrained CA1 pyramidal cell models using an intact, whole hippocampus context. F1000Res 2014; 3:104. [PMID: 25383182 PMCID: PMC4215760 DOI: 10.12688/f1000research.3894.1] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 05/06/2014] [Indexed: 01/24/2023] Open
Abstract
The hippocampus is a heavily studied brain structure due to its involvement in learning and memory. Detailed models of excitatory, pyramidal cells in hippocampus have been developed using a range of experimental data. These models have been used to help us understand, for example, the effects of synaptic integration and voltage gated channel densities and distributions on cellular responses. However, these cellular outputs need to be considered from the perspective of the networks in which they are embedded. Using modeling approaches, if cellular representations are too detailed, it quickly becomes computationally unwieldy to explore large network simulations. Thus, simple models are preferable, but at the same time they need to have a clear, experimental basis so as to allow physiologically based understandings to emerge. In this article, we describe the development of simple models of CA1 pyramidal cells, as derived in a well-defined experimental context of an intact, whole hippocampus preparation expressing population oscillations. These models are based on the intrinsic properties and frequency-current profiles of CA1 pyramidal cells, and can be used to build, fully examine, and analyze large networks.
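The class of simple, frequency-current-constrained cell models described here can be illustrated with an Izhikevich-style two-variable neuron integrated by forward Euler. Note the parameter values below are the generic regular-spiking set from the Izhikevich model literature, not the CA1 pyramidal-cell fits reported in this paper:

```python
def izhikevich(I, T=1000.0, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrate a two-variable Izhikevich-type neuron for T ms of
    constant drive I; return the number of spikes emitted."""
    v, u = -65.0, b * -65.0     # membrane potential and recovery variable
    spikes = 0
    for _ in range(int(T / dt)):
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        if v >= 30.0:           # spike cut-off, then reset
            v, u = c, u + d
            spikes += 1
    return spikes

f0 = izhikevich(0.0)    # below rheobase: the cell stays silent
f10 = izhikevich(10.0)  # suprathreshold drive: tonic firing
```

Sweeping `I` and recording the spike count traces out the frequency-current (f-I) profile that the paper uses to constrain its CA1 models; such two-variable units are cheap enough to embed by the thousands in the large network simulations the abstract motivates.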
Affiliation(s)
- Katie A Ferguson
- Toronto Western Research Institute, University Health Network, Toronto, Ontario, M5T 2S8, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, M5S 1A1, Canada
- Carey Y L Huh
- Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montreal, Quebec, H4G 1X6, Canada
- Benedicte Amilhon
- Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montreal, Quebec, H4G 1X6, Canada
- Sylvain Williams
- Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montreal, Quebec, H4G 1X6, Canada
- Frances K Skinner
- Toronto Western Research Institute, University Health Network, Toronto, Ontario, M5T 2S8, Canada; Department of Medicine (Neurology), Physiology, University of Toronto, Toronto, Ontario, M5S 1A1, Canada
27