1. Kaster M, Czappa F, Butz-Ostendorf M, Wolf F. Building a realistic, scalable memory model with independent engrams using a homeostatic mechanism. Front Neuroinform 2024; 18:1323203. PMID: 38706939; PMCID: PMC11066267; DOI: 10.3389/fninf.2024.1323203.
Abstract
Memory formation is usually associated with Hebbian learning and synaptic plasticity, which changes synaptic strengths but omits structural changes. A recent study suggests that structural plasticity can also lead to silent memory engrams, reproducing a conditioned learning paradigm with neuron ensembles. However, that study is limited by its synapse-formation scheme, which permits only one memory engram. Our model overcomes this, allowing many engrams to form simultaneously while retaining high neurophysiological accuracy, e.g., as found in cortical columns. We achieve this by substituting the random synapse formation with the Model of Structural Plasticity. In this homeostatic model, neurons regulate their activity by growing and pruning synaptic elements based on their current activity. Forming synapses based on the Euclidean distance between neurons, using a scalable algorithm, allows us to simulate 4 million neurons with 343 memory engrams. These engrams do not interfere with one another by default, yet we can change the simulation parameters to form long-reaching associations. Our model's analysis shows that homeostatic engram formation requires a certain spatiotemporal order of events. It predicts that synaptic pruning precedes and enables synaptic engram formation, and that pruning does not occur as a mere compensatory response to enduring synapse potentiation, as in Hebbian plasticity with synaptic scaling. Our model paves the way for simulations addressing further inquiries, ranging from memory chains and hierarchies to complex memory systems comprising areas with different learning mechanisms.
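The homeostatic rule described above lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: the function names, the linear growth rule, and the Gaussian distance kernel are assumptions standing in for the actual Model of Structural Plasticity.

```python
import numpy as np

def update_elements(activity, elements, target=0.5, nu=0.1):
    """Homeostatic growth rule (illustrative): neurons below their target
    activity grow synaptic elements; neurons above it retract (prune) them.
    Element counts cannot go negative."""
    return np.maximum(elements + nu * (target - activity), 0.0)

def pairing_probability(pos_a, pos_b, sigma=1.0):
    """Distance-dependent pairing kernel (assumed Gaussian): nearby free
    axonal and dendritic elements are more likely to combine into a synapse."""
    d = np.linalg.norm(pos_a - pos_b)
    return np.exp(-d**2 / (2 * sigma**2))
```

A neuron firing below its set point thus accumulates free elements, and a scalable pairing pass over candidate partners, weighted by this kernel, wires them preferentially to close neighbours.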
Affiliation(s)
- Marvin Kaster
- Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Fabian Czappa
- Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Markus Butz-Ostendorf
- Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
- Data Science, Translational Medicine and Clinical Pharmacology, Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach, Germany
- Felix Wolf
- Laboratory for Parallel Programming, Department of Computer Science, Technical University of Darmstadt, Darmstadt, Germany
2. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. PMID: 38145591; DOI: 10.1016/j.plrev.2023.12.006.
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness, and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
3. Wong HHW, Watt AJ, Sjöström PJ. Synapse-specific burst coding sustained by local axonal translation. Neuron 2024; 112:264-276.e6. PMID: 37944518; DOI: 10.1016/j.neuron.2023.10.011.
Abstract
Neurotransmission in the brain is unreliable, suggesting that high-frequency spike bursts rather than individual spikes carry the neural code. For instance, cortical pyramidal neurons rely on bursts in memory formation. Protein synthesis is another key factor in long-term synaptic plasticity and learning but is widely considered unnecessary for synaptic transmission. Here, however, we show that burst neurotransmission at synapses between neocortical layer 5 pyramidal cells depends on axonal protein synthesis linked to presynaptic NMDA receptors and mTOR. We localized protein synthesis to axons with laser axotomy and puromycylation live imaging. We whole-cell recorded connected neurons to reveal how translation sustained readily releasable vesicle pool size and replenishment rate. We live imaged axons and found sparsely docked RNA granules, suggesting synapse-specific regulation. In agreement, translation boosted neurotransmission onto excitatory but not inhibitory basket or Martinotti cells. Local axonal mRNA translation is thus a hitherto unappreciated principle for sustaining burst coding at specific synapse types.
Affiliation(s)
- Hovy Ho-Wai Wong
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, Department of Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC H3G 1A4, Canada.
- Alanna J Watt
- Biology Department, McGill University, Montreal, QC H3G 0B1, Canada
- P Jesper Sjöström
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, Department of Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC H3G 1A4, Canada.
4. Grosu GF, Hopp AV, Moca VV, Bârzan H, Ciuparu A, Ercsey-Ravasz M, Winkel M, Linde H, Mureșan RC. The fractal brain: scale-invariance in structure and dynamics. Cereb Cortex 2023; 33:4574-4605. PMID: 36156074; PMCID: PMC10110456; DOI: 10.1093/cercor/bhac363.
Abstract
The past 40 years have witnessed extensive research on fractal structure and scale-free dynamics in the brain. Although considerable progress has been made, a comprehensive picture has yet to emerge, and the findings still need to be linked to a mechanistic account of brain function. Here, we review these concepts, connecting observations across different levels of organization, from both a structural and functional perspective. We argue that, paradoxically, the level of cortical circuits is the least understood from a structural point of view and perhaps the best studied from a dynamical one. We further link observations about scale-freeness and fractality with evidence that the environment provides constraints that may explain the usefulness of fractal structure and scale-free dynamics in the brain. Moreover, we discuss evidence that behavior exhibits scale-free properties, likely emerging from similarly organized brain dynamics, enabling an organism to thrive in an environment that shares the same organizational principles. Finally, we review the sparse evidence for, and speculate on, the functional consequences of fractality and scale-freeness for brain computation. These properties may endow the brain with computational capabilities that transcend current models of neural computation and could hold the key to unraveling how the brain constructs percepts and generates behavior.
Affiliation(s)
- George F Grosu
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Vasile V Moca
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Harald Bârzan
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Andrei Ciuparu
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Maria Ercsey-Ravasz
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Faculty of Physics, Babes-Bolyai University, Str. Mihail Kogalniceanu 1, 400084 Cluj-Napoca, Romania
- Mathias Winkel
- Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Helmut Linde
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Raul C Mureșan
- Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
5. Chauhan K, Khaledi-Nasab A, Neiman AB, Tass PA. Dynamics of phase oscillator networks with synaptic weight and structural plasticity. Sci Rep 2022; 12:15003. PMID: 36056151; PMCID: PMC9440105; DOI: 10.1038/s41598-022-19417-9.
Abstract
We study the dynamics of Kuramoto oscillator networks with two distinct adaptation processes, one varying the coupling strengths and the other altering the network structure. Such systems model certain networks of oscillatory neurons where the neuronal dynamics, synaptic weights, and network structure interact with and shape each other. We model synaptic weight adaptation with spike-timing-dependent plasticity (STDP) that runs on a longer time scale than neuronal spiking. Structural changes, including the addition and elimination of contacts, occur on a still longer time scale than the weight adaptations. First, we study the steady-state dynamics of Kuramoto networks that are bistable and can settle in synchronized or desynchronized states. To assess the impact of adding structural plasticity, we contrast a network with STDP alone against one combining STDP and structural plasticity. We show that the inclusion of structural plasticity optimizes the synchronized state of a network by allowing for synchronization with fewer links than a network with STDP alone. With non-identical units in the network, the addition of structural plasticity leads to the emergence of correlations between the oscillators' natural frequencies and node degrees. In the desynchronized regime, the structural plasticity decreases the number of contacts, leading to a sparse network. In this way, adding structural plasticity strengthens both synchronized and desynchronized states of a network. Second, we use desynchronizing coordinated reset stimulation and synchronizing periodic stimulation to induce desynchronized and synchronized states, respectively. Our findings indicate that a network with a combination of STDP and structural plasticity may require stronger and longer stimulation to switch between the states than a network with STDP only.
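The base dynamics the authors build on can be sketched as a standard Kuramoto network; the STDP and structural updates would then run as slower processes on top of this. A minimal sketch (the Euler integration step and uniform coupling matrix are illustrative choices, not the paper's setup):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of a Kuramoto network: each oscillator follows its
    natural frequency omega[i] and is pulled toward its neighbours,
    weighted by the (adaptive) coupling matrix K."""
    N = len(theta)
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / N
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: r = 1 means full synchrony,
    r near 0 means the phases are spread out (desynchronized)."""
    return abs(np.exp(1j * theta).mean())
```

In the two-time-scale picture of the paper, K would be updated by an STDP-like rule every many spiking steps, and entries of K would be created or deleted by the structural-plasticity rule on a still slower schedule.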
Affiliation(s)
- Kanishk Chauhan
- Department of Physics and Astronomy, Ohio University, Athens, OH, 45701, USA.
- Neuroscience Program, Ohio University, Athens, OH, 45701, USA.
- Ali Khaledi-Nasab
- Department of Neurosurgery, Stanford University, Stanford, CA, 94305, USA
- Alexander B Neiman
- Department of Physics and Astronomy, Ohio University, Athens, OH, 45701, USA
- Neuroscience Program, Ohio University, Athens, OH, 45701, USA
- Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, CA, 94305, USA
6. Froudist-Walsh S, Bliss DP, Ding X, Rapan L, Niu M, Knoblauch K, Zilles K, Kennedy H, Palomero-Gallagher N, Wang XJ. A dopamine gradient controls access to distributed working memory in the large-scale monkey cortex. Neuron 2021; 109:3500-3520.e13. PMID: 34536352; PMCID: PMC8571070; DOI: 10.1016/j.neuron.2021.08.024.
Abstract
Dopamine is required for working memory, but how it modulates the large-scale cortex is unknown. Here, we report that dopamine receptor density per neuron, measured by autoradiography, displays a macroscopic gradient along the macaque cortical hierarchy. This gradient is incorporated in a connectome-based large-scale cortex model endowed with multiple neuron types. The model captures an inverted U-shaped dependence of working memory on dopamine and spatial patterns of persistent activity observed in over 90 experimental studies. Moreover, we show that dopamine is crucial for filtering out irrelevant stimuli by enhancing inhibition from dendrite-targeting interneurons. Our model revealed that an activity-silent memory trace can be realized by facilitation of inter-areal connections and that adjusting cortical dopamine induces a switch from this internal memory state to distributed persistent activity. Our work represents a cross-level understanding from molecules and cell types to recurrent circuit dynamics underlying a core cognitive function distributed across the primate cortex.
Affiliation(s)
- Daniel P Bliss
- Center for Neural Science, New York University, New York, NY 10003, USA
- Xingyu Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- Meiqi Niu
- Research Centre Jülich, INM-1, Jülich, Germany
- Kenneth Knoblauch
- INSERM U846, Stem Cell & Brain Research Institute, 69500 Bron, France; Université de Lyon, Université Lyon I, 69003 Lyon, France
- Karl Zilles
- Research Centre Jülich, INM-1, Jülich, Germany
- Henry Kennedy
- INSERM U846, Stem Cell & Brain Research Institute, 69500 Bron, France; Université de Lyon, Université Lyon I, 69003 Lyon, France; Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences (CAS), Key Laboratory of Primate Neurobiology CAS, Shanghai, China
- Nicola Palomero-Gallagher
- Research Centre Jülich, INM-1, Jülich, Germany; C. & O. Vogt Institute for Brain Research, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY 10003, USA.
7. Gutman-Wei AY, Brown SP. Mechanisms Underlying Target Selectivity for Cell Types and Subcellular Domains in Developing Neocortical Circuits. Front Neural Circuits 2021; 15:728832. PMID: 34630048; PMCID: PMC8497978; DOI: 10.3389/fncir.2021.728832.
Abstract
The cerebral cortex contains numerous neuronal cell types, distinguished by their molecular identity as well as their electrophysiological and morphological properties. Cortical function is reliant on stereotyped patterns of synaptic connectivity and synaptic function among these neuron types, but how these patterns are established during development remains poorly understood. Selective targeting not only of different cell types but also of distinct postsynaptic neuronal domains occurs in many brain circuits and is directed by multiple mechanisms. These mechanisms include the regulation of axonal and dendritic guidance and fine-scale morphogenesis of pre- and postsynaptic processes, lineage relationships, activity-dependent mechanisms, and intercellular molecular determinants such as transmembrane and secreted molecules, many of which have also been implicated in neurodevelopmental disorders. However, many studies of synaptic targeting have focused on circuits in which neuronal processes target different laminae, such that cell-type-biased connectivity may be confounded with mechanisms of laminar specificity. In the cerebral cortex, each cortical layer contains cell bodies and processes from intermingled neuronal cell types, an arrangement that presents a challenge for the development of target-selective synapse formation. Here, we address progress and future directions in the study of cell-type-biased synaptic targeting in the cerebral cortex. We highlight challenges to identifying developmental mechanisms generating stereotyped patterns of intracortical connectivity, recent developments in uncovering the determinants of synaptic target selection during cortical synapse formation, and current gaps in the understanding of cortical synapse specificity.
Affiliation(s)
- Alan Y. Gutman-Wei
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Solange P. Brown
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Kavli Neuroscience Discovery Institute, Johns Hopkins University School of Medicine, Baltimore, MD, United States
8. Zink N, Lenartowicz A, Markett S. A new era for executive function research: On the transition from centralized to distributed executive functioning. Neurosci Biobehav Rev 2021; 124:235-244. PMID: 33582233; DOI: 10.1016/j.neubiorev.2021.02.011.
Abstract
"Executive functions" (EFs) is an umbrella term for higher cognitive control functions such as working memory, inhibition, and cognitive flexibility. One of the most challenging problems in this field of research has been to explain how the wide range of cognitive processes subsumed as EFs are controlled without an all-powerful but ill-defined central executive in the brain. Efforts to localize control mechanisms in circumscribed brain regions have not led to a breakthrough in understanding how the brain controls and regulates itself. We propose to re-conceptualize EFs as emergent consequences of highly distributed brain processes that communicate with a pool of highly connected hub regions, thus precluding the need for a central executive. We further discuss how graph-theory-driven analysis of brain networks offers a unique lens on this problem: it provides a reference frame for studying brain connectivity in EFs holistically, refines our understanding of the mechanisms underlying EFs by generating new, testable hypotheses, and resolves empirical and theoretical inconsistencies in the EF literature.
Affiliation(s)
- Nicolas Zink
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, United States.
- Agatha Lenartowicz
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, United States
- Sebastian Markett
- Department of Psychology, Humboldt University Berlin, Berlin, Germany
9. Bird AD, Deters LH, Cuntz H. Excess Neuronal Branching Allows for Local Innervation of Specific Dendritic Compartments in Mature Cortex. Cereb Cortex 2020; 31:1008-1031. DOI: 10.1093/cercor/bhaa271.
Abstract
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behavior to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organization of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding that potential connectivity increases linearly with cable length and maximum spine length and decreases with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of potential local connections in mature cortical data, with neurite densities higher than necessary to reliably ensure the possible implementation of any given axo-dendritic connection. We show that the number of local potential contacts allows specific innervation of distinct dendritic compartments.
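The stated scaling, linear in each cable length and in the maximum spine length, inverse in the shared volume, can be written as a one-line estimate. The prefactor below is hypothetical; only the proportionalities are taken from the abstract:

```python
def expected_contacts(axon_len, dend_len, spine_len, overlap_vol):
    """Expected number of potential axo-dendritic contacts between two
    neurons whose arbors share a volume overlap_vol: linear in axonal and
    dendritic cable length and in maximum spine length, inversely
    proportional to the overlapping volume. The constant 2.0 is an
    illustrative prefactor, not the paper's fitted value."""
    return 2.0 * spine_len * axon_len * dend_len / overlap_vol
```

For example, doubling either cable length doubles the estimate, while doubling the shared volume halves it, matching the abstract's stated dependencies.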
Affiliation(s)
- A D Bird
- Frankfurt Institute for Advanced Studies, Frankfurt-am-Main 60438, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main 60528, Germany
- L H Deters
- Frankfurt Institute for Advanced Studies, Frankfurt-am-Main 60438, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main 60528, Germany
- H Cuntz
- Frankfurt Institute for Advanced Studies, Frankfurt-am-Main 60438, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main 60528, Germany
10. Tazerart S, Mitchell DE, Miranda-Rottmann S, Araya R. A spike-timing-dependent plasticity rule for dendritic spines. Nat Commun 2020; 11:4276. PMID: 32848151; PMCID: PMC7449969; DOI: 10.1038/s41467-020-17861-7.
Abstract
The structural organization of excitatory inputs supporting spike-timing-dependent plasticity (STDP) remains unknown. We performed a spine STDP protocol using two-photon (2P) glutamate uncaging (pre) paired with postsynaptic spikes (post) in layer 5 pyramidal neurons from juvenile mice. Here we report that pre-post pairings that trigger timing-dependent LTP (t-LTP) produce shrinkage of the activated spine neck and an increase in synaptic strength, whereas post-pre pairings that trigger timing-dependent LTD (t-LTD) decrease synaptic strength without affecting spine shape. Furthermore, the induction of t-LTP with 2P glutamate uncaging in clustered spines (<5 μm apart) enhances LTP through an NMDA receptor-mediated spine calcium accumulation and actin polymerization-dependent neck shrinkage, whereas t-LTD depended on NMDA receptors, was disrupted by the activation of clustered spines, and recovered when spines were separated by >40 μm. These results indicate that synaptic cooperativity disrupts t-LTD and extends the temporal window for the induction of t-LTP, leading to STDP only encompassing LTP.
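For reference, the canonical pair-based STDP window that the t-LTP/t-LTD terminology refers to can be sketched as follows; the amplitudes and time constants are illustrative defaults, and this sketch does not capture the paper's key finding that clustered activation disrupts t-LTD:

```python
import math

def stdp_dw(dt_ms, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Canonical pair-based STDP window. dt_ms = t_post - t_pre:
    pre-before-post (dt_ms > 0) yields potentiation (t-LTP), decaying
    exponentially with the pairing interval; post-before-pre (dt_ms < 0)
    yields depression (t-LTD)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)
```

Under the paper's cooperativity result, the negative (t-LTD) lobe of this window would effectively vanish for clustered spines, leaving an LTP-only rule.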
Affiliation(s)
- Sabrina Tazerart
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
- The CHU Sainte-Justine Research Center, Montreal, QC, Canada
- Diana E Mitchell
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
- The CHU Sainte-Justine Research Center, Montreal, QC, Canada
- Soledad Miranda-Rottmann
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
- The CHU Sainte-Justine Research Center, Montreal, QC, Canada
- Roberto Araya
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada.
- The CHU Sainte-Justine Research Center, Montreal, QC, Canada.
11. Limbacher T, Legenstein R. Emergence of Stable Synaptic Clusters on Dendrites Through Synaptic Rewiring. Front Comput Neurosci 2020; 14:57. PMID: 32848681; PMCID: PMC7424032; DOI: 10.3389/fncom.2020.00057.
Abstract
The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. We analyze in this article a model for the self-organization of synaptic inputs onto dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks.
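The clustering outcome can be quantified with a simple index, and the bias toward co-locating correlated inputs expressed as an acceptance probability for a proposed synapse move. Both functions below are hypothetical stand-ins for the paper's actual rewiring and plasticity rules:

```python
import numpy as np

def move_probability(n_same_group, n_total, beta=4.0):
    """Probability (assumed saturating form) of accepting a synapse move to
    a dendritic branch that already holds n_same_group synapses from the
    same temporally correlated input group: moves toward branches with more
    correlated partners are favoured."""
    return 1.0 - np.exp(-beta * (n_same_group + 1) / n_total)

def clustering_index(assign, group):
    """Fraction of synapse pairs sharing a branch that also share an input
    group. assign[i] = branch of synapse i, group[i] = its correlated input
    group; 1.0 means perfectly clustered dendrites."""
    same_branch = assign[:, None] == assign[None, :]
    same_group = group[:, None] == group[None, :]
    np.fill_diagonal(same_branch, False)
    pairs = same_branch.sum()
    return (same_branch & same_group).sum() / pairs if pairs else 0.0
```

Iterating random move proposals accepted with such a probability drives the index toward 1, which is the qualitative behaviour the article reports for its rewiring model.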
Affiliation(s)
- Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
12. Fan X, Markram H. A Brief History of Simulation Neuroscience. Front Neuroinform 2019; 13:32. PMID: 31133838; PMCID: PMC6513977; DOI: 10.3389/fninf.2019.00032.
Abstract
Our knowledge of the brain has evolved over millennia in philosophical, experimental and theoretical phases. We suggest that the next phase is simulation neuroscience. The main drivers of simulation neuroscience are big data generated at multiple levels of brain organization and the need to integrate these data to trace the causal chain of interactions within and across all these levels. Simulation neuroscience is currently the only methodology for systematically approaching the multiscale brain. In this review, we attempt to reconstruct the deep historical paths leading to simulation neuroscience, from the first observations of the nerve cell to modern efforts to digitally reconstruct and simulate the brain. Neuroscience began with the identification of the neuron as the fundamental unit of brain structure and function and has evolved towards understanding the role of each cell type in the brain, how brain cells are connected to each other, and how the seemingly infinite networks they form give rise to the vast diversity of brain functions. Neuronal mapping is evolving from subjective descriptions of cell types towards objective classes, subclasses and types. Connectivity mapping is evolving from loose topographic maps between brain regions towards dense anatomical and physiological maps of connections between individual genetically distinct neurons. Functional mapping is evolving from psychological and behavioral stereotypes towards a map of behaviors emerging from structural and functional connectomes. 
We show how industrialization of neuroscience and the resulting large disconnected datasets are generating demand for integrative neuroscience, how the scale of neuronal and connectivity maps is driving digital atlasing and digital reconstruction to piece together the multiple levels of brain organization, and how the complexity of the interactions between molecules, neurons, microcircuits and brain regions is driving brain simulation to understand the interactions in the multiscale brain.
Affiliation(s)
- Xue Fan
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
13. Cheng S, Wang X, Liu Y, Su L, Quan T, Li N, Yin F, Xiong F, Liu X, Luo Q, Gong H, Zeng S. DeepBouton: Automated Identification of Single-Neuron Axonal Boutons at the Brain-Wide Scale. Front Neuroinform 2019; 13:25. PMID: 31105547; PMCID: PMC6492499; DOI: 10.3389/fninf.2019.00025.
Abstract
Fine morphological reconstruction of individual neurons across the entire brain is essential for mapping brain circuits. Inference of presynaptic axonal boutons, as a key part of single-neuron fine reconstruction, is critical for interpreting the patterns of neural circuit wiring schemes. However, automated bouton identification remains challenging for current neuron reconstruction tools, as they focus mainly on neurite skeleton drawing and have difficulties accurately quantifying bouton morphology. Here, we developed an automated method for recognizing single-neuron axonal boutons in whole-brain fluorescence microscopy datasets. The method is based on deep convolutional neural networks and density-peak clustering. High-dimensional feature representations of bouton morphology can be learned adaptively through convolutional networks and used for bouton recognition and subtype classification. We demonstrate that the approach is effective for detecting single-neuron boutons at the brain-wide scale for both long-range pyramidal projection neurons and local interneurons.
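The density-peak step of such a pipeline follows Rodriguez and Laio's scheme: each point is scored by its local density and by its distance to the nearest denser point, and cluster centres combine high values of both. A minimal sketch (the hard cutoff kernel is an assumption; in DeepBouton the clustered features are learned by the CNN, not raw coordinates):

```python
import numpy as np

def density_peaks(points, d_c):
    """Density-peak clustering scores: for each point, rho = number of
    neighbours within cutoff d_c (self excluded), delta = distance to the
    nearest point of strictly higher density (or the maximum distance, for
    the globally densest point). Candidate centres have high rho and delta."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1) - 1          # exclude self from the count
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta
```

Selecting points that exceed thresholds on both scores then yields putative bouton centres among the detected swellings.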
Affiliation(s)
- Shenghua Cheng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiaojun Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yurong Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Lei Su
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Ning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Fangfang Yin
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Feng Xiong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiaomao Liu
- School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China

14
Henderson NT, Le Marchand SJ, Hruska M, Hippenmeyer S, Luo L, Dalva MB. Ephrin-B3 controls excitatory synapse density through cell-cell competition for EphBs. eLife 2019; 8:e41563. [PMID: 30789343 PMCID: PMC6384025 DOI: 10.7554/elife.41563]
Abstract
Cortical networks are characterized by sparse connectivity, with synapses found at only a subset of axo-dendritic contacts. Yet within these networks, neurons can exhibit high connection probabilities, suggesting that cell-intrinsic factors, not proximity, determine connectivity. Here, we identify ephrin-B3 (eB3) as a factor that determines synapse density by mediating a cell-cell competition that requires ephrin-B-EphB signaling. In a microisland culture system designed to isolate cell-cell competition, we find that eB3 determines winning and losing neurons in a contest for synapses. In a Mosaic Analysis with Double Markers (MADM) genetic mouse model system in vivo, the relative levels of eB3 control spine density in layer 5 and 6 neurons. MADM cortical neurons in vitro reveal that eB3 controls synapse density independently of action potential-driven activity. Our findings illustrate a new class of competitive mechanism, mediated by trans-synaptic organizing proteins, that controls the number of synapses neurons receive relative to neighboring neurons.
Affiliation(s)
- Nathan T Henderson
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
- Martin Hruska
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States
- Simon Hippenmeyer
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Liqun Luo
- Department of Biology, Howard Hughes Medical Institute, Stanford University, Stanford, United States
- Matthew B Dalva
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States

15
How minimal variations in neuronal cytoskeletal integrity modulate cognitive control. Neuroimage 2019; 185:129-139. [DOI: 10.1016/j.neuroimage.2018.10.053]
16
Yin L, Zheng R, Ke W, He Q, Zhang Y, Li J, Wang B, Mi Z, Long YS, Rasch MJ, Li T, Luan G, Shu Y. Autapses enhance bursting and coincidence detection in neocortical pyramidal cells. Nat Commun 2018; 9:4890. [PMID: 30459347 PMCID: PMC6244208 DOI: 10.1038/s41467-018-07317-4]
Abstract
Autapses are synaptic contacts of a neuron’s axon onto its own dendrite and soma. In the neocortex, self-inhibiting autapses in GABAergic interneurons are abundant in number and play critical roles in regulating spike precision and network activity. Here we examine whether the principal glutamatergic pyramidal cells (PCs) also form functional autapses. In patch-clamp recordings from both rodent and human PCs, we isolated autaptic responses and found that these occur predominantly in layer-5 PCs projecting to subcortical regions, with very few in those projecting to contralateral prefrontal cortex and in layer 2/3 PCs. Moreover, PC autapses persist during development into adulthood. Surprisingly, they produce giant postsynaptic responses (~5-fold greater than recurrent PC-PC synapses) that are exclusively mediated by AMPA receptors. Upon activation, autapses enhance burst firing, neuronal responsiveness and coincidence detection of synaptic inputs. These findings indicate that PC autapses are functional and represent an important circuit element in the neocortex. While autapses are synapses made by a neuron onto itself, their functional significance in pyramidal cells has been unclear. Here, the authors show that in the mammalian neocortex, autapses of pyramidal cells can enhance burst firing and coincidence detection of other inputs.
Affiliation(s)
- Luping Yin
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China; Institute of Neuroscience and State Key Laboratory of Neuroscience, Chinese Academy of Sciences, University of Chinese Academy of Sciences, 320 Yueyang Road, Shanghai, 200031, China
- Rui Zheng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China; Institute of Neuroscience and State Key Laboratory of Neuroscience, Chinese Academy of Sciences, University of Chinese Academy of Sciences, 320 Yueyang Road, Shanghai, 200031, China
- Wei Ke
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Quansheng He
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Yi Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Junlong Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Bo Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China; Institute of Neuroscience and State Key Laboratory of Neuroscience, Chinese Academy of Sciences, University of Chinese Academy of Sciences, 320 Yueyang Road, Shanghai, 200031, China
- Zhen Mi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Yue-Sheng Long
- Institute of Neuroscience and the Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, 501260, China
- Malte J Rasch
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China
- Tianfu Li
- Department of Neurology, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, Xiangshan Yikesong 50, Beijing, 100093, China
- Guoming Luan
- Department of Neurosurgery, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, Xiangshan Yikesong 50, Beijing, 100093, China
- Yousheng Shu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Wai Street, Beijing, 100875, China

17
Pereira U, Brunel N. Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data. Neuron 2018; 99:227-238.e4. [PMID: 29909997 PMCID: PMC6091895 DOI: 10.1016/j.neuron.2018.05.038]
Abstract
The attractor neural network scenario is a popular framework for memory storage in the association cortex, but there is still a large gap between models based on this scenario and experimental data. We study a recurrent network model in which both learning rules and the distribution of stored patterns are inferred from distributions of visual responses for novel and familiar images in the inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. Inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time and another in which firing rates fluctuate chaotically.
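The classical attractor picture the authors build on can be illustrated with a minimal binary Hopfield network using the standard Hebbian outer-product rule. This is a sketch of the textbook scenario, not the learning rules inferred from ITC data; the network size, load, and corruption level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 10                      # network size, number of stored patterns
xi = rng.choice([-1, 1], size=(m, n))

# Classical Hebbian outer-product rule
W = (xi.T @ xi).astype(float) / n
np.fill_diagonal(W, 0.0)

# Cue: pattern 0 with 15% of its units flipped
s = xi[0].copy()
s[rng.choice(n, size=30, replace=False)] *= -1

# Asynchronous updates descend an energy function, so the state settles
for _ in range(5):
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s > 0 else -1

overlap = (s @ xi[0]) / n
print(overlap)  # close to 1: the dynamics fell into the stored attractor
```

The paper's point is that replacing this binary, symmetric setup with data-inferred rules yields graded, lognormal-like retrieval activity instead of the all-or-none states seen here.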
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL 60637, USA
- Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL 60637, USA; Department of Neurobiology, The University of Chicago, Chicago, IL 60637, USA; Department of Neurobiology, Duke University, Durham, NC 27710, USA; Department of Physics, Duke University, Durham, NC 27708, USA

18
Bogdan PA, Rowley AGD, Rhodes O, Furber SB. Structural Plasticity on the SpiNNaker Many-Core Neuromorphic System. Front Neurosci 2018; 12:434. [PMID: 30034320 PMCID: PMC6043813 DOI: 10.3389/fnins.2018.00434]
Abstract
The structural organization of cortical areas is not random, with topographic maps commonplace in sensory processing centers. This topographical organization allows optimal wiring between neurons, supports multimodal sensory integration, and reduces input dimensionality. In this work, a model of topographic map formation is implemented on the SpiNNaker neuromorphic platform, running in real time using point neurons and making use of both synaptic rewiring and spike-timing dependent plasticity (STDP). In agreement with Bamford et al. (2010), we demonstrate that synaptic rewiring refines an initially rough topographic map over and beyond the ability of STDP, and that input selectivity learnt through STDP is embedded into the network connectivity through rewiring. Moreover, we show the presented model can be used to generate topographic maps between layers of neurons with minimal initial connectivity, and to stabilize mappings which would otherwise be unstable through the inclusion of lateral inhibition.
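The STDP half of the mechanism can be sketched with the standard pair-based exponential window. The amplitudes and time constant below are generic textbook values, not the parameters of the SpiNNaker implementation described in the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms).

    Positive dt (pre before post) potentiates; negative dt depresses.
    dt == 0 falls into the depression branch here; conventions differ.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw(10.0))   # > 0: potentiation when the presynaptic spike leads
print(stdp_dw(-10.0))  # < 0: depression when the postsynaptic spike leads
```

Rewiring then acts on top of such weight dynamics, pruning weak synapses and forming candidate ones, which is how selectivity learnt by STDP gets frozen into the connectivity.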
Affiliation(s)
- Petruț A Bogdan
- School of Computer Science, University of Manchester, Manchester, United Kingdom
- Andrew G D Rowley
- School of Computer Science, University of Manchester, Manchester, United Kingdom
- Oliver Rhodes
- School of Computer Science, University of Manchester, Manchester, United Kingdom
- Steve B Furber
- School of Computer Science, University of Manchester, Manchester, United Kingdom

19
Lake BM, Lawrence ND, Tenenbaum JB. The Emergence of Organizing Structure in Conceptual Representation. Cogn Sci 2018; 42 Suppl 3:809-832. [PMID: 29315735 DOI: 10.1111/cogs.12580]
Abstract
Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form, where the form could be a tree, ring, chain, grid, etc. (Kemp & Tenenbaum, 2008). Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Given that the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack an intuitive description, as well as predict human property-induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist.
Affiliation(s)
- Brenden M Lake
- Center for Data Science, New York University; Department of Psychology, New York University
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Center for Brains, Minds and Machines

20
Vitrac C, Benoit-Marand M. Monoaminergic Modulation of Motor Cortex Function. Front Neural Circuits 2017; 11:72. [PMID: 29062274 PMCID: PMC5640772 DOI: 10.3389/fncir.2017.00072]
Abstract
Elaboration of appropriate responses to behavioral situations rests on the ability to select appropriate motor outcomes in accordance with specific environmental inputs. To this end, the primary motor cortex (M1) is a key structure for the control of voluntary movements and motor skill learning. Subcortical loops regulate the activity of the motor cortex and thus contribute to the selection of appropriate motor plans. Monoamines are key mediators of arousal, attention and motivation. Their firing pattern enables a direct encoding of different states, thus promoting or repressing the selection of actions adapted to the behavioral context. Monoaminergic modulation of motor systems has been extensively studied in subcortical circuits. Despite evidence of converging projections of multiple neurotransmitter systems in the motor cortex pointing to a direct modulation of local circuits, their contribution to the execution and learning of motor skills is still poorly understood. Monoaminergic dysregulation leads to impaired plasticity and motor function in several neurological and psychiatric conditions; it is therefore critical to better understand how monoamines modulate neural activity in the motor cortex. This review aims to provide an update on our current understanding of the monoaminergic modulation of the motor cortex, with an emphasis on motor skill learning and execution under physiological conditions.
Affiliation(s)
- Clément Vitrac
- Laboratoire de Neurosciences Expérimentales et Cliniques, INSERM U1084, Poitiers, France; Laboratoire de Neurosciences Expérimentales et Cliniques, Université de Poitiers, Poitiers, France
- Marianne Benoit-Marand
- Laboratoire de Neurosciences Expérimentales et Cliniques, INSERM U1084, Poitiers, France; Laboratoire de Neurosciences Expérimentales et Cliniques, Université de Poitiers, Poitiers, France

21
Gauy MM, Meier F, Steger A. Multiassociative Memory: Recurrent Synapses Increase Storage Capacity. Neural Comput 2017; 29:1375-1405. [DOI: 10.1162/neco_a_00954]
Abstract
The connection density of nearby neurons in the cortex has been observed to be around 0.1, whereas the longer-range connections are present with much sparser density (Kalisman, Silberberg, & Markram, 2005). We propose a memory association model that qualitatively explains these empirical observations. The model we consider is a multiassociative, sparse, Willshaw-like model consisting of binary threshold neurons and binary synapses. It uses recurrent synapses for iterative retrieval of stored memories. We quantify the usefulness of recurrent synapses by simulating the model for small network sizes and by doing a precise mathematical analysis for large network sizes. Given the network parameters, we can determine the precise values of recurrent and afferent synapse densities that optimize the storage capacity of the network. If the network size is like that of a cortical column, then the predicted optimal recurrent density lies in a range that is compatible with biological measurements. Furthermore, we show that our model is able to surpass the standard Willshaw model in the multiassociative case if the information capacity is normalized per strong synapse or per bits required to store the model, as considered in Knoblauch, Palm, and Sommer (2010).
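The Willshaw-style storage and retrieval scheme the model builds on can be sketched as follows. This is a minimal auto-associative, single-step version with illustrative parameters, not the authors' multiassociative recurrent variant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 1000, 20, 50   # neurons, active units per pattern, stored patterns

# Sparse binary patterns (auto-association for simplicity)
patterns = np.zeros((m, n), dtype=bool)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = True

# Willshaw learning: a binary synapse switches on if pre and post
# were ever co-active in a stored pattern
W = np.zeros((n, n), dtype=bool)
for p in patterns:
    W |= p[:, None] & p[None, :]

# Retrieval: sum binary inputs from the cue and threshold at the cue size
cue = patterns[0]
y = (W.astype(int) @ cue.astype(int)) >= k

print(bool(np.all(y[cue])))  # True: every stored unit clears the threshold
```

Iterating this retrieval step through recurrent synapses, as the paper does, is what lets partial cues converge onto stored patterns; the capacity analysis then asks how to split synapses between afferent and recurrent roles.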
Affiliation(s)
- Marcelo Matheus Gauy
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland
- Florian Meier
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich 8092, Switzerland, and Collegium Helveticum, Zurich 8090, Switzerland

22
Lampl I, Katz Y. Neuronal adaptation in the somatosensory system of rodents. Neuroscience 2017; 343:66-76. [DOI: 10.1016/j.neuroscience.2016.11.043]
23
Hoffmann FZ, Triesch J. Nonrandom network connectivity comes in pairs. Netw Neurosci 2017; 1:31-41. [PMID: 29601066 PMCID: PMC5869014 DOI: 10.1162/netn_a_00004]
Abstract
Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is a focus of the ongoing discussion of nonrandom connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, Pij = Pji, the occurrences of bidirectional connections and nonrandom structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when some pairs of neurons are more likely to be connected than others. Our numerical results imply that such overrepresentation can also be sustained when connection probabilities are only approximately symmetric. Understanding the specific connectivity of neural circuits is an important challenge of modern neuroscience. In this study we address an important feature of neural connectivity, the abundance of bidirectionally connected neuron pairs, which far exceeds what would be expected in a random network. Our theoretical analysis reveals a simple condition under which such an overrepresentation of bidirectionally connected pairs necessarily occurs: Any network in which both directions of connection are equally likely to exist in any given pair of neurons, but in which some pairs are more likely to be connected than others, must exhibit an abundance of reciprocal connections. This insight should guide the analysis and interpretation of future connectomics datasets.
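The core claim, that symmetric but heterogeneous pair probabilities necessarily produce an overabundance of reciprocal pairs, follows from E[p^2] >= E[p]^2 and is easy to check numerically. The Beta-distributed pair probabilities below are an arbitrary illustrative choice, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Symmetric but heterogeneous pair probabilities: p_ij = p_ji
p = rng.beta(0.5, 4.0, size=(n, n))
p = np.triu(p, 1)
p = p + p.T          # symmetric, zero diagonal

# Each directed connection is drawn independently with its pair probability
conn = rng.random((n, n)) < p

mask = np.triu(np.ones((n, n), dtype=bool), 1)
recip = np.mean((conn & conn.T)[mask])        # observed reciprocal-pair fraction
expected_if_uniform = p[mask].mean() ** 2     # prediction for a uniform random net

print(recip / expected_if_uniform)  # > 1: reciprocity is overrepresented
```

The ratio printed is an estimate of E[p^2] / E[p]^2, which exceeds 1 whenever the pair probabilities vary at all, exactly the condition the abstract describes.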
Affiliation(s)
- Felix Z Hoffmann
- Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe University, Frankfurt am Main, Germany; International Max Planck Research School for Neural Circuits, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe University, Frankfurt am Main, Germany

24
Neural plasticity and network remodeling: From concepts to pathology. Neuroscience 2017; 344:326-345. [PMID: 28069532 DOI: 10.1016/j.neuroscience.2016.12.048]
Abstract
Neuroplasticity has been subject to a great deal of research in the last century. Recently, significant emphasis has been placed on the global effect of localized plastic changes throughout the central nervous system, and on how these changes integrate in a pathological context. Specifically, alterations of network functionality have been described in various pathological contexts, to which corresponding structural alterations have been proposed. However, considering the amount of literature and the different pathological contexts, an integration of this information is still lacking. In this paper we will review the concepts of neural plasticity as well as their repercussions on network remodeling, and provide a possible explanation of how these two concepts relate to each other. We will further examine how alterations in different pathological contexts may relate to each other and will discuss the concept of plasticity diseases, its models, and its implications.
25
Marblestone AH, Wayne G, Kording KP. Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 2016; 10:94. [PMID: 27683554 PMCID: PMC5021692 DOI: 10.3389/fncom.2016.00094]
Abstract
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Affiliation(s)
- Adam H. Marblestone
- Synthetic Neurobiology Group, Massachusetts Institute of Technology, Media Lab, Cambridge, MA, USA
- Konrad P. Kording
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, USA

26
Lu J, Zuo Y. Clustered structural and functional plasticity of dendritic spines. Brain Res Bull 2016; 129:18-22. [PMID: 27637453 DOI: 10.1016/j.brainresbull.2016.09.008]
Abstract
The configuration of synaptic circuits underlies their ability to process and store information. Research on dendritic spines has revealed that their structural and functional alterations are clustered along the parent dendrite. Here we review the evidence supporting this notion of clustered synaptic plasticity, discuss its functional implications and possible contributing factors, and suggest potential strategies to deal with open challenges.
Affiliation(s)
- Ju Lu
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
- Yi Zuo
- Department of Molecular, Cell and Developmental Biology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA

27
Fauth M, Tetzlaff C. Opposing Effects of Neuronal Activity on Structural Plasticity. Front Neuroanat 2016; 10:75. [PMID: 27445713 PMCID: PMC4923203 DOI: 10.3389/fnana.2016.00075]
Abstract
The connectivity of the brain is continuously adjusted to new environmental influences by several activity-dependent adaptive processes. The most investigated adaptive mechanism is activity-dependent functional or synaptic plasticity, which regulates the transmission efficacy of existing synapses. Another important but less prominently discussed adaptive process is structural plasticity, which changes the connectivity by the formation and deletion of synapses. In this review, we show, based on experimental evidence, that structural plasticity can be classified, similarly to synaptic plasticity, into two categories: (i) Hebbian structural plasticity, which leads to an increase (decrease) in the number of synapses during phases of high (low) neuronal activity, and (ii) homeostatic structural plasticity, which balances these changes by removing and adding synapses. Furthermore, based on experimental and theoretical insights, we argue that each type of structural plasticity fulfills a different function. While Hebbian structural changes enhance memory lifetime, storage capacity, and memory robustness, homeostatic structural plasticity self-organizes the connectivity of the neural network to assure stability. However, the link between functional synaptic and structural plasticity, as well as the detailed interactions between Hebbian and homeostatic structural plasticity, is more complex. This implies even richer dynamics, requiring further experimental and theoretical investigation.
Affiliation(s)
- Michael Fauth
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August University, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Christian Tetzlaff
- Bernstein Center for Computational Neuroscience, Göttingen, Germany; Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
28
Abstract
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network.
We recorded the electrical activity of hundreds of neurons simultaneously in brain tissue from mice and we analyzed these signals using state-of-the-art tools from information theory. These tools allowed us to ascertain which neurons were transmitting information to other neurons and to characterize the computations performed by neurons using the inputs they received from two or more other neurons. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to be recipients of information from neurons with a large number of outgoing connections. Interestingly, the number of incoming connections to a neuron was not related to the amount of information that neuron computed. To better understand these results, we built a network model to match the data. Unexpectedly, the model also maximized information transfer in the presence of network-wide correlations. This suggested a way that networks of cortical neurons could deal with common random background input. These results are the first to show that the amount of information computed by a neuron depends on where it is located in the surrounding network.
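The transfer-entropy measure mentioned in this abstract can be estimated directly from joint state counts of binary spike trains. The sketch below is a simplified illustration with a history length of 1, not the authors' analysis pipeline; the spike trains are synthetic.

```python
from collections import Counter
from math import log2

# Illustrative sketch of transfer entropy for binary spike trains, the
# directed-interaction measure used in the study; history length is 1
# here for brevity (the published analysis is more elaborate).
def transfer_entropy(x, y):
    """TE from series x to series y, in bits, with one-step histories."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_past, x_past)
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y0x0 = sum(v for (a, b, d), v in triples.items()
                     if b == y0 and d == x0) / n
        p_y0 = sum(v for (a, b, d), v in triples.items() if b == y0) / n
        p_y1y0 = sum(v for (a, b, d), v in triples.items()
                     if a == y1 and b == y0) / n
        # log ratio of p(y1 | y0, x0) to p(y1 | y0)
        te += p_joint * log2((p_joint / p_y0x0) / (p_y1y0 / p_y0))
    return te

# y copies x with a one-step delay, so x's past carries information
# about y's next state and TE(x -> y) is positive.
x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 20
y = [0] + x[:-1]
```

Intuitively, TE asks how much the past of `x` reduces uncertainty about the next state of `y` beyond what `y`'s own past already explains.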
29
Brunel N. Is cortical connectivity optimized for storing information? Nat Neurosci 2016; 19:749-755. [PMID: 27065365] [DOI: 10.1038/nn.4286]
Abstract
Cortical networks are thought to be shaped by experience-dependent synaptic plasticity. Theoretical studies have shown that synaptic plasticity allows a network to store a memory of patterns of activity such that they become attractors of the dynamics of the network. Here we study the properties of the excitatory synaptic connectivity in a network that maximizes the number of stored patterns of activity in a robust fashion. We show that the resulting synaptic connectivity matrix has the following properties: it is sparse, with a large fraction of zero synaptic weights ('potential' synapses); bidirectionally coupled pairs of neurons are over-represented in comparison to a random network; and bidirectionally connected pairs have stronger synapses on average than unidirectionally connected pairs. All these features reproduce quantitatively available data on connectivity in cortex. This suggests synaptic connectivity in cortex is optimized to store a large number of attractor states in a robust fashion.
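One of the connectivity signatures listed in this abstract, the over-representation of bidirectionally coupled pairs, is easy to test numerically. The sketch below (network size and connection probability are illustrative, not from the paper) compares the observed number of reciprocal pairs with the chance level expected in a random directed network; in the paper's account an optimized network would score well above 1, while the random network here stays near 1.

```python
import numpy as np

def bidirectional_ratio(A):
    """Observed / chance-level count of bidirectionally connected pairs."""
    A = A.copy()
    np.fill_diagonal(A, 0)
    n = A.shape[0]
    p = A.sum() / (n * (n - 1))          # empirical connection probability
    observed = np.sum(A * A.T) / 2       # reciprocally connected pairs
    expected = (n * (n - 1) / 2) * p**2  # chance level for a random graph
    return observed / expected

# A random directed network has no pair structure, so the ratio is ~1.
rng = np.random.default_rng(0)
A = (rng.random((300, 300)) < 0.1).astype(int)
ratio = bidirectional_ratio(A)
```

Experimental cortical data give ratios substantially above 1, which is the observation the model in this paper reproduces.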
Affiliation(s)
- Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, Illinois, USA; Department of Neurobiology, The University of Chicago, Chicago, Illinois, USA
30
Storing structured sparse memories in a multi-modular cortical network model. J Comput Neurosci 2016; 40:157-75. [PMID: 26852335] [DOI: 10.1007/s10827-016-0590-z]
Abstract
We study the memory performance of a class of modular attractor neural networks, where modules are potentially fully-connected networks connected to each other via diluted long-range connections. On this anatomical architecture we store memory patterns of activity using a Willshaw-type learning rule. P patterns are split into categories, such that patterns of the same category activate the same set of modules. We first compute the maximal storage capacity of these networks. We then investigate their error-correction properties through an exhaustive exploration of parameter space, and identify regions where the networks behave as an associative memory device. The crucial parameters that control the retrieval abilities of the network are (1) the ratio between the number of synaptic contacts of long- and short-range origins, (2) the number of categories in which a module is activated, and (3) the amount of local inhibition. We discuss the relationship between our model and networks of cortical patches that have been observed in different cortical areas.
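The Willshaw-type learning rule mentioned above is a clipped Hebbian rule: a binary synapse is switched on if its two neurons are ever co-active in a stored pattern, and retrieval thresholds the summed input. A minimal single-module sketch follows (tiny illustrative patterns; the paper's modular structure and categories are omitted):

```python
import numpy as np

# Sketch of the Willshaw-type (clipped Hebbian) rule in a single module:
# a binary synapse is switched on whenever its pre- and postsynaptic
# neurons are co-active in a stored pattern. Pattern sizes are illustrative.
def willshaw_store(patterns):
    """patterns: array of shape (P, N) with 0/1 entries."""
    W = np.zeros((patterns.shape[1], patterns.shape[1]), dtype=int)
    for p in patterns:
        W |= np.outer(p, p)      # clipped (OR) Hebbian update
    np.fill_diagonal(W, 0)       # no self-connections
    return W

def willshaw_retrieve(W, cue, theta):
    """One synchronous retrieval step with firing threshold theta."""
    return (W @ cue >= theta).astype(int)

p1 = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p2 = np.array([0, 0, 0, 1, 1, 1, 0, 0])
W = willshaw_store(np.stack([p1, p2]))
out = willshaw_retrieve(W, np.array([1, 1, 0, 0, 0, 0, 0, 0]), theta=1)
# The degraded cue (two of p1's three units) recalls the full pattern p1.
```

Because weights are clipped to 0/1, capacity depends on pattern sparseness, which is why the paper works with sparse, structured memories.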
31
Frackowiak R, Markram H. The future of human cerebral cartography: a novel approach. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140171. [PMID: 25823868] [PMCID: PMC4387512] [DOI: 10.1098/rstb.2014.0171]
Abstract
Cerebral cartography can be understood in a limited, static, neuroanatomical sense. Temporal information from electrical recordings contributes information on regional interactions adding a functional dimension. Selective tagging and imaging of molecules adds biochemical contributions. Cartographic detail can also be correlated with normal or abnormal psychological or behavioural data. Modern cerebral cartography is assimilating all these elements. Cartographers continue to collect ever more precise data in the hope that general principles of organization will emerge. However, even detailed cartographic data cannot generate knowledge without a multi-scale framework making it possible to relate individual observations and discoveries. We propose that, in the next quarter century, advances in cartography will result in progressively more accurate drafts of a data-led, multi-scale model of human brain structure and function. These blueprints will result from analysis of large volumes of neuroscientific and clinical data, by a process of reconstruction, modelling and simulation. This strategy will capitalize on remarkable recent developments in informatics and computer science and on the existence of much existing, addressable data and prior, though fragmented, knowledge. The models will instantiate principles that govern how the brain is organized at different levels and how different spatio-temporal scales relate to each other in an organ-centred context.
Affiliation(s)
- Richard Frackowiak
- The Human Brain Project, Centre Hospitalier Universitaire Vaudois, University of Lausanne, Lausanne 1011, Switzerland
- Henry Markram
- The Human Brain Project, École Polytechnique Fédérale de Lausanne, Lausanne 1015, Switzerland
32
Reimann MW, King JG, Muller EB, Ramaswamy S, Markram H. An algorithm to predict the connectome of neural microcircuits. Front Comput Neurosci 2015; 9:120. [PMID: 26500529] [PMCID: PMC4597796] [DOI: 10.3389/fncom.2015.00120]
Abstract
Experimentally mapping synaptic connections, in terms of the numbers and locations of their synapses and estimating connection probabilities, is still not a tractable task, even for small volumes of tissue. In fact, the six layers of the neocortex contain thousands of unique types of synaptic connections between the many different types of neurons, of which only a handful have been characterized experimentally. Here we present a theoretical framework and a data-driven algorithmic strategy to digitally reconstruct the complete synaptic connectivity between the different types of neurons in a small well-defined volume of tissue—the micro-scale connectome of a neural microcircuit. By enforcing a set of established principles of synaptic connectivity, and leveraging interdependencies between fundamental properties of neural microcircuits to constrain the reconstructed connectivity, the algorithm yields three parameters per connection type that predict the anatomy of all types of biologically viable synaptic connections. The predictions reproduce a spectrum of experimental data on synaptic connectivity not used by the algorithm. We conclude that an algorithmic approach to the connectome can serve as a tool to accelerate experimental mapping, indicating the minimal dataset required to make useful predictions, identifying the datasets required to improve their accuracy, testing the feasibility of experimental measurements, and making it possible to test hypotheses of synaptic connectivity.
Affiliation(s)
- Michael W Reimann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland
- James G King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland
- Eilif B Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland
- Srikanth Ramaswamy
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland
33
Alemi A, Baldassi C, Brunel N, Zecchina R. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks. PLoS Comput Biol 2015; 11:e1004439. [PMID: 26291608] [PMCID: PMC4546407] [DOI: 10.1371/journal.pcbi.1004439]
Abstract
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
Recurrent neural networks have been shown to be able to store memory patterns as fixed point attractors of the dynamics of the network. The prototypical learning rule for storing memories in attractor neural networks is Hebbian learning, which can store up to 0.138N uncorrelated patterns in a recurrent network of N neurons. This is very far from the maximal capacity 2N, which can be achieved by supervised rules, e.g. by the perceptron learning rule. However, these rules are problematic for neurons in the neocortex or the hippocampus, since they rely on the computation of a supervisory error signal for each neuron of the network. We show here that the total synaptic input received by a neuron during the presentation of a sufficiently strong stimulus contains implicit information about the error, which can be extracted by setting three thresholds on the total input, defining depression and potentiation regions. The resulting learning rule implements basic biological constraints, and our simulations show that a network implementing it gets very close to the maximal capacity, both in the dense and sparse regimes, across all values of storage robustness. The rule predicts that when the total synaptic input goes beyond a threshold, no potentiation should occur.
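The three-threshold rule summarized in this abstract can be sketched per synapse as follows. Threshold placement and step size here are illustrative, not the paper's calibrated values:

```python
# Sketch of the three-threshold rule described above, applied to one
# synapse with an active input; threshold placement and step size are
# illustrative, not the paper's calibrated values.
def three_threshold_update(w, local_field, active,
                           theta_low=0.0, theta_mid=1.0, theta_high=2.0,
                           step=0.1):
    """Return the updated weight of a single synapse."""
    if not active:
        return w                     # the rule only touches active inputs
    if local_field < theta_low or local_field > theta_high:
        return w                     # outside the two bounds: no plasticity
    if local_field >= theta_mid:
        return w + step              # potentiation region
    return w - step                  # depression region

potentiated = three_threshold_update(0.5, local_field=1.5, active=True)
depressed = three_threshold_update(0.5, local_field=0.5, active=True)
saturated = three_threshold_update(0.5, local_field=2.5, active=True)
```

The no-plasticity zones above the highest and below the lowest threshold are what let the total input act as an implicit error signal: once a memory is reliably encoded, its synapses stop changing.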
Affiliation(s)
- Alireza Alemi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
- Carlo Baldassi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
- Nicolas Brunel
- Departments of Statistics and Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Riccardo Zecchina
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
34
Ramaswamy S, Markram H. Anatomy and physiology of the thick-tufted layer 5 pyramidal neuron. Front Cell Neurosci 2015; 9:233. [PMID: 26167146] [PMCID: PMC4481152] [DOI: 10.3389/fncel.2015.00233]
Abstract
The thick-tufted layer 5 (TTL5) pyramidal neuron is one of the most extensively studied neuron types in the mammalian neocortex and has become a benchmark for understanding information processing in excitatory neurons. By virtue of having the widest local axonal and dendritic arborization, the TTL5 neuron encompasses various local neocortical neurons and thereby defines the dimensions of neocortical microcircuitry. The TTL5 neuron integrates input across all neocortical layers and is the principal output pathway funneling information flow to subcortical structures. Several studies over the past decades have investigated the anatomy, physiology, synaptology, and pathophysiology of the TTL5 neuron. This review summarizes key discoveries and identifies potential avenues of research to facilitate an integrated and unifying understanding on the role of a central neuron in the neocortex.
Affiliation(s)
- Srikanth Ramaswamy
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Campus Biotech, Geneva, Switzerland
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Campus Biotech, Geneva, Switzerland
35
Craddock TJA, Priel A, Tuszynski JA. Keeping time: could quantum beating in microtubules be the basis for the neural synchrony related to consciousness? J Integr Neurosci 2015; 13:293-311. [PMID: 25012713] [DOI: 10.1142/s0219635214400019]
Abstract
This paper discusses the possibility of quantum coherent oscillations playing a role in neuronal signaling. Consciousness correlates strongly with coherent neural oscillations; however, the mechanisms by which neurons synchronize are not fully elucidated. Recent experimental evidence of quantum beats in light-harvesting complexes of plants (LHCII) and bacteria provided a stimulus for seeking similar effects in important structures found in animal cells, especially in neurons. We argue that microtubules (MTs), which play critical roles in all eukaryotic cells, possess structural and functional characteristics that are consistent with quantum coherent excitations in the aromatic groups of their tryptophan residues. Furthermore, we outline the consequences of these findings for neuronal processes, including the emergence of consciousness.
Affiliation(s)
- Travis J A Craddock
- Center for Psychological Studies, Graduate School of Computer and Information Sciences, College of Osteopathic Medicine and the Institute for Neuro-Immune Medicine, Nova Southeastern University, Fort Lauderdale, Florida 33314-7796, USA
36
Laramée ME, Boire D. Visual cortical areas of the mouse: comparison of parcellation and network structure with primates. Front Neural Circuits 2015; 8:149. [PMID: 25620914] [PMCID: PMC4286719] [DOI: 10.3389/fncir.2014.00149]
Abstract
Brains have evolved to optimize sensory processing. In primates, complex cognitive tasks must be executed and evolution led to the development of large brains with many cortical areas. Rodents do not accomplish cognitive tasks of the same level of complexity as primates and remain with small brains both in relative and absolute terms. But is a small brain necessarily a simple brain? In this review, several aspects of the visual cortical networks have been compared between rodents and primates. The visual system has been used as a model to evaluate the level of complexity of the cortical circuits at the anatomical and functional levels. The evolutionary constraints are first presented in order to appreciate the rules for the development of the brain and its underlying circuits. The organization of sensory pathways, with their parallel and cross-modal circuits, is also examined. Other features of brain networks, often considered as imposing constraints on the development of underlying circuitry, are also discussed and their effect on the complexity of the mouse and primate brain are inspected. In this review, we discuss the common features of cortical circuits in mice and primates and see how these can be useful in understanding visual processing in these animals.
Affiliation(s)
- Marie-Eve Laramée
- Laboratory of Neuroplasticity and Neuroproteomics, Department of Biology, KU Leuven - University of Leuven, Leuven, Belgium
- Denis Boire
- Département d'anatomie, Université du Québec à Trois-Rivières, Trois-Rivières, QC, Canada
37
Araya R. Input transformation by dendritic spines of pyramidal neurons. Front Neuroanat 2014; 8:141. [PMID: 25520626] [PMCID: PMC4251451] [DOI: 10.3389/fnana.2014.00141]
Abstract
In the mammalian brain, most inputs received by a neuron are formed on the dendritic tree. In the neocortex, the dendrites of pyramidal neurons are covered by thousands of tiny protrusions known as dendritic spines, which are the major recipient sites for excitatory synaptic information in the brain. Their peculiar morphology, with a small head connected to the dendritic shaft by a slender neck, has inspired decades of theoretical and more recently experimental work in an attempt to understand how excitatory synaptic inputs are processed, stored and integrated in pyramidal neurons. Advances in electrophysiological, optical and genetic tools are now enabling us to unravel the biophysical and molecular mechanisms controlling spine function in health and disease. Here I highlight relevant findings, challenges and hypotheses on spine function, with an emphasis on the electrical properties of spines and on how these affect the storage and integration of excitatory synaptic inputs in pyramidal neurons. In an attempt to make sense of the published data, I propose that the raison d'etre for dendritic spines lies in their ability to undergo activity-dependent structural and molecular changes that can modify synaptic strength, and hence alter the gain of the linearly integrated sub-threshold depolarizations in pyramidal neuron dendrites before the generation of a dendritic spike.
Affiliation(s)
- Roberto Araya
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
38
Einarsson H, Lengler J, Steger A. A high-capacity model for one shot association learning in the brain. Front Comput Neurosci 2014; 8:140. [PMID: 25426060] [PMCID: PMC4224099] [DOI: 10.3389/fncom.2014.00140]
Abstract
We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.
Affiliation(s)
- Hafsteinn Einarsson
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Johannes Lengler
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland; Collegium Helveticum, Zürich, Switzerland
39
DeBello WM, McBride TJ, Nichols GS, Pannoni KE, Sanculi D, Totten DJ. Input clustering and the microscale structure of local circuits. Front Neural Circuits 2014; 8:112. [PMID: 25309336] [PMCID: PMC4162353] [DOI: 10.3389/fncir.2014.00112]
Abstract
The recent development of powerful tools for high-throughput mapping of synaptic networks promises major advances in understanding brain function. One open question is how circuits integrate and store information. Competing models based on random vs. structured connectivity make distinct predictions regarding the dendritic addressing of synaptic inputs. In this article we review recent experimental tests of one of these models, the input clustering hypothesis. Across circuits, brain regions and species, there is growing evidence of a link between synaptic co-activation and dendritic location, although this finding is not universal. The functional implications of input clustering and future challenges are discussed.
Affiliation(s)
- William M DeBello
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA
- Thomas J McBride
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA; PLOS Medicine, San Francisco, CA, USA
- Grant S Nichols
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA
- Katy E Pannoni
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA
- Daniel Sanculi
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA
- Douglas J Totten
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California-Davis, Davis, CA, USA
40
Lavigne F, Avnaïm F, Dumercy L. Inter-synaptic learning of combination rules in a cortical network model. Front Psychol 2014; 5:842. [PMID: 25221529] [PMCID: PMC4148068] [DOI: 10.3389/fpsyg.2014.00842]
Abstract
Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli trigger XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network.
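The non-linear-separability problem driving this model is the classic XOR case: no single linear threshold on the two stimuli can produce the required response pattern, but subunits that each detect one specific stimulus combination can. The toy code below is a hypothetical two-dendrite unit for illustration only, not the paper's inter-synaptic learning algorithm:

```python
# Toy illustration (not the paper's IS learning algorithm) of how
# dendrite-like non-linear subunits resolve the XOR separability problem.
def dendritic_xor(x1, x2):
    d1 = int(x1 and not x2)   # dendrite tuned to "stimulus 1 alone"
    d2 = int(x2 and not x1)   # dendrite tuned to "stimulus 2 alone"
    return int(d1 + d2 >= 1)  # soma fires if either dendrite is active
```

A single unit computing `w1*x1 + w2*x2 >= theta` cannot reproduce this input/output mapping, which is why an intermediate non-linearity, here supplied by dendritic compartments rather than extra neurons, is needed.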
Affiliation(s)
- Frédéric Lavigne
- UMR 7320 CNRS, BCL, Université Nice Sophia Antipolis, Nice, France
- Laurent Dumercy
- UMR 7320 CNRS, BCL, Université Nice Sophia Antipolis, Nice, France
41
Activity-dependent dendritic spine neck changes are correlated with synaptic strength. Proc Natl Acad Sci U S A 2014; 111:E2895-904. [PMID: 24982196] [DOI: 10.1073/pnas.1321869111]
Abstract
Most excitatory inputs in the mammalian brain are made on dendritic spines, rather than on dendritic shafts. Spines compartmentalize calcium, and this biochemical isolation can underlie input-specific synaptic plasticity, providing a raison d'etre for spines. However, recent results indicate that the spine can experience a membrane potential different from that in the parent dendrite, as though the spine neck electrically isolated the spine. Here we use two-photon calcium imaging of mouse neocortical pyramidal neurons to analyze the correlation between the morphologies of spines activated under minimal synaptic stimulation and the excitatory postsynaptic potentials they generate. We find that excitatory postsynaptic potential amplitudes are inversely correlated with spine neck lengths. Furthermore, a spike timing-dependent plasticity protocol, in which two-photon glutamate uncaging over a spine is paired with postsynaptic spikes, produces rapid shrinkage of the spine neck and concomitant increases in the amplitude of the evoked spine potentials. Using numerical simulations, we explore the parameter regimes for the spine neck resistance and synaptic conductance changes necessary to explain our observations. Our data, directly correlating synaptic and morphological plasticity, imply that long-necked spines have small or negligible somatic voltage contributions, but that, upon synaptic stimulation paired with postsynaptic activity, they can shorten their necks and increase synaptic efficacy, thus changing the input/output gain of pyramidal neurons.
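The inverse relation between neck length and EPSP amplitude reported above is consistent with a simple series-resistor picture. In this back-of-the-envelope sketch (all values illustrative, not measurements from the study), the synaptic conductance, the spine neck, and the dendritic input resistance form a series circuit, so lowering the neck resistance, as after activity-dependent neck shrinkage, increases the depolarization reaching the dendrite:

```python
# Back-of-the-envelope sketch with illustrative values (not measurements
# from the study): modeling the synaptic conductance, spine neck, and
# dendritic input resistance as a series circuit, a shorter
# (lower-resistance) neck lets more synaptic drive reach the dendrite.
def dendritic_epsp_mV(g_syn_nS, r_neck_MOhm, r_dend_MOhm, drive_mV=60.0):
    """Steady-state depolarization (mV) reaching the dendrite."""
    r_syn_MOhm = 1e3 / g_syn_nS                        # 1 nS -> 1000 MOhm
    i_nA = drive_mV / (r_syn_MOhm + r_neck_MOhm + r_dend_MOhm)
    return i_nA * r_dend_MOhm                          # nA * MOhm = mV

long_neck = dendritic_epsp_mV(1.0, r_neck_MOhm=500.0, r_dend_MOhm=100.0)
short_neck = dendritic_epsp_mV(1.0, r_neck_MOhm=50.0, r_dend_MOhm=100.0)
```

This steady-state divider omits capacitive and active effects, but it captures the qualitative claim: neck shortening raises synaptic efficacy at the dendrite and soma.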
42
Strack B, Jacobs KM, Cios KJ. Simulating vertical and horizontal inhibition with short-term dynamics in a multi-column multi-layer model of neocortex. Int J Neural Syst 2014; 24:1440002. [DOI: 10.1142/s0129065714400024] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9]
Abstract
This paper introduces a multi-layer, multi-column model of the cortex that uses four different neuron types and short-term plasticity dynamics. The model was designed using details of neuronal connectivity available in the literature and meets three conditions: (1) biologically accurate laminar and columnar flows of activity, (2) normal function of low-threshold spiking and fast spiking neurons, and (3) the ability to generate different stages of epileptiform activity. These characteristics allow the model to simulate a lesioned or malformed cortex, i.e., to examine the properties of a developmentally malformed cortex in which the balance between inhibitory neuron subtypes is disturbed.
Affiliation(s)
- Beata Strack
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA
- Kimberle M. Jacobs
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA, USA
- Krzysztof J. Cios
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA
- IITiS, Polish Academy of Sciences, Poland
43
Carron R, Filipchuk A, Nardou R, Singh A, Michel FJ, Humphries MD, Hammond C. Early hypersynchrony in juvenile PINK1(-)/(-) motor cortex is rescued by antidromic stimulation. Front Syst Neurosci 2014; 8:95. [PMID: 24904316] [PMCID: PMC4033197] [DOI: 10.3389/fnsys.2014.00095] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2]
Abstract
In Parkinson’s disease (PD), cortical networks show enhanced synchronized activity, but whether this precedes motor signs is unknown. We investigated this question in PINK1−/− mice, a genetic rodent model of the PARK6 variant of familial PD which shows impaired spontaneous locomotion at 16 months. We used two-photon calcium imaging and whole-cell patch clamp in slices from juvenile (P14–P21) wild-type or PINK1−/− mice. We designed a horizontal tilted cortico-subthalamic slice in which the only connection between cortex and subthalamic nucleus (STN) is the hyperdirect cortico-subthalamic pathway. We report excessive correlation and synchronization in PINK1−/− M1 cortical networks 15 months before motor impairment. The percentage of correlated pairs of neurons and their strength of correlation were higher in the PINK1−/− M1 network than in the wild-type network, and the synchronized network events involved a higher percentage of neurons. Both features were independent of thalamo-cortical pathways, insensitive to chronic levodopa treatment of pups, but totally reversed by antidromic invasion of M1 pyramidal neurons by axonal spikes evoked by high frequency stimulation (HFS) of the STN. Our study describes an early excess of synchronization in the PINK1−/− cortex and suggests a potential role of antidromic activation of cortical interneurons in network desynchronization. Such a backward effect on interneuron activity may be of importance for HFS-induced network desynchronization.
Affiliation(s)
- Romain Carron
- Aix Marseille Université, Marseille, France; Institut National de la Santé et de la Recherche Médicale, INMED, UMR 901, Marseille, France; APHM, Hôpital de la Timone, Service de Neurochirurgie Fonctionnelle et Stéréotaxique, Marseille, France
- Anton Filipchuk
- Aix Marseille Université, Marseille, France; Institut National de la Santé et de la Recherche Médicale, INMED, UMR 901, Marseille, France; Instituto de Neurociencias, CSIC and Universidad Miguel Hernández, San Juan de Alicante, Alicante, Spain
- Abhinav Singh
- Faculty of Life Sciences, University of Manchester, Manchester, UK
- Mark D Humphries
- Faculty of Life Sciences, University of Manchester, Manchester, UK
- Constance Hammond
- Aix Marseille Université, Marseille, France; Institut National de la Santé et de la Recherche Médicale, INMED, UMR 901, Marseille, France
44
Klinshov VV, Teramae JN, Nekorkin VI, Fukai T. Dense neuron clustering explains connectivity statistics in cortical microcircuits. PLoS One 2014; 9:e94292. [PMID: 24732632] [PMCID: PMC3986068] [DOI: 10.1371/journal.pone.0094292] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0]
Abstract
Local cortical circuits appear highly non-random, but the underlying connectivity rule remains elusive. Here, we analyze experimental data observed in layer 5 of rat neocortex and suggest a model for connectivity from which emerge essential observed non-random features of both wiring and weighting. These features include lognormal distributions of synaptic connection strength, anatomical clustering, and strong correlations between clustering and connection strength. Our model predicts that cortical microcircuits contain large groups of densely connected neurons which we call clusters. We show that such a cluster contains about one fifth of all excitatory neurons of a circuit which are very densely connected with stronger than average synapses. We demonstrate that such clustering plays an important role in the network dynamics, namely, it creates bistable neural spiking in small cortical circuits. Furthermore, introducing local clustering in large-scale networks leads to the emergence of various patterns of persistent local activity in an ongoing network activity. Thus, our results may bridge a gap between anatomical structure and persistent activity observed during working memory and other cognitive processes.
Affiliation(s)
- Vladimir V Klinshov
- Nonlinear Dynamics Department, Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan; Laboratory for Nonlinear Oscillatory-Wave Physics, University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Jun-nosuke Teramae
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan; Department of Bioinformatic Engineering, Osaka University, Suita, Osaka, Japan; PRESTO, Japan Science and Technology Agency, Kawaguchi, Saitama, Japan
- Vladimir I Nekorkin
- Nonlinear Dynamics Department, Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia; Laboratory for Nonlinear Oscillatory-Wave Physics, University of Nizhny Novgorod, Nizhny Novgorod, Russia; Department of Oscillations Theory and Automatic Control, University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Tomoki Fukai
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan; CREST, Japan Science and Technology Agency, Kawaguchi, Saitama, Japan
45
Lengler J, Jug F, Steger A. Reliable neuronal systems: the importance of heterogeneity. PLoS One 2013; 8:e80694. [PMID: 24324621] [PMCID: PMC3851464] [DOI: 10.1371/journal.pone.0080694] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0]
Abstract
For every engineer it goes without saying: in order to build a reliable system, we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both of these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems; instead, predictable and reliable functional behavior of neural systems depends crucially on this variability. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.
Affiliation(s)
- Johannes Lengler
- Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Florian Jug
- Max-Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Angelika Steger
- Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Collegium Helveticum, Zürich, Switzerland
46
Packer AM, McConnell DJ, Fino E, Yuste R. Axo-dendritic overlap and laminar projection can explain interneuron connectivity to pyramidal cells. Cereb Cortex 2013; 23:2790-802. [PMID: 22941716] [PMCID: PMC3968298] [DOI: 10.1093/cercor/bhs210] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3]
Abstract
Neocortical GABAergic interneurons have important roles in the normal and pathological states of the circuit. Recent work has revealed that somatostatin-positive (SOM) and parvalbumin-positive (PV) interneurons connect promiscuously to pyramidal cells (PCs). We investigated whether Peters' rule, that is, the spatial overlap of axons and dendrites, could explain this unspecific connectivity. We reconstructed the morphologies of P11-17 mouse SOM and PV interneurons and their PC targets, and performed Monte Carlo simulations to build maps of predicted connectivity based on Peters' rule. We then compared the predicted with the real connectivity maps, measured with two-photon uncaging experiments, and found no statistical differences between them in the probability of connection as a function of distance and in the spatial structure of the maps. Finally, using reconstructions of connected SOM-PCs and PV-PCs, we investigated the subcellular targeting specificity, by analyzing the postsynaptic position of the contacts, and found that their spatial distributions match the distribution of postsynaptic PC surface area, in agreement with Peters' rule. Thus, the spatial profile of the connectivity maps and even the postsynaptic position of interneuron contacts could result from the mere overlap of axonal and dendritic arborizations and their laminar projection patterns.
Affiliation(s)
- Adam M Packer
- HHMI, Department of Biological Sciences, Columbia University, New York, NY 10027, USA
47
Nagelhus EA, Amiry-Moghaddam M, Bergersen LH, Bjaalie JG, Eriksson J, Gundersen V, Leergaard TB, Morth JP, Storm-Mathisen J, Torp R, Walhovd KB, Tønjum T. The glia doctrine: addressing the role of glial cells in healthy brain ageing. Mech Ageing Dev 2013; 134:449-59. [PMID: 24141107] [DOI: 10.1016/j.mad.2013.10.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3]
Abstract
Glial cells in their plurality pervade the human brain and impact on brain structure and function. A principal component of the emerging glial doctrine is the hypothesis that astrocytes, the most abundant type of glial cells, trigger major molecular processes leading to brain ageing. Astrocyte biology has been examined using molecular, biochemical and structural methods, as well as 3D brain imaging in live animals and humans. Exosomes are extracellular membrane vesicles that facilitate communication between glia, and have significant potential for biomarker discovery and drug delivery. Polymorphisms in DNA repair genes may indirectly influence the structure and function of membrane proteins expressed in glial cells and predispose specific cell subgroups to degeneration. Physical exercise may reduce or retard age-related brain deterioration by a mechanism involving neuro-glial processes. It is most likely that additional information about the distribution, structure and function of glial cells will yield novel insight into human brain ageing. Systematic studies of glia and their functions are expected to eventually lead to earlier detection of ageing-related brain dysfunction and to interventions that could delay, reduce or prevent brain dysfunction.
Affiliation(s)
- Erlend A Nagelhus
- Department of Physiology, Institute of Basic Medical Sciences, University of Oslo, Norway; Centre for Molecular Medicine Norway (NCMM), The Nordic EMBL Partnership, University of Oslo, Norway; Department of Neurology, Oslo University Hospital, Norway
48
Abstract
The mammalian brain is a complex multicellular system involving enormous numbers of neurons. The neuron is the basic functional unit of the brain, and neurons are organized by specialized intercellular connections into circuits with many other neurons. Physiological studies have revealed that individual neurons have remarkably selective response properties, and this individuality is a fundamental requirement for building complex and functionally diverse neural networks. Recent molecular biological studies have revealed genetic bases for neuronal individuality in the mammalian brain. For example, in the rodent olfactory epithelium, individual olfactory neurons express only one type of odorant receptor (OR) out of the over 1000 ORs encoded in the genome. The expressed OR determines the neuron's selective chemosensory response and specifies its axonal targeting to a particular olfactory glomerulus in the olfactory bulb. Neuronal diversity can also be generated in individual cells by the independent and stochastic expression of autosomal alleles, which leads to functional heterozygosity among neurons. Among the many genes that show autosomal stochastic monoallelic expression, approximately 50 members of the clustered protocadherins (Pcdhs) are stochastically expressed in individual neurons in distinct combinations. The clustered Pcdhs belong to a large subfamily of the cadherin superfamily of homophilic cell-adhesion proteins. Loss-of-function analyses show that the clustered Pcdhs have critical functions in the accuracy of axonal projections, synaptic formation, dendritic arborization, and neuronal survival. In addition, cis-tetramers, composed of heteromultimeric clustered Pcdh members, represent selective binding units for cell-cell interactions, and provide exponential numbers of possible cell-surface relationships between individual neurons. 
The extensive molecular diversity of neuronal cell-surface proteins affects neurons’ individual properties and connectivities. The molecular features of the diverse clustered Pcdh molecules suggest that they provide a genetic basis for neuronal individuality and appropriate neuronal wiring in the brain.
Affiliation(s)
- Takeshi Yagi
- KOKORO-Biology Group, Laboratories for Integrated Biology, Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan.
49
Hiratani N, Teramae JN, Fukai T. Associative memory model with long-tail-distributed Hebbian synaptic connections. Front Comput Neurosci 2013; 6:102. [PMID: 23403536] [PMCID: PMC3566427] [DOI: 10.3389/fncom.2012.00102] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0]
Abstract
The postsynaptic potentials of pyramidal neurons have a non-Gaussian amplitude distribution with a heavy tail in both hippocampus and neocortex. Such distributions of synaptic weights were recently shown to generate spontaneous internal noise optimal for spike propagation in recurrent cortical circuits. However, whether this internal noise generation by heavy-tailed weight distributions is possible for and beneficial to other computational functions remains unknown. To clarify this point, we construct an associative memory (AM) network model of spiking neurons that stores multiple memory patterns in a connection matrix with a lognormal weight distribution. In AM networks, non-retrieved memory patterns generate a cross-talk noise that severely disturbs memory recall. We demonstrate that neurons encoding a retrieved memory pattern and those encoding non-retrieved memory patterns have different subthreshold membrane-potential distributions in our model. Consequently, the probability of responding to inputs at strong synapses increases for the encoding neurons, whereas it decreases for the non-encoding neurons. Our results imply that heavy-tailed distributions of connection weights can generate noise useful for AM recall.
Affiliation(s)
- Naoki Hiratani
- Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Japan; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
50
Nondopaminergic neurotransmission in the pathophysiology of Tourette syndrome. Int Rev Neurobiol 2013; 112:95-130. [DOI: 10.1016/b978-0-12-411546-0.00004-4] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6]