1
Nelson AD, Catalfio AM, Gupta JP, Min L, Caballero-Florán RN, Dean KP, Elvira CC, Derderian KD, Kyoung H, Sahagun A, Sanders SJ, Bender KJ, Jenkins PM. Physical and functional convergence of the autism risk genes Scn2a and Ank2 in neocortical pyramidal cell dendrites. Neuron 2024; 112:1133-1149.e6. [PMID: 38290518; PMCID: PMC11097922; DOI: 10.1016/j.neuron.2024.01.003]
Abstract
Dysfunction in sodium channels and in their ankyrin scaffolding partners has been implicated in neurodevelopmental disorders, including autism spectrum disorder (ASD). In particular, the genes SCN2A, which encodes the sodium channel NaV1.2, and ANK2, which encodes ankyrin-B, have strong ASD association. Recent studies indicate that ASD-associated haploinsufficiency in Scn2a impairs dendritic excitability and synaptic function in neocortical pyramidal cells, but how NaV1.2 is anchored within dendritic regions is unknown. Here, we show that ankyrin-B is essential for scaffolding NaV1.2 to the dendritic membrane of mouse neocortical neurons and that haploinsufficiency of Ank2 phenocopies the intrinsic dendritic excitability and synaptic deficits observed in Scn2a+/- conditions. These results establish a direct, convergent link between two major ASD risk genes and reinforce an emerging framework suggesting that neocortical pyramidal cell dendritic dysfunction can contribute to neurodevelopmental disorder pathophysiology.
Affiliation(s)
- Andrew D Nelson
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Amanda M Catalfio
- Department of Pharmacology, University of Michigan Medical School, Ann Arbor, MI, USA
- Julie P Gupta
- Department of Pharmacology, University of Michigan Medical School, Ann Arbor, MI, USA
- Lia Min
- Department of Pharmacology, University of Michigan Medical School, Ann Arbor, MI, USA
- Kendall P Dean
- Cellular and Molecular Biology Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Carina C Elvira
- Cellular and Molecular Biology Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Kimberly D Derderian
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Henry Kyoung
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Atehsa Sahagun
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Stephan J Sanders
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Psychiatry, University of California, San Francisco, San Francisco, CA, USA
- Kevin J Bender
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Paul M Jenkins
- Department of Pharmacology, University of Michigan Medical School, Ann Arbor, MI, USA; Department of Psychiatry, University of Michigan Medical School, Ann Arbor, MI, USA
2
Brandalise F, Kalmbach BE, Cook EP, Brager DH. Impaired dendritic spike generation in the Fragile X prefrontal cortex is due to loss of dendritic sodium channels. J Physiol 2023; 601:831-845. [PMID: 36625320; PMCID: PMC9970745; DOI: 10.1113/jp283311]
Abstract
Patients with Fragile X syndrome, the leading monogenetic cause of autism, suffer from impairments related to the prefrontal cortex, including working memory and attention. Synaptic inputs to the distal dendrites of layer 5 pyramidal neurons in the prefrontal cortex have a weak influence on the somatic membrane potential. To overcome this filtering, distal inputs are transformed into local dendritic Na+ spikes, which propagate to the soma and trigger action potential output. Layer 5 extratelencephalic (ET) prefrontal cortex (PFC) neurons project to the brainstem and various thalamic nuclei and are therefore well positioned to integrate task-relevant sensory signals and guide motor actions. We used current clamp and outside-out patch clamp recording to investigate dendritic spike generation in ET neurons from male wild-type and Fmr1 knockout (FX) mice. The threshold for dendritic spikes was more depolarized in FX neurons compared to wild-type. Analysis of voltage responses to simulated in vivo 'noisy' current injections showed that a larger dendritic input stimulus was required to elicit dendritic spikes in FX ET dendrites compared to wild-type. Patch clamp recordings revealed that the dendritic Na+ conductance was significantly smaller in FX ET dendrites. Taken together, our results suggest that the generation of Na+-dependent dendritic spikes is impaired in ET neurons of the PFC in FX mice. Considering our prior findings that somatic D-type K+ and dendritic hyperpolarization-activated cyclic nucleotide-gated (HCN) channel function is reduced in ET neurons, we suggest that dendritic integration by PFC circuits is fundamentally altered in Fragile X syndrome.
Key points:
- Dendritic spike threshold is depolarized in layer 5 prefrontal cortex neurons in Fmr1 knockout (FX) mice.
- Simultaneous somatic and dendritic recording with white noise current injections revealed that larger dendritic stimuli were required to elicit dendritic spikes in FX extratelencephalic (ET) neurons.
- Outside-out patch clamp recording revealed that dendritic sodium conductance density was lower in FX ET neurons.
Affiliation(s)
- Federico Brandalise
- Center for Learning and Memory, University of Texas at Austin, Austin, TX 78712, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX 78712, USA
- Current address: Department of Biosciences, University of Milan, Milano, Italy
- Brian E. Kalmbach
- Center for Learning and Memory, University of Texas at Austin, Austin, TX 78712, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX 78712, USA
- Current address: Allen Institute for Brain Science, Seattle, WA, and Department of Physiology and Biophysics, University of Washington
- Erik P. Cook
- Department of Physiology, McGill University, Montreal, QC, Canada
- Darrin H. Brager
- Center for Learning and Memory, University of Texas at Austin, Austin, TX 78712, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX 78712, USA
3
Ujfalussy BB, Orbán G. Sampling motion trajectories during hippocampal theta sequences. eLife 2022; 11:e74058. [DOI: 10.7554/elife.74058]
Abstract
Efficient planning in complex environments requires that uncertainty associated with current inferences and possible consequences of forthcoming actions is represented. Representation of uncertainty has been established in sensory systems during simple perceptual decision making tasks, but it remains unclear whether complex cognitive computations such as planning and navigation are also supported by probabilistic neural representations. Here, we capitalized on gradually changing uncertainty along planned motion trajectories during hippocampal theta sequences to capture signatures of uncertainty representation in population responses. In contrast with prominent theories, we found no evidence of encoding parameters of probability distributions in the momentary population activity recorded in an open-field navigation task in rats. Instead, uncertainty was encoded sequentially by sampling motion trajectories randomly and efficiently in subsequent theta cycles from the distribution of potential trajectories. Our analysis is the first to demonstrate that the hippocampus is well equipped to contribute to optimal planning by representing uncertainty.
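The sampling scheme described here — one candidate trajectory drawn per theta cycle, with uncertainty carried by the variability across cycles rather than by explicitly encoded distribution parameters — can be illustrated with a minimal sketch. The Gaussian posterior, the reduction of a trajectory to a single endpoint coordinate, and the function name are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def theta_cycle_samples(posterior_mean, posterior_sd, n_cycles):
    # One trajectory (reduced here to a single endpoint coordinate) is drawn
    # per theta cycle; the spread across cycles carries the uncertainty.
    return rng.normal(posterior_mean, posterior_sd, size=n_cycles)

# With enough cycles, the empirical mean and spread of the samples recover
# the mean and uncertainty of the underlying distribution.
samples = theta_cycle_samples(posterior_mean=0.0, posterior_sd=2.0, n_cycles=5000)
```

A downstream reader that only accumulates these per-cycle samples can thus estimate both the most likely trajectory and its uncertainty, without any cell ever firing in proportion to a variance parameter.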
Affiliation(s)
- Balazs B Ujfalussy
- Laboratory of Biological Computation, Institute of Experimental Medicine, Budapest
- Laboratory of Neuronal Signalling, Institute of Experimental Medicine, Budapest
- Gergő Orbán
- Computational Systems Neuroscience Lab, Wigner Research Center for Physics, Budapest
4
Moore JJ, Robert V, Rashid SK, Basu J. Assessing Local and Branch-specific Activity in Dendrites. Neuroscience 2022; 489:143-164. [PMID: 34756987; PMCID: PMC9125998; DOI: 10.1016/j.neuroscience.2021.10.022]
Abstract
Dendrites are elaborate neural processes which integrate inputs from various sources in space and time. While decades of work have suggested an independent role for dendrites in driving nonlinear computations for the cell, only recently have technological advances enabled us to capture the variety of activity in dendrites and their coupling dynamics with the soma. Under certain circumstances, activity generated in a given dendritic branch remains isolated, such that the soma or even sister dendrites are not privy to these localized signals. Such branch-specific activity could radically increase the capacity and flexibility of coding for the cell as a whole. Here, we discuss these forms of localized and branch-specific activity, their functional relevance in plasticity and behavior, and their supporting biophysical and circuit-level mechanisms. We conclude by showcasing electrical and optical approaches in hippocampal area CA3, using original experimental data to discuss experimental and analytical methodology and key considerations to take when investigating the functional relevance of independent dendritic activity.
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Vincent Robert
- Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Shannon K Rashid
- Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Jayeeta Basu
- Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
5
Input rate encoding and gain control in dendrites of neocortical pyramidal neurons. Cell Rep 2022; 38:110382. [PMID: 35172157; PMCID: PMC8967317; DOI: 10.1016/j.celrep.2022.110382]
Abstract
Elucidating how neurons encode network activity is essential to understanding how the brain processes information. Neocortical pyramidal cells receive excitatory input onto spines distributed along dendritic branches. Local dendritic branch nonlinearities can boost the response to spatially clustered and synchronous input, but how this translates into the integration of patterns of ongoing activity remains unclear. To examine dendritic integration under naturalistic stimulus regimes, we use two-photon glutamate uncaging to repeatedly activate multiple dendritic spines at random intervals. In the proximal dendrites of two populations of layer 5 pyramidal neurons in the mouse motor cortex, spatially restricted synchrony is not a prerequisite for dendritic boosting. Branches encode afferent inputs with distinct rate sensitivities depending upon cell and branch type. Thus, inputs distributed along a dendritic branch can recruit supralinear boosting and the window of this nonlinearity may provide a mechanism by which dendrites can preferentially amplify slow-frequency network oscillations.
6
A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021; 109:4001-4017.e10. [PMID: 34715026; PMCID: PMC8691952; DOI: 10.1016/j.neuron.2021.09.044]
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
7
Cazé RD, Stimberg M. Neurons with dendrites can perform linearly separable computations with low resolution synaptic weights. F1000Res 2020; 9:1174. [PMID: 33564396; PMCID: PMC7848858; DOI: 10.12688/f1000research.26486.3]
Abstract
In theory, neurons modelled as single layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by finding a class of computations which requires increasing precision with the number of inputs in a perceptron and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. Then, we complement this analytical study by a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role of dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. This work not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
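The core argument — that saturating dendritic subunits let a neuron compute, with uniform low-resolution weights, something a single-layer perceptron cannot separate using those same weights — can be sketched as follows. The square-root saturation and the two-branch layout are illustrative assumptions, not the authors' biophysical model:

```python
import numpy as np

def linear_neuron(x, w, theta):
    # Single-layer perceptron: output depends only on the global weighted sum.
    return float(np.dot(w, x) >= theta)

def sublinear_neuron(x, w, branches, theta):
    # Each dendritic subunit saturates (square root) before the somatic sum,
    # so the same total drive counts for more when spread across branches.
    dendritic = sum(np.sqrt(np.dot(w[b], x[b])) for b in branches)
    return float(dendritic >= theta)

# Four synapses, two per branch, all weights equal to 1 (low resolution).
w = np.ones(4)
branches = [np.array([0, 1]), np.array([2, 3])]
x_spread = np.array([1.0, 0.0, 1.0, 0.0])     # one active synapse per branch
x_clustered = np.array([1.0, 1.0, 0.0, 0.0])  # both on the same branch
```

With threshold 1.8, the sublinear neuron fires for the spread pattern (1 + 1 = 2) but not the clustered one (√2 ≈ 1.41), while the perceptron sees an identical sum of 2 in both cases; the discrimination is achieved without any fine-tuning of individual weights.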
Affiliation(s)
- Romain D Cazé
- IEMN, CNRS UMR 8520, Villeneuve d'Ascq, 59650, France
8
Ujfalussy BB, Makara JK. Impact of functional synapse clusters on neuronal response selectivity. Nat Commun 2020; 11:1413. [PMID: 32179739; PMCID: PMC7075899; DOI: 10.1038/s41467-020-15147-6]
Abstract
Clustering of functionally similar synapses in dendrites is thought to affect neuronal input-output transformation by triggering local nonlinearities. However, neither the in vivo impact of synaptic clusters on somatic membrane potential (sVm), nor the rules of cluster formation are elucidated. We develop a computational approach to measure the effect of functional synaptic clusters on sVm response of biophysical model CA1 and L2/3 pyramidal neurons to in vivo-like inputs. We demonstrate that small synaptic clusters appearing with random connectivity do not influence sVm. With structured connectivity, ~10-20 synapses/cluster are optimal for clustering-based tuning via state-dependent mechanisms, but larger selectivity is achieved by 2-fold potentiation of the same synapses. We further show that without nonlinear amplification of the effect of random clusters, action potential-based, global plasticity rules cannot generate functional clustering. Our results suggest that clusters likely form via local synaptic interactions, and have to be moderately large to impact sVm responses.
Affiliation(s)
- Balázs B Ujfalussy
- Laboratory of Neuronal Signaling, Institute of Experimental Medicine, 1083, Budapest, Hungary
- Judit K Makara
- Laboratory of Neuronal Signaling, Institute of Experimental Medicine, 1083, Budapest, Hungary
9
Iacobucci GJ, Popescu GK. Ca2+-Dependent Inactivation of GluN2A and GluN2B NMDA Receptors Occurs by a Common Kinetic Mechanism. Biophys J 2020; 118:798-812. [PMID: 31629478; PMCID: PMC7036730; DOI: 10.1016/j.bpj.2019.07.057]
Abstract
N-Methyl-d-aspartate (NMDA) receptors are Ca2+-permeable channels gated by glutamate and glycine that are essential for central excitatory transmission. Ca2+-dependent inactivation (CDI) is a regulatory feedback mechanism that reduces GluN2A-type NMDA receptor responses in an activity-dependent manner. Although CDI is mediated by calmodulin binding to the constitutive GluN1 subunit, prior studies suggest that GluN2B-type receptors are insensitive to CDI. We examined the mechanism of CDI subtype dependence using electrophysiological recordings of recombinant NMDA receptors expressed in HEK-293 cells. In physiological external Ca2+, we observed robust CDI of whole-cell GluN2A currents (0.42 ± 0.05) but no CDI in GluN2B currents (0.08 ± 0.07). In contrast, when Ca2+ was supplied intracellularly, robust CDI occurred for both GluN2A and GluN2B currents (0.75 ± 0.03 and 0.67 ± 0.02, respectively). To examine how the source of Ca2+ affects CDI, we recorded single-channel Na+ currents to quantify the receptor gating mechanism while simultaneously monitoring ionomycin-induced intracellular Ca2+ elevations with fluorometry. We found that CDI of both GluN2A and GluN2B receptors reflects receptor accumulation in long-lived closed (desensitized) states, suggesting that the observed subtype-dependent differences in macroscopic CDI reflect intrinsic differences in equilibrium open probabilities (Po). We tested this hypothesis by measuring substantial macroscopic CDI, in physiological conditions, for high Po GluN2B receptors (GluN1A652Y/GluN2B). Together, these results show that Ca2+ flux produces activity-dependent inactivation for both GluN2A and GluN2B receptors and that the extent of CDI varies with channel Po. These results are consistent with CDI as an autoinhibitory feedback mechanism against excessive Ca2+ load during high Po activation.
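The proposed mechanism — CDI reflecting accumulation in a long-lived desensitized state, with the macroscopic extent of inactivation tracking open probability — can be sketched with a minimal three-state gating scheme. This scheme and its rate constants are simplifying assumptions for illustration; the paper's kinetic models are more detailed:

```python
def steady_state_open(k_co, k_oc, k_od, k_do):
    # Linear gating chain C <-> O <-> D: at steady state the net flux across
    # each transition is zero, so occupancies follow the rate ratios.
    p_c = 1.0
    p_o = p_c * k_co / k_oc   # closed <-> open balance
    p_d = p_o * k_od / k_do   # open <-> desensitized balance
    z = p_c + p_o + p_d
    return p_o / z            # equilibrium open probability

def cdi(po_control, po_calcium):
    # Macroscopic CDI as the fractional loss of open probability with Ca2+.
    return 1.0 - po_calcium / po_control
```

For example, with opening/closing rates of 2 and 1 and no Ca2+-driven desensitization (k_od = 0), Po = 2/3; switching on desensitization (k_od = k_do = 1) drops Po to 2/5, giving CDI of 0.4 — the same order as the whole-cell values quoted in the abstract.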
Affiliation(s)
- Gary J Iacobucci
- Department of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, State University of New York, Buffalo, New York
- Gabriela K Popescu
- Department of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, State University of New York, Buffalo, New York
11
Kim R, Li Y, Sejnowski TJ. Simple framework for constructing functional spiking recurrent neural networks. Proc Natl Acad Sci U S A 2019; 116:22811-22820. [PMID: 31636215; PMCID: PMC6842655; DOI: 10.1073/pnas.1905926116]
Abstract
Cortical microcircuits exhibit complex recurrent architectures that possess dynamically rich properties. The neurons that make up these microcircuits communicate mainly via discrete spikes, and it is not clear how spikes give rise to dynamics that can be used to perform computationally challenging tasks. In contrast, continuous models of rate-coding neurons can be trained to perform complex tasks. Here, we present a simple framework to construct biologically realistic spiking recurrent neural networks (RNNs) capable of learning a wide range of tasks. Our framework involves training a continuous-variable rate RNN with important biophysical constraints and transferring the learned dynamics and constraints to a spiking RNN in a one-to-one manner. The proposed framework introduces only 1 additional parameter to establish the equivalence between rate and spiking RNN models. We also study other model parameters related to the rate and spiking networks to optimize the one-to-one mapping. By establishing a close relationship between rate and spiking models, we demonstrate that spiking RNNs could be constructed to achieve similar performance as their counterpart continuous rate networks.
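The continuous-variable rate RNN that the framework starts from can be sketched in a few lines. This is a generic leaky tanh rate network, assumed here for illustration; the biophysical constraints and the one-parameter rate-to-spike transfer are the paper's contribution and are not reproduced:

```python
import numpy as np

def simulate_rate_rnn(W, x0, steps, dt=0.05, tau=1.0):
    # Leaky continuous dynamics: tau * dx/dt = -x + W @ r, with rate r = tanh(x).
    x = x0.copy()
    for _ in range(steps):
        r = np.tanh(x)
        x = x + (dt / tau) * (-x + W @ r)
    return x

# Weak recurrence, so activity relaxes toward the quiescent state.
W = 0.2 * np.eye(10)
x_final = simulate_rate_rnn(W, np.ones(10), steps=200)
```

In the paper's framework, a network of this form is first trained on a task and the learned connectivity is then carried over, unit by unit, to a spiking network.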
Affiliation(s)
- Robert Kim
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093
- Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093
- Yinghao Li
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- Terrence J Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- Institute for Neural Computation, University of California San Diego, La Jolla, CA 92093
- Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093
12
Dendritic Spikes Expand the Range of Well Tolerated Population Noise Structures. J Neurosci 2019; 39:9173-9184. [PMID: 31558617; DOI: 10.1523/jneurosci.0638-19.2019]
Abstract
The brain operates surprisingly well despite the noisy nature of individual neurons. The central mechanism for noise mitigation in the nervous system is thought to involve averaging over multiple noise-corrupted inputs. Subsequently, there has been considerable interest in identifying noise structures that can be integrated linearly in a way that preserves reliable signal encoding. By analyzing realistic synaptic integration in biophysically accurate neuronal models, I report a complementary denoising approach that is mediated by focal dendritic spikes. Dendritic spikes might seem to be unlikely candidates for noise reduction due to their minuscule integration compartments and poor averaging abilities. Nonetheless, the extra thresholding step introduced by dendritic spike generation increases neuronal tolerance for a broad category of noise structures, some of which cannot be resolved well with averaging. This property of active dendrites compensates for compartment size constraints and expands the repertoire of conditions that can be processed by neuronal populations.
Significance statement: Noise, or random variability, is a prominent feature of the neuronal code and poses a fundamental challenge for information processing. To reconcile the surprisingly accurate output of the brain with the inherent noisiness of biological systems, previous work examined signal integration in idealized neurons. The notion that emerged from this body of work is that accurate signal representation relies largely on input averaging in neuronal dendrites. In contrast to the prevailing view, I show that denoising in simulated neurons with realistic morphology and biophysical properties follows a different strategy: dendritic spikes act as classifiers that assist in extracting information from a variety of noise structures that have been considered before to be particularly disruptive for reliable brain function.
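The contrast drawn here between linear averaging and the extra thresholding step of dendritic spike generation can be sketched with a toy readout. The outlier noise structure, the thresholds, and the function names are illustrative assumptions, not the paper's biophysical models:

```python
import numpy as np

def averaging_readout(inputs):
    # Classical denoising: a linear average across converging inputs.
    return inputs.mean()

def dendritic_spike_readout(inputs, branch_threshold=0.5):
    # Extra thresholding step: each branch emits an all-or-none spike,
    # and the soma sees only the fraction of branches that spiked.
    return (inputs > branch_threshold).mean()

# Ten branches; the signal is absent, but one branch carries a large artifact.
quiet_with_outlier = np.array([0.0] * 9 + [20.0])
signal_present = np.ones(10)
```

A single heavy-tailed outlier drags the average above any sensible decision threshold, falsely signaling input, whereas the all-or-none branch classification caps the outlier's influence at one vote out of ten — the kind of noise structure that averaging alone cannot resolve.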
13
Ujfalussy BB, Makara JK, Lengyel M, Branco T. Global and Multiplexed Dendritic Computations under In Vivo-like Conditions. Neuron 2019; 100:579-592.e5. [PMID: 30408443; PMCID: PMC6226578; DOI: 10.1016/j.neuron.2018.08.032]
Abstract
Dendrites integrate inputs nonlinearly, but it is unclear how these nonlinearities contribute to the overall input-output transformation of single neurons. We developed statistically principled methods using a hierarchical cascade of linear-nonlinear subunits (hLN) to model the dynamically evolving somatic response of neurons receiving complex, in vivo-like spatiotemporal synaptic input patterns. We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell. Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy. A novel hLN motif, input multiplexing into parallel processing channels, could improve predictions as much as conventionally used additional layers of local nonlinearities. We obtained similar results in two other cell types. This approach provides a data-driven characterization of a key component of cortical circuit computations: the input-output transformation of neurons during in vivo-like conditions.
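A linear-nonlinear subunit cascade of the kind described can be sketched as a minimal two-level instance: each subunit linearly weights its own inputs and applies a static nonlinearity, and a global output nonlinearity combines the subunit outputs. The sigmoid nonlinearity and the parameter names are assumptions for illustration; the paper's hLN models additionally include temporal filtering and input multiplexing, which are not shown:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def hln_response(inputs_per_subunit, w_subunits, w_out, b_out):
    # Stage 1: each subunit applies its linear weights, then a nonlinearity.
    subunit_out = np.array([
        sigmoid(np.dot(w, x))
        for w, x in zip(w_subunits, inputs_per_subunit)
    ])
    # Stage 2: a global (somatic) nonlinearity combines the subunit outputs.
    return sigmoid(np.dot(w_out, subunit_out) + b_out)
```

Because each stage is differentiable, the cascade can be fitted to a recorded somatic voltage trace by standard gradient methods, which is what makes it useful as a compact, data-driven description of a neuron's input-output transformation.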
Affiliation(s)
- Balázs B Ujfalussy
- MRC Laboratory of Molecular Biology, Cambridge, UK; Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; MTA Wigner Research Center for Physics, Budapest, Hungary
- Judit K Makara
- Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Cognitive Science, Central European University, Budapest, Hungary
- Tiago Branco
- MRC Laboratory of Molecular Biology, Cambridge, UK; Sainsbury Wellcome Centre, University College London, London, UK
14
Diverse synaptic and dendritic mechanisms of complex spike burst generation in hippocampal CA3 pyramidal cells. Nat Commun 2019; 10:1859. [PMID: 31015414; PMCID: PMC6478939; DOI: 10.1038/s41467-019-09767-w]
Abstract
Complex spike bursts (CSBs) represent a characteristic firing pattern of hippocampal pyramidal cells (PCs). In CA1PCs, CSBs are driven by regenerative dendritic plateau potentials, produced by correlated entorhinal cortical and CA3 inputs that simultaneously depolarize distal and proximal dendritic domains. However, in CA3PCs neither the generation mechanisms nor the computational role of CSBs are well elucidated. We show that CSBs are induced by dendritic Ca2+ spikes in CA3PCs. Surprisingly, the ability of CA3PCs to produce CSBs is heterogeneous, with non-uniform synaptic input-output transformation rules triggering CSBs. The heterogeneity is partly related to the topographic position of CA3PCs; we identify two ion channel types, HCN and Kv2 channels, whose proximodistal activity gradients contribute to subregion-specific modulation of CSB propensity. Our results suggest that heterogeneous dendritic integrative properties, along with previously reported synaptic connectivity gradients, define functional subpopulations of CA3PCs that may support CA3 network computations underlying associative memory processes.
Collapse
|
15
|
Echeveste R, Lengyel M. The Redemption of Noise: Inference with Neural Populations. Trends Neurosci 2018; 41:767-770. [PMID: 30366563 PMCID: PMC6416224 DOI: 10.1016/j.tins.2018.09.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Accepted: 09/07/2018] [Indexed: 11/30/2022]
Abstract
In 2006, Ma et al. presented an elegant theory for how populations of neurons might represent uncertainty to perform Bayesian inference. Critically, according to this theory, neural variability is no longer a nuisance, but rather a vital part of how the brain encodes probability distributions and performs computations with them.
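The probabilistic population code idea referenced here (Ma et al., 2006) can be sketched in a few lines: with independent Poisson neurons and known tuning curves, the log-posterior over the stimulus is linear in the observed spike counts, so trial-to-trial variability directly shapes the decoded probability distribution. The sketch below is an illustrative toy decoder, not code from the paper; the Gaussian tuning curves, gain, and stimulus grid are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus grid and Gaussian tuning curves for a population of Poisson neurons.
s_grid = np.linspace(-10, 10, 201)
centers = np.linspace(-10, 10, 50)
gain = 20.0  # arbitrary peak firing rate (spikes per trial)
tuning = gain * np.exp(-0.5 * ((s_grid[None, :] - centers[:, None]) / 2.0) ** 2)

def posterior(spike_counts):
    """Posterior over s from independent Poisson likelihoods (flat prior).

    log p(s | r) = sum_i r_i * log f_i(s) - sum_i f_i(s) + const
    """
    log_post = spike_counts @ np.log(tuning) - tuning.sum(axis=0)
    log_post -= log_post.max()          # numerical stability
    p = np.exp(log_post)
    return p / p.sum()

# Simulate one population response to a true stimulus and decode it.
s_true = 3.0
rates = gain * np.exp(-0.5 * ((s_true - centers) / 2.0) ** 2)
r = rng.poisson(rates)                  # variability is the signal, not noise
p = posterior(r)
s_hat = (s_grid * p).sum()              # posterior-mean estimate
```

The key point matching the abstract: the Poisson variability in `r` is not discarded but determines the width of `p`, i.e., the encoded uncertainty.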
Collapse
Affiliation(s)
- Rodrigo Echeveste
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK
| | - Máté Lengyel
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Cognitive Science, Central European University, Budapest, Hungary.
| |
Collapse
|
16
|
Redundancy in synaptic connections enables neurons to learn optimally. Proc Natl Acad Sci U S A 2018; 115:E6871-E6879. [PMID: 29967182 DOI: 10.1073/pnas.1803274115] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
Recent experimental studies suggest that, in cortical microcircuits of the mammalian brain, the majority of neuron-to-neuron connections are realized by multiple synapses. However, it is not known whether such redundant synaptic connections provide any functional benefit. Here, we show that redundant synaptic connections enable near-optimal learning in cooperation with synaptic rewiring. By constructing a simple dendritic neuron model, we demonstrate that with multisynaptic connections synaptic plasticity approximates a sample-based Bayesian filtering algorithm known as particle filtering, and wiring plasticity implements its resampling process. Extending the proposed framework to a detailed single-neuron model of perceptual learning in the primary visual cortex, we show that the model accounts for many experimental observations. In particular, the proposed model reproduces the dendritic position dependence of spike-timing-dependent plasticity and the functional synaptic organization on the dendritic tree based on the stimulus selectivity of presynaptic neurons. Our study provides a conceptual framework for synaptic plasticity and rewiring.
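The correspondence the authors draw can be illustrated with a generic bootstrap particle filter: each "particle" stands in for one synapse of a multisynaptic connection tracking a drifting optimal weight, and the resampling step plays the role the paper assigns to wiring plasticity. This is a minimal textbook sketch under assumed Gaussian dynamics, not the authors' dendritic neuron model; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

n_steps, n_particles = 200, 100   # n_particles ~ synapses per connection
drift_sd, obs_sd = 0.05, 0.5

# Hidden "optimal weight" performs a slow random walk; observations are noisy.
w_true = np.cumsum(rng.normal(0.0, drift_sd, n_steps))
obs = w_true + rng.normal(0.0, obs_sd, n_steps)

particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for y in obs:
    particles += rng.normal(0.0, drift_sd, n_particles)  # diffusion (prior dynamics)
    log_w = -0.5 * ((y - particles) / obs_sd) ** 2        # Gaussian log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates.append(np.dot(w, particles))                # posterior-mean readout
    idx = rng.choice(n_particles, n_particles, p=w)       # resampling ("rewiring")
    particles = particles[idx]

err = np.sqrt(np.mean((np.array(estimates) - w_true) ** 2))
```

Because the filter pools many redundant particles, its tracking error `err` falls well below the single-observation noise, which is the functional benefit of redundancy the abstract argues for.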
Collapse
|
17
|
Dewell RB, Gabbiani F. Biophysics of object segmentation in a collision-detecting neuron. eLife 2018; 7:34238. [PMID: 29667927 PMCID: PMC5947989 DOI: 10.7554/elife.34238] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2017] [Accepted: 04/04/2018] [Indexed: 12/12/2022] Open
Abstract
Collision avoidance is critical for survival, including in humans, and many species possess visual neurons exquisitely sensitive to objects approaching on a collision course. Here, we demonstrate that a collision-detecting neuron can detect the spatial coherence of a simulated impending object, thereby carrying out a computation akin to object segmentation critical for proper escape behavior. At the cellular level, object segmentation relies on a precise selection of the spatiotemporal pattern of synaptic inputs by dendritic membrane potential-activated channels. One channel type linked to dendritic computations in many neural systems, the hyperpolarization-activated cation channel, HCN, plays a central role in this computation. Pharmacological block of HCN channels abolishes the neuron's spatial selectivity and impairs the generation of visually guided escape behaviors, making it directly relevant to survival. Additionally, our results suggest that the interaction of HCN and inactivating K+ channels within active dendrites produces neuronal and behavioral object specificity by discriminating between complex spatiotemporal synaptic activation patterns.
Collapse
Affiliation(s)
| | - Fabrizio Gabbiani
- Department of Neuroscience, Baylor College of Medicine, Houston, United States; Electrical and Computer Engineering, Rice University, Houston, United States
| |
Collapse
|
18
|
Antic SD, Hines M, Lytton WW. Embedded ensemble encoding hypothesis: The role of the "Prepared" cell. J Neurosci Res 2018; 96:1543-1559. [PMID: 29633330 DOI: 10.1002/jnr.24240] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2017] [Revised: 03/10/2018] [Accepted: 03/12/2018] [Indexed: 01/08/2023]
Abstract
We here reconsider current theories of neural ensembles in the context of recent discoveries about neuronal dendritic physiology. The key physiological observation is that the dendritic plateau potential produces sustained depolarization of the cell body (amplitude 10-20 mV, duration 200-500 ms). Our central hypothesis is that synaptically-evoked dendritic plateau potentials lead to a prepared state of a neuron that favors spike generation. The plateau both depolarizes the cell toward spike threshold, and provides faster response to inputs through a shortened membrane time constant. As a result, the speed of synaptic-to-action potential (AP) transfer is faster during the plateau phase. Our hypothesis relates the changes from "resting" to "depolarized" neuronal state to changes in ensemble dynamics and in network information flow. The plateau provides the Prepared state (sustained depolarization of the cell body) with a time window of 200-500 ms. During this time, a neuron can tune into ongoing network activity and synchronize spiking with other neurons to provide a coordinated Active state (robust firing of somatic APs), which would permit "binding" of signals through coordination of neural activity across a population. The transient Active ensemble of neurons is embedded in the longer-lasting Prepared ensemble of neurons. We hypothesize that "embedded ensemble encoding" may be an important organizing principle in networks of neurons.
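The core quantitative claim — a plateau both depolarizes the soma toward threshold and shortens the effective membrane time constant, so synaptic-to-AP transfer is faster — can be checked with a toy leaky integrator. The parameters below (a 15 mV depolarization, a 4x shorter time constant) are illustrative choices consistent with the ranges quoted in the abstract, not values from the paper.

```python
def time_to_spike(v0, tau_m, v_thresh=-50.0, e_l=-70.0, drive=25.0, dt=0.1):
    """Time (ms) for a leaky integrator to reach threshold after a step input.

    Forward-Euler integration of dv/dt = (E_L - v + drive) / tau_m.
    All values in mV and ms; capped at 200 ms if threshold is never reached.
    """
    v, t = v0, 0.0
    while v < v_thresh and t < 200.0:
        v += dt * (e_l - v + drive) / tau_m
        t += dt
    return t

# "Resting" cell: starts at E_L with a slow membrane time constant.
t_rest = time_to_spike(v0=-70.0, tau_m=20.0)

# "Prepared" cell: the plateau depolarizes it ~15 mV and, by increasing
# membrane conductance, shortens the effective time constant.
t_prepared = time_to_spike(v0=-55.0, tau_m=5.0)
```

With these numbers the prepared cell reaches threshold several times faster than the resting cell, illustrating why the plateau window favors fast, coordinated spiking.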
Collapse
Affiliation(s)
- Srdjan D Antic
- Department of Neuroscience, Institute for Systems Genomics, Stem Cell Institute, UConn Health, Farmington, Connecticut
| | - Michael Hines
- Department of Neuroscience, Yale School of Medicine, New Haven, Connecticut
| | - William W Lytton
- Physiology and Pharmacology, Neurology, Biomedical Engineering, SUNY Downstate Medical Center, Brooklyn, New York; Department of Neurology, Kings County Hospital, Brooklyn, New York
| |
Collapse
|
19
|
Bono J, Wilmes KA, Clopath C. Modelling plasticity in dendrites: from single cells to networks. Curr Opin Neurobiol 2017; 46:136-141. [PMID: 28888857 DOI: 10.1016/j.conb.2017.08.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Accepted: 08/23/2017] [Indexed: 02/06/2023]
Abstract
One of the key questions in neuroscience is how our brain self-organises to efficiently process information. To answer this question, we need to understand the underlying mechanisms of plasticity and their role in shaping synaptic connectivity. Theoretical neuroscience typically investigates plasticity at the level of neural networks. Neural network models often consist of point neurons, completely neglecting neuronal morphology for reasons of simplicity. However, during the past decades it has become increasingly clear that inputs are locally processed in the dendrites before they reach the cell body. Dendritic properties enable local interactions between synapses and location-dependent modulation of inputs, rendering the position of synapses on dendrites highly important. These insights changed our view of neurons, such that we now think of them as small networks of nearly independent subunits rather than as simple points. Here, we propose that understanding how the brain processes information requires that we address the following questions: which plasticity mechanisms are present in the dendrites, and how do they enable the self-organisation of synapses across the dendritic tree for efficient information processing? Ultimately, dendritic plasticity mechanisms can be studied in networks of neurons with dendrites, possibly uncovering unknown mechanisms that shape the connectivity in our brains.
Collapse
Affiliation(s)
- Jacopo Bono
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
| | - Katharina A Wilmes
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK.
| |
Collapse
|
20
|
Zeldenrust F, de Knecht S, Wadman WJ, Denève S, Gutkin B. Estimating the Information Extracted by a Single Spiking Neuron from a Continuous Input Time Series. Front Comput Neurosci 2017; 11:49. [PMID: 28663729 PMCID: PMC5471316 DOI: 10.3389/fncom.2017.00049] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 05/19/2017] [Indexed: 11/30/2022] Open
Abstract
Understanding the relation between (sensory) stimuli and the activity of neurons (i.e., “the neural code”) lies at the heart of understanding the computational properties of the brain. However, quantifying the information between a stimulus and a spike train has proven to be challenging. We propose a new (in vitro) method to measure how much information a single neuron transfers from the input it receives to its output spike train. The input is generated by an artificial neural network that responds to a randomly appearing and disappearing “sensory stimulus”: the hidden state. The sum of this network activity is injected as current input into the neuron under investigation. The mutual information between the hidden state on the one hand and the spike trains of the artificial network or the recorded spike train on the other hand can easily be estimated owing to the binary nature of the hidden state. The characteristics of the input current, such as the time constant resulting from the (dis)appearance rate of the hidden state or the amplitude of the input current (the firing frequency of the neurons in the artificial network), can be varied independently. As an example, we apply this method to pyramidal neurons in the CA1 of mouse hippocampus and compare the recorded spike trains to the optimal response of the “Bayesian neuron” (BN). We conclude that, as in the BN, information transfer in hippocampal pyramidal cells is non-linear and amplifying: the information loss between the artificial input and the output spike train is high if the input to the neuron (the firing of the artificial network) is not very informative about the hidden state. If the input to the neuron does contain a lot of information about the hidden state, the information loss is low. Moreover, neurons increase their firing rates when the (dis)appearance rate is high, so that the (relative) amount of transferred information stays constant.
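The estimation step the method relies on — that mutual information is easy to compute because the hidden state is binary — can be sketched with a plugin estimator on a 2x2 joint histogram. The telegraph-process stimulus and firing probabilities below are invented for illustration and are not the paper's experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Binary hidden state: telegraph process (random appearance/disappearance).
n_bins, p_switch = 100_000, 0.005
switches = rng.random(n_bins) < p_switch
state = np.cumsum(switches) % 2

# Binned spike train: higher firing probability while the stimulus is "on".
p_spike = np.where(state == 1, 0.30, 0.05)
spikes = (rng.random(n_bins) < p_spike).astype(int)

def mutual_information(x, y):
    """Plugin MI estimate (bits) between two binary sequences of equal length."""
    counts = np.bincount(2 * x + y, minlength=4).reshape(2, 2).astype(float)
    joint = counts / counts.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

mi = mutual_information(state, spikes)
```

Because both variables are binary, the joint distribution has only four cells, so the plugin estimate is nearly unbiased even with modest data — the practical advantage of the binary hidden state the abstract points to. Shuffling the spike train destroys the dependence and drives the estimate to roughly zero.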
Collapse
Affiliation(s)
- Fleur Zeldenrust
- Department of Neurophysiology, Faculty of Science, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Sicco de Knecht
- Cellular and Systems Neurobiology, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
| | - Wytse J Wadman
- Cellular and Systems Neurobiology, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
| | - Sophie Denève
- Group for Neural Theory, Institut National de la Santé et de la Recherche Médicale U960, Institute of Cognitive Studies, École Normale Supérieure, Paris, France
| | - Boris Gutkin
- Group for Neural Theory, Institut National de la Santé et de la Recherche Médicale U960, Institute of Cognitive Studies, École Normale Supérieure, Paris, France; Department of Psychology, Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
| |
Collapse
|
21
|
Aitchison L, Lengyel M. The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics. PLoS Comput Biol 2016; 12:e1005186. [PMID: 28027294 PMCID: PMC5189947 DOI: 10.1371/journal.pcbi.1005186] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2015] [Accepted: 10/06/2016] [Indexed: 12/19/2022] Open
Abstract
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. 
These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations. Our brain operates in the face of substantial uncertainty due to ambiguity in the inputs, and inherent unpredictability in the environment. Behavioural and neural evidence indicates that the brain often uses a close approximation of the optimal strategy, probabilistic inference, to interpret sensory inputs and make decisions under uncertainty. However, the circuit dynamics underlying such probabilistic computations are unknown. In particular, two fundamental properties of cortical responses, the presence of oscillations and transients, are difficult to reconcile with probabilistic inference. We show that excitatory-inhibitory neural networks are naturally suited to implement a particular inference algorithm, Hamiltonian Monte Carlo. Our network showed oscillations and transients like those found in the cortex and took advantage of these dynamical motifs to speed up inference by an order of magnitude.
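For readers unfamiliar with the algorithm, a bare-bones Hamiltonian Monte Carlo sampler looks like the sketch below: auxiliary momentum variables (the role the paper maps onto inhibitory populations) drive oscillatory leapfrog trajectories that traverse the posterior far faster than diffusive sampling. This is a generic HMC implementation on a 2-D Gaussian target, not the excitatory-inhibitory circuit model from the paper; step size and trajectory length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: a 2-D Gaussian posterior with covariance Sigma.
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
Prec = np.linalg.inv(Sigma)

def grad_log_p(x):
    return -Prec @ x

def hmc_step(x, step=0.15, n_leap=20):
    """One HMC transition: resample momentum, run leapfrog, Metropolis-correct."""
    p = rng.normal(size=2)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_p(x_new)      # initial half-step for momentum
    for _ in range(n_leap):
        x_new += step * p_new                    # full-step position update
        p_new += step * grad_log_p(x_new)        # full-step momentum update
    p_new -= 0.5 * step * grad_log_p(x_new)      # undo the extra half-step
    H = lambda q, m: 0.5 * q @ Prec @ q + 0.5 * m @ m
    if np.log(rng.random()) < H(x, p) - H(x_new, p_new):
        return x_new                             # accept
    return x                                     # reject: stay put

samples = np.empty((5000, 2))
x = np.zeros(2)
for i in range(samples.shape[0]):
    x = hmc_step(x)
    samples[i] = x
```

The position-momentum dynamics are rotational rather than diffusive, which is the mechanistic analogue of the inference-accelerating oscillations described in the abstract; the sample mean and covariance converge to those of the target.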
Collapse
Affiliation(s)
- Laurence Aitchison
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
| | - Máté Lengyel
- Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Central European University, Budapest, Hungary
| |
Collapse
|