1. Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. [PMID: 39756119 DOI: 10.1016/j.neunet.2024.107079]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
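For readers unfamiliar with the networks the review surveys, the sketch below (not taken from the paper) shows the standard setting in which chaos appears: a random recurrent rate network dx/dt = -x + g*J*tanh(x) whose gain g exceeds 1. Nearby initial conditions diverge exponentially, the hallmark of chaos exploited in reservoir computing; the network size, gain, and integration step are arbitrary illustrative choices.

```python
import numpy as np

# Minimal rate-network sketch (illustrative only): a random recurrent network
# dx/dt = -x + g*J*tanh(x) becomes chaotic for gain g > 1.
rng = np.random.default_rng(0)
N, g, dt, T = 500, 1.5, 0.01, 5000              # neurons, gain, time step, steps
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random coupling; effective variance g^2/N

def simulate(x0):
    x = x0.copy()
    traj = np.empty((T, N))
    for t in range(T):
        x += dt * (-x + g * J @ np.tanh(x))     # Euler step of the rate dynamics
        traj[t] = x
    return traj

x0 = rng.normal(0, 1, N)
a = simulate(x0)
b = simulate(x0 + 1e-6 * rng.normal(0, 1, N))   # tiny perturbation of the initial state

# Exponential growth of the separation between the two runs indicates chaos.
dist = np.linalg.norm(a - b, axis=1)
print(f"initial separation {dist[0]:.2e} -> final separation {dist[-1]:.2e}")
```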
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy.
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2. Huang C, Englitz B, Reznik A, Zeldenrust F, Celikel T. Information transfer and recovery for the sense of touch. Cereb Cortex 2025; 35:bhaf073. [PMID: 40197640 PMCID: PMC11976729 DOI: 10.1093/cercor/bhaf073]
Abstract
Transformation of postsynaptic potentials into action potentials is the rate-limiting step of communication in neural networks. The efficiency of this intracellular information transfer also powerfully shapes stimulus representations in sensory cortices. Using whole-cell recordings and information-theoretic measures, we show herein that somatic postsynaptic potentials accurately represent stimulus location on a trial-by-trial basis in single neurons, even 4 synapses away from the sensory periphery in the whisker system. This information is largely lost during action potential generation but can be rapidly (<20 ms) recovered using complementary information in local populations in a cell-type-specific manner. These results show that as sensory information is transferred from one neural locus to another, the circuits reconstruct the stimulus with high fidelity so that sensory representations of single neurons faithfully represent the stimulus in the periphery, but only in their postsynaptic potentials, resulting in lossless information processing for the sense of touch in the primary somatosensory cortex.
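A toy illustration of the kind of trial-by-trial analysis the abstract describes, with entirely synthetic data: a plug-in mutual-information estimate comparing how much a graded, PSP-like response versus a thresholded spike output carries about a discrete stimulus. The stimulus statistics, noise levels, and binning are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(x, y):
    """Plug-in mutual information (bits) from a joint histogram of two discrete arrays."""
    joint = np.histogram2d(x, y, bins=[np.unique(x).size, np.unique(y).size])[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic "trials": stimulus = whisker location (0..3); the PSP amplitude is graded
# and informative, while the spike output is a noisy threshold of the PSP.
stim = rng.integers(0, 4, 5000)
psp = stim + rng.normal(0, 0.6, stim.size)                          # graded subthreshold response
spikes = (psp + rng.normal(0, 1.0, stim.size) > 2.0).astype(int)    # lossy spike output

psp_binned = np.digitize(psp, np.quantile(psp, [0.25, 0.5, 0.75]))
print("I(stim; PSP)   ~", round(mutual_info(stim, psp_binned), 2), "bits")
print("I(stim; spike) ~", round(mutual_info(stim, spikes), 2), "bits (information lost at spike generation)")
```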
Affiliation(s)
- Chao Huang
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Laboratory of Neural Circuits and Plasticity, University of Southern California, 3616 Watt Way, Los Angeles, CA 90089, United States
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Andrey Reznik
- Laboratory of Neural Circuits and Plasticity, University of Southern California, 3616 Watt Way, Los Angeles, CA 90089, United States
- Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- Tansu Celikel
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, the Netherlands
- School of Psychology, Georgia Institute of Technology, 654 Cherry Street, Atlanta, GA 30332-0170, United States
3. Goldberg AR, Dovas A, Torres D, Pereira B, Viswanathan A, Das Sharma S, Mela A, Merricks EM, Megino-Luque C, McInvale JJ, Olabarria M, Shokooh LA, Zhao HT, Chen C, Kotidis C, Calvaresi P, Banu MA, Razavilar A, Sudhakar TD, Saxena A, Chokran C, Humala N, Mahajan A, Xu W, Metz JB, Bushong EA, Boassa D, Ellisman MH, Hillman EMC, Hargus G, Bravo-Cordero JJ, McKhann GM, Gill BJA, Rosenfeld SS, Schevon CA, Bruce JN, Sims PA, Peterka DS, Canoll P. Glioma-induced alterations in excitatory neurons are reversed by mTOR inhibition. Neuron 2025; 113:858-875.e10. [PMID: 39837324 PMCID: PMC11925689 DOI: 10.1016/j.neuron.2024.12.026]
Abstract
Gliomas are aggressive neoplasms that diffusely infiltrate the brain and cause neurological symptoms, including cognitive deficits and seizures. Increased mTOR signaling has been implicated in glioma-induced neuronal hyperexcitability, but the molecular and functional consequences have not been identified. Here, we show three types of changes in tumor-associated neurons: (1) downregulation of transcripts encoding excitatory and inhibitory postsynaptic proteins and dendritic spine development and upregulation of cytoskeletal transcripts via neuron-specific profiling of ribosome-bound mRNA, (2) marked decreases in dendritic spine density via light and electron microscopy, and (3) progressive functional alterations leading to neuronal hyperexcitability via in vivo calcium imaging. A single acute dose of AZD8055, a combined mTORC1/2 inhibitor, reversed these tumor-induced changes. These findings reveal mTOR-driven pathological plasticity in neurons at the infiltrative margin of glioma and suggest new strategies for treating glioma-associated neurological symptoms.
Affiliation(s)
- Alexander R Goldberg
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Athanassios Dovas
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Daniela Torres
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Brianna Pereira
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Ashwin Viswanathan
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Sohani Das Sharma
- Department of Systems Biology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Angeliki Mela
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Edward M Merricks
- Department of Neurology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Cristina Megino-Luque
- Department of Medicine, Division of Hematology and Oncology, The Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10027, USA
- Julie J McInvale
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Markel Olabarria
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Hanzhi T Zhao
- Laboratory for Functional Optical Imaging, Zuckerman Mind Brain Behavior Institute, Departments of Biomedical Engineering and Radiology, Columbia University, New York, NY 10027, USA
- Cady Chen
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Corina Kotidis
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Peter Calvaresi
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Matei A Banu
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Aida Razavilar
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Tejaswi D Sudhakar
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Ankita Saxena
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Cole Chokran
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Nelson Humala
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Aayushi Mahajan
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Weihao Xu
- Laboratory for Functional Optical Imaging, Zuckerman Mind Brain Behavior Institute, Departments of Biomedical Engineering and Radiology, Columbia University, New York, NY 10027, USA
- Jordan B Metz
- Department of Systems Biology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Eric A Bushong
- National Center for Microscopy and Imaging Research, University of California, San Diego, La Jolla, CA 92093, USA
- Daniela Boassa
- National Center for Microscopy and Imaging Research, University of California, San Diego, La Jolla, CA 92093, USA
- Mark H Ellisman
- National Center for Microscopy and Imaging Research, University of California, San Diego, La Jolla, CA 92093, USA
- Elizabeth M C Hillman
- Laboratory for Functional Optical Imaging, Zuckerman Mind Brain Behavior Institute, Departments of Biomedical Engineering and Radiology, Columbia University, New York, NY 10027, USA
- Gunnar Hargus
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Jose Javier Bravo-Cordero
- Department of Medicine, Division of Hematology and Oncology, The Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10027, USA
- Guy M McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Brian J A Gill
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Catherine A Schevon
- Department of Neurology, Columbia University Irving Medical Center, New York, NY 10032, USA
- Jeffrey N Bruce
- Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA
- Peter A Sims
- Department of Systems Biology, Columbia University Irving Medical Center, New York, NY 10032, USA; Sulzberger Columbia Genome Center, Columbia University Irving Medical Center, New York, NY 10032, USA; Department of Biochemistry & Molecular Biophysics, Columbia University Irving Medical Center, New York, NY 10032, USA
- Darcy S Peterka
- Irving Institute for Cancer Dynamics, Columbia University, New York, NY 10027, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Peter Canoll
- Department of Pathology and Cell Biology, Irving Cancer Research Center, Columbia University Irving Medical Center, New York, NY 10032, USA; Department of Neurological Surgery, Columbia University Irving Medical Center, New York, NY 10032, USA.
4. Becker LA, Baccelli F, Taillefumier T. Subthreshold moment analysis of neuronal populations driven by synchronous synaptic inputs. arXiv 2025:arXiv:2503.13702v1. [PMID: 40166746 PMCID: PMC11957229]
Abstract
Even when driven by the same stimulus, neuronal responses are well-known to exhibit a striking level of spiking variability. In-vivo electrophysiological recordings also reveal a surprisingly large degree of variability at the subthreshold level. In prior work, we considered biophysically relevant neuronal models to account for the observed magnitude of membrane voltage fluctuations. We found that accounting for these fluctuations requires weak but nonzero synchrony in the spiking activity, in amounts that are consistent with experimentally measured spiking correlations. Here we investigate whether such synchrony can explain additional statistical features of the measured neural activity, including neuronal voltage covariability and voltage skewness. Addressing this question involves conducting a generalized moment analysis of conductance-based neurons in response to input drives modeled as correlated jump processes. Technically, we perform such an analysis using fixed-point techniques from queuing theory that are applicable in the stationary regime of activity. We found that weak but nonzero synchrony can consistently explain the experimentally reported voltage covariance and skewness. This confirms the role of synchrony as a primary driver of cortical variability and supports the view that physiological neural activity emerges as a population-level phenomenon, especially in the spontaneous regime.
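The paper's analysis is exact and based on queuing-theoretic fixed points; the sketch below is only a crude numerical counterpart under assumed parameters: a passive conductance-based membrane driven by Poisson synaptic jumps, where a `sync` parameter occasionally recruits a whole pool of synapses at once, and the resulting voltage variance and skewness are read off directly.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

def simulate_voltage(sync=0.0, n_syn=200, rate=5.0, T=1000.0, dt=0.1):
    """Passive conductance-based membrane driven by Poisson synaptic jumps.
    `sync` is the per-step probability of a synchronous volley, a crude stand-in
    for the correlated jump processes analysed in the paper (illustrative values)."""
    C, gL, EL, Ee, Ei = 1.0, 0.1, -70.0, 0.0, -80.0   # nF, uS, mV
    we, wi, taus = 0.002, 0.008, 5.0                  # conductance jump sizes (uS), decay (ms)
    v, ge, gi, vs = EL, 0.0, 0.0, []
    for _ in range(int(T / dt)):
        ne = rng.poisson(n_syn * rate * dt * 1e-3)    # independent excitatory events
        if rng.random() < sync:
            ne += int(0.2 * n_syn)                    # synchronous volley across the pool
        ni = rng.poisson(0.5 * n_syn * rate * dt * 1e-3)
        ge += we * ne - dt * ge / taus
        gi += wi * ni - dt * gi / taus
        v += dt * (gL * (EL - v) + ge * (Ee - v) + gi * (Ei - v)) / C
        vs.append(v)
    v_arr = np.array(vs[int(50 / dt):])               # drop the initial transient
    return v_arr.var(), skew(v_arr)

for s in (0.0, 0.001, 0.005):
    var, sk = simulate_voltage(sync=s)
    print(f"sync={s}: voltage variance ~ {var:.2f} mV^2, skewness ~ {sk:.2f}")
```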
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Neuroscience, The University of Texas at Austin, Texas, USA
- François Baccelli
- Department of Mathematics, The University of Texas at Austin, Texas, USA
- Departement d’informatique, Ecole Normale Supérieure, Paris, France
- Institut national de recherche en sciences et technologies du numérique, Paris, France
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Mathematics, The University of Texas at Austin, Texas, USA
5. Becker LA, Baccelli F, Taillefumier T. Subthreshold variability of neuronal populations driven by synchronous synaptic inputs. bioRxiv 2025:2025.03.16.643547. [PMID: 40161748 PMCID: PMC11952518 DOI: 10.1101/2025.03.16.643547]
Abstract
Even when driven by the same stimulus, neuronal responses are well-known to exhibit a striking level of spiking variability. In-vivo electrophysiological recordings also reveal a surprisingly large degree of variability at the subthreshold level. In prior work, we considered biophysically relevant neuronal models to account for the observed magnitude of membrane voltage fluctuations. We found that accounting for these fluctuations requires weak but nonzero synchrony in the spiking activity, in amounts that are consistent with experimentally measured spiking correlations. Here we investigate whether such synchrony can explain additional statistical features of the measured neural activity, including neuronal voltage covariability and voltage skewness. Addressing this question involves conducting a generalized moment analysis of conductance-based neurons in response to input drives modeled as correlated jump processes. Technically, we perform such an analysis using fixed-point techniques from queuing theory that are applicable in the stationary regime of activity. We found that weak but nonzero synchrony can consistently explain the experimentally reported voltage covariance and skewness. This confirms the role of synchrony as a primary driver of cortical variability and supports the view that physiological neural activity emerges as a population-level phenomenon, especially in the spontaneous regime.
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Neuroscience, The University of Texas at Austin, Texas, USA
- François Baccelli
- Department of Mathematics, The University of Texas at Austin, Texas, USA
- Departement d’informatique, Ecole Normale Supérieure, Paris, France
- Institut national de recherche en sciences et technologies du numérique, Paris, France
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Neuroscience, The University of Texas at Austin, Texas, USA
- Department of Mathematics, The University of Texas at Austin, Texas, USA
6. Makkeh A, Graetz M, Schneider AC, Ehrlich DA, Priesemann V, Wibral M. A general framework for interpretable neural learning based on local information-theoretic goal functions. Proc Natl Acad Sci U S A 2025; 122:e2408125122. [PMID: 40042906 PMCID: PMC11912414 DOI: 10.1073/pnas.2408125122]
Abstract
Despite the impressive performance of biological and artificial networks, an intuitive understanding of how their local learning dynamics contribute to network-level task solutions remains a challenge to this date. Efforts to bring learning to a more local scale have indeed led to valuable insights; however, a general constructive approach to describe local learning goals that is both interpretable and adaptable across diverse tasks is still missing. We have previously formulated a local information processing goal that is highly adaptable and interpretable for a model neuron with compartmental structure. Building on recent advances in Partial Information Decomposition (PID), we here derive a corresponding parametric local learning rule, which allows us to introduce "infomorphic" neural networks. We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised, and memory learning. By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
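The paper builds on more recent PID advances; the toy below is not the authors' code or their measure, and only illustrates the bookkeeping of a partial information decomposition using the original Williams-Beer redundancy for two binary sources and a target. The XOR target used here is the textbook case in which all information is synergistic.

```python
import numpy as np
from itertools import product

# Joint distribution p(s1, s2, t) for binary sources and target; here T = XOR(S1, S2).
p = np.zeros((2, 2, 2))
for s1, s2 in product(range(2), repeat=2):
    p[s1, s2, s1 ^ s2] = 0.25

def mi(pxy):
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def specific_info(p_st, t):
    """Williams-Beer specific information of a source about target value t."""
    p_t, acc = p_st.sum(0)[t], 0.0
    for s in range(p_st.shape[0]):
        if p_st[s, t] > 0:
            acc += (p_st[s, t] / p_t) * np.log2((p_st[s, t] / p_st[s].sum()) / p_t)
    return acc

p_s1t, p_s2t, p_t = p.sum(1), p.sum(0), p.sum((0, 1))
redundancy = sum(p_t[t] * min(specific_info(p_s1t, t), specific_info(p_s2t, t)) for t in range(2))
i1, i2 = mi(p_s1t), mi(p_s2t)
i_joint = mi(p.reshape(4, 2))               # (s1, s2) treated jointly as one source
print("redundant:", round(redundancy, 3), " unique1:", round(i1 - redundancy, 3),
      " unique2:", round(i2 - redundancy, 3), " synergy:", round(i_joint - i1 - i2 + redundancy, 3))
```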
Affiliation(s)
- Abdullah Makkeh
- Department of Data-driven Analysis of Biological Networks, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen 37077, Germany
- Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen 37077, Germany
- Marcel Graetz
- Department of Data-driven Analysis of Biological Networks, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen 37077, Germany
- Department of Chemistry and Applied Biosciences, ETH Zurich, Zurich 8092, Switzerland
- Andreas C Schneider
- Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen 37077, Germany
- University of Göttingen, Göttingen 37073, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen 37073, Germany
- David A Ehrlich
- Department of Data-driven Analysis of Biological Networks, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen 37077, Germany
- Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen 37077, Germany
- Viola Priesemann
- Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen 37077, Germany
- University of Göttingen, Göttingen 37073, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen 37073, Germany
- Michael Wibral
- Department of Data-driven Analysis of Biological Networks, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen 37077, Germany
7. Tsikonofilos K, Kumar A, Ampatzis K, Garrett DD, Månsson KNT. The promise of investigating neural variability in psychiatric disorders. Biol Psychiatry 2025:S0006-3223(25)00102-7. [PMID: 39954923 DOI: 10.1016/j.biopsych.2025.02.004]
Abstract
The synergy of psychiatry and neuroscience has recently sought to identify biomarkers that can diagnose mental health disorders, predict their progression, and forecast treatment efficacy. However, biomarkers have achieved limited success to date, potentially due to a narrow focus on specific aspects of brain signals. This highlights a critical need for methodologies that can fully exploit the potential of neuroscience to transform psychiatric practice. In recent years, evidence has emerged for the ubiquity and importance of moment-to-moment neural variability for brain function. Single-neuron recordings and computational models have demonstrated the significance of variability even at the microscopic level. Concurrently, studies involving healthy humans using neuroimaging recording techniques have strongly indicated that neural variability, once dismissed as undesirable noise, is an important substrate for cognition. Given the cognitive disruption in several psychiatric disorders, neural variability is a promising biomarker in this context, and careful consideration of design choices is necessary to advance the field. This review provides an overview of the significance and substrates of neural variability across different recording modalities and spatial scales. We also review the existing evidence supporting its relevance in the study of psychiatric disorders. Finally, we advocate for future research to investigate neural variability within disorder-relevant, task-based paradigms and longitudinal designs. Supported by computational models of brain activity, this framework holds the potential for advancing precision psychiatry in a powerful and experimentally feasible manner.
Affiliation(s)
- Konstantinos Tsikonofilos
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Arvind Kumar
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Douglas D Garrett
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin/London; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Kristoffer N T Månsson
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania.
8. Khan HF, Dutta S, Scott AN, Xiao S, Yadav S, Chen X, Aryal UK, Kinzer-Ursem TL, Rochet JC, Jayant K. Site-specific seeding of Lewy pathology induces distinct pre-motor cellular and dendritic vulnerabilities in the cortex. Nat Commun 2024; 15:10775. [PMID: 39737978 DOI: 10.1038/s41467-024-54945-0]
Abstract
Circuit-based biomarkers distinguishing the gradual progression of Lewy pathology across synucleinopathies remain unknown. Here, we show that seeding of α-synuclein preformed fibrils in mouse dorsal striatum and motor cortex leads to distinct prodromal-phase cortical dysfunction across months. Our findings reveal that while both seeding sites had increased cortical pathology and hyperexcitability, distinct differences in electrophysiological and cellular ensemble patterns were crucial in distinguishing pathology spread between the two seeding sites. Notably, while beta-band spike-field-coherence reflected a significant increase beginning in Layer-5 and then spreading to Layer-2/3, the rate of entrainment and the propensity of stochastic beta-burst dynamics was markedly seeding location-specific. This beta dysfunction was accompanied by gradual superficial excitatory ensemble instability following cortical, but not striatal, preformed fibrils injection. We reveal a link between Layer-5 dendritic vulnerabilities and translaminar beta event dysfunction, which could be used to differentiate symptomatically similar synucleinopathies.
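Spike-field coherence itself is a generic quantity; the snippet below (synthetic data, not the study's pipeline or thresholds) shows one common way to compute it with scipy, using a spike train whose rate is weakly locked to a 20 Hz LFP rhythm and reading off coherence in the beta band.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, dur = 1000, 60                          # Hz, seconds (1-ms resolution)
t = np.arange(0, dur, 1 / fs)

# Synthetic LFP with a beta (20 Hz) rhythm, and a spike train whose instantaneous
# rate is weakly modulated by that rhythm (genuine but modest spike-field locking).
beta = np.sin(2 * np.pi * 20 * t)
lfp = beta + rng.normal(0, 1.0, t.size)
rate = 5 * (1 + 0.5 * beta)                 # Hz
spikes = (rng.random(t.size) < rate / fs).astype(float)

f, coh = coherence(spikes, lfp, fs=fs, nperseg=2048)
beta_band = (f >= 13) & (f <= 30)
print("coherence at 20 Hz ~", round(coh[np.argmin(abs(f - 20))], 3))
print("mean beta-band (13-30 Hz) spike-field coherence ~", round(coh[beta_band].mean(), 3))
```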
Affiliation(s)
- Hammad F Khan
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Sayan Dutta
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Borch Department of Medicinal Chemistry and Molecular Pharmacology, Purdue University, West Lafayette, IN, USA
- Alicia N Scott
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Borch Department of Medicinal Chemistry and Molecular Pharmacology, Purdue University, West Lafayette, IN, USA
- Shulan Xiao
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Saumitra Yadav
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Xiaoling Chen
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Borch Department of Medicinal Chemistry and Molecular Pharmacology, Purdue University, West Lafayette, IN, USA
- Uma K Aryal
- Department of Comparative Pathobiology, Purdue University, West Lafayette, IN, USA
- Purdue Proteomics Facility, Bindley Bioscience Center, Purdue University, West Lafayette, IN, USA
- Tamara L Kinzer-Ursem
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Jean-Christophe Rochet
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA.
- Borch Department of Medicinal Chemistry and Molecular Pharmacology, Purdue University, West Lafayette, IN, USA.
- Krishna Jayant
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA.
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA.
9. Gozel O, Doiron B. Between-area communication through the lens of within-area neuronal dynamics. Sci Adv 2024; 10:eadl6120. [PMID: 39413191 PMCID: PMC11482330 DOI: 10.1126/sciadv.adl6120]
Abstract
A core problem in systems and circuits neuroscience is deciphering the origin of shared dynamics in neuronal activity: Do they emerge through local network interactions, or are they inherited from external sources? We explore this question with large-scale networks of spatially ordered spiking neuron models where a downstream network receives input from an upstream sender network. We show that linear measures of the communication between the sender and receiver networks can discriminate between emergent or inherited population dynamics. A match in the dimensionality of the sender and receiver population activities promotes faithful communication. In contrast, a nonlinear mapping between the sender to receiver activity, for example, through downstream emergent population-wide fluctuations, can impair linear communication. Our work exposes the benefits and limitations of linear measures when analyzing between-area communication in circuits with rich population-wide neuronal dynamics.
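A common way to probe the linear communication the abstract refers to is a rank-constrained regression from sender to receiver activity. The sketch below builds synthetic populations that share a three-dimensional latent signal and shows predictive performance saturating near that shared dimensionality; population sizes, ranks, and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_time, n_send, n_recv, true_dim = 2000, 60, 40, 3

# Sender activity with a few strong latent dimensions; the receiver inherits only a
# low-dimensional projection of it (plus private noise) -- a "communication subspace".
latents = rng.normal(0, 1, (n_time, true_dim))
S = latents @ rng.normal(0, 1, (true_dim, n_send)) + 0.5 * rng.normal(0, 1, (n_time, n_send))
R = latents @ rng.normal(0, 1, (true_dim, n_recv)) + 0.5 * rng.normal(0, 1, (n_time, n_recv))

# Rank-constrained linear mapping: project the sender onto its top-k principal directions,
# then predict the receiver by least squares; R^2 should saturate around the shared dimensionality.
S_c, R_c = S - S.mean(0), R - R.mean(0)
_, _, Vt = np.linalg.svd(S_c, full_matrices=False)
for k in (1, 2, 3, 5, 10, 20):
    Z = S_c @ Vt[:k].T                                  # k-dimensional sender summary
    B, *_ = np.linalg.lstsq(Z, R_c, rcond=None)
    r2 = 1 - (R_c - Z @ B).var() / R_c.var()
    print(f"rank {k:2d}: predictive R^2 ~ {r2:.3f}")
```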
Affiliation(s)
- Olivia Gozel
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
10. Pronold J, van Meegen A, Shimoura RO, Vollenbröker H, Senden M, Hilgetag CC, Bakker R, van Albada SJ. Multi-scale spiking network model of human cerebral cortex. Cereb Cortex 2024; 34:bhae409. [PMID: 39428578 PMCID: PMC11491286 DOI: 10.1093/cercor/bhae409]
Abstract
Although the structure of cortical networks provides the necessary substrate for their neuronal activity, the structure alone does not suffice to understand the activity. Leveraging the increasing availability of human data, we developed a multi-scale, spiking network model of human cortex to investigate the relationship between structure and dynamics. In this model, each area in one hemisphere of the Desikan-Killiany parcellation is represented by a 1 mm² column with a layered structure. The model aggregates data across multiple modalities, including electron microscopy, electrophysiology, morphological reconstructions, and diffusion tensor imaging, into a coherent framework. It predicts activity on all scales from the single-neuron spiking activity to the area-level functional connectivity. We compared the model activity with human electrophysiological data and human resting-state functional magnetic resonance imaging (fMRI) data. This comparison reveals that the model can reproduce aspects of both spiking statistics and fMRI correlations if the inter-areal connections are sufficiently strong. Furthermore, we study the propagation of a single-spike perturbation and macroscopic fluctuations through the network. The open-source model serves as an integrative platform for further refinements and future in silico studies of human cortical structure, dynamics, and function.
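The published model aggregates many data modalities and simulates a full hemisphere; the fragment below is only a drastically reduced sketch of its basic building block, a randomly connected excitatory-inhibitory population of leaky integrate-and-fire neurons, with all parameters chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
NE, NI = 400, 100
N = NE + NI
dt, T = 0.1, 1000.0                  # ms
tau_m, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0
J, g, p_conn = 0.2, 5.0, 0.1         # EPSP size (mV), relative inhibition, connection probability

W = (rng.random((N, N)) < p_conn) * J
W[:, NE:] *= -g                      # columns from inhibitory neurons
v = np.full(N, v_rest) + rng.normal(0, 2, N)
I_ext = 18.0                         # constant external drive (mV)
spikes_per_step = []

for _ in range(int(T / dt)):
    fired = v >= v_th
    v[fired] = v_reset
    syn = W @ fired.astype(float)    # instantaneous synaptic kicks from neurons that just fired
    v += dt * (-(v - v_rest) + I_ext) / tau_m + syn
    spikes_per_step.append(fired.sum())

rate = np.sum(spikes_per_step) / (N * T / 1000.0)
print(f"mean firing rate ~ {rate:.1f} spikes/s")
```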
Affiliation(s)
- Jari Pronold
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- RWTH Aachen University, D-52062 Aachen, Germany
- Alexander van Meegen
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Institute of Zoology, University of Cologne, D-50674 Cologne, Germany
- Renan O Shimoura
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Hannah Vollenbröker
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Heinrich Heine University Düsseldorf, D-40225 Düsseldorf, Germany
- Mario Senden
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, NL-6229 ER Maastricht, The Netherlands
- Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Centre, Maastricht University, NL-6229 ER Maastricht, The Netherlands
- Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, D-20246 Hamburg, Germany
- Rembrandt Bakker
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, NL-6525 EN Nijmegen, The Netherlands
- Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, D-52428 Jülich, Germany
- Institute of Zoology, University of Cologne, D-50674 Cologne, Germany
11. Monk T, Dennler N, Ralph N, Rastogi S, Afshar S, Urbizagastegui P, Jarvis R, van Schaik A, Adamatzky A. Electrical Signaling Beyond Neurons. Neural Comput 2024; 36:1939-2029. [PMID: 39141803 DOI: 10.1162/neco_a_01696]
Abstract
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that "simpler" neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals, for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell's assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Affiliation(s)
- Travis Monk
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Nik Dennler
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
- Nicholas Ralph
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Shavika Rastogi
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
- Saeed Afshar
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Pablo Urbizagastegui
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Russell Jarvis
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- André van Schaik
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Andrew Adamatzky
- Unconventional Computing Laboratory, University of the West of England, Bristol BS16 1QY, U.K.
12. Rudelt L, González Marx D, Spitzner FP, Cramer B, Zierenberg J, Priesemann V. Signatures of hierarchical temporal processing in the mouse visual system. PLoS Comput Biol 2024; 20:e1012355. [PMID: 39173067 PMCID: PMC11373856 DOI: 10.1371/journal.pcbi.1012355]
Abstract
A core challenge for the brain is to process information across various timescales. This could be achieved by a hierarchical organization of temporal processing through intrinsic mechanisms (e.g., recurrent coupling or adaptation), but recent evidence from spike recordings of the rodent visual system seems to conflict with this hypothesis. Here, we used an optimized information-theoretic and classical autocorrelation analysis to show that information- and correlation timescales of spiking activity increase along the anatomical hierarchy of the mouse visual system under visual stimulation, while information-theoretic predictability decreases. Moreover, intrinsic timescales for spontaneous activity displayed a similar hierarchy, whereas the hierarchy of predictability was stimulus-dependent. We could reproduce these observations in a basic recurrent network model with correlated sensory input. Our findings suggest that the rodent visual system employs intrinsic mechanisms to achieve longer integration for higher cortical areas, while simultaneously reducing predictability for an efficient neural code.
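The intrinsic-timescale part of such analyses follows a classical recipe: fit an exponential decay to the spike-count autocorrelation. The sketch below applies that recipe to a synthetic spike train whose underlying rate has a known timescale; the bin size, rates, and Ornstein-Uhlenbeck parameters are assumptions made for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
dt, T, tau_true = 1.0, 100000.0, 150.0      # ms; latent-rate timescale to be recovered

# Latent rate follows an Ornstein-Uhlenbeck process; spikes are Poisson given the rate.
n = int(T / dt)
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - dt * x[i - 1] / tau_true + np.sqrt(dt) * rng.normal(0, 0.3)
rate = np.clip(20.0 + 5.0 * x, 0, None)     # Hz
spikes = rng.poisson(rate * dt * 1e-3)

# Spike-count autocorrelation in 50-ms bins, then an exponential fit A*exp(-lag/tau).
bin_ms = 50
steps = int(bin_ms / dt)
counts = spikes[: n - n % steps].reshape(-1, steps).sum(1).astype(float)
counts -= counts.mean()
lags = np.arange(1, 15)
ac = np.array([np.corrcoef(counts[:-l], counts[l:])[0, 1] for l in lags])
popt, _ = curve_fit(lambda l, A, tau: A * np.exp(-l * bin_ms / tau), lags, ac, p0=(0.5, 100.0))
print(f"fitted intrinsic timescale ~ {popt[1]:.0f} ms (ground truth {tau_true:.0f} ms)")
```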
Affiliation(s)
- Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- Daniel González Marx
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- F Paul Spitzner
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Zierenberg
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany
13. Xiao S, Yadav S, Jayant K. Probing multiplexed basal dendritic computations using two-photon 3D holographic uncaging. Cell Rep 2024; 43:114413. [PMID: 38943640 DOI: 10.1016/j.celrep.2024.114413]
Abstract
Basal dendrites of layer 5 cortical pyramidal neurons exhibit Na+ and N-methyl-D-aspartate receptor (NMDAR) regenerative spikes and are uniquely poised to influence somatic output. Nevertheless, due to technical limitations, how multibranch basal dendritic integration shapes and enables multiplexed barcoding of synaptic streams remains poorly mapped. Here, we combine 3D two-photon holographic transmitter uncaging, whole-cell dynamic clamp, and biophysical modeling to reveal how synchronously activated synapses (distributed and clustered) across multiple basal dendritic branches are multiplexed under quiescent and in vivo-like conditions. While dendritic regenerative Na+ spikes promote millisecond somatic spike precision, distributed synaptic inputs and NMDAR spikes regulate gain. These concomitantly occurring dendritic nonlinearities enable multiplexed information transfer amid an ongoing noisy background, including under back-propagating voltage resets, by barcoding the axo-somatic spike structure. Our results unveil a multibranch dendritic integration framework in which dendritic nonlinearities are critical for multiplexing different spatial-temporal synaptic input patterns, enabling optimal feature binding.
Affiliation(s)
- Shulan Xiao
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Saumitra Yadav
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Krishna Jayant
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA.
14. Hudetz AG. Microstimulation reveals anesthetic state-dependent effective connectivity of neurons in cerebral cortex. Front Neurosci 2024; 18:1387098. [PMID: 39035779 PMCID: PMC11258030 DOI: 10.3389/fnins.2024.1387098]
Abstract
Introduction: Complex neuronal interactions underlie cortical information processing that can be compromised in altered states of consciousness. Here intracortical microstimulation was applied to investigate anesthetic state-dependent effective connectivity of neurons in rat visual cortex in vivo.
Methods: Extracellular activity was recorded at 32 sites in layers 5/6 while stimulating with charge-balanced discrete pulses at each electrode in random order. The same stimulation pattern was applied at three levels of anesthesia with desflurane and in wakefulness. Spikes were sorted and classified by their waveform features as putative excitatory and inhibitory neurons. Network motifs were identified in graphs of effective connectivity constructed from monosynaptic cross-correlograms.
Results: Microstimulation caused an early (<10 ms) increase followed by a prolonged (11-100 ms) decrease in spiking of all neurons throughout the electrode array. The early response of excitatory but not inhibitory neurons decayed rapidly with distance from the stimulation site over 1 mm. Effective connectivity of neurons with significant stimulus response was dense in wakefulness and sparse under anesthesia. The number of network motifs, especially those of higher order, increased rapidly as the anesthesia was withdrawn, indicating a substantial increase in network connectivity as the animals woke up.
Conclusion: The results illuminate the impact of anesthesia on functional integrity of local cortical circuits affecting the state of consciousness.
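The effective-connectivity graphs rest on monosynaptic cross-correlograms. The toy example below (synthetic spike trains, not the study's data or thresholds) shows the basic computation: a short-latency peak in the correlogram of a driven neuron relative to a putative presynaptic reference, compared against a flat baseline.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, dur = 1000, 600                     # 1-ms bins, 600 s of recording
n = fs * dur

# Neuron A drives neuron B with ~2 ms latency and 10% efficacy, on top of background firing.
a = rng.random(n) < 0.005               # ~5 Hz reference neuron
b = rng.random(n) < 0.005
drive = np.zeros(n, dtype=bool)
drive[2:] = a[:-2] & (rng.random(n - 2) < 0.10)
b = b | drive

def cross_correlogram(ref, target, max_lag=20):
    lags = np.arange(-max_lag, max_lag + 1)
    ref_idx = np.flatnonzero(ref)
    ccg = np.array([np.sum(target[np.clip(ref_idx + l, 0, n - 1)]) for l in lags])
    return lags, ccg

lags, ccg = cross_correlogram(a, b)
baseline = ccg[(lags < -5) | (lags > 10)].mean()
print(f"CCG peak at {lags[np.argmax(ccg)]} ms; peak/baseline ~ {ccg.max() / baseline:.2f} "
      "(a short-latency peak well above baseline suggests a putative monosynaptic connection)")
```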
Affiliation(s)
- Anthony G Hudetz
- Department of Anesthesiology, Center for Consciousness Science, University of Michigan, Ann Arbor, MI, United States
15. Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024; 121:e2311893121. [PMID: 38913890 PMCID: PMC11228465 DOI: 10.1073/pnas.2311893121]
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers which implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity with its asymmetry, the duration and adaptive nature of feedforward and feedback neuronal filters, the imprecision in spike generation under constant stimulation, and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
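The DD-DC neuron model itself is considerably richer; the snippet below only illustrates the underlying direct data-driven idea in its simplest form: past input-output data of an unknown linear loop are arranged in Hankel matrices, the current past trajectory is expressed as a combination of recorded trajectories, and the next output is read off without ever fitting an explicit model. The scalar system, window length, and noise level are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

# Unknown scalar "loop" the controller interacts with: x_{t+1} = a*x_t + b*u_t, observed as y_t = x_t (+ small noise).
a_true, b_true, T = 0.9, 0.5, 400
u = rng.normal(0, 1, T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = a_true * x[t] + b_true * u[t]
y = x[:T] + 0.01 * rng.normal(0, 1, T)

def hankel(sig, depth):
    """Stack depth-long sliding windows of a signal as columns."""
    return np.stack([sig[i:i + depth] for i in range(len(sig) - depth + 1)], axis=1)

L = 3                                   # each recorded window: 2 past steps + 1 step to predict
Hu, Hy = hankel(u, L), hankel(y, L)
H_past = np.vstack([Hu[:-1], Hy[:-1]])  # past inputs and outputs of every recorded window
H_next = Hy[-1]                         # the output one step after each window's past

# Direct data-driven prediction: write the current past trajectory as a linear combination
# of recorded trajectories (no explicit model fit), then read off the next output.
query = np.concatenate([u[-(L - 1):], y[-(L - 1):]])
g, *_ = np.linalg.lstsq(H_past, query, rcond=None)
y_pred = float(H_next @ g)
print(f"data-driven one-step prediction ~ {y_pred:.3f}, true next output = {x[T]:.3f}")
```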
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
16. Sokol M, Baker C, Baker M, Joshi RP. Simple model to incorporate statistical noise based on a modified Hodgkin-Huxley approach for external electrical field driven neural responses. Biomed Phys Eng Express 2024; 10:045037. [PMID: 38781941 DOI: 10.1088/2057-1976/ad4f90]
Abstract
Noise activity is known to affect neural networks, enhance the system response to weak external signals, and lead to the stochastic resonance phenomenon that can effectively amplify signals in nonlinear systems. In most treatments, channel noise has been modeled based on multi-state Markov descriptions or the use of stochastic differential equation models. Here we probe a computationally simple approach, based on a minor modification of the traditional Hodgkin-Huxley framework, to embed noise in the neural response. Results obtained from numerous simulations with different excitation frequencies and noise amplitudes for action potential firing show very good agreement with output obtained from well-established models. Furthermore, results from the Mann-Whitney U Test reveal a statistically insignificant difference. The distribution of the time interval between successive potential spikes obtained from this simple approach compared very well with the results of complicated Fox and Lu type methods at much reduced computational cost. This present method could also possibly be applied to the analysis of spatial variations and/or differences in characteristics of random incident electromagnetic signals.
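The authors' specific modification is described in the paper; as a generic stand-in, the sketch below simply adds a Gaussian noise current to the standard Hodgkin-Huxley equations (textbook squid-axon parameters) and compares interspike-interval variability with and without noise.

```python
import numpy as np

rng = np.random.default_rng(9)

# Standard Hodgkin-Huxley parameters (mV, ms, uA/cm^2, mS/cm^2); the noise term here is
# an additive Gaussian current, used only as an illustrative stand-in.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)
def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))

def run(noise_sd, I_dc=10.0, dt=0.01, T=500.0):
    v, n, m, h = -65.0, 0.32, 0.05, 0.6
    spikes, above = [], False
    for i in range(int(T / dt)):
        I = I_dc + noise_sd * rng.normal() / np.sqrt(dt)     # white-noise current drive
        v += dt * (I - gNa * m**3 * h * (v - ENa) - gK * n**4 * (v - EK) - gL * (v - EL)) / C
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        if v > 0 and not above:                               # upward threshold crossing = spike
            spikes.append(i * dt); above = True
        elif v < -40:
            above = False
    return np.array(spikes)

for sd in (0.0, 3.0):
    spk = run(sd)
    isi = np.diff(spk)
    cv = isi.std() / isi.mean() if isi.size > 1 else float("nan")
    print(f"noise sd {sd}: {spk.size} spikes in 500 ms, ISI CV ~ {cv:.2f}")
```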
Affiliation(s)
- M Sokol
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX, 79409, United States of America
- C Baker
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX, 79409, United States of America
- M Baker
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX, 79409, United States of America
- R P Joshi
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX, 79409, United States of America
17. Hudetz AG. Microstimulation reveals anesthetic state-dependent effective connectivity of neurons in cerebral cortex. bioRxiv 2024:2024.04.29.591664. [PMID: 38746366 PMCID: PMC11092428 DOI: 10.1101/2024.04.29.591664]
Abstract
Complex neuronal interactions underlie cortical information processing that can be compromised in altered states of consciousness. Here intracortical microstimulation was applied to investigate the state-dependent effective connectivity of neurons in rat visual cortex in vivo. Extracellular activity was recorded at 32 sites in layers 5/6 while stimulating with charge-balanced discrete pulses at each electrode in random order. The same stimulation pattern was applied at three levels of anesthesia with desflurane and in wakefulness. Spikes were sorted and classified by their waveform features as putative excitatory and inhibitory neurons. Microstimulation caused early (<10ms) increase followed by prolonged (11-100ms) decrease in spiking of all neurons throughout the electrode array. The early response of excitatory but not inhibitory neurons decayed rapidly with distance from the stimulation site over 1mm. Effective connectivity of neurons with significant stimulus response was dense in wakefulness and sparse under anesthesia. Network motifs were identified in graphs of effective connectivity constructed from monosynaptic cross-correlograms. The number of motifs, especially those of higher order, increased rapidly as the anesthesia was withdrawn indicating a substantial increase in network connectivity as the animals woke up. The results illuminate the impact of anesthesia on functional integrity of local circuits affecting the state of consciousness.
18. Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. [PMID: 38648479 PMCID: PMC11067032 DOI: 10.1073/pnas.2312992121]
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical works posit that this variability arises potentially from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, where generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience in the stimulus-evoked samples to the inference without partial or all sensory information, which suggests a computational role of spontaneous activity as a representation of the priors as well as a tractable biological computation for marginal distributions. These findings suggest that chaotic neural dynamics may serve brain function as a Bayesian generative model.
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
19. Ford AN, Czarny JE, Rogalla MM, Quass GL, Apostolides PF. Auditory Corticofugal Neurons Transmit Auditory and Non-auditory Information During Behavior. J Neurosci 2024; 44:e1190232023. [PMID: 38123993 PMCID: PMC10869159 DOI: 10.1523/jneurosci.1190-23.2023]
Abstract
Layer 5 pyramidal neurons of sensory cortices project "corticofugal" axons to myriad sub-cortical targets, thereby broadcasting high-level signals important for perception and learning. Recent studies suggest dendritic Ca2+ spikes as key biophysical mechanisms supporting corticofugal neuron function: these long-lasting events drive burst firing, thereby initiating uniquely powerful signals to modulate sub-cortical representations and trigger learning-related plasticity. However, the behavioral relevance of corticofugal dendritic spikes is poorly understood. We shed light on this issue using 2-photon Ca2+ imaging of auditory corticofugal dendrites as mice of either sex engage in a GO/NO-GO sound-discrimination task. Unexpectedly, only a minority of dendritic spikes were triggered by behaviorally relevant sounds under our conditions. Task related dendritic activity instead mostly followed sound cue termination and co-occurred with mice's instrumental licking during the answer period of behavioral trials, irrespective of reward consumption. Temporally selective, optogenetic silencing of corticofugal neurons during the trial answer period impaired auditory discrimination learning. Thus, auditory corticofugal systems' contribution to learning and plasticity may be partially nonsensory in nature.
Collapse
Affiliation(s)
- Alexander N Ford
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
| | - Jordyn E Czarny
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
| | - Meike M Rogalla
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
| | - Gunnar L Quass
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
| | - Pierre F Apostolides
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109
| |
Collapse
|
20
|
Wong HHW, Watt AJ, Sjöström PJ. Synapse-specific burst coding sustained by local axonal translation. Neuron 2024; 112:264-276.e6. [PMID: 37944518 DOI: 10.1016/j.neuron.2023.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 08/19/2023] [Accepted: 09/13/2023] [Indexed: 11/12/2023]
Abstract
Neurotransmission in the brain is unreliable, suggesting that high-frequency spike bursts rather than individual spikes carry the neural code. For instance, cortical pyramidal neurons rely on bursts in memory formation. Protein synthesis is another key factor in long-term synaptic plasticity and learning but is widely considered unnecessary for synaptic transmission. Here, however, we show that burst neurotransmission at synapses between neocortical layer 5 pyramidal cells depends on axonal protein synthesis linked to presynaptic NMDA receptors and mTOR. We localized protein synthesis to axons with laser axotomy and puromycylation live imaging. We whole-cell recorded connected neurons to reveal how translation sustained readily releasable vesicle pool size and replenishment rate. We live imaged axons and found sparsely docked RNA granules, suggesting synapse-specific regulation. In agreement, translation boosted neurotransmission onto excitatory but not inhibitory basket or Martinotti cells. Local axonal mRNA translation is thus a hitherto unappreciated principle for sustaining burst coding at specific synapse types.
Collapse
Affiliation(s)
- Hovy Ho-Wai Wong
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, Department of Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC H3G 1A4, Canada.
| | - Alanna J Watt
- Biology Department, McGill University, Montreal, QC H3G 0B1, Canada
| | - P Jesper Sjöström
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, Department of Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC H3G 1A4, Canada.
| |
Collapse
|
21
|
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact Analysis of the Subthreshold Variability for Conductance-Based Neuronal Models with Synchronous Synaptic Inputs. PHYSICAL REVIEW. X 2024; 14:011021. [PMID: 38911939 PMCID: PMC11194039 DOI: 10.1103/physrevx.14.011021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/25/2024]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons reproduce the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects postspiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4–9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
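The scaling argument can be illustrated with a crude Monte Carlo sketch (not the paper's exact moment analysis): the subthreshold variance of a passive conductance-based membrane under asynchronous Poisson input shrinks when the same mean drive is spread over many weak synapses instead of a few large ones. All parameter values below are assumptions.

```python
import numpy as np

# Monte Carlo sketch (not the paper's exact moment analysis): subthreshold
# voltage variance of a passive conductance-based membrane driven by Poisson
# excitatory inputs, comparing few large synapses with many weak ones at the
# same mean drive. All parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
dt, n_steps = 0.1, 400000                      # ms; 40 s of simulated time
gL, EL, Ee = 0.05, -70.0, 0.0                  # leak rate (1/ms), rest and excitatory reversal (mV)
rate = 0.01                                    # synaptic events per ms per synapse (10 Hz)

def voltage_variance(n_syn, w):
    """Stationary Var(V) for n_syn independent Poisson synapses of strength w."""
    events = rng.poisson(n_syn * rate * dt, size=n_steps)   # events per time step
    V, trace = EL, np.empty(n_steps)
    for t in range(n_steps):
        ge = w * events[t] / dt                # conductance pulse spread over one step
        V += dt * (-gL * (V - EL) - ge * (V - Ee))
        trace[t] = V
    return trace[n_steps // 2:].var()          # discard the transient

total_weight = 1.0                             # summed synaptic weight held fixed
print("few large synapses (100)  :", round(voltage_variance(100, total_weight / 100), 2), "mV^2")
print("many weak synapses (10000):", round(voltage_variance(10000, total_weight / 10000), 2), "mV^2")
```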
Collapse
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Mathematics, The University of Texas at Austin, Austin, Texas 78712, USA
| |
Collapse
|
22
|
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. ARXIV 2023:arXiv:2304.09280v3. [PMID: 37131877 PMCID: PMC10153295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While the models of asynchronous neurons lead to observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance ≃ 4–9 mV²) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
Collapse
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
| | - Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
| | - Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
| | - Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
| | - Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Department of Mathematics, The University of Texas at Austin
| |
Collapse
|
23
|
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/18/2023] [Indexed: 10/06/2023]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on behavioural tasks similar to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way, they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
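As a toy illustration of the general idea, rather than the specific RNN architectures and training methods reviewed in the article, the sketch below fits a one-step predictor of an observed trajectory with random tanh features and ridge regression, then iterates the fitted map autonomously as a surrogate of the underlying system; the data-generating system and hyperparameters are assumptions.

```python
import numpy as np

# Toy sketch of dynamical system reconstruction (not the architectures or
# training methods reviewed in the article): fit a one-step predictor of an
# observed trajectory with random tanh features and ridge regression, then
# iterate the fitted map autonomously as a surrogate of the underlying system.
rng = np.random.default_rng(2)

# "Measured" data: a noisy van der Pol oscillation standing in for recordings.
dt, T = 0.02, 4000
x = np.zeros((T, 2)); x[0] = [1.0, 0.0]
for t in range(T - 1):
    dxdt = np.array([x[t, 1], (1.0 - x[t, 0] ** 2) * x[t, 1] - x[t, 0]])
    x[t + 1] = x[t] + dt * dxdt + 0.001 * rng.normal(size=2)

# Random feature expansion and ridge regression onto the next state.
n_feat = 300
Win = rng.normal(0.0, 1.0, (n_feat, 2)); b = rng.normal(0.0, 0.5, n_feat)
Phi = np.tanh(x[:-1] @ Win.T + b)                    # features of the current state
Y = x[1:]                                            # targets: the next state
W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_feat), Phi.T @ Y)
print("one-step prediction RMSE:", np.sqrt(np.mean((Phi @ W - Y) ** 2)))

# Run the fitted map on its own (surrogate dynamics) from the initial state.
z = x[0].copy()
for _ in range(2000):
    z = np.tanh(z @ Win.T + b) @ W
print("surrogate state after 2000 autonomous steps:", np.round(z, 3))
```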
Collapse
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
| | - Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
Collapse
|
24
|
Pasini FW, Busch AN, Mináč J, Padmanabhan K, Muller L. Algebraic approach to spike-time neural codes in the hippocampus. Phys Rev E 2023; 108:054404. [PMID: 38115483 DOI: 10.1103/physreve.108.054404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 08/14/2023] [Indexed: 12/21/2023]
Abstract
Although temporal coding through spike-time patterns has long been of interest in neuroscience, the specific structures that could be useful for spike-time codes remain highly unclear. Here, we introduce an analytical approach, using techniques from discrete mathematics, to study spike-time codes. As an initial example, we focus on the phenomenon of "phase precession" in the rodent hippocampus. During navigation and learning on a physical track, specific cells in a rodent's brain form a highly structured pattern relative to the oscillation of population activity in this region. Studies of phase precession largely focus on its role in precisely ordering spike times for synaptic plasticity, as the role of phase precession in memory formation is well established. Comparatively less attention has been paid to the fact that phase precession represents one of the best candidates for a spike-time neural code. The precise nature of this code remains an open question. Here, we derive an analytical expression for a function mapping points in physical space to complex-valued spikes by representing individual spike times as complex numbers. The properties of this function make explicit a specific relationship between past and future in spike patterns of the hippocampus. Importantly, this mathematical approach generalizes beyond the specific phenomenon studied here, providing a technique to study the neural codes within precise spike-time sequences found during sensory coding and motor behavior. We then introduce a spike-based decoding algorithm, based on this function, that successfully decodes a simulated animal's trajectory using only the animal's initial position and a pattern of spike times. This decoder is robust to noise in spike times and works on a timescale almost an order of magnitude shorter than that typically used by decoders that work on average firing rate. These results illustrate the utility of a discrete approach, based on the structure and symmetries in spike patterns across finite sets of cells, to provide insight into the structure and function of neural systems.
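The basic representation can be sketched as follows (the paper's full derivation and decoder are not reproduced here); the oscillation frequency and the precession step are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the basic representation only (not the paper's decoder):
# each spike time is encoded as a unit complex number at its phase relative to
# an ongoing theta-like population oscillation, so advancing a phase-precessing
# cell by one cycle is a single complex multiplication. The oscillation
# frequency and the precession step are illustrative assumptions.
theta_freq = 8.0                                  # Hz
period = 1.0 / theta_freq                         # s

def spike_to_complex(t_spike):
    """Map a spike time (in seconds) to a unit complex number at its theta phase."""
    phase = 2.0 * np.pi * (t_spike % period) / period
    return np.exp(1j * phase)

# A cell that spikes once per cycle, 30 degrees earlier on each successive cycle.
spike_times = [n * period + (0.5 - n * 30.0 / 360.0) * period for n in range(5)]
codes = [spike_to_complex(t) for t in spike_times]

step = codes[1] / codes[0]                        # complex ratio = per-cycle phase shift
print("per-cycle phase shift:", round(np.degrees(np.angle(step)), 1), "deg")
print("predicted cycle-4 code matches observed:",
      np.allclose(codes[0] * step ** 4, codes[4]))
```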
Collapse
Affiliation(s)
- Federico W Pasini
- Department of Mathematics, Western University, London, Ontario, Canada N6A 5B7
- Western Academy for Advanced Research, Western University, London, Ontario, Canada N6A 5B7
- Western Institute for Neuroscience, Western University, London, Ontario, Canada N6A 5B7
| | - Alexandra N Busch
- Department of Mathematics, Western University, London, Ontario, Canada N6A 5B7
- Western Academy for Advanced Research, Western University, London, Ontario, Canada N6A 5B7
- Western Institute for Neuroscience, Western University, London, Ontario, Canada N6A 5B7
| | - Ján Mináč
- Department of Mathematics, Western University, London, Ontario, Canada N6A 5B7
- Western Academy for Advanced Research, Western University, London, Ontario, Canada N6A 5B7
- Western Institute for Neuroscience, Western University, London, Ontario, Canada N6A 5B7
| | - Krishnan Padmanabhan
- Department of Neuroscience, University of Rochester Medical Center, Rochester, New York 14642, USA
| | - Lyle Muller
- Department of Mathematics, Western University, London, Ontario, Canada N6A 5B7
- Western Academy for Advanced Research, Western University, London, Ontario, Canada N6A 5B7
- Western Institute for Neuroscience, Western University, London, Ontario, Canada N6A 5B7
| |
Collapse
|
25
|
Pancholi R, Sun-Yan A, Laughton M, Peron S. Sparse and distributed cortical populations mediate sensorimotor integration. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.21.558857. [PMID: 37790362 PMCID: PMC10542548 DOI: 10.1101/2023.09.21.558857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Touch information is central to sensorimotor integration, yet little is known about how cortical touch and movement representations interact. Touch- and movement-related activity is present in both somatosensory and motor cortices, making both candidate sites for touch-motor interactions. We studied touch-motor interactions in layer 2/3 of the primary vibrissal somatosensory and motor cortices of behaving mice. Volumetric two-photon calcium imaging revealed robust responses to whisker touch, whisking, and licking in both areas. Touch activity was dominated by a sparse population of broadly tuned neurons responsive to multiple whiskers that exhibited longitudinal stability and disproportionately influenced interareal communication. Movement representations were similarly dominated by sparse, stable, reciprocally projecting populations. In both areas, many broadly tuned touch cells also produced robust licking or whisking responses. These touch-licking and touch-whisking neurons showed distinct dynamics suggestive of specific roles in shaping movement. Cortical touch-motor interactions are thus mediated by specialized populations of highly responsive, broadly tuned neurons.
Collapse
Affiliation(s)
- Ravi Pancholi
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003
| | - Andrew Sun-Yan
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003
| | - Maya Laughton
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003
| | - Simon Peron
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003
| |
Collapse
|
26
|
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.04.17.536739. [PMID: 37131647 PMCID: PMC10153111 DOI: 10.1101/2023.04.17.536739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While the models of asynchronous neurons lead to observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance ≅ 4–9 mV²) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
Collapse
|
27
|
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194 PMCID: PMC10461857 DOI: 10.1371/journal.pcbi.1011315] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 08/28/2023] [Accepted: 06/27/2023] [Indexed: 08/09/2023] Open
Abstract
Recurrent network models are instrumental in investigating how behaviorally-relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints and, in particular, represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. Finally, we exploit these results to directly build spiking networks that perform nonlinear computations.
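A rate-unit sketch of the rank-one idea, not the paper's integrate-and-fire networks, is shown below: adding a rank-one term to weak random connectivity gives the population activity a low-dimensional component along the structured vector, which the purely random network lacks. Parameter values are assumptions.

```python
import numpy as np

# Rate-unit sketch (not the paper's spiking networks): compare the overlap of
# population activity with a structure vector m for purely random connectivity
# versus random connectivity plus a rank-one term m m^T / N.
rng = np.random.default_rng(3)
N, g, dt, tau, steps = 500, 0.5, 0.1, 10.0, 4000
m = rng.normal(0.0, 1.0, N)

def overlap_with_m(rank_one_strength):
    J = rng.normal(0.0, g / np.sqrt(N), (N, N)) + rank_one_strength * np.outer(m, m) / N
    x = rng.normal(0.0, 0.5, N)
    for _ in range(steps):
        x += dt / tau * (-x + J @ np.tanh(x))    # leaky rate dynamics
    return abs(m @ np.tanh(x)) / N               # latent variable along m

print("overlap with m, random connectivity only :", round(overlap_with_m(0.0), 3))
print("overlap with m, random + rank-one m m^T  :", round(overlap_with_m(2.0), 3))
```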
Collapse
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| |
Collapse
|
28
|
Pancholi R, Ryan L, Peron S. Learning in a sensory cortical microstimulation task is associated with elevated representational stability. Nat Commun 2023; 14:3860. [PMID: 37385989 PMCID: PMC10310840 DOI: 10.1038/s41467-023-39542-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2022] [Accepted: 06/16/2023] [Indexed: 07/01/2023] Open
Abstract
Sensory cortical representations can be highly dynamic, raising the question of how representational stability impacts learning. We train mice to discriminate the number of photostimulation pulses delivered to opsin-expressing pyramidal neurons in layer 2/3 of primary vibrissal somatosensory cortex. We simultaneously track evoked neural activity across learning using volumetric two-photon calcium imaging. In well-trained animals, trial-to-trial fluctuations in the amount of photostimulus-evoked activity predicted animal choice. Population activity levels declined rapidly across training, with the most active neurons showing the largest declines in responsiveness. Mice learned at varied rates, with some failing to learn the task in the time provided. The photoresponsive population showed greater instability both within and across behavioral sessions among animals that failed to learn. Animals that failed to learn also exhibited a faster deterioration in stimulus decoding. Thus, greater stability in the stimulus response is associated with learning in a sensory cortical microstimulation task.
Collapse
Affiliation(s)
- Ravi Pancholi
- Center for Neural Science, New York University, 4 Washington Place Rm. 621, New York, NY, 10003, USA
| | - Lauren Ryan
- Center for Neural Science, New York University, 4 Washington Place Rm. 621, New York, NY, 10003, USA
| | - Simon Peron
- Center for Neural Science, New York University, 4 Washington Place Rm. 621, New York, NY, 10003, USA.
| |
Collapse
|
29
|
Swindale NV, Spacek MA, Krause M, Mitelut C. Spontaneous activity in cortical neurons is stereotyped and non-Poisson. Cereb Cortex 2023; 33:6508-6525. [PMID: 36708015 PMCID: PMC10233306 DOI: 10.1093/cercor/bhac521] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 12/09/2022] [Accepted: 12/10/2022] [Indexed: 01/29/2023] Open
Abstract
Neurons fire even in the absence of sensory stimulation or task demands. Numerous theoretical studies have modeled this spontaneous activity as a Poisson process with uncorrelated intervals between successive spikes and a variance in firing rate equal to the mean. Experimental tests of this hypothesis have yielded variable results, though most have concluded that firing is not Poisson. However, these tests say little about the ways firing might deviate from randomness. Nor are they definitive because many different distributions can have equal means and variances. Here, we characterized spontaneous spiking patterns in extracellular recordings from monkey, cat, and mouse cerebral cortex neurons using rate-normalized spike train autocorrelation functions (ACFs) and a logarithmic timescale. If activity were Poisson, this function would be flat. This was almost never the case. Instead, ACFs had diverse shapes, often with characteristic peaks in the 1-700 ms range. Shapes were stable over time, up to the longest recording periods used (51 min). They did not fall into obvious clusters. ACFs were often unaffected by visual stimulation, though some abruptly changed during brain state shifts. These behaviors may have their origin in the intrinsic biophysics and dendritic anatomy of the cells or in the inputs they receive.
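The basic analysis can be sketched as follows, in a simplified form relative to the paper: compute a rate-normalized spike-train autocorrelation on logarithmically spaced lags, which hovers around one for a Poisson train and shows a short-lag peak for a bursty train. Bin edges, rates, and the burst model are assumptions.

```python
import numpy as np

# Simplified sketch of a rate-normalized spike-train autocorrelation on a
# logarithmic timescale. Rates, bin edges, and the toy burst model are
# illustrative assumptions, not the paper's recordings or exact normalization.
rng = np.random.default_rng(4)
T, rate = 3000.0, 5.0                                   # seconds, spikes/s

poisson = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
# Bursty train: Poisson "events", each followed by 2 extra spikes 10-30 ms later.
events = np.sort(rng.uniform(0, T, rng.poisson(rate / 3 * T)))
bursty = np.sort(np.concatenate(
    [events] + [events + rng.uniform(0.01, 0.03, events.size) for _ in range(2)]))

def rate_normalized_acf(spikes, edges):
    diffs = []
    for i, t in enumerate(spikes):                      # forward inter-spike lags
        j = np.searchsorted(spikes, t + edges[-1], side="right")
        diffs.append(spikes[i + 1:j] - t)
    diffs = np.concatenate(diffs)
    counts, _ = np.histogram(diffs, edges)
    mean_rate = spikes.size / (spikes[-1] - spikes[0])
    expected = spikes.size * mean_rate * np.diff(edges)  # flat Poisson expectation
    return counts / expected

edges = np.logspace(-3, 0, 16)                           # lags from 1 ms to 1 s
print("Poisson ACF:", np.round(rate_normalized_acf(poisson, edges), 2))
print("Bursty  ACF:", np.round(rate_normalized_acf(bursty, edges), 2))
```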
Collapse
Affiliation(s)
- Nicholas V Swindale
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 2550 Willow St., Vancouver, BC V5Z 3N9, Canada
| | - Martin A Spacek
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Matthew Krause
- Montreal Neurological Institute, McGill University, 3801 University St., Montreal, QC H3A 2B4, Canada
| | - Catalin Mitelut
- Institute of Molecular and Clinical Ophthalmology, University of Basel, Mittlere Strasse 91, CH-4031 Basel, Switzerland
| |
Collapse
|
30
|
Kim YJ, Ujfalussy BB, Lengyel M. Parallel functional architectures within a single dendritic tree. Cell Rep 2023; 42:112386. [PMID: 37060564 PMCID: PMC7614531 DOI: 10.1016/j.celrep.2023.112386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 10/31/2022] [Accepted: 03/28/2023] [Indexed: 04/16/2023] Open
Abstract
The input-output transformation of individual neurons is a key building block of neural circuit dynamics. While previous models of this transformation vary widely in their complexity, they all describe the underlying functional architecture as unitary, such that each synaptic input makes a single contribution to the neuronal response. Here, we show that the input-output transformation of CA1 pyramidal cells is instead best captured by two distinct functional architectures operating in parallel. We used statistically principled methods to fit flexible, yet interpretable, models of the transformation of input spikes into the somatic "output" voltage and to automatically select among alternative functional architectures. With dendritic Na+ channels blocked, responses are accurately captured by a single static and global nonlinearity. In contrast, dendritic Na+-dependent integration requires a functional architecture with multiple dynamic nonlinearities and clustered connectivity. These two architectures incorporate distinct morphological and biophysical properties of the neuron and its synaptic organization.
Collapse
Affiliation(s)
- Young Joon Kim
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Harvard Medical School, Boston, MA, USA.
| | - Balázs B Ujfalussy
- Laboratory of Biological Computation, Institute of Experimental Medicine, Budapest, Hungary
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
| |
Collapse
|
31
|
Sachse EM, Snyder AC. Dynamic attention signalling in V4: Relation to fast-spiking/non-fast-spiking cell class and population coupling. Eur J Neurosci 2023; 57:918-939. [PMID: 36732934 PMCID: PMC11521100 DOI: 10.1111/ejn.15928] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 01/09/2023] [Accepted: 01/24/2023] [Indexed: 02/04/2023]
Abstract
The computational role of a neuron during attention depends on its firing properties, neurotransmitter expression and functional connectivity. Neurons in the visual cortical area V4 are reliably engaged by selective attention but exhibit diversity in the effect of attention on firing rates and correlated variability. It remains unclear what specific neuronal properties shape these attention effects. In this study, we quantitatively characterised the distribution of attention modulation of firing rates across populations of V4 neurons. Neurons exhibited a continuum of time-varying attention effects. At one end of the continuum, neurons' spontaneous firing rates were slightly depressed with attention (compared to when unattended), whereas their stimulus responses were enhanced with attention. The other end of the continuum showed the converse pattern: attention depressed stimulus responses but increased spontaneous activity. We tested whether the particular pattern of time-varying attention effects that a neuron exhibited was related to the shape of their action potentials (so-called 'fast-spiking' [FS] neurons have been linked to inhibition) and the strength of their coupling to the overall population. We found an interdependence among neural attention effects, neuron type and population coupling. In particular, we found that neurons for which attention enhanced spontaneous activity but suppressed stimulus responses were less likely to be fast-spiking (more likely to be non-fast-spiking) and tended to have stronger population coupling, compared to neurons with other types of attention effects. These results add important information to our understanding of visual attention circuits at the cellular level.
Collapse
Affiliation(s)
| | - Adam C. Snyder
- Brain and Cognitive Sciences, University of Rochester; Neuroscience, University of Rochester; Center for Visual Sciences, University of Rochester
| |
Collapse
|
32
|
Riquelme JL, Hemberger M, Laurent G, Gjorgjieva J. Single spikes drive sequential propagation and routing of activity in a cortical network. eLife 2023; 12:e79928. [PMID: 36780217 PMCID: PMC9925052 DOI: 10.7554/elife.79928] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 12/19/2022] [Indexed: 02/14/2023] Open
Abstract
Single spikes can trigger repeatable firing sequences in cortical networks. The mechanisms that support reliable propagation of activity from such small events and their functional consequences remain unclear. By constraining a recurrent network model with experimental statistics from turtle cortex, we generate reliable and temporally precise sequences from single spike triggers. We find that rare strong connections support sequence propagation, while dense weak connections modulate propagation reliability. We identify sections of sequences corresponding to divergent branches of strongly connected neurons which can be selectively gated. Applying external inputs to specific neurons in the sparse backbone of strong connections can effectively control propagation and route activity within the network. Finally, we demonstrate that concurrent sequences interact reliably, generating a highly combinatorial space of sequence activations. Our results reveal the impact of individual spikes in cortical circuits, detailing how repeatable sequences of activity can be triggered, sustained, and controlled during cortical computations.
Collapse
Affiliation(s)
- Juan Luis Riquelme
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
| | - Mike Hemberger
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
| | - Gilles Laurent
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
| | - Julijana Gjorgjieva
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
| |
Collapse
|
33
|
Reconstruction of sparse recurrent connectivity and inputs from the nonlinear dynamics of neuronal networks. J Comput Neurosci 2023; 51:43-58. [PMID: 35849304 DOI: 10.1007/s10827-022-00831-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 06/16/2022] [Accepted: 07/13/2022] [Indexed: 01/18/2023]
Abstract
Reconstructing the recurrent structural connectivity of neuronal networks is a crucial challenge in characterizing neuronal computations. While directly measuring the detailed connectivity structure is generally prohibitive for large networks, we develop a novel framework for reverse-engineering large-scale recurrent network connectivity matrices from neuronal dynamics by utilizing the widespread sparsity of neuronal connections. We derive a linear input-output mapping that underlies the irregular dynamics of a model network composed of both excitatory and inhibitory integrate-and-fire neurons with pulse coupling, thereby relating network inputs to evoked neuronal activity. Using this embedded mapping and experimentally feasible measurements of the firing rate as well as voltage dynamics in response to a relatively small ensemble of random input stimuli, we efficiently reconstruct the recurrent network connectivity via compressive sensing techniques. Through analogous analysis, we then recover high-dimensional natural stimuli from evoked neuronal network dynamics over a short time horizon. This work provides a generalizable methodology for rapidly recovering sparse neuronal network data and underlines the natural role of sparsity in facilitating the efficient encoding of network data in neuronal dynamics.
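The compressive-sensing step alone can be illustrated with a toy sketch (the paper's contribution, the linear input-output mapping embedded in the network dynamics, is not derived here): a sparse connectivity vector is recovered from fewer linear measurements than unknowns via iterative soft thresholding (ISTA).

```python
import numpy as np

# Toy sketch of the compressive-sensing step only (not the paper's derivation
# of the input-output mapping): recover a sparse connectivity vector from fewer
# linear measurements than unknowns using ISTA (iterative soft thresholding).
rng = np.random.default_rng(5)
n, m, k = 200, 60, 8                        # unknowns, measurements, nonzeros

w_true = np.zeros(n)
w_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # sparse "row" of W
A = rng.normal(0, 1 / np.sqrt(m), (m, n))                       # random measurement ensemble
y = A @ w_true                                                   # measured responses

def ista(A, y, lam=0.01, n_iter=3000):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = w - (A.T @ (A @ w - y)) / L     # gradient step on the least-squares term
        w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
    return w

w_hat = ista(A, y)
print("relative reconstruction error:",
      round(np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true), 4))
```

With a generic Gaussian measurement ensemble, far fewer measurements than unknowns suffice as long as each connectivity row is sparse, which is the property the framework exploits.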
Collapse
|
34
|
Barkdoll K, Lu Y, Barranca VJ. New insights into binocular rivalry from the reconstruction of evolving percepts using model network dynamics. Front Comput Neurosci 2023; 17:1137015. [PMID: 37034441 PMCID: PMC10079880 DOI: 10.3389/fncom.2023.1137015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Accepted: 03/07/2023] [Indexed: 04/11/2023] Open
Abstract
When the two eyes are presented with highly distinct stimuli, the resulting visual percept generally switches every few seconds between the two monocular images in an irregular fashion, giving rise to a phenomenon known as binocular rivalry. While a host of theoretical studies have explored potential mechanisms for binocular rivalry in the context of evoked model dynamics in response to simple stimuli, here we investigate binocular rivalry directly through complex stimulus reconstructions based on the activity of a two-layer neuronal network model with competing downstream pools driven by disparate monocular stimuli composed of image pixels. To estimate the dynamic percept, we derive a linear input-output mapping rooted in the non-linear network dynamics and iteratively apply compressive sensing techniques for signal recovery. Utilizing a dominance metric, we are able to identify when percept alternations occur and use data collected during each dominance period to generate a sequence of percept reconstructions. We show that despite the approximate nature of the input-output mapping and the significant reduction in neurons downstream relative to stimulus pixels, the dominant monocular image is well-encoded in the network dynamics and improvements are garnered when realistic spatial receptive field structure is incorporated into the feedforward connectivity. Our model demonstrates gamma-distributed dominance durations and closely obeys Levelt's four laws for how dominance durations change with stimulus strength, agreeing with key recurring experimental observations often used to benchmark rivalry models. In light of evidence that individuals with autism exhibit relatively slow percept switching in binocular rivalry, we corroborate the widespread hypothesis that autism arises from reduced inhibition in the brain by systematically probing our model's alternation rate across choices of inhibition strength. We exhibit sufficient conditions for producing binocular rivalry in the context of natural scene stimuli, opening a clearer window into the dynamic brain computations that vary with the generated percept and a potential path toward further understanding neurological disorders.
Collapse
|
35
|
Michaelis C, Lehr AB, Oed W, Tetzlaff C. Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian. Front Neuroinform 2022; 16:1015624. [PMID: 36439945 PMCID: PMC9682266 DOI: 10.3389/fninf.2022.1015624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 10/12/2022] [Indexed: 11/11/2022] Open
Abstract
Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware's fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented, with reasonable discrepancy due to stochastic rounding. This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim to help streamline conceptualization and deployment of new algorithms.
Collapse
Affiliation(s)
- Carlo Michaelis
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- *Correspondence: Carlo Michaelis
| | - Andrew B. Lehr
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
| | - Winfried Oed
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
| | - Christian Tetzlaff
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
| |
Collapse
|
36
|
Columnar Lesions in Barrel Cortex Persistently Degrade Object Location Discrimination Performance. eNeuro 2022; 9:ENEURO.0393-22.2022. [PMID: 36316120 PMCID: PMC9665881 DOI: 10.1523/eneuro.0393-22.2022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 10/03/2022] [Accepted: 10/21/2022] [Indexed: 12/24/2022] Open
Abstract
Primary sensory cortices display functional topography, suggesting that even small cortical volumes may underpin perception of specific stimuli. Traditional loss-of-function approaches have a relatively large radius of effect (>1 mm), and few studies track recovery following loss-of-function perturbations. Consequently, the behavioral necessity of smaller cortical volumes remains unclear. In the mouse primary vibrissal somatosensory cortex (vS1), "barrels" with a radius of ∼150 μm receive input predominantly from a single whisker, partitioning vS1 into a topographic map of well defined columns. Here, we train animals implanted with a cranial window over vS1 to perform single-whisker perceptual tasks. We then use high-power laser exposure centered on the barrel representing the spared whisker to produce lesions with a typical volume of one to two barrels. These columnar-scale lesions impair performance in an object location discrimination task for multiple days without disrupting vibrissal kinematics. Animals with degraded location discrimination performance can immediately perform a whisker touch detection task with high accuracy. Animals trained de novo on both simple and complex whisker touch detection tasks showed no permanent behavioral deficits following columnar-scale lesions. Thus, columnar-scale lesions permanently degrade performance in object location discrimination tasks.
Collapse
|
37
|
Gansel KS. Neural synchrony in cortical networks: mechanisms and implications for neural information processing and coding. Front Integr Neurosci 2022; 16:900715. [PMID: 36262373 PMCID: PMC9574343 DOI: 10.3389/fnint.2022.900715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 09/13/2022] [Indexed: 11/13/2022] Open
Abstract
Synchronization of neuronal discharges on the millisecond scale has long been recognized as a prevalent and functionally important attribute of neural activity. In this article, I review classical concepts and corresponding evidence of the mechanisms that govern the synchronization of distributed discharges in cortical networks and relate those mechanisms to their possible roles in coding and cognitive functions. To accommodate the need for a selective, directed synchronization of cells, I propose that synchronous firing of distributed neurons is a natural consequence of spike-timing-dependent plasticity (STDP) that associates cells repetitively receiving temporally coherent input: the “synchrony through synaptic plasticity” hypothesis. Neurons that are excited by a repeated sequence of synaptic inputs may learn to selectively respond to the onset of this sequence through synaptic plasticity. Multiple neurons receiving coherent input could thus actively synchronize their firing by learning to selectively respond at corresponding temporal positions. The hypothesis makes several predictions: first, the position of the cells in the network, as well as the source of their input signals, would be irrelevant as long as their input signals arrive simultaneously; second, repeating discharge patterns should get compressed until all or some part of the signals are synchronized; and third, this compression should be accompanied by a sparsening of signals. In this way, selective groups of cells could emerge that would respond to some recurring event with synchronous firing. Such a learned response pattern could further be modulated by synchronous network oscillations that provide a dynamic, flexible context for the synaptic integration of distributed signals. I conclude by suggesting experimental approaches to further test this new hypothesis.
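The core ingredient of this hypothesis can be made tangible with a toy sketch (not a model taken from the article): a leaky integrate-and-fire neuron repeatedly receives the same input sequence, and pair-based STDP potentiates the synapses active before the postsynaptic spike, so the response compresses toward the start of the sequence over trials. The neuron model, sequence, and plasticity parameters are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the "synchrony through synaptic plasticity" ingredient (not a
# model from the article): a leaky integrate-and-fire unit receives the same
# 20-input temporal sequence on every trial; pair-based STDP potentiates inputs
# arriving before the postsynaptic spike, so the first spike moves earlier
# (signal compression) across trials. All parameters are assumptions.
n_in, dt, tau_m, v_th = 20, 0.1, 10.0, 1.0          # ms-based units
tau_plus, tau_minus, a_plus, a_minus = 20.0, 20.0, 0.03, 0.033
w = np.full(n_in, 0.2)                               # initial synaptic weights
pre_times = np.arange(n_in) * 2.0                    # input i fires at 2*i ms

for trial in range(60):
    v, t_post = 0.0, None
    for step in range(int(60.0 / dt)):
        t = step * dt
        v += dt * (-v / tau_m)                       # leak
        v += w @ (np.abs(pre_times - t) < dt / 2)    # delta-current synapses
        if v >= v_th and t_post is None:
            t_post = round(t, 1)                     # first postsynaptic spike
    if t_post is not None:                           # pair-based STDP update
        lag = t_post - pre_times                     # pre-before-post if lag > 0
        ltp = (lag > 0) * a_plus * np.exp(-np.abs(lag) / tau_plus)
        ltd = (lag <= 0) * a_minus * np.exp(-np.abs(lag) / tau_minus)
        w = np.clip(w + ltp - ltd, 0.0, 0.5)
    if trial in (0, 29, 59):
        print(f"trial {trial:2d}: first postsynaptic spike at {t_post} ms")
```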
Collapse
|
38
|
Carrillo-Reid L, Calderon V. Conceptual framework for neuronal ensemble identification and manipulation related to behavior using calcium imaging. NEUROPHOTONICS 2022; 9:041403. [PMID: 35898958 PMCID: PMC9309498 DOI: 10.1117/1.nph.9.4.041403] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 07/12/2022] [Indexed: 06/15/2023]
Abstract
Significance: The identification and manipulation of spatially identified neuronal ensembles with optical methods have been recently used to prove the causal link between neuronal ensemble activity and learned behaviors. However, the standardization of a conceptual framework to identify and manipulate neuronal ensembles from calcium imaging recordings is still lacking. Aim: We propose a conceptual framework for the identification and manipulation of neuronal ensembles using simultaneous calcium imaging and two-photon optogenetics in behaving mice. Approach: We review the computational approaches that have been used to identify and manipulate neuronal ensembles with single cell resolution during behavior in different brain regions using all-optical methods. Results: We proposed three steps as a conceptual framework that could be applied to calcium imaging recordings to identify and manipulate neuronal ensembles in behaving mice: (1) transformation of calcium transients into binary arrays; (2) identification of neuronal ensembles as similar population vectors; and (3) targeting of neuronal ensemble members that significantly impact behavioral performance. Conclusions: The use of simultaneous two-photon calcium imaging and two-photon optogenetics allowed for the experimental demonstration of the causal relation of population activity and learned behaviors. The standardization of analytical tools to identify and manipulate neuronal ensembles could accelerate interventional experiments aiming to reprogram the brain in normal and pathological conditions.
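A toy sketch of the first two steps of this framework might look as follows; the simulated data, thresholds, and grouping criterion are simplifying assumptions rather than the authors' exact pipeline.

```python
import numpy as np

# Toy sketch of steps 1-2 of the proposed framework (the simulated data,
# thresholds, and grouping criterion are simplifying assumptions): binarize
# calcium-like traces, then group similar binary population vectors into
# candidate neuronal ensembles.
rng = np.random.default_rng(7)
n_cells, n_frames = 50, 600
ensembles = [np.arange(0, 20), np.arange(20, 45)]     # two ground-truth ensembles
dff = rng.normal(0.0, 0.1, (n_frames, n_cells))       # baseline noise
for cells in ensembles:
    active_frames = rng.choice(n_frames, 60, replace=False)
    dff[np.ix_(active_frames, cells)] += 1.0          # ensemble reactivations

# Step 1: binarize each cell's trace (here: above its mean + 2 SD).
thresh = dff.mean(axis=0) + 2.0 * dff.std(axis=0)
binary = (dff > thresh).astype(int)

# Step 2: treat each high-activity frame as a population vector and greedily
# group frames with high cosine similarity into candidate ensembles.
active = np.where(binary.sum(axis=1) >= 5)[0]
vectors = binary[active].astype(float)
sim = vectors @ vectors.T
norms = np.linalg.norm(vectors, axis=1)
sim /= np.outer(norms, norms)

unassigned, groups = set(range(len(active))), []
while unassigned:
    seed = unassigned.pop()
    members = {seed} | {j for j in unassigned if sim[seed, j] > 0.7}
    unassigned -= members
    groups.append(members)

print("candidate ensembles found:", len(groups))
for g in groups:
    core_cells = np.nonzero(vectors[sorted(g)].mean(axis=0) > 0.5)[0]
    print("  frames:", len(g), " core cell count:", core_cells.size)
```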
Collapse
Affiliation(s)
- Luis Carrillo-Reid
- National Autonomous University of Mexico, Neurobiology Institute, Department of Developmental Neurobiology and Neurophysiology, Querétaro, Mexico
| | - Vladimir Calderon
- National Autonomous University of Mexico, Neurobiology Institute, Department of Developmental Neurobiology and Neurophysiology, Querétaro, Mexico
| |
Collapse
|
39
|
Eybposh MH, Curtis VR, Rodríguez-Romaguera J, Pégard NC. Advances in computer-generated holography for targeted neuronal modulation. NEUROPHOTONICS 2022; 9:041409. [PMID: 35719844 PMCID: PMC9201973 DOI: 10.1117/1.nph.9.4.041409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/17/2022] [Indexed: 05/08/2023]
Abstract
Genetically encoded calcium indicators and optogenetics have revolutionized neuroscience by enabling the detection and modulation of neural activity with single-cell precision using light. To fully leverage the immense potential of these techniques, advanced optical instruments that can deliver light to custom ensembles of neurons with a high level of spatial and temporal precision are required. Modern light sculpting techniques that have the capacity to shape a beam of light are preferred because they can precisely target multiple neurons simultaneously and modulate the activity of large ensembles of individual neurons at rates that match natural neuronal dynamics. The most versatile approach, computer-generated holography (CGH), relies on a computer-controlled light modulator placed in the path of a coherent laser beam to synthesize custom three-dimensional (3D) illumination patterns and illuminate neural ensembles on demand. Here, we review recent progress in the development and implementation of fast and spatiotemporally precise CGH techniques that sculpt light in 3D to optically interrogate neural circuit functions.
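One classic CGH computation, the Gerchberg-Saxton iterative Fourier-transform algorithm, can be sketched in a few lines (the review covers more advanced methods); grid size and target layout are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of one classic CGH approach, the Gerchberg-Saxton iterative
# Fourier-transform algorithm (more advanced methods are covered in the
# review): find a phase-only mask whose far-field intensity approximates a
# target pattern of illumination spots. Grid size and targets are assumptions.
rng = np.random.default_rng(8)
n = 128
target = np.zeros((n, n))
for y, x in [(40, 40), (40, 88), (88, 64)]:             # three "neuron" targets
    target[y, x] = 1.0
target_amp = np.sqrt(target)

phase = rng.uniform(0, 2 * np.pi, (n, n))               # random initial SLM phase
for _ in range(100):
    slm_field = np.exp(1j * phase)                       # unit-amplitude laser at the SLM
    far_field = np.fft.fft2(slm_field) / n
    # keep the computed far-field phase, impose the target amplitude
    far_field = target_amp * np.exp(1j * np.angle(far_field))
    back = np.fft.ifft2(far_field) * n
    phase = np.angle(back)                               # phase-only constraint at the SLM

intensity = np.abs(np.fft.fft2(np.exp(1j * phase)) / n) ** 2
print("fraction of light in the three target pixels:",
      round(intensity[[40, 40, 88], [40, 88, 64]].sum() / intensity.sum(), 3))
```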
Collapse
Affiliation(s)
- M. Hossein Eybposh
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina at Chapel Hill, Department of Biomedical Engineering, Chapel Hill, North Carolina, United States
| | - Vincent R. Curtis
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina, Department of Psychiatry, Chapel Hill, North Carolina, United States
| | - Jose Rodríguez-Romaguera
- University of North Carolina, Department of Psychiatry, Chapel Hill, North Carolina, United States
- University of North Carolina, Neuroscience Center, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Institute for Developmental Disabilities, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Stress Initiative, Chapel Hill, North Carolina, United States
| | - Nicolas C. Pégard
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina at Chapel Hill, Department of Biomedical Engineering, Chapel Hill, North Carolina, United States
- University of North Carolina, Neuroscience Center, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Stress Initiative, Chapel Hill, North Carolina, United States
| |
Collapse
|
40
|
Levi A, Spivak L, Sloin HE, Someck S, Stark E. Error correction and improved precision of spike timing in converging cortical networks. Cell Rep 2022; 40:111383. [PMID: 36130516 PMCID: PMC9513803 DOI: 10.1016/j.celrep.2022.111383] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 05/26/2022] [Accepted: 08/28/2022] [Indexed: 11/20/2022] Open
Abstract
The brain propagates neuronal signals accurately and rapidly. Nevertheless, whether and how a pool of cortical neurons transmits an undistorted message to a target remains unclear. We apply optogenetic white noise signals to small assemblies of cortical pyramidal cells (PYRs) in freely moving mice. The directly activated PYRs exhibit a spike timing precision of several milliseconds. Instead of losing precision, interneurons driven via synaptic activation exhibit higher precision with respect to the white noise signal. Compared with directly activated PYRs, postsynaptic interneuron spike trains allow better signal reconstruction, demonstrating error correction. Data-driven modeling shows that nonlinear amplification of coincident spikes can generate error correction and improved precision. Over multiple applications of the same signal, postsynaptic interneuron spiking is most reliable at timescales ten times shorter than those of the presynaptic PYR, exhibiting temporal coding. Similar results are observed in hippocampal region CA1. Coincidence detection of convergent inputs enables messages to be precisely propagated between cortical PYRs and interneurons. PYR-to-interneuron spike transmission exhibits error correction and improved precision. Interneuron precision is higher when a larger pool of presynaptic PYRs is recruited. Error correction and improved precision are consistent with coincidence detection. Interneurons activated by synaptic transmission act as temporal coders.
Collapse
Affiliation(s)
- Amir Levi
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
| | - Lidor Spivak
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
| | - Hadas E Sloin
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
| | - Shirly Someck
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
| | - Eran Stark
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel.
| |
Collapse
|
41
|
Physiological noise facilitates multiplexed coding of vibrotactile-like signals in somatosensory cortex. Proc Natl Acad Sci U S A 2022; 119:e2118163119. [PMID: 36067307 PMCID: PMC9478643 DOI: 10.1073/pnas.2118163119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Neurons can use different aspects of their spiking to simultaneously represent (multiplex) different features of a stimulus. For example, some pyramidal neurons in primary somatosensory cortex (S1) use the rate and timing of their spikes to, respectively, encode the intensity and frequency of vibrotactile stimuli. Doing so has several requirements. Because they fire at low rates, pyramidal neurons cannot entrain 1:1 with high-frequency (100 to 600 Hz) inputs and, instead, must skip (i.e., not respond to) some stimulus cycles. The proportion of skipped cycles must vary inversely with stimulus intensity for firing rate to encode stimulus intensity. Spikes must phase-lock to the stimulus for spike times (intervals) to encode stimulus frequency, but, in addition, skipping must occur irregularly to avoid aliasing. Using simulations and in vitro experiments in which mouse S1 pyramidal neurons were stimulated with inputs emulating those induced by vibrotactile stimuli, we show that fewer cycles are skipped as stimulus intensity increases, as required for rate coding, and that intrinsic or synaptic noise can induce irregular skipping without disrupting phase locking, as required for temporal coding. This occurs because noise can modulate the reliability without disrupting the precision of spikes evoked by small-amplitude, fast-onset signals. Specifically, in the fluctuation-driven regime associated with sparse spiking, rate and temporal coding are both paradoxically improved by the strong synaptic noise characteristic of the intact cortex. Our results demonstrate that multiplexed coding by S1 pyramidal neurons is not only feasible under in vivo conditions, but that background synaptic noise is actually beneficial.
Collapse
|
42
|
Nomura R, Fujiwara K, Ikeguchi T. Superposed recurrence plots for reconstructing a common input applied to neurons. Phys Rev E 2022; 106:034205. [PMID: 36266847 DOI: 10.1103/physreve.106.034205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2022] [Accepted: 07/21/2022] [Indexed: 06/16/2023]
Abstract
In the brain, common inputs play an important role in eliciting synchronous firing in assemblies of neurons. However, common inputs are usually unknown to observers. If an unobserved common input could be reconstructed only from outputs, it would benefit our understanding of communication in the brain. Thus, we have developed a method for reconstructing a common input only from the output firing rates of uncoupled neuron models. To this end, we propose the superposed recurrence plot (SRP), constructed by taking the pixel-wise union of points across multiple recurrence plots. The SRP method can reconstruct a common input when using various types of neurons with different firing-rate baselines, even when using uncoupled neuron models that exhibit chaotic responses. The SRP method robustly reconstructs the common input applied to the neuron models when adequate time windows are selected for calculating the firing rates in accordance with the width of the fluctuations. These results suggest that certain information is embedded in the firing rate. These findings could provide a basis for analyzing whole-brain communication utilizing rate coding.
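A minimal sketch of the SRP construction described above, run on synthetic firing-rate series; the surrogate data, recurrence threshold, and printed summaries are assumptions made for illustration, and the paper's actual input-reconstruction step is not reproduced here.

```python
# Sketch (assumed surrogate data): one recurrence plot per firing-rate series,
# then a pixel-wise union across plots gives the superposed recurrence plot.
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps."""
    return np.abs(x[:, None] - x[None, :]) < eps

rng = np.random.default_rng(0)
n = 300
common_input = np.sin(np.linspace(0, 8 * np.pi, n)) + 0.3 * rng.standard_normal(n)

# Surrogate firing-rate series of three uncoupled neurons, each following the
# common input with its own baseline, gain, and private noise.
rates = [base + gain * common_input + 0.5 * rng.standard_normal(n)
         for base, gain in [(5.0, 1.0), (20.0, 2.0), (50.0, 0.8)]]

plots = [recurrence_plot(r, eps=0.3 * np.std(r)) for r in rates]
srp = np.logical_or.reduce(plots)      # pixel-wise union = superposed recurrence plot

print("per-neuron recurrence-plot fill fractions:",
      [round(float(p.mean()), 3) for p in plots])
print("SRP fill fraction:", round(float(srp.mean()), 3))
# Reconstructing the common input from the SRP (the paper's goal) requires the
# additional steps described by the authors and is not attempted in this sketch.
```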
Collapse
Affiliation(s)
- Ryota Nomura
- Graduate School of Engineering, Tokyo University of Science, 6-3-1, Niijuku, Katsushika-ku, Tokyo 125-8585, Japan
- Faculty of Human Education, Kagoshima Immaculate Heart University, 2365, Amatatsu-cho, Satsumasendai, Kagoshima 895-0011, Japan
| | - Kantaro Fujiwara
- International Research Center for Neurointelligence, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
| | - Tohru Ikeguchi
- Graduate School of Engineering, Tokyo University of Science, 6-3-1, Niijuku, Katsushika-ku, Tokyo 125-8585, Japan
- Faculty of Engineering, Tokyo University of Science, 6-3-1, Niijuku, Katsushika-ku, Tokyo 125-8585, Japan
| |
Collapse
|
43
|
D'Angelo E, Jirsa V. The quest for multiscale brain modeling. Trends Neurosci 2022; 45:777-790. [PMID: 35906100 DOI: 10.1016/j.tins.2022.06.007] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/20/2022] [Accepted: 06/21/2022] [Indexed: 01/07/2023]
Abstract
Addressing the multiscale organization of the brain, which is fundamental to the dynamic repertoire of the organ, remains challenging. In principle, it should be possible to model neurons and synapses in detail and then connect them into large neuronal assemblies to explain the relationship between microscopic phenomena, large-scale brain functions, and behavior. It is more difficult to infer neuronal functions from ensemble measurements such as those currently obtained with brain activity recordings. In this article we consider theories and strategies for combining bottom-up models, generated from principles of neuronal biophysics, with top-down models based on ensemble representations of network activity and on functional principles. The hope is that these integrative approaches will provide effective multiscale simulations in virtual brains and neurorobots, paving the way to future applications in medicine and information technologies.
Collapse
Affiliation(s)
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, and Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Mondino Foundation, Pavia, Italy.
| | - Viktor Jirsa
- Institut National de la Santé et de la Recherche Médicale (INSERM) Unité 1106, Centre National de la Recherche Scientifique (CNRS), and University of Aix-Marseille, Marseille, France
| |
Collapse
|
44
|
Russell LE, Dalgleish HWP, Nutbrown R, Gauld OM, Herrmann D, Fişek M, Packer AM, Häusser M. All-optical interrogation of neural circuits in behaving mice. Nat Protoc 2022; 17:1579-1620. [PMID: 35478249 PMCID: PMC7616378 DOI: 10.1038/s41596-022-00691-w] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 02/09/2022] [Indexed: 12/22/2022]
Abstract
Recent advances combining two-photon calcium imaging and two-photon optogenetics with computer-generated holography now allow us to read and write the activity of large populations of neurons in vivo at cellular resolution and with high temporal resolution. Such 'all-optical' techniques enable experimenters to probe the effects of functionally defined neurons on neural circuit function and behavioral output with new levels of precision. This greatly increases flexibility, resolution, targeting specificity and throughput compared with alternative approaches based on electrophysiology and/or one-photon optogenetics and can interrogate larger and more densely labeled populations of neurons than current voltage imaging-based implementations. This protocol describes the experimental workflow for all-optical interrogation experiments in awake, behaving head-fixed mice. We describe modular procedures for the setup and calibration of an all-optical system (~3 h), the preparation of an indicator- and opsin-expressing, task-performing animal (~3-6 weeks), the characterization of functional and photostimulation responses (~2 h per field of view) and the design and implementation of an all-optical experiment (achievable within the timescale of a normal behavioral experiment; ~3-5 h per field of view). We discuss optimizations for efficiently selecting and targeting neuronal ensembles for photostimulation sequences, as well as generating photostimulation response maps from the imaging data that can be used to examine the impact of photostimulation on the local circuit. We demonstrate the utility of this strategy in three brain areas by using different experimental setups. This approach can in principle be adapted to any brain area to probe functional connectivity in neural circuits and investigate the relationship between neural circuit activity and behavior.
Collapse
Affiliation(s)
- Lloyd E Russell
- Wolfson Institute for Biomedical Research, University College London, London, UK
| | - Henry W P Dalgleish
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Sainsbury Wellcome Centre, University College London, London, UK
| | - Rebecca Nutbrown
- Wolfson Institute for Biomedical Research, University College London, London, UK
| | - Oliver M Gauld
- Wolfson Institute for Biomedical Research, University College London, London, UK
| | - Dustin Herrmann
- Wolfson Institute for Biomedical Research, University College London, London, UK
| | - Mehmet Fişek
- Wolfson Institute for Biomedical Research, University College London, London, UK
| | - Adam M Packer
- Wolfson Institute for Biomedical Research, University College London, London, UK.
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK.
| | - Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London, UK.
| |
Collapse
|
45
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
46
|
Kulkarni AS, Burns MR, Brundin P, Wesson DW. Linking α-synuclein-induced synaptopathy and neural network dysfunction in early Parkinson's disease. Brain Commun 2022; 4:fcac165. [PMID: 35822101 PMCID: PMC9272065 DOI: 10.1093/braincomms/fcac165] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 03/11/2022] [Accepted: 06/20/2022] [Indexed: 01/18/2023] Open
Abstract
The prodromal phase of Parkinson's disease is characterized by aggregation of the misfolded pathogenic protein α-synuclein in select neural centres, co-occurring with non-motor symptoms including sensory and cognitive loss, and emotional disturbances. It is unclear whether neuronal loss is significant during the prodrome. Underlying these symptoms are synaptic impairments and aberrant neural network activity. However, the relationships between synaptic defects and network-level perturbations are not established. In experimental models, pathological α-synuclein not only impacts neurotransmission at the synaptic level, but also leads to changes in brain network-level oscillatory dynamics-both of which likely contribute to non-motor deficits observed in Parkinson's disease. Here we draw upon research from both human subjects and experimental models to propose a 'synapse to network prodrome cascade' wherein before overt cell death, pathological α-synuclein induces synaptic loss and contributes to aberrant network activity, which then gives rise to prodromal symptomology. As the disease progresses, abnormal patterns of neural activity ultimately lead to neuronal loss and clinical progression of disease. Finally, we outline goals and research needed to unravel the basis of functional impairments in Parkinson's disease and other α-synucleinopathies.
Collapse
Affiliation(s)
- Aishwarya S Kulkarni
- Department of Pharmacology & Therapeutics, University of Florida, 1200 Newell Dr, Gainesville, FL 32610, USA
| | - Matthew R Burns
- Department of Neurology, University of Florida, 1200 Newell Dr, Gainesville, FL 32610, USA
- Norman Fixel Institute for Neurological Disorders, University of Florida, 1200 Newell Dr, Gainesville, FL 32610, USA
| | - Patrik Brundin
- Pharma Research and Early Development (pRED), F. Hoffman-La Roche, Little Falls, NJ, USA
| | - Daniel W Wesson
- Department of Pharmacology & Therapeutics, University of Florida, 1200 Newell Dr, Gainesville, FL 32610, USA
- Norman Fixel Institute for Neurological Disorders, University of Florida, 1200 Newell Dr, Gainesville, FL 32610, USA
| |
Collapse
|
47
|
Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization. PHOTONICS 2022. [DOI: 10.3390/photonics9040217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing-dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. A temporal coding strategy is employed to transform information into precise firing times. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNNs. In addition, network optimization is performed by considering different network sizes.
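As an aside, the temporal coding step can be illustrated with a simple latency code (earlier spike = larger value); the 10 ms encoding window and the linear mapping below are assumptions for illustration, not the paper's exact scheme.

```python
# Latency coding sketch: map normalised feature values to first-spike times.
import numpy as np

def encode_latency(x, t_max=0.01):
    """Map features in [0, 1] to spike times in [0, t_max]: larger fires earlier."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

def decode_latency(t, t_max=0.01):
    """Invert the encoding to recover the feature values."""
    return 1.0 - np.asarray(t) / t_max

features = np.array([0.2, 0.9, 0.55, 0.0])      # e.g. normalised Iris attributes
spike_times = encode_latency(features)
print("spike latencies (ms):", np.round(spike_times * 1e3, 2))
print("decoded features    :", np.round(decode_latency(spike_times), 2))
```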
Collapse
|
48
|
Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1364-1376. [PMID: 32356771 DOI: 10.1109/tcyb.2020.2984888] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Spikes are the currency in central nervous systems for information transmission and processing. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention to the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remain challenging. In this article, we make our contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve processing efficiency. Based on the neuron model, we propose two new multispike learning rules which demonstrate better performance over other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules demonstrate high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task, and notably, a single neuron is capable of solving multicategory classifications with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and identify a new phenomenon of selectivity loss. In contrast, our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied. Moreover, our rules can not only detect features but also discriminate them. The improved performance of our methods makes them a preferable choice for neuromorphic computing.
Collapse
|
49
|
Yu Q, Song S, Ma C, Pan L, Tan KC. Synaptic Learning With Augmented Spikes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1134-1146. [PMID: 33471768 DOI: 10.1109/tnnls.2020.3040969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in spiking ones. With a more brain-like processing paradigm, spiking neurons are more promising for improvements in efficiency and computational capability. They extend the computation of traditional neurons with an additional dimension of time carried by all-or-nothing spikes. Could one benefit from both the accuracy of analog values and the time-processing capability of spikes? In this article, we introduce the concept of augmented spikes, which carry complementary information in spike coefficients in addition to spike latencies. A new augmented spiking neuron model and synaptic learning rules are proposed to process and learn patterns of augmented spikes. We provide systematic insights into the properties and characteristics of our methods, including classification of augmented spike patterns, learning capacity, construction of causality, feature detection, robustness, and applicability to practical tasks such as acoustic and visual pattern recognition. Our augmented approaches show several advanced learning properties and reliably outperform baselines that use typical all-or-nothing spikes. Our approaches significantly improve the accuracies of a temporal-based approach on sound and MNIST recognition tasks to 99.38% and 97.90%, respectively, highlighting the effectiveness and potential merits of our methods. More importantly, our augmented approaches are versatile and can be easily generalized to other spike-based systems, including neuromorphic computing, contributing to their further development.
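A minimal sketch of the augmented-spike idea, with kernel shape, weights, and numbers chosen purely for illustration: each input spike carries a real-valued coefficient in addition to its latency, and that coefficient scales the spike's contribution to the membrane potential.

```python
# Sketch (assumed exponential kernel, arbitrary values): augmented spikes are
# (latency, coefficient) pairs rather than all-or-nothing events.
import numpy as np

def membrane_potential(t, spikes, weights, tau=0.01):
    """Sum of causal exponential kernels, each scaled by its spike coefficient."""
    v = 0.0
    for (t_s, coeff), w in zip(spikes, weights):
        if t >= t_s:
            v += w * coeff * np.exp(-(t - t_s) / tau)
    return v

# An "augmented" input pattern: each spike has a latency (s) and a coefficient.
pattern = [(0.002, 0.8), (0.005, 1.5), (0.009, 0.3)]
weights = [0.6, 1.0, 0.4]                 # one synaptic weight per input spike train

for t in (0.004, 0.008, 0.012):
    print(f"V(t={t * 1e3:.0f} ms) = {membrane_potential(t, pattern, weights):.3f}")
```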
Collapse
|
50
|
Barranca VJ, Bhuiyan A, Sundgren M, Xing F. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making. Front Neurosci 2022; 16:801847. [PMID: 35295091 PMCID: PMC8919085 DOI: 10.3389/fnins.2022.801847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 02/02/2022] [Indexed: 11/28/2022] Open
Abstract
The notion that a neuron transmits the same set of neurotransmitters at all of its post-synaptic connections, typically known as Dale's law, is well supported throughout the majority of the brain and is assumed in almost all theoretical studies investigating the mechanisms for computation in neuronal networks. Dale's law has numerous functional implications in fundamental sensory processing and decision-making tasks, and it plays a key role in the current understanding of the structure-function relationship in the brain. However, since exceptions to Dale's law have been discovered for certain neurons and because other biological systems with complex network structure incorporate individual units that send both positive and negative feedback signals, we investigate the functional implications of network model dynamics that violate Dale's law by allowing each neuron to send out both excitatory and inhibitory signals to its neighbors. We show how balanced network dynamics, in which large excitatory and inhibitory inputs are dynamically adjusted such that input fluctuations produce irregular firing events, are theoretically preserved for a single population of neurons violating Dale's law. We further leverage this single-population network model in the context of two competing pools of neurons to demonstrate that effective decision-making dynamics are also produced, agreeing with experimental observations of honeybee dynamics in selecting a food source and with artificial neural networks trained in optimal selection. Through direct comparison with the classical two-population balanced neuronal network, we argue that the one-population network demonstrates more robust balanced activity for systems with fewer computational units, such as honeybee colonies, whereas the two-population network exhibits a more rapid response to temporal variations in network inputs, as required by the brain. We expect this study will shed light on the role of neurons violating Dale's law observed in experiments, as well as on shared design principles across biological systems that perform complex computations.
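A crude single-population sketch (arbitrary parameters, not the paper's balanced-network analysis) of dynamics that violate Dale's law: every neuron sends both positive and negative weights, so recurrent excitation and inhibition cancel on average while their fluctuations make firing irregular.

```python
# Sketch with illustrative parameters: LIF network whose neurons each project
# mixed-sign (Dale's-law-violating) weights; firing irregularity is summarised
# by the coefficient of variation (CV) of interspike intervals.
import numpy as np

rng = np.random.default_rng(0)
N, K = 400, 40                        # neurons, average inputs per neuron
tau, v_th, dt, T = 0.02, 1.0, 1e-4, 2.0
mu_ext = 1.2                          # constant suprathreshold external drive

conn = rng.random((N, N)) < K / N
signs = rng.choice([-1.0, 1.0], size=(N, N))
W = conn * signs * 0.5 / np.sqrt(K)   # zero-mean mixed-sign weights
np.fill_diagonal(W, 0.0)

v = rng.random(N) * v_th
spike_times = [[] for _ in range(N)]
for step in range(int(T / dt)):
    fired = v >= v_th
    if fired.any():
        for i in np.flatnonzero(fired):
            spike_times[i].append(step * dt)
        v[fired] = 0.0
        v += W[:, fired].sum(axis=1)  # delta-pulse recurrent input from spikes
    v += dt * (mu_ext - v) / tau      # leaky integration toward external drive

rates = np.array([len(s) for s in spike_times]) / T
cvs = [np.std(np.diff(s)) / np.mean(np.diff(s)) for s in spike_times if len(s) > 4]
print(f"mean rate {rates.mean():.1f} Hz, mean ISI CV {np.mean(cvs):.2f} "
      "(CV near 0 is clock-like; values toward 1 indicate Poisson-like irregularity)")
```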
Collapse
|