1. Vinck M, Uran C, Dowdall JR, Rummell B, Canales-Johnson A. Large-scale interactions in predictive processing: oscillatory versus transient dynamics. Trends Cogn Sci 2025; 29:133-148. [PMID: 39424521; PMCID: PMC7616854; DOI: 10.1016/j.tics.2024.09.013]
Abstract
How do the two main types of neural dynamics, aperiodic transients and oscillations, contribute to the interactions between feedforward (FF) and feedback (FB) pathways in sensory inference and predictive processing? We discuss three theoretical perspectives. First, we critically evaluate the theory that gamma and alpha/beta rhythms play a role in classic hierarchical predictive coding (HPC) by mediating FF and FB communication, respectively. Second, we outline an alternative functional model in which rapid sensory inference is mediated by aperiodic transients, whereas oscillations contribute to the stabilization of neural representations over time and to plasticity processes. Third, we propose that the strong dependence of oscillations on predictability can be explained by a biologically plausible alternative to classic HPC, namely dendritic HPC.
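To make the scheme under discussion concrete, here is a minimal two-level sketch of classic HPC in Python; all variable names, sizes, and the step size are illustrative assumptions, not taken from the article. FF signals carry prediction errors up the hierarchy, FB signals carry predictions down, and inference amounts to reducing the error.

import numpy as np

# Minimal classic HPC sketch (illustrative assumptions throughout).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 4))  # generative weights: latent causes -> sensory units
x = rng.normal(size=16)                  # sensory input
r = np.zeros(4)                          # higher-level latent estimate
lr = 0.1                                 # inference step size (assumed)

for _ in range(200):
    prediction = W @ r                   # feedback (FB): top-down prediction
    error = x - prediction               # feedforward (FF): prediction error
    r += lr * (W.T @ error)              # inference: update latents to explain the input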
Affiliation(s)
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University, 6525 Nijmegen, The Netherlands.
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University, 6525 Nijmegen, The Netherlands.
- Jarrod R Dowdall
- Robarts Research Institute, Western University, London, ON, Canada
- Brian Rummell
- Ernst Strüngmann Institute (ESI) for Neuroscience, in Cooperation with the Max Planck Society, 60528 Frankfurt am Main, Germany
- Andres Canales-Johnson
- Facultad de Ciencias de la Salud, Universidad Catolica del Maule, 3480122 Talca, Chile; Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK.
2. Zohora FT, Karia V, Soures N, Kudithipudi D. Probabilistic metaplasticity for continual learning with memristors in spiking networks. Sci Rep 2024; 14:29496. [PMID: 39604461; PMCID: PMC11603065; DOI: 10.1038/s41598-024-78290-w]
Abstract
Edge devices operating in dynamic environments critically need the ability to continually learn without catastrophic forgetting. The strict resource constraints of these devices pose a major challenge, as continual learning entails memory and computational overhead. Crossbar architectures using memristor devices offer energy efficiency through compute-in-memory and hold promise for addressing this issue. However, memristors often exhibit low precision and high variability in conductance modulation, rendering them unsuitable for continual learning solutions that require precise modulation of weight magnitude for consolidation. Current approaches fall short of addressing this challenge directly and rely on auxiliary high-precision memory, leading to frequent memory access, high memory overhead, and energy dissipation. In this research, we propose probabilistic metaplasticity, which consolidates weights by modulating their update probability rather than magnitude. The proposed mechanism eliminates high-precision modification of weight magnitudes and, consequently, the need for auxiliary high-precision memory. We demonstrate the efficacy of the proposed mechanism by integrating probabilistic metaplasticity into a spiking network trained on an error threshold with low-precision memristor weights. Evaluations on continual learning benchmarks show that probabilistic metaplasticity achieves performance equivalent to state-of-the-art continual learning models with high-precision weights while consuming ~67% lower memory for additional parameters and up to ~60× lower energy during parameter updates compared to an auxiliary memory-based solution. The proposed model shows potential for energy-efficient continual learning with low-precision emerging devices.
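As a rough illustration of the central mechanism (a sketch under assumed update rules and constants, not the paper's exact algorithm): each low-precision weight carries a metaplasticity variable that lowers its probability of receiving a fixed-size update, so consolidation never requires fine-grained changes to the weight magnitude.

import numpy as np

# Probabilistic-metaplasticity sketch: consolidate by gating update *probability*.
rng = np.random.default_rng(1)
n = 1000
w = rng.choice([-1.0, 1.0], size=n)   # low-precision (here binary) memristor-like weights
m = np.zeros(n)                       # metaplasticity variable per weight (assumed form)

def apply_updates(err_sign, m, temperature=1.0):
    """Fixed-size updates are applied stochastically; larger m means rarer updates."""
    p_update = np.exp(-m / temperature)
    return (rng.random(len(m)) < p_update) & (err_sign != 0)

err_sign = np.sign(rng.normal(size=n))  # surrogate per-weight error signal
mask = apply_updates(err_sign, m)
w[mask] = err_sign[mask]                # magnitude is fixed; no high-precision write needed
m[mask] += 0.1                          # consolidate weights involved in this task (assumed rate)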
Affiliation(s)
- Fatima Tuz Zohora
- Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, San Antonio, TX, 78249, USA.
- Vedant Karia
- Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, San Antonio, TX, 78249, USA
- Nicholas Soures
- Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, San Antonio, TX, 78249, USA
- Dhireesha Kudithipudi
- Neuromorphic Artificial Intelligence Lab, University of Texas at San Antonio, San Antonio, TX, 78249, USA
3. Kappel D, Tetzlaff C. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics. PLoS Comput Biol 2024; 20:e1012531. [PMID: 39495714; PMCID: PMC11534197; DOI: 10.1371/journal.pcbi.1012531]
Abstract
Synapses in the brain are highly noisy, which leads to large trial-by-trial variability. Given how costly synapses are in terms of energy consumption, these high levels of noise are surprising. Here we propose that synapses use noise to represent uncertainties about the somatic activity of the postsynaptic neuron. To show this, we developed a mathematical framework in which the synapse as a whole interacts with the soma of the postsynaptic neuron much like an agent that is situated and behaves in an uncertain, dynamic environment. This framework suggests that synapses use an implicit internal model of the somatic membrane dynamics that is updated by a synaptic learning rule resembling experimentally well-established LTP/LTD mechanisms. In addition, this approach entails that a synapse utilizes its inherently noisy release to also encode its uncertainty about the state of the somatic potential. Although each synapse strives to predict the somatic dynamics of its postsynaptic neuron, we show that the emergent dynamics of many synapses in a neuronal network resolve different learning problems, such as pattern classification or closed-loop control in a dynamic environment. In this way, synapses coordinate themselves to represent and utilize uncertainties at the network level in behaviorally ambiguous situations.
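A compact numerical sketch of this view (illustrative dynamics and constants, not the paper's model): the synapse tracks a running estimate of the somatic potential with an LTP/LTD-like correction, tracks its own prediction-error variance, and draws its stochastic release from that distribution, so the noise level encodes its uncertainty.

import numpy as np

# Synapse-as-agent sketch: predict the soma, release with uncertainty-scaled noise.
rng = np.random.default_rng(2)
mu, var = 0.0, 1.0    # internal estimate of the somatic potential and its uncertainty
eta = 0.05            # learning rate of the LTP/LTD-like update (assumed)

for t in range(500):
    u_soma = np.sin(0.05 * t) + 0.2 * rng.normal()  # toy somatic membrane dynamics
    release = rng.normal(mu, np.sqrt(var))          # stochastic release encodes uncertainty
    err = u_soma - mu
    mu += eta * err                                 # LTP/LTD-like correction of the prediction
    var += eta * (err**2 - var)                     # track residual prediction-error variance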
Affiliation(s)
- David Kappel
- III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
- Christian Tetzlaff
- III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany
- Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
4. Aghi K, Schultz R, Newman ZL, Mendonça P, Li R, Bakshinska D, Isacoff EY. Synapse-to-synapse plasticity variability balanced to generate input-wide constancy of transmitter release. bioRxiv 2024:2024.09.11.612562. [PMID: 39314438; PMCID: PMC11419063; DOI: 10.1101/2024.09.11.612562]
Abstract
Basal synaptic strength can vary greatly between synapses formed by an individual neuron because of diverse probabilities of action potential (AP)-evoked transmitter release (Pr). Optical quantal analysis on large numbers of identified Drosophila larval glutamatergic synapses shows that short-term plasticity (STP) also varies greatly between synapses made by an individual type I motor neuron (MN) onto a single body wall muscle. Synapses with high and low Pr and with different forms and levels of STP have a random spatial distribution in the MN nerve terminal, and synapses with very different properties can be located within 200 nm of one another. While synapses start off with widely diverse basal Pr at low MN AP firing frequency and change Pr differentially when MN firing frequency increases, the overall distribution of Pr remains remarkably constant owing to a balance between the numbers of synapses that facilitate and depress, their degree of change, and their basal synaptic weights. This constancy in transmitter release can ensure robustness across changing behavioral conditions.
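The balance argument can be checked with a toy simulation (all distributions and plasticity factors below are invented for illustration, not measured values): if facilitating and depressing synapses are matched in number and degree of change, individual Pr values move substantially while the population distribution barely shifts.

import numpy as np

# Toy check of the balance argument; numbers are illustrative only.
rng = np.random.default_rng(3)
pr_low = rng.beta(2, 5, size=2000)        # diverse basal Pr at low firing frequency

facilitating = rng.random(2000) < 0.5     # half facilitate, half depress (assumed split)
pr_high = np.where(facilitating,
                   np.clip(pr_low * 1.5, 0.0, 1.0),  # facilitation raises Pr
                   pr_low * 0.5)                     # depression lowers Pr

print(pr_low.mean(), pr_high.mean())      # population mean stays roughly constant
print(np.abs(pr_high - pr_low).mean())    # ...even though single synapses change a lot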
5. Malkin J, O'Donnell C, Houghton CJ, Aitchison L. Signatures of Bayesian inference emerge from energy-efficient synapses. eLife 2024; 12:RP92595. [PMID: 39106188; PMCID: PMC11302983; DOI: 10.7554/elife.92595]
Abstract
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have (1) higher input firing rates and (2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
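One way to picture the setup (a minimal sketch with an assumed energy term, not the paper's implementation): give every synapse a trainable release variability, sample weights anew at each use, and add a cost to the loss that grows as variability shrinks, so reliability must be paid for.

import numpy as np

# Stochastic-synapse layer with an assumed energetic cost of reliability.
rng = np.random.default_rng(4)
n_in, n_out = 64, 10
mu = rng.normal(scale=0.1, size=(n_out, n_in))  # mean synaptic weights
log_var = np.zeros((n_out, n_in))               # per-synapse variability (trainable)

def forward(x):
    """Sample noisy weights at each use, as unreliable biological release would."""
    w = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)
    return w @ x

def total_loss(task_loss, lam=1e-3):
    # Assumed cost: reliability (low variance) is energetically expensive.
    energy_cost = lam * np.sum(np.exp(-log_var))
    return task_loss + energy_cost

x = rng.normal(size=n_in)                        # toy input
print(total_loss(task_loss=float(np.mean(forward(x) ** 2))))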
Affiliation(s)
- James Malkin
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Cian O'Donnell
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Intelligent Systems Research Centre, School of Computing, Engineering, and Intelligent Systems, Ulster University, Derry/Londonderry, United Kingdom
- Conor J Houghton
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
6. Jedlicka P, Tomko M, Robins A, Abraham WC. Contributions by metaplasticity to solving the Catastrophic Forgetting Problem. Trends Neurosci 2022; 45:656-666. [PMID: 35798611; DOI: 10.1016/j.tins.2022.06.002]
Abstract
Catastrophic forgetting (CF) refers to the sudden and severe loss of prior information in learning systems when acquiring new information. CF has been an Achilles' heel of standard artificial neural networks (ANNs) when learning multiple tasks sequentially. The brain, by contrast, has solved this problem during evolution. Modellers now use a variety of strategies to overcome CF, many of which have parallels to cellular and circuit functions in the brain. One common strategy, based on metaplasticity phenomena, controls the future rate of change at key connections to help retain previously learned information. However, the metaplasticity properties used so far are only a subset of those existing in neurobiology. We propose that, as models become more sophisticated, there could be value in drawing on a richer set of metaplasticity rules, especially when promoting continual learning in agents moving about the environment.
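In code, the metaplasticity strategy described here often amounts to a per-weight plasticity gate (the scaling rule and the importance proxy below are assumptions for illustration): weights that mattered for earlier tasks become harder to change.

import numpy as np

# Metaplasticity sketch: each weight's future learning rate shrinks with its importance.
rng = np.random.default_rng(5)
w = rng.normal(scale=0.1, size=100)
importance = np.zeros(100)              # metaplasticity state accumulated over past tasks

def plastic_step(grad, base_lr=0.1):
    """Assumed scaling: plasticity falls off as 1/(1 + importance)."""
    return (base_lr / (1.0 + importance)) * grad

grad = rng.normal(size=100)             # surrogate gradient from the current task
w -= plastic_step(grad)
importance += np.abs(grad)              # crude proxy for each weight's task relevance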
Affiliation(s)
- Peter Jedlicka
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, Frankfurt/Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt 60438, Germany.
- Matus Tomko
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Molecular Physiology and Genetics, Centre of Biosciences, Slovak Academy of Sciences, Bratislava, Slovakia
- Anthony Robins
- Department of Computer Science, University of Otago, Dunedin 9016, New Zealand
- Wickliffe C Abraham
- Department of Psychology, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand.
7. Jedlicka P, Bird AD, Cuntz H. Pareto optimality, economy-effectiveness trade-offs and ion channel degeneracy: improving population modelling for single neurons. Open Biol 2022; 12:220073. [PMID: 35857898; PMCID: PMC9277232; DOI: 10.1098/rsob.220073]
Abstract
Neurons encounter unavoidable evolutionary trade-offs between multiple tasks. They must consume as little energy as possible while effectively fulfilling their functions. Cells displaying the best performance for such multi-task trade-offs are said to be Pareto optimal, with their ion channel configurations underpinning their functionality. Ion channel degeneracy, however, implies that multiple ion channel configurations can lead to functionally similar behaviour. Therefore, instead of a single model, neuroscientists often use populations of models with distinct combinations of ionic conductances. This approach is called population (database or ensemble) modelling. It remains unclear which ion channel parameters in the vast population of functional models are more likely to be found in the brain. Here we argue that Pareto optimality can serve as a guiding principle for addressing this issue by helping to identify the subpopulations of conductance-based models that perform best for the trade-off between economy and functionality. In this way, the high-dimensional parameter space of neuronal models might be reduced to geometrically simple low-dimensional manifolds, potentially explaining experimentally observed ion channel correlations. Conversely, Pareto inference might also help deduce neuronal functions from high-dimensional Patch-seq data. In summary, Pareto optimality is a promising framework for improving population modelling of neurons and their circuits.
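The selection step this entry proposes reduces, in the simplest case, to extracting a Pareto front from a population of models scored on competing objectives; the sketch below (with random stand-in costs) keeps every model that no other model beats on both economy and functionality.

import numpy as np

# Pareto-front extraction over a model population (stand-in random costs).
rng = np.random.default_rng(6)
costs = rng.random((500, 2))  # per model: (energy cost, functional error), both minimised

def pareto_front(costs):
    """Keep models that no other model dominates on all objectives."""
    keep = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        dominates = np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        keep[i] = not dominates.any()
    return costs[keep]

front = pareto_front(costs)   # candidate Pareto-optimal conductance configurations
print(len(front), "of", len(costs), "models lie on the economy-functionality front")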
Affiliation(s)
- Peter Jedlicka
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus-Liebig-University, Giessen, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University, Frankfurt/Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| | - Alexander D. Bird
- ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus-Liebig-University, Giessen, Germany; Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany
| | - Hermann Cuntz
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany