1
Bhasin BJ, Raymond JL, Goldman MS. Synaptic weight dynamics underlying memory consolidation: Implications for learning rules, circuit organization, and circuit function. Proc Natl Acad Sci U S A 2024;121:e2406010121. PMID: 39365821. PMCID: PMC11474072. DOI: 10.1073/pnas.2406010121.
Abstract
Systems consolidation is a common feature of learning and memory systems, in which a long-term memory initially stored in one brain region becomes persistently stored in another region. We studied the dynamics of systems consolidation in simple circuit architectures with two sites of plasticity, one in an early-learning and one in a late-learning brain area. We show that the synaptic dynamics of the circuit during consolidation of an analog memory can be understood as a temporal integration process, by which transient changes in activity driven by plasticity in the early-learning area are accumulated into persistent synaptic changes at the late-learning site. This simple principle naturally leads to a speed-accuracy tradeoff in systems consolidation and provides insight into how the circuit mitigates the stability-plasticity dilemma of storing new memories while preserving core features of older ones. Furthermore, it imposes two constraints on the circuit. First, the plasticity rule at the late-learning site must stably support a continuum of possible outputs for a given input. We show that this is readily achieved by heterosynaptic but not standard Hebbian rules. Second, to turn off the consolidation process and prevent erroneous changes at the late-learning site, neural activity in the early-learning area must be reset to its baseline activity. We provide two biologically plausible implementations of this reset, which suggest functional roles for core elements of the cerebellar circuit in stabilizing consolidation.
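To make the temporal-integration picture concrete, here is a minimal numerical sketch (our illustration, not the authors' code; the variable names and rate constants are assumptions): a fast, error-driven early site acquires an analog memory, and a slow late site integrates the early site's contribution while the early trace relaxes back toward baseline.

```python
import numpy as np

# Minimal two-site consolidation sketch (illustrative; names and rates assumed).
target = 1.0                      # desired analog memory (e.g., a learned gain)
w_early, w_late = 0.0, 0.0
lr_fast, lr_slow = 0.5, 0.02      # assumed rates: fast early learning, slow consolidation

for _ in range(1000):
    error = target - (w_early + w_late)   # circuit output error for a unit input
    w_early += lr_fast * error            # rapid error-driven plasticity, early site
    w_late += lr_slow * w_early           # late site integrates early-site activity
    w_early -= lr_slow * w_early          # early site decays back toward baseline

print(f"early={w_early:.3f}  late={w_late:.3f}  total={w_early + w_late:.3f}")
```

Lowering lr_slow makes the transfer slower but smoother, the speed-accuracy tradeoff described above; and because the late site integrates only while the early site is away from baseline, resetting early-site activity is what terminates consolidation.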
Affiliation(s)
- Brandon J. Bhasin
- Department of Bioengineering, Stanford University, Stanford, CA 94305
- Center for Neuroscience, University of California, Davis, CA 95616
- Jennifer L. Raymond
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305
- Mark S. Goldman
- Center for Neuroscience, University of California, Davis, CA 95616
- Department of Neurobiology, Physiology, and Behavior, University of California, Davis, CA 95616
- Department of Ophthalmology and Vision Science, University of California, Davis, CA 95616
2
Farrell M, Pehlevan C. Recall tempo of Hebbian sequences depends on the interplay of Hebbian kernel with tutor signal timing. Proc Natl Acad Sci U S A 2024;121:e2309876121. PMID: 39078676. PMCID: PMC11317560. DOI: 10.1073/pnas.2309876121.
Abstract
Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories in neural networks with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. Here, we introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, we derive a general theory that predicts the tempo of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become "automatic." Our theory also captures the impact of changing the tempo of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall.
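The claim that replay tempo is set jointly by the Hebbian kernel and the tutor's timing can be illustrated with a discrete toy version (our construction; the kernel shape, the tutor spacing, and the winner-take-all replay rule are all assumptions):

```python
import numpy as np

# Toy sketch of recall tempo set by an asymmetric Hebbian kernel (our construction).
# A tutor drives units 0..N-1 one-hot, `spacing` steps apart; the kernel potentiates
# pre -> post at positive lags and peaks at lag 3 (an assumed shape).

N, spacing = 30, 1

def kernel(lag):
    # assumed STDP-like kernel: causal, peaked at lag 3
    return float(np.exp(-0.5 * (lag - 3.0) ** 2)) if lag > 0 else 0.0

W = np.zeros((N, N))
for i in range(N):          # presynaptic tutor unit, active at time spacing * i
    for j in range(N):      # postsynaptic unit, active at time spacing * j
        W[j, i] = kernel((j - i) * spacing)

x, replay = 0, [0]          # winner-take-all replay seeded at the first unit
for _ in range(9):
    x = int(np.argmax(W[:, x]))
    replay.append(x)
print(replay)               # [0, 3, 6, ...]: replay tempo reflects the kernel peak
```

Here the kernel peaks at lag 3 and the tutor spacing is 1, so replay advances three units per step; changing the spacing changes the lag at which the kernel samples the sequence and hence the recall tempo.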
Affiliation(s)
- Matthew Farrell
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138
3
Bhasin BJ, Raymond JL, Goldman MS. Synaptic weight dynamics underlying memory consolidation: implications for learning rules, circuit organization, and circuit function. bioRxiv [Preprint] 2024:2024.03.20.586036. PMID: 38585936. PMCID: PMC10996481. DOI: 10.1101/2024.03.20.586036.
Abstract
Systems consolidation is a common feature of learning and memory systems, in which a long-term memory initially stored in one brain region becomes persistently stored in another region. We studied the dynamics of systems consolidation in simple circuit architectures with two sites of plasticity, one in an early-learning and one in a late-learning brain area. We show that the synaptic dynamics of the circuit during consolidation of an analog memory can be understood as a temporal integration process, by which transient changes in activity driven by plasticity in the early-learning area are accumulated into persistent synaptic changes at the late-learning site. This simple principle naturally leads to a speed-accuracy tradeoff in systems consolidation and provides insight into how the circuit mitigates the stability-plasticity dilemma of storing new memories while preserving core features of older ones. Furthermore, it imposes two constraints on the circuit. First, the plasticity rule at the late-learning site must stably support a continuum of possible outputs for a given input. We show that this is readily achieved by heterosynaptic but not standard Hebbian rules. Second, to turn off the consolidation process and prevent erroneous changes at the late-learning site, neural activity in the early-learning area must be reset to its baseline activity. We propose two biologically plausible implementations for this reset that suggest novel roles for core elements of the cerebellar circuit.

Significance Statement: How are memories transformed over time? We propose a simple organizing principle for how long-term memories are moved from an initial to a final site of storage. We show that successful transfer occurs when the late site of memory storage is endowed with synaptic plasticity rules that stably accumulate changes in activity occurring at the early site of memory storage. We instantiate this principle in a simple computational model that is representative of brain circuits underlying a variety of behaviors. The model suggests how a neural circuit can store new memories while preserving core features of older ones, and suggests novel roles for core elements of the cerebellar circuit.
4
Lindsey JW, Litwin-Kumar A. Selective consolidation of learning and memory via recall-gated plasticity. eLife 2024;12:RP90793. PMID: 39023518. PMCID: PMC11257680. DOI: 10.7554/elife.90793.
Abstract
In a variety of species and behavioral contexts, learning and memory formation recruits two neural systems, with initial plasticity in one system being consolidated into the other over time. Moreover, consolidation is known to be selective; that is, some experiences are more likely to be consolidated into long-term memory than others. Here, we propose and analyze a model that captures common computational principles underlying such phenomena. The key component of this model is a mechanism by which a long-term learning and memory system prioritizes the storage of synaptic changes that are consistent with prior updates to the short-term system. This mechanism, which we refer to as recall-gated consolidation, has the effect of shielding long-term memory from spurious synaptic changes, enabling it to focus on reliable signals in the environment. We describe neural circuit implementations of this model for different types of learning problems, including supervised learning, reinforcement learning, and autoassociative memory storage. These implementations involve synaptic plasticity rules modulated by factors such as prediction accuracy, decision confidence, or familiarity. We then develop an analytical theory of the learning and memory performance of the model, in comparison to alternatives relying only on synapse-local consolidation mechanisms. We find that recall-gated consolidation provides significant advantages, substantially amplifying the signal-to-noise ratio with which memories can be stored in noisy environments. We show that recall-gated consolidation gives rise to a number of phenomena that are present in behavioral learning paradigms, including spaced learning effects, task-dependent rates of consolidation, and differing neural representations in short- and long-term pathways.
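A toy version of the gating mechanism (our construction; the dimensions, learning rates, and recall threshold are assumed) shows the selectivity: a fast short-term memory stores everything it sees, while the long-term memory consolidates only patterns the short-term memory already recalls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recall-gated consolidation: one reliable pattern recurs among noise.
D, steps, theta = 200, 1000, 3.0
signal = rng.standard_normal(D)
w_stm = np.zeros(D)                       # fast short-term memory
w_ltm = np.zeros(D)                       # slow long-term memory

for _ in range(steps):
    x = signal if rng.random() < 0.15 else rng.standard_normal(D)
    recall = w_stm @ x / np.sqrt(D)       # STM familiarity with the current pattern
    w_stm = 0.9 * w_stm + 0.5 * x         # fast, leaky short-term storage of everything
    if recall > theta:                    # the recall gate
        w_ltm += 0.1 * x                  # consolidate only already-familiar patterns

def alignment(w):
    return w @ signal / (np.linalg.norm(w) * np.linalg.norm(signal))

print(f"STM alignment with signal: {alignment(w_stm):.2f}")
print(f"LTM alignment with signal: {alignment(w_ltm):.2f}")
```

Reliable patterns recur and therefore pass the gate; one-off noise rarely does, so the long-term weights end up far better aligned with the signal than the short-term ones, the signal-to-noise amplification the theory quantifies.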
Affiliation(s)
- Jack W Lindsey
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
5
Brudner S, Pearson J, Mooney R. Generative models of birdsong learning link circadian fluctuations in song variability to changes in performance. PLoS Comput Biol 2023;19:e1011051. PMID: 37126511. PMCID: PMC10150982. DOI: 10.1371/journal.pcbi.1011051.
Abstract
Learning skilled behaviors requires intensive practice over days, months, or years. Behavioral hallmarks of practice include exploratory variation and long-term improvements, both of which can be impacted by circadian processes. During weeks of vocal practice, the juvenile male zebra finch transforms highly variable and simple song into a stable and precise copy of an adult tutor's complex song. Song variability and performance in juvenile finches also exhibit circadian structure that could influence this long-term learning process. In fact, one influential study reported that juvenile song regresses towards immature performance overnight, while another suggested a more complex pattern of overnight change. However, neither of these studies thoroughly examined how circadian patterns of variability may structure the production of more or less mature songs. Here we relate the circadian dynamics of song maturation to circadian patterns of song variation, leveraging a combination of data-driven approaches. In particular, we analyze juvenile singing in a learned feature space that supports both data-driven measures of song maturity and generative developmental models of song production. These models reveal that circadian fluctuations in variability lead to especially regressive morning variants even without overall overnight regression, and highlight the utility of data-driven generative models for untangling these contributions.
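The paper's key distinction, regressive morning renditions without overnight regression in the underlying learning, can be reproduced in a toy generative model (our construction, not the authors' fitted model; the variability profile is an assumed one):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: song maturity is fixed within the day (no overnight regression), but
# rendition variability follows an assumed circadian profile, widest in the morning.
target, maturity = 1.0, 0.9
hours = rng.uniform(0, 12, 5000)             # time of each rendition in the singing day
sigma = 0.30 - 0.02 * hours                  # assumed variability: morning > evening
songs = maturity + sigma * rng.standard_normal(hours.size)

score = -np.abs(songs - target)              # maturity score: closeness to tutor target
print(f"morning (first 2 h) mean score: {score[hours < 2].mean():.3f}")
print(f"evening (last 2 h) mean score:  {score[hours > 10].mean():.3f}")
```

The same underlying maturity produces worse-scoring morning songs purely through the variance profile, matching the point that circadian fluctuations in variability can masquerade as overnight regression.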
Affiliation(s)
- Samuel Brudner
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
- John Pearson
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
- Department of Biostatistics & Bioinformatics, Duke University, Durham, North Carolina, United States of America
- Richard Mooney
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina, United States of America
6
Tesileanu T, Piasini E, Balasubramanian V. Efficient processing of natural scenes in visual cortex. Front Cell Neurosci 2022;16:1006703. PMID: 36545653. PMCID: PMC9760692. DOI: 10.3389/fncel.2022.1006703.
Abstract
Neural circuits in the periphery of the visual, auditory, and olfactory systems are believed to use limited resources efficiently to represent sensory information by adapting to the statistical structure of the natural environment. This "efficient coding" principle has been used to explain many aspects of early visual circuits including the distribution of photoreceptors, the mosaic geometry and center-surround structure of retinal receptive fields, the excess OFF pathways relative to ON pathways, saccade statistics, and the structure of simple cell receptive fields in V1. We know less about the extent to which such adaptations may occur in deeper areas of cortex beyond V1. We thus review recent developments showing that the perception of visual textures, which depends on processing in V2 and beyond in mammals, is adapted in rats and humans to the multi-point statistics of luminance in natural scenes. These results suggest that central circuits in the visual brain are adapted for seeing key aspects of natural scenes. We conclude by discussing how adaptation to natural temporal statistics may aid in learning and representing visual objects, and propose two challenges for the future: (1) explaining the distribution of shape sensitivity in the ventral visual stream from the statistics of object shape in natural images, and (2) explaining cell types of the vertebrate retina in terms of feature detectors that are adapted to the spatio-temporal structures of natural stimuli. We also discuss how new methods based on machine learning may complement the normative, principles-based approach to theoretical neuroscience.
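As a worked instance of the efficient-coding principle the review builds on (a classic textbook example, not taken from the paper): a neuron with a bounded response conveys the most information about a scalar stimulus when its tuning curve matches the stimulus's cumulative distribution, so that all response levels are used equally often.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classic efficient-coding example: matching a bounded response to the cumulative
# distribution of "natural" intensities equalizes the use of all response levels.
stimuli = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # skewed natural intensities
sorted_s = np.sort(stimuli)
cdf = np.arange(1, sorted_s.size + 1) / sorted_s.size       # empirical CDF = optimal curve

responses = np.interp(stimuli, sorted_s, cdf)               # stimuli through the tuning curve
hist, _ = np.histogram(responses, bins=10, range=(0.0, 1.0))
print(np.round(hist / hist.sum(), 3))                       # ~0.1 everywhere: equalized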
Affiliation(s)
- Tiberiu Tesileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, United States
- Eugenio Piasini
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- Vijay Balasubramanian
- Department of Physics and Astronomy, David Rittenhouse Laboratory, University of Pennsylvania, Philadelphia, PA, United States
- Santa Fe Institute, Santa Fe, NM, United States
7
Wang MB, Halassa MM. Thalamocortical contribution to flexible learning in neural systems. Netw Neurosci 2022;6:980-997. PMID: 36875011. PMCID: PMC9976647. DOI: 10.1162/netn_a_00235.
Abstract
Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental input onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relationship of sensory inputs, actions taken, and environmental context to rewards is ambiguous. The credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain's specialized neural architectures provide efficient solutions. Within this framework, the thalamus with its cortical and basal ganglia interactions serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.
Affiliation(s)
- Mien Brabeeba Wang
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
8
Seenivasan P, Narayanan R. Efficient information coding and degeneracy in the nervous system. Curr Opin Neurobiol 2022;76:102620. PMID: 35985074. PMCID: PMC7613645. DOI: 10.1016/j.conb.2022.102620.
Abstract
Efficient information coding (EIC) is a universal biological framework rooted in the fundamental principle that system responses should match their natural stimulus statistics for maximizing environmental information. Quantitatively assessed through information theory, such adaptation to the environment occurs at all biological levels and timescales. The context dependence of environmental stimuli and the need for stable adaptations make EIC a daunting task. We argue that biological complexity is the principal architect that subserves deft execution of stable EIC. Complexity in a system is characterized by several functionally segregated subsystems that show a high degree of functional integration when they interact with each other. Complex biological systems manifest heterogeneities and degeneracy, wherein structurally different subsystems could interact to yield the same functional outcome. We argue that complex systems offer several choices that effectively implement EIC and homeostasis for each of the different contexts encountered by the system.
Affiliation(s)
- Pavithraa Seenivasan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
9
Strategy updating mediated by specific retrosplenial-parafascicular-basal ganglia networks. Curr Biol 2022;32:3477-3492.e5. DOI: 10.1016/j.cub.2022.06.033.
10
Stowell D. Computational bioacoustics with deep learning: a review and roadmap. PeerJ 2022;10:e13152. PMID: 35341043. PMCID: PMC8944344. DOI: 10.7717/peerj.13152.
Abstract
Animal vocalisations and natural soundscapes are fascinating objects of study, and contain valuable evidence about animal behaviours, populations and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis an important component. Computational bioacoustics has accelerated in recent decades due to the growth of affordable digital sound recording devices, and to huge progress in informatics such as big data, signal processing and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals, but not yet realised. In this paper I perform a review of the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address, in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.
Affiliation(s)
- Dan Stowell
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands
- Naturalis Biodiversity Center, Leiden, The Netherlands
11
Remme MWH, Bergmann U, Alevi D, Schreiber S, Sprekeler H, Kempter R. Hebbian plasticity in parallel synaptic pathways: A circuit mechanism for systems memory consolidation. PLoS Comput Biol 2021;17:e1009681. PMID: 34874938. PMCID: PMC8683039. DOI: 10.1371/journal.pcbi.1009681.
Abstract
Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways, two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are further in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting, as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
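The proposed mechanism compresses into a few lines (our reduction; the replay statistics, rates, and decay terms are assumptions): an indirect pathway already computes the memory, and a parallel direct pathway with a Hebbian rule that correlates shared input with the circuit's output absorbs a linear approximation of that mapping as the original trace fades.

```python
import numpy as np

rng = np.random.default_rng(3)

# Our reduction of the parallel-pathway mechanism: the indirect pathway computes
# y = w_indirect @ x; the direct pathway sees the same input and output, and a
# Hebbian rule (co-activity of x and y) copies the mapping into w_direct.
D, lr = 50, 1e-3
w_indirect = rng.standard_normal(D)      # acquired memory (e.g., hippocampal pathway)
w_original = w_indirect.copy()
w_direct = np.zeros(D)                   # parallel (e.g., neocortical) pathway

for _ in range(5000):
    x = rng.standard_normal(D)           # replayed activity pattern
    y = w_indirect @ x + w_direct @ x    # output driven by both pathways
    w_direct += lr * (y * x - w_direct)  # Hebbian update with weight decay
    w_indirect *= 1 - lr                 # the initial trace slowly fades

cos = w_direct @ w_original / (np.linalg.norm(w_direct) * np.linalg.norm(w_original))
print(f"direct pathway vs original memory: cosine similarity = {cos:.3f}")
```

In expectation the direct pathway converges on the very weight vector the indirect pathway is losing; chaining such stages with progressively slower rates is what yields the power-law forgetting the authors report.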
Affiliation(s)
- Michiel W. H. Remme
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Urs Bergmann
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Denis Alevi
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Susanne Schreiber
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
- Excellence Cluster Science of Intelligence, Berlin, Germany
- Richard Kempter
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
12
Sankar R, Rougier NP, Leblois A. Computational benefits of structural plasticity, illustrated in songbirds. Neurosci Biobehav Rev 2021;132:1183-1196. PMID: 34801257. DOI: 10.1016/j.neubiorev.2021.10.033.
Abstract
The plasticity of nervous systems allows animals to quickly adapt to a changing environment. In particular, the structural plasticity of brain networks is often critical to the development of the central nervous system and the acquisition of complex behaviors. As an example, structural plasticity is central to the development of song-related brain circuits and may be critical for song acquisition in juvenile songbirds. Here, we review current evidence for structural plasticity and its significance from a computational point of view. We start by reviewing evidence for structural plasticity across species, categorizing it along spatial axes as well as along the time course of development. We introduce the vocal learning circuitry in zebra finches, as a useful example of structural plasticity, and use this specific case to explore the possible contributions of structural plasticity to computational models. Finally, we discuss current modeling studies incorporating structural plasticity and the unexplored questions raised by such models.
Affiliation(s)
- Remya Sankar
- Inria Bordeaux Sud-Ouest, Talence, France; Institut des Maladies Neurodégénératives, Université de Bordeaux, Bordeaux, France; Institut des Maladies Neurodégénératives, CNRS, UMR 5293, France; LaBRI, Université de Bordeaux, INP, CNRS, UMR 5800, Talence, France
- Nicolas P Rougier
- Inria Bordeaux Sud-Ouest, Talence, France; Institut des Maladies Neurodégénératives, Université de Bordeaux, Bordeaux, France; Institut des Maladies Neurodégénératives, CNRS, UMR 5293, France; LaBRI, Université de Bordeaux, INP, CNRS, UMR 5800, Talence, France
- Arthur Leblois
- Institut des Maladies Neurodégénératives, Université de Bordeaux, Bordeaux, France; Institut des Maladies Neurodégénératives, CNRS, UMR 5293, France
13
Song learning and plasticity in songbirds. Curr Opin Neurobiol 2021;67:228-239. PMID: 33667874. DOI: 10.1016/j.conb.2021.02.003.
Abstract
Birdsong provides a fascinating system to study both behavioral and neural plasticity. Oscine songbirds learn to sing, exhibiting behavioral plasticity both during and after the song-learning process. As a bird learns, its song progresses from a plastic and highly variable vocalization into a more stereotyped, crystallized song. However, even after crystallization, song plasticity can occur: some species' songs become more stereotyped over time, whereas other species can incorporate new song elements. Alongside the changes in song, songbirds' brains are also plastic. Both song and neural connections change with the seasons in many species, and new neurons can be added to the song system throughout life. In this review, we highlight important research on behavioral and neural plasticity at multiple timescales, from song development in juveniles to lifelong modifications of learned song.
14
Murray JM, Escola GS. Remembrance of things practiced with fast and slow learning in cortical and subcortical pathways. Nat Commun 2020;11:6441. PMID: 33361766. PMCID: PMC7758336. DOI: 10.1038/s41467-020-19788-5.
Abstract
The learning of motor skills unfolds over multiple timescales, with rapid initial gains in performance followed by a longer period in which the behavior becomes more refined, habitual, and automatized. While recent lesion and inactivation experiments have provided hints about how various brain areas might contribute to such learning, their precise roles and the neural mechanisms underlying them are not well understood. In this work, we propose neural- and circuit-level mechanisms by which motor cortex, thalamus, and striatum support motor learning. In this model, the combination of fast cortical learning and slow subcortical learning gives rise to a covert learning process through which control of behavior is gradually transferred from cortical to subcortical circuits, while protecting learned behaviors that are practiced repeatedly against overwriting by future learning. Together, these results point to a new computational role for thalamus in motor learning and, more broadly, provide a framework for understanding the neural basis of habit formation and the automatization of behavior through practice.
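The division of labor can be caricatured with two scalar pathways per skill (our construction; the paper's thalamocortical gating and network structure are compressed away, and all rates are assumed):

```python
import numpy as np

# Toy fast "cortical" / slow "subcortical" learner. Practiced skills drift from
# the fast map into the slow map, freeing the fast map for new learning.
def practice(ctx, target, w_fast, w_slow, trials, lr_fast=0.3, lr_slow=0.01):
    for _ in range(trials):
        err = target - (w_fast[ctx] + w_slow[ctx])
        w_fast[ctx] += lr_fast * err          # fast cortical learning
        w_slow[ctx] += lr_slow * w_fast[ctx]  # slow copy of the fast contribution
        w_fast[ctx] -= lr_slow * w_fast[ctx]  # covert transfer: fast trace fades

w_fast, w_slow = np.zeros(2), np.zeros(2)
practice(0, 1.0, w_fast, w_slow, trials=2000)  # skill A, extensively practiced
practice(1, -1.0, w_fast, w_slow, trials=30)   # skill B, learned quickly afterward

print(f"skill A: output {w_fast[0] + w_slow[0]:+.2f}, carried by slow map ({w_slow[0]:+.2f})")
print(f"skill B: output {w_fast[1] + w_slow[1]:+.2f}, carried by fast map ({w_fast[1]:+.2f})")
```

After extensive practice, skill A's output is carried almost entirely by the slow map, freeing the fast map to acquire skill B, the covert transfer of control described above.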
Affiliation(s)
- James M Murray
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA
- Institute of Neuroscience, University of Oregon, Eugene, OR, 97403, USA
- G Sean Escola
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA
- Department of Psychiatry, Columbia University, New York, NY, 10032, USA
|
15
|
An avian cortical circuit for chunking tutor song syllables into simple vocal-motor units. Nat Commun 2020; 11:5029. [PMID: 33024101 PMCID: PMC7538968 DOI: 10.1038/s41467-020-18732-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Accepted: 08/24/2020] [Indexed: 12/24/2022] Open
Abstract
How are brain circuits constructed to achieve complex goals? The brains of young songbirds develop motor circuits that achieve the goal of imitating a specific tutor song to which they are exposed. Here, we set out to examine how song-generating circuits may be influenced early in song learning by a cortical region (NIf) at the interface between auditory and motor systems. Single-unit recordings reveal that, during juvenile babbling, NIf neurons burst at syllable onsets, with some neurons exhibiting selectivity for particular emerging syllable types. When juvenile birds listen to their tutor, NIf neurons are also activated at tutor syllable onsets, and are often selective for particular syllable types. We examine a simple computational model in which tutor exposure imprints the correct number of syllable patterns as ensembles in an interconnected NIf network. These ensembles are then reactivated during singing to train a set of syllable sequences in the motor network.

Young songbirds learn to imitate their parents' songs. Here, the authors find that, in baby birds, neurons in a brain region at the interface of auditory and motor circuits signal the onsets of song syllables during both tutoring and babbling, suggesting a specific neural mechanism for vocal imitation.
16
Corticobasal ganglia projecting neurons are required for juvenile vocal learning but not for adult vocal plasticity in songbirds. Proc Natl Acad Sci U S A 2019;116:22833-22843. PMID: 31636217. DOI: 10.1073/pnas.1913575116.
Abstract
Birdsong, like human speech, consists of a sequence of temporally precise movements acquired through vocal learning. The learning of such sequential vocalizations depends on the neural function of the motor cortex and basal ganglia. However, it is unknown how the connections between cortical and basal ganglia components contribute to vocal motor skill learning, as mammalian motor cortices serve multiple types of motor action and most experimentally tractable animals do not exhibit vocal learning. Here, we leveraged the zebra finch, a songbird, as an animal model to explore the function of the connectivity between cortex-like (HVC) and basal ganglia (area X), connected by HVC(X) projection neurons with temporally precise firing during singing. After specific ablation of HVC(X) neurons, juvenile zebra finches failed to copy tutored syllable acoustics and developed temporally unstable songs with less sequence consistency. In contrast, HVC(X)-ablated adults did not alter their learned song structure, but generated acoustic fluctuations and responded to auditory feedback disruption with song deterioration, as normal adults do. These results indicate that the corticobasal ganglia input is important for learning the acoustic and temporal aspects of song structure, but not for generating vocal fluctuations that contribute to the maintenance of an already learned vocal pattern.
17
Guerguiev J, Lillicrap TP, Richards BA. Towards deep learning with segregated dendrites. eLife 2017;6:e22901. PMID: 29205151. PMCID: PMC5716677. DOI: 10.7554/elife.22901.
Abstract
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.

Artificial intelligence has made major progress in recent years thanks to a technique known as deep learning, which works by mimicking the human brain. When computers employ deep learning, they learn by using networks made up of many layers of simulated neurons. Deep learning has opened the door to computers with human – or even super-human – levels of skill in recognizing images, processing speech and controlling vehicles. But many neuroscientists are skeptical about whether the brain itself performs deep learning. The patterns of activity that occur in computer networks during deep learning resemble those seen in human brains. But some features of deep learning seem incompatible with how the brain works. Moreover, neurons in artificial networks are much simpler than our own neurons. For instance, in the region of the brain responsible for thinking and planning, most neurons have complex tree-like shapes. Each cell has ‘roots’ deep inside the brain and ‘branches’ close to the surface. By contrast, simulated neurons have a uniform structure. To find out whether networks made up of more realistic simulated neurons could be used to make deep learning more biologically realistic, Guerguiev et al. designed artificial neurons with two compartments, similar to the ‘roots’ and ‘branches’. The network learned to recognize hand-written digits more easily when it had many layers than when it had only a few. This shows that artificial neurons more like those in the brain can enable deep learning. It even suggests that our own neurons may have evolved their shape to support this process. If confirmed, the link between neuronal shape and deep learning could help us develop better brain-computer interfaces. These allow people to use their brain activity to control devices such as artificial limbs. Despite advances in computing, we are still superior to computers when it comes to learning. Understanding how our own brains show deep learning could thus help us develop better, more human-like artificial intelligence in the future.
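A heavily reduced sketch of the core ingredient (our simplification, not the paper's model: the plateau-potential dynamics are collapsed into a feedback-alignment-style update in which error feedback reaches the hidden units' apical term through fixed random weights, avoiding weight transport; sizes, rates, and the task are assumed):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hidden units keep feedforward ("basal") drive and error feedback ("apical")
# as separate terms; feedback arrives via fixed random weights B, not W2.T.
n_in, n_hid, n_out, lr = 8, 16, 2, 0.01
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.5 * rng.standard_normal((n_hid, n_out))    # fixed random feedback pathway
M = rng.standard_normal((n_out, n_in))           # toy task: learn a fixed linear map

losses = []
for _ in range(5000):
    x = rng.standard_normal(n_in)
    h = np.tanh(W1 @ x)                          # basal drive sets the somatic rate
    y = W2 @ h
    err = M @ x - y
    apical = B @ err                             # feedback into apical compartments
    W2 += lr * np.outer(err, h)                  # local delta rule at the output
    W1 += lr * np.outer(apical * (1 - h ** 2), x)  # apical signal gates basal plasticity
    losses.append(err @ err)

print(f"mean squared error, first 100 trials: {np.mean(losses[:100]):.2f}")
print(f"mean squared error, last 100 trials:  {np.mean(losses[-100:]):.2f}")
```

The point of the sketch is structural: no unit ever needs access to another layer's weights, only to a segregated feedback signal, which is the biological constraint the paper's multi-compartment model addresses.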
Affiliation(s)
- Jordan Guerguiev
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Timothy P Lillicrap
- Blake A Richards
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, Canada
18
Dhawale AK, Smith MA, Ölveczky BP. The Role of Variability in Motor Learning. Annu Rev Neurosci 2017;40:479-498.
Abstract
Trial-to-trial variability in the execution of movements and motor skills is ubiquitous and widely considered to be the unwanted consequence of a noisy nervous system. However, recent studies have suggested that motor variability may also be a feature of how sensorimotor systems operate and learn. This view, rooted in reinforcement learning theory, equates motor variability with purposeful exploration of motor space that, when coupled with reinforcement, can drive motor learning. Here we review studies that explore the relationship between motor variability and motor learning in both humans and animal models. We discuss neural circuit mechanisms that underlie the generation and regulation of motor variability and consider the implications that this work has for our understanding of motor learning.
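The exploration-with-reinforcement account sketches naturally as reward-modulated noise (our toy example; the task, rates, and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Exploratory noise is added to a motor command; updates follow the noise
# direction whenever reward beats its recent average. sigma plays the role
# of motor variability.
target = np.array([0.7, -0.3])          # "correct" movement parameters
w = np.zeros(2)                         # current motor command
sigma, baseline = 0.1, None             # exploration level and running reward baseline

for _ in range(2000):
    noise = sigma * rng.standard_normal(2)
    reward = -np.sum((w + noise - target) ** 2)    # closeness of movement to target
    if baseline is None:
        baseline = reward
    w += 0.5 * (reward - baseline) * noise         # reward-modulated update along the noise
    baseline += 0.3 * (reward - baseline)          # running average of recent reward

print(f"learned command {np.round(w, 2)}, target {target}")
```

Raising sigma speeds early learning but leaves more trial-to-trial scatter at convergence, the tension between exploratory variability and performance that the review examines.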
Affiliation(s)
- Ashesh K Dhawale
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts 02138
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Maurice A Smith
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, Massachusetts 02138
- Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138