1. Boscaglia M, Gastaldi C, Gerstner W, Quian Quiroga R. A dynamic attractor network model of memory formation, reinforcement and forgetting. PLoS Comput Biol 2023; 19:e1011727. PMID: 38117859; PMCID: PMC10766193; DOI: 10.1371/journal.pcbi.1011727.
Abstract
Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently of each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile, so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.
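The interplay described above (an online Hebbian rule plus background activity, with unreinforced weights decaying) can be caricatured in a few lines. The following is an illustrative toy sketch, not the authors' model; the network size, rates and learning constants are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                        # neurons
W = np.zeros((N, N))           # recurrent weights
eta, decay = 0.05, 0.001       # assumed learning and forgetting rates
assembly = np.arange(20)       # neurons driven by a frequently shown stimulus

for step in range(200):
    r = 0.1 * rng.random(N)    # background firing everywhere
    r[assembly] += 1.0         # the stimulated assembly fires strongly
    W += eta * np.outer(r, r)  # online Hebbian update
    W *= 1.0 - decay           # unreinforced weights slowly become labile
    np.fill_diagonal(W, 0.0)

inside = W[np.ix_(assembly, assembly)].mean()   # within-assembly strength
outside = W[20:, 20:].mean()                    # strength among the rest
```

Repeated stimulation makes within-assembly weights dominate, while the decay term keeps connections among unstimulated neurons weak, i.e. available for recruitment by other assemblies.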
Affiliation(s)
- Marta Boscaglia
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- School of Psychology and Vision Sciences, University of Leicester, United Kingdom
- Chiara Gastaldi
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Rodrigo Quian Quiroga
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, People’s Republic of China

2. McFarlan AR, Chou CYC, Watanabe A, Cherepacha N, Haddad M, Owens H, Sjöström PJ. The plasticitome of cortical interneurons. Nat Rev Neurosci 2023; 24:80-97. PMID: 36585520; DOI: 10.1038/s41583-022-00663-9.
Abstract
Hebb postulated that, to store information in the brain, assemblies of excitatory neurons coding for a percept are bound together via associative long-term synaptic plasticity. In this view, it is unclear what role, if any, is carried out by inhibitory interneurons. Indeed, some have argued that inhibitory interneurons are not plastic. Yet numerous recent studies have demonstrated that, similar to excitatory neurons, inhibitory interneurons also undergo long-term plasticity. Here, we discuss the many diverse forms of long-term plasticity that are found at inputs to and outputs from several types of cortical inhibitory interneuron, including their plasticity of intrinsic excitability and their homeostatic plasticity. We explain key plasticity terminology, highlight key interneuron plasticity mechanisms, extract overarching principles and point out implications for healthy brain functionality as well as for neuropathology. We introduce the concept of the plasticitome - the synaptic plasticity counterpart to the genome or the connectome - as well as nomenclature and definitions for dealing with this rich diversity of plasticity. We argue that the great diversity of interneuron plasticity rules is best understood at the circuit level, for example as a way of elucidating how the credit-assignment problem is solved in deep biological neural networks.
Affiliation(s)
- Amanda R McFarlan
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Christina Y C Chou
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Airi Watanabe
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Nicole Cherepacha
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Maria Haddad
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Hannah Owens
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- P Jesper Sjöström
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada

3. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. PMID: 36068723; DOI: 10.1113/jp282750.
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has contributed greatly to the understanding of the various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assembly Formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany

4. Gu J, Lim S. Unsupervised learning for robust working memory. PLoS Comput Biol 2022; 18:e1009083. PMID: 35500033; PMCID: PMC9098088; DOI: 10.1371/journal.pcbi.1009083.
Abstract
Working memory is a core component of critical cognitive functions such as planning and decision-making. Persistent activity that lasts long after the stimulus offset has been considered a neural substrate for working memory. Attractor dynamics based on network interactions can successfully reproduce such persistent activity. However, this requires fine-tuning of network connectivity, in particular to form the continuous attractors that have been suggested to encode continuous signals in working memory. Here, we investigate whether a specific form of synaptic plasticity rules can mitigate such tuning problems in two representative working memory models, namely, rate-coded and location-coded persistent activity. We consider two prominent types of plasticity rules, differential plasticity correcting rapid activity changes and homeostatic plasticity regularizing the long-term average of activity, both of which have been proposed to fine-tune the weights in an unsupervised manner. Consistent with the findings of previous work, differential plasticity alone was enough to recover graded persistent activity after perturbations in the connectivity. For the location-coded memory, differential plasticity could also recover persistent activity. However, the recovered pattern can be irregular across stimulus locations when learning is slow or the connectivity perturbation is large. On the other hand, homeostatic plasticity robustly recovers smooth spatial patterns under particular types of synaptic perturbations, such as perturbations in incoming synapses onto the entire or local populations. However, homeostatic plasticity was not effective against perturbations in outgoing synapses from local populations. Instead, combining it with differential plasticity recovers location-coded persistent activity for a broader range of perturbations, suggesting compensation between the two plasticity rules.
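Of the two rule families compared here, the homeostatic one is the easiest to sketch: scale a neuron's weights so that its long-term activity drifts toward a set point. The toy below is a sketch under assumed numbers (the target rate, gain and rate-vs-weight relation are all made up), not the paper's equations:

```python
def homeostatic_step(w, rate, target=4.0, eta=0.01):
    """Multiplicatively scale a weight toward an attainable target rate."""
    return w * (1.0 + eta * (target - rate))

w = 1.0
for _ in range(2000):
    rate = 5.0 * w / (1.0 + w)     # toy saturating rate-vs-weight relation
    w = homeostatic_step(w, rate)  # regularize the long-term average activity
```

Because the update uses only the slow average rate, it corrects perturbations of incoming weights regardless of their direction, which is the robustness property probed in the paper.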
Affiliation(s)
- Jintao Gu
- Neural Science, New York University Shanghai, Shanghai, China
- Sukbin Lim
- Neural Science, New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China

5. Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. PMID: 34175521; DOI: 10.1016/j.conb.2021.05.005.
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA
- Department of Physics, Duke University, USA

6. Berberian N, Ross M, Chartier S. Embodied working memory during ongoing input streams. PLoS One 2021; 16:e0244822. PMID: 33400724; PMCID: PMC7785253; DOI: 10.1371/journal.pone.0244822.
Abstract
Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. The reliance on an internal representation rather than purely on the basis of external stimuli is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of previous inputs. Experimental and theoretical studies have largely invested in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach under which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to resurface the long-standing implication of working memory in flexible cognition.
Affiliation(s)
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada

7. Spontaneous Activity Induced by Gaussian Noise in the Network-Organized FitzHugh-Nagumo Model. Neural Plast 2020; 2020:6651441. PMID: 33299394; PMCID: PMC7707970; DOI: 10.1155/2020/6651441.
Abstract
In this paper, we show dynamical and biological mechanisms of short-term memory (the fixed-point attractor) through the toggle switch in the FitzHugh-Nagumo (FN) model. Firstly, we obtain the bistability conditions, show the effect of Gaussian noise on the toggle switch, and explain the switching mechanism of short-term memory via the mean first passage time (MFPT). Then, we derive a Fokker-Planck equation and illustrate the meaning of the monostable and bistable states for short-term memory. Furthermore, we study the toggle switch under the joint influence of network structure and noise, and show, based on the network mean first passage time (NMFPT), that both play a vital role in the switch. We illustrate that a modest clustering coefficient and moderate noise are necessary to maintain memories. Finally, numerical simulations agree with the analytical results.
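The ingredients named above (FitzHugh-Nagumo dynamics driven by Gaussian noise) can be integrated with a standard Euler-Maruyama scheme. The fragment below is a generic single-unit sketch with illustrative parameter values, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, eps = 0.7, 0.8, 0.08     # classic FN constants (illustrative)
dt, sigma = 0.01, 0.2          # time step and assumed noise amplitude
v, w = -1.0, -0.5              # voltage-like and recovery variables
vs = []
for _ in range(50_000):
    dv = v - v**3 / 3.0 - w
    dw = eps * (v + a - b * w)
    # Euler-Maruyama: deterministic drift plus sqrt(dt)-scaled Gaussian kick
    v += dt * dv + sigma * np.sqrt(dt) * rng.standard_normal()
    w += dt * dw
    vs.append(v)
vs = np.array(vs)
```

In a bistable regime, such noise occasionally kicks the state between the two attractors; the mean first passage time between basins is what quantifies the switching rate of the stored "memory".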

8. Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. PMID: 32009924; PMCID: PMC6978734; DOI: 10.3389/fncom.2019.00097.
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be “learned” by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism is necessary. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slow changing stimuli lead to PA, while fast changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
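The core mechanism, a temporally asymmetric Hebbian rule that strengthens connections from the pattern active at one moment to the pattern active next, can be sketched on binary patterns. This is a toy construction under assumed sizes and rates, not the paper's learning dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 50, 5                                   # neurons, patterns in a sequence
patterns = (rng.random((P, N)) < 0.2).astype(float)  # sparse binary patterns
W = np.zeros((N, N))
eta = 0.1
for mu in range(P - 1):
    pre, post = patterns[mu], patterns[mu + 1]
    W += eta * np.outer(post, pre)             # asymmetric: post(t+1) x pre(t)
```

Adding a symmetric term `eta * np.outer(p, p)` for each pattern would instead stabilize persistent activity, mirroring the slow-versus-fast stimulus distinction drawn in the abstract.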
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL, United States
- Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, Duke University, Durham, NC, United States
- Department of Physics, Duke University, Durham, NC, United States

9. Ocker GK, Doiron B. Training and Spontaneous Reinforcement of Neuronal Assemblies by Spike Timing Plasticity. Cereb Cortex 2019; 29:937-951. PMID: 29415191; PMCID: PMC7963120; DOI: 10.1093/cercor/bhy001.
Abstract
The synaptic connectivity of cortex is plastic, with experience shaping the ongoing interactions between neurons. Theoretical studies of spike timing-dependent plasticity (STDP) have focused on either just pairs of neurons or large-scale simulations. A simple analytic account for how fast spike time correlations affect both microscopic and macroscopic network structure is lacking. We develop a low-dimensional mean field theory for STDP in recurrent networks and show the emergence of assemblies of strongly coupled neurons with shared stimulus preferences. After training, this connectivity is actively reinforced by spike train correlations during the spontaneous dynamics. Furthermore, the stimulus coding by cell assemblies is actively maintained by these internally generated spiking correlations, suggesting a new role for noise correlations in neural coding. Assembly formation has often been associated with firing rate-based plasticity schemes; our theory provides an alternative and complementary framework, where fine temporal correlations and STDP form and actively maintain learned structure in cortical networks.
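The pair-based STDP window underlying such mean-field theories is compact enough to state directly; the constants below are illustrative textbook values, not taken from the paper:

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one spike pair; dt_ms = t_post - t_pre.

    Pre-before-post (dt_ms > 0) potentiates; post-before-pre depresses.
    """
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)
```

Summing this kernel against the fast spike-train cross-covariance is exactly the quantity the low-dimensional theory tracks when predicting whether a connection grows or shrinks.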
Affiliation(s)
- Gabriel Koch Ocker
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
- Allen Institute for Brain Science, Seattle, WA, USA
- Brent Doiron
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA

10. Brea J, Gerstner W. Does computational neuroscience need new synaptic learning paradigms? Curr Opin Behav Sci 2016. DOI: 10.1016/j.cobeha.2016.05.012.

11. Sweeney Y, Clopath C. Emergent spatial synaptic structure from diffusive plasticity. Eur J Neurosci 2016; 45:1057-1067. DOI: 10.1111/ejn.13279.
Affiliation(s)
- Yann Sweeney
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK

12. Giulioni M, Corradi F, Dante V, del Giudice P. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems. Sci Rep 2015; 5:14730. PMID: 26463272; PMCID: PMC4604465; DOI: 10.1038/srep14730.
Abstract
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, making attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: by activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity that supports stimulus-selective attractors. Associative memory develops on chip as the result of coupled stimulus-driven neural activity and the ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
Affiliation(s)
- Federico Corradi
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
- Vittorio Dante
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Paolo del Giudice
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- National Institute for Nuclear Physics, Rome, Italy

13. Ocker GK, Litwin-Kumar A, Doiron B. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses. PLoS Comput Biol 2015; 11:e1004458. PMID: 26291697; PMCID: PMC4546203; DOI: 10.1371/journal.pcbi.1004458.
Abstract
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
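The two-synapse motifs the theory tracks (divergent, convergent, and chain) can be read directly off a weight matrix. A minimal sketch on a random sparse matrix, with purely illustrative size and density:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse random weights
np.fill_diagonal(W, 0.0)

div = (W @ W.T).sum() / N**3    # divergent: two synapses from a shared source
conv = (W.T @ W).sum() / N**3   # convergent: two synapses onto a shared target
chain = (W @ W).sum() / N**3    # chain: j -> k -> i two-step paths
```

In the mean-field reduction it is weighted averages like these, rather than individual weights, that obey the closed set of nonlinear differential equations.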
Affiliation(s)
- Gabriel Koch Ocker
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Brent Doiron
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

14. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun 2015; 6:6922. PMID: 25897632; PMCID: PMC4411307; DOI: 10.1038/ncomms7922.
Abstract
Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.
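One interaction highlighted here, Hebbian homosynaptic growth held in check by rapid heterosynaptic changes at the other synapses of the same neuron, can be sketched as a normalization step. This stand-in is far simpler than the paper's spiking model and uses assumed constants throughout:

```python
import numpy as np

n_in = 20
w = np.full(n_in, 0.5)   # incoming weights of one postsynaptic neuron
active = np.arange(5)    # inputs repeatedly co-active with that neuron
eta = 0.05
w_total = w.sum()        # synaptic resource conserved across the dendrite

for _ in range(100):
    w[active] += eta          # homosynaptic (Hebbian) potentiation
    w *= w_total / w.sum()    # heterosynaptic depression of the rest
```

The normalization acts like fast heterosynaptic plasticity: the stimulated synapses win at the expense of the others, without runaway growth of the total weight.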
Affiliation(s)
- Friedemann Zenke
- School of Computer and Communication Sciences and School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne EPFL 1015, Switzerland
- Everton J. Agnes
- School of Computer and Communication Sciences and School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne EPFL 1015, Switzerland
- Instituto de Física, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, Porto Alegre RS 91501-970, Brazil
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne EPFL 1015, Switzerland

15. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat Commun 2014; 5:5319. PMID: 25395015; DOI: 10.1038/ncomms6319.
Abstract
The architecture of cortex is flexible, permitting neuronal networks to store recent sensory experiences as specific synaptic connectivity patterns. However, it is unclear how these patterns are maintained in the face of the high spike time variability associated with cortex. Here we demonstrate, using a large-scale cortical network model, that realistic synaptic plasticity rules coupled with homeostatic mechanisms lead to the formation of neuronal assemblies that reflect previously experienced stimuli. Further, reverberation of past evoked states in spontaneous spiking activity stabilizes, rather than erases, this learned architecture. Spontaneous and evoked spiking activity contains a signature of learned assembly structures, leading to testable predictions about the effect of recent sensory experience on spike train statistics. Our work outlines requirements for synaptic plasticity rules capable of modifying spontaneous dynamics and shows that this modification is beneficial for stability of learned network architectures.
Affiliation(s)
- Ashok Litwin-Kumar
- [1] Program for Neural Computation, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; [2] Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; [3] Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania 15213, USA
- Brent Doiron
- [1] Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA; [2] Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania 15213, USA
16
Beyeler M, Dutt ND, Krichmar JL. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule. Neural Netw 2013; 48:109-24. [DOI: 10.1016/j.neunet.2013.07.012] [Citation(s) in RCA: 77] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2013] [Revised: 07/28/2013] [Accepted: 07/31/2013] [Indexed: 11/26/2022]
17
Bush D, Philippides A, Husbands P, O'Shea M. Reconciling the STDP and BCM models of synaptic plasticity in a spiking recurrent neural network. Neural Comput 2010; 22:2059-85. [PMID: 20438333 DOI: 10.1162/neco_a_00003-bush] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
Affiliation(s)
- Daniel Bush
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, Sussex, UK
18
Bush D, Philippides A, Husbands P, O'Shea M. Spike-timing dependent plasticity and the cognitive map. Front Comput Neurosci 2010; 4:142. [PMID: 21060719 PMCID: PMC2972746 DOI: 10.3389/fncom.2010.00142] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2010] [Accepted: 09/24/2010] [Indexed: 11/13/2022] Open
Abstract
Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which utilize rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent “theta-coded” temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilizes this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
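The STDP rule discussed in this and the neighboring entries is commonly modeled as an asymmetric exponential learning window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The sketch below uses illustrative amplitudes and time constants, not the hippocampal fit used in the paper.

```python
import numpy as np

# Assumed window parameters, in the range typically reported for hippocampus
A_plus, A_minus = 0.01, 0.0105    # LTP / LTD amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants (ms)

def stdp(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)    # pre before post: LTP
    return -A_minus * np.exp(dt / tau_minus)      # post before pre: LTD

print(stdp(10.0), stdp(-10.0))
```

With a slightly larger depression than potentiation amplitude (as assumed here), random uncorrelated spike pairings produce a small net depression, while reliable pre-before-post pairings yield net potentiation.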
Affiliation(s)
- Daniel Bush
- Department of Physics and Astronomy, University of California Los Angeles Los Angeles, CA, USA
19
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus. PLoS Comput Biol 2010; 6:e1000839. [PMID: 20617201 PMCID: PMC2895637 DOI: 10.1371/journal.pcbi.1000839] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2010] [Accepted: 05/27/2010] [Indexed: 11/19/2022] Open
Abstract
The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain. Changes in the strength of synaptic connections between neurons are believed to mediate processes of learning and memory in the brain. A computational theory of this synaptic plasticity was first provided by Donald Hebb within the context of a more general neural coding mechanism, whereby phase sequences of activity directed by ongoing external and internal dynamics propagate in mutually exciting ensembles of neurons. Empirical evidence for this cell assembly model has been obtained in the hippocampus, where neuronal ensembles encoding for spatial location repeatedly fire in sequence at different phases of the ongoing theta oscillation. To investigate the encoding and reactivation of these dual coded activity patterns, we examine a biologically inspired spiking neural network model of the hippocampus with a novel synaptic plasticity rule. We demonstrate that this allows the rapid development of both symmetric and asymmetric connections between neurons that fire at concurrent or consecutive theta phase respectively. Recall activity, corresponding to both pattern completion and sequence prediction, can subsequently be produced by partial external cues. This allows the reconciliation of two previously disparate classes of hippocampal model and provides a framework for further examination of cell assembly dynamics in spiking neural networks.
20
Abstract
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
Affiliation(s)
- Sandro Romani
- Human Physiology, Università di Roma La Sapienza, Rome 00185, Italy.
21
Abstract
Macaque monkeys were tested on a delayed-match-to-multiple-sample task, with either a limited set of well-trained images (in randomized sequence) or with never-before-seen images. They performed much better with novel images. False positives were mostly limited to catch-trial image repetitions from the preceding trial. This result implies extremely effective one-shot learning, resembling Standing's finding that people detect familiarity for 10,000 once-seen pictures (with 80% accuracy) (Standing, 1973). Familiarity memory may differ essentially from identification, which embeds and generates contextual information. When encountering another person, we can say immediately whether his or her face is familiar. However, it may be difficult for us to identify the same person. To accompany the psychophysical findings, we present a generic neural network model reproducing these behaviors, based on the same conservative Hebbian synaptic plasticity that generates delay activity identification memory. Familiarity becomes the first step toward establishing identification. Adding an inter-trial reset mechanism limits false positives for previous-trial images. The model, unlike previous proposals, relates repetition recognition to enhanced neural activity, as recently observed experimentally in 92% of differential cells in prefrontal cortex, an area directly involved in familiarity recognition. There may be an essential functional difference between enhanced responses to novel versus familiar images: The maximal signal from temporal cortex is for novel stimuli, facilitating additional sensory processing of newly acquired stimuli. The maximal signal for familiar stimuli arising in prefrontal cortex facilitates the formation of selective delay activity, as well as additional consolidation of the memory of the image in an upstream cortical module.
22
Ben Dayan Rubin DD, Fusi S. Long memory lifetimes require complex synapses and limited sparseness. Front Comput Neurosci 2007; 1:7. [PMID: 18946529 PMCID: PMC2525933 DOI: 10.3389/neuro.10.007.2007] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2007] [Accepted: 10/22/2007] [Indexed: 11/13/2022] Open
Abstract
Theoretical studies have shown that memories last longer if the neural representations are sparse, that is, when each neuron is selective for a small fraction of the events creating the memories. Sparseness reduces both the interference between stored memories and the number of synaptic modifications which are necessary for memory storage. Paradoxically, in cortical areas like the inferotemporal cortex, where presumably memory lifetimes are longer than in the medial temporal lobe, neural representations are less sparse. We resolve this paradox by analyzing the effects of sparseness on complex models of synaptic dynamics in which there are metaplastic states with different degrees of plasticity. For these models, memory retention in a large number of synapses across multiple neurons is significantly more efficient in the case of many metaplastic states, that is, for an elevated degree of complexity. In other words, larger brain regions allow memories to be retained for significantly longer times only if the synaptic complexity increases with the total number of synapses. However, the initial memory trace, the one experienced immediately after memory storage, becomes weaker both when the number of metaplastic states increases and when the neural representations become sparser. Such a memory trace must be above a given threshold in order to permit every single neuron to retrieve the information stored in its synapses. As a consequence, if the initial memory trace is reduced because of the increased synaptic complexity, then the neural representations must be less sparse. We conclude that long memory lifetimes allowed by a larger number of synapses require more complex synapses, and hence, less sparse representations, which is what is observed in the brain.
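The intuition that metaplastic states extend memory lifetimes can be illustrated with a crude toy: a binary synapse with several internal levels must be pushed through all of them before its efficacy flips, so later random memories overwrite it more slowly. This cascade-style sketch is an illustration only; the counts, probabilities, and update rule are all assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000      # synapses
p_flip = 0.5    # chance a later memory targets a given synapse for flipping

def trace_after(n_memories, depth):
    """Fraction of synapses still holding memory 1 after n_memories overwrites.
    depth = number of metaplastic levels guarding the efficacy flip."""
    level = np.full(N, depth)            # all start fully committed to memory 1
    for _ in range(n_memories):
        hit = rng.random(N) < p_flip     # synapses hit by this overwrite
        level[hit] -= 1                  # pushed one metaplastic level down
        level = np.maximum(level, 0)     # 0 = efficacy flipped, memory lost
    return (level > 0).mean()

shallow = trace_after(5, depth=1)        # simple binary synapse
deep = trace_after(5, depth=4)           # metaplastic synapse
print(shallow, deep)
```

With a single level, about (1 - p_flip)^5 of the trace survives five overwrites; with four levels a synapse survives unless it is hit four or more times, so the retained fraction is much larger, at the cost of a weaker initial trace in a fuller model.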
23
Barbieri F, Brunel N. Irregular persistent activity induced by synaptic excitatory feedback. Front Comput Neurosci 2007; 1:5. [PMID: 18946527 PMCID: PMC2525938 DOI: 10.3389/neuro.10.005.2007] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2007] [Accepted: 10/10/2007] [Indexed: 11/24/2022] Open
Abstract
Neurophysiological experiments on monkeys have reported highly irregular persistent activity during the performance of an oculomotor delayed-response task. These experiments show that during the delay period the coefficient of variation (CV) of interspike intervals (ISI) of prefrontal neurons is above 1, on average, and larger than during the fixation period. In the present paper, we show that this feature can be reproduced in a network in which persistent activity is induced by excitatory feedback, provided that (i) the post-spike reset is close enough to threshold, and (ii) synaptic efficacies are a non-linear function of the pre-synaptic firing rate. Non-linearity between pre-synaptic rate and effective synaptic strength is implemented by a standard short-term depression mechanism (STD). First, we consider the simplest possible network with excitatory feedback: a fully connected homogeneous network of excitatory leaky integrate-and-fire neurons, using both numerical simulations and analytical techniques. The results are then confirmed in a network with selective excitatory neurons and inhibition. In both cases there is a large range of values of the synaptic efficacies for which the statistics of firing of single cells is similar to experimental data.
24
Brader JM, Senn W, Fusi S. Learning Real-World Stimuli in a Neural Network with Spike-Driven Synaptic Dynamics. Neural Comput 2007; 19:2881-912. [PMID: 17883345 DOI: 10.1162/neco.2007.19.11.2881] [Citation(s) in RCA: 143] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
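The bistable synapse described above can be caricatured with an internal variable that drifts toward one of two stable states and is kicked by presynaptic spikes, with the kick direction set by postsynaptic depolarization. The sketch below is a schematic in that spirit; every constant and the update rule are illustrative assumptions, not the paper's equations.

```python
theta_X = 0.5      # bistability threshold on the internal variable X
alpha = 0.001      # drift per step toward the nearer stable state (0 or 1)
a_up, b_down = 0.1, 0.1  # jumps applied on presynaptic spikes

def update(X, pre_spike, post_depolarized, steps=1):
    """Evolve the internal variable: one optional spike-driven jump, then drift."""
    for _ in range(steps):
        if pre_spike:
            X += a_up if post_depolarized else -b_down
            pre_spike = False
        X += alpha if X > theta_X else -alpha   # drift toward nearest stable state
        X = min(max(X, 0.0), 1.0)               # stable states at 0 and 1
    return X

X = 0.2                                  # start near the depressed state
for _ in range(5):                       # repeated potentiating pairings
    X = update(X, True, True)
X = update(X, False, False, steps=2000)  # no stimulation: drift preserves state
print(X)  # X has crossed threshold and settled in the potentiated state
```

The key property matching the abstract is the last step: once X crosses the threshold, the drift alone consolidates and then preserves the state indefinitely in the absence of stimulation.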
25
Warden MR, Miller EK. The Representation of Multiple Objects in Prefrontal Neuronal Delay Activity. Cereb Cortex 2007; 17 Suppl 1:i41-50. [PMID: 17726003 DOI: 10.1093/cercor/bhm070] [Citation(s) in RCA: 83] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The ability to retain multiple items in short-term memory is fundamental for intelligent behavior, yet little is known about its neural basis. To explore the mechanisms underlying this ability, we trained 2 monkeys to remember a sequence of 2 objects across a short delay. We then recorded the activity of neurons from the lateral prefrontal cortex during task performance and found that most neurons had activity that depended on the identity of both objects while a minority reflected just one object. Further, the activity driven by a particular combination of objects was not a simple addition of the activity elicited by individual objects. Instead, the representation of the first object was altered by the addition of the second object to memory, and the form of this change was not systematically predictable. These results indicate that multiple objects are not stored in separate groups of prefrontal neurons. Rather, they are represented by a single population of neurons in a complex fashion. We also found that the strength of the memory trace associated with each object decayed over time, leading to a relatively stronger representation of more recently seen objects. This is a potential mechanism for representing the temporal order of objects.
Affiliation(s)
- Melissa R Warden
- The Picower Institute for Learning and Memory, RIKEN-MIT Neuroscience Research Center, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
26
Abstract
In a recent experiment, functional magnetic resonance imaging blood oxygen level-dependent (fMRI BOLD) signals were compared in different cortical areas (primary-visual and associative), when subjects were required covertly to name images in two protocols: sequences of images, with and without intervening delays. The amplitude of the BOLD signal in protocols with delay was found to be closer to that without delays in associative areas than in primary areas. The present study provides an exploratory proposal for the identification of the neural activity substrate of the BOLD signal in quasi-realistic networks of spiking neurons, in networks sustaining selective delay activity (associative) and in networks responsive to stimuli, but whose unique stationary state is one of spontaneous activity (primary). A variety of observables are 'recorded' in the network simulations, applying the experimental stimulation protocol. The ratios of the candidate BOLD signals, in the two protocols, are compared in networks with and without delay activity. There are several options for recovering the experimental result in the model networks. One common conclusion is that the distinguishing factor is the presence of delay activity. The effect of NMDAr is marginal. The ultimate quantitative agreement with the experimental results depends on a distinction of the baseline signal level from its value in delay-period spontaneous activity. This may be attributable to the subjects' attention. Modifying the baseline results in a quantitative agreement for the ratios, and provides a definite choice of the candidate signals. The proposed framework produces predictions for the BOLD signal in fMRI experiments, upon modification of the protocol presentation rate and the form of the response function.
Affiliation(s)
- Daniel J Amit
- INFM, Dip di Fisica, Universita' di Roma La Sapienza, Rome, Italy
27
Shafi M, Zhou Y, Quintana J, Chow C, Fuster J, Bodner M. Variability in neuronal activity in primate cortex during working memory tasks. Neuroscience 2007; 146:1082-108. [PMID: 17418956 DOI: 10.1016/j.neuroscience.2006.12.072] [Citation(s) in RCA: 146] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2004] [Revised: 11/22/2006] [Accepted: 12/24/2006] [Indexed: 11/22/2022]
Abstract
Persistent elevated neuronal activity has been identified as the neuronal correlate of working memory. It is generally assumed in the literature and in computational and theoretical models of working memory that memory-cell activity is stable and replicable; however, this assumption may be an artifact of the averaging of data collected across trials, and needs experimental verification. In this study, we introduce a classification scheme to characterize the firing frequency trends of cells recorded from the cortex of monkeys during performance of working memory tasks. We examine the frequency statistics and variability of firing during baseline and memory periods. We also study the behavior of cells on individual trials and across trials, and explore the stability of cellular firing during the memory period. We find that cells from different firing-trend classes possess markedly different statistics. We also find that individual cells show substantial variability in their firing behavior across trials, and that firing frequency also varies markedly over the course of a single trial. Finally, the average frequency distribution is wider, the magnitude of the frequency increase from baseline to memory period smaller, and the magnitude of the frequency decrease larger, than is generally assumed. These results may serve as a guide in the evaluation of current theories of the cortical mechanisms of working memory.
Affiliation(s)
- M Shafi
- Neuropsychiatric Institute, 760 Westwood Plaza, School of Medicine, University of California, Los Angeles, CA 90095-1759, USA
28
Bernacchia A, Amit DJ. Impact of spatiotemporally correlated images on the structure of memory. Proc Natl Acad Sci U S A 2007; 104:3544-9. [PMID: 17360679 PMCID: PMC1805598 DOI: 10.1073/pnas.0611395104] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
How does experience modify what we store in long-term memory? Is it an effect of unattended experience or does it require supervision? What role is played by temporal correlations in the input stream? We present a plastic recurrent network in which memory of faces is initially embedded and then, in the absence of supervision, the presentation of temporally correlated faces drastically changes long-term memory. We model and interpret the results of recent experiments and provide predictions for future testing. The stimuli are frames of a morphing film, interpolating between two memorized faces: If the temporal order of presentation of the frame stimuli is random, then the structure of memory is basically unaffected by synaptic plasticity (memory preservation). If the temporal order is sequential, then all image frames are classified as the same memory (memory collapse). The empirical findings are reproduced in the simulated dynamics of the network, in which the evolution of neural activity is conditioned by the associated synaptic plasticity (learning). The results are captured by theoretical analysis, which leads to predictions concerning the critical parameters of the stimuli; a third phase is identified in which memory is erased (forgetting).
Affiliation(s)
- Alberto Bernacchia
- Dipartimento di Fisica, Istituto Nazionale di Fisica della Materia, and Dipartimento di Fisiologia, Dottorato in Neurofisiologia, Universita di Roma “La Sapienza”, Rome, Italy
- Daniel J. Amit
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
- To whom correspondence should be addressed at: Dipartimento di Fisica, E. Fermi, Universita di Roma I, Piazzale Aldo Moro 5, Rome, Italy. E-mail:
29
Burkitt AN. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. BIOLOGICAL CYBERNETICS 2006; 95:97-112. [PMID: 16821035 DOI: 10.1007/s00422-006-0082-8] [Citation(s) in RCA: 135] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/02/2005] [Accepted: 05/29/2006] [Indexed: 05/08/2023]
Abstract
The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
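The model reviewed above can be simulated in a few lines: the membrane potential leaks toward rest, jumps on each Poisson-distributed input spike, and resets after crossing threshold. The parameter values below are illustrative textbook-style choices, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1e-3            # time step (s)
steps = 10_000         # 1 s of simulated time
tau_m = 20e-3          # membrane time constant (s)
V_rest, V_th, V_reset = -70e-3, -54e-3, -70e-3  # potentials (V)
w_syn = 0.5e-3         # voltage jump per input spike (V)
rate_in = 5000.0       # total presynaptic Poisson rate (Hz)

V = V_rest
spikes = 0
for _ in range(steps):
    n_in = rng.poisson(rate_in * dt)              # input spikes this time step
    V += dt * (V_rest - V) / tau_m + w_syn * n_in # leak toward rest + input
    if V >= V_th:                                 # threshold crossing
        spikes += 1
        V = V_reset                               # post-spike reset
print(spikes)
```

With these assumed parameters the mean drive (about 50 mV above rest at steady state) sits well above threshold, so the neuron fires regularly; lowering `rate_in` toward the balance point moves it into the fluctuation-driven, irregular regime the review analyzes with first passage time methods.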
Affiliation(s)
- A N Burkitt
- The Bionic Ear Institute, 384-388 Albert Street, East Melbourne, VIC 3002, Australia.