1
Triebkorn P, Jirsa V, Dominey PF. Simulating the impact of white matter connectivity on processing time scales using brain network models. Commun Biol 2025; 8:197. PMID: 39920323; PMCID: PMC11806016; DOI: 10.1038/s42003-025-07587-x.
Abstract
The capacity of the brain to process input across temporal scales is exemplified in human narrative, which requires integration of information ranging from words, over sentences, to long paragraphs. This processing has been shown to be distributed in a hierarchy across multiple brain areas, with areas close to sensory cortex processing on a faster time scale than areas in associative cortex. In this study we used reservoir computing with human-derived connectivity to investigate the effect of structural connectivity on time scales across brain regions during a narrative task paradigm. We systematically tested the effect of removing selected fibre bundles (IFO, ILF, MLF, SLF I/II/III, UF, AF) on processing time scales across brain regions. We show that long-distance pathways such as the IFO provide a form of shortcut whereby input-driven activation in visual cortex can directly impact distant frontal areas. To validate our model, we demonstrated a significant correlation of our predicted time-scale ordering with empirical results from the intact/scrambled narrative fMRI task paradigm. This study emphasizes the role of structural connectivity in the brain's temporal processing hierarchies and provides a framework for future research on structure and neural dynamics across cognitive tasks.
Affiliation(s)
- Paul Triebkorn: Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, 13005, France
- Viktor Jirsa: Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, 13005, France
- Peter Ford Dominey: Inserm UMR1093-CAPS, Université Bourgogne Europe, UFR des Sciences du Sport, Campus Universitaire, BP 27877, 21000, Dijon, France
2
Giannakakis E, Vinogradov O, Buendía V, Levina A. Structural influences on synaptic plasticity: The role of presynaptic connectivity in the emergence of E/I co-tuning. PLoS Comput Biol 2024; 20:e1012510. PMID: 39480889; PMCID: PMC11556753; DOI: 10.1371/journal.pcbi.1012510.
Abstract
Cortical neurons are versatile and efficient coding units that develop strong preferences for specific stimulus characteristics. The sharpness of tuning and coding efficiency are hypothesized to be controlled by delicately balanced excitation and inhibition. These observations suggest a need for detailed co-tuning of excitatory and inhibitory populations. Theoretical studies have demonstrated that a combination of plasticity rules can lead to the emergence of excitation/inhibition (E/I) co-tuning in neurons driven by independent, low-noise signals. However, cortical signals are typically noisy and originate from highly recurrent networks, generating correlations in the inputs. This raises questions about the ability of plasticity mechanisms to self-organize co-tuned connectivity in neurons receiving noisy, correlated inputs. Here, we study the emergence of input selectivity and weight co-tuning in a neuron receiving input from a recurrent network via plastic feedforward connections. We demonstrate that while strong noise levels destroy the emergence of co-tuning in the readout neuron, introducing specific structures in the non-plastic pre-synaptic connectivity can re-establish it by generating a favourable correlation structure in the population activity. We further show that structured recurrent connectivity can impact the statistics in fully plastic recurrent networks, driving the formation of co-tuning in neurons that do not receive direct input from other areas. Our findings indicate that the network dynamics created by simple, biologically plausible structural connectivity patterns can enhance the ability of synaptic plasticity to learn input-output relationships in higher brain areas.
Affiliation(s)
- Emmanouil Giannakakis: Department of Computer Science, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Oleg Vinogradov: Department of Computer Science, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Victor Buendía: Department of Computer Science, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Anna Levina: Department of Computer Science, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
3
Tiddia G, Sergi L, Golosio B. Theoretical framework for learning through structural plasticity. Phys Rev E 2024; 110:044311. PMID: 39562962; DOI: 10.1103/physreve.110.044311.
Abstract
A growing body of research indicates that structural plasticity mechanisms are crucial for learning and memory consolidation. Starting from a simple phenomenological model, we exploit a mean-field approach to develop a theoretical framework of learning through this kind of plasticity that accounts for several features of the connectivity and activity patterns of biological neural networks, including probability distributions of neuron firing rates, selectivity of single-neuron responses to multiple stimuli, probabilistic connection rules, and noisy stimuli. Most importantly, it describes the effects of stabilization, pruning, and reorganization of synaptic connections. We use this framework to compute relevant quantities that characterize the learning and memory capabilities of the neuronal network during training and testing as the number of training patterns and other model parameters vary, and we compare the results with those obtained through simulations with firing-rate-based neuronal network models.
4
Liu L, Wang H, Xing Y, Zhang Z, Zhang Q, Dong M, Ma Z, Cai L, Wang X, Tang Y. Dose-response relationship between computerized cognitive training and cognitive improvement. NPJ Digit Med 2024; 7:214. PMID: 39147783; PMCID: PMC11327304; DOI: 10.1038/s41746-024-01210-9.
Abstract
Although computerized cognitive training (CCT) is an effective digital intervention for cognitive impairment, its dose-response relationship is understudied. This retrospective cohort study explores the association between training dose and cognitive improvement to find the optimal CCT dose. From 2017 to 2022, 8,709 participants with subjective cognitive decline, mild cognitive impairment, and mild dementia were analyzed. CCT exposure varied in daily dose and frequency, with cognitive improvement measured weekly using Cognitive Index. A mixed-effects model revealed significant Cognitive Index increases across most dose groups before reaching the optimal dose. For participants under 60 years, the optimal dose was 25 to <30 min per day for 6 days a week. For those 60 years or older, it was 50 to <55 min per day for 6 days a week. These findings highlight a dose-dependent effect in CCT, suggesting age-specific optimal dosing for cognitive improvement.
Affiliation(s)
- Liyang Liu: Department of Neurology & Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Neurodegenerative Laboratory of Ministry of Education of the People's Republic of China, Beijing, China
- Haibo Wang: Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, 100191, Beijing, China; Key Laboratory of Epidemiology of Major Diseases (Peking University), Ministry of Education, 38 Xueyuan St, Haidian District, 100191, Beijing, China
- Yi Xing: Department of Neurology & Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Neurodegenerative Laboratory of Ministry of Education of the People's Republic of China, Beijing, China
- Ziheng Zhang: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Qingge Zhang: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Ming Dong: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Zhujiang Ma: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Longjun Cai: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Xiaoyi Wang: Beijing Wispirit Technology Co., Ltd., Beijing, China
- Yi Tang: Department of Neurology & Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Neurodegenerative Laboratory of Ministry of Education of the People's Republic of China, Beijing, China
5
Kong LW, Brewer GA, Lai YC. Reservoir-computing based associative memory and itinerancy for complex dynamical attractors. Nat Commun 2024; 15:4840. PMID: 38844437; PMCID: PMC11156990; DOI: 10.1038/s41467-024-49190-4.
Abstract
Traditional neural network models of associative memories were used to store and retrieve static patterns. We develop reservoir-computing based memories for complex dynamical attractors, under two common recalling scenarios in neuropsychology: location-addressable with an index channel and content-addressable without such a channel. We demonstrate that, for location-addressable retrieval, a single reservoir computing machine can memorize a large number of periodic and chaotic attractors, each retrievable with a specific index value. We articulate control strategies to achieve successful switching among the attractors, unveil the mechanism behind failed switching, and uncover various scaling behaviors between the number of stored attractors and the reservoir network size. For content-addressable retrieval, we exploit multistability with cue signals, where the stored attractors coexist in the high-dimensional phase space of the reservoir network. As the length of the cue signal increases through a critical value, a high success rate can be achieved. The work provides foundational insights into developing long-term memories and itinerancy for complex dynamical patterns.
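The location-addressable scheme can be illustrated with a minimal echo-state sketch (not the authors' code): two periodic patterns are stored in one reservoir, each tagged by a constant value on an index channel, and a ridge-regression readout learns next-step prediction of the indexed signal. Reservoir size, index values, and the regularization strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300            # reservoir size (assumption; the paper varies this)
T = 1000           # driving steps per stored pattern

# Two periodic "attractors" to store, each tagged by an index value.
t = np.arange(T)
patterns = [np.sin(2 * np.pi * t / 40), np.sin(2 * np.pi * t / 25)]
indices = [-0.5, 0.5]

# Random reservoir scaled to spectral radius < 1 (echo-state property).
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=(N, 2))  # inputs: [signal, index]

# Drive the reservoir with each indexed pattern and collect states.
states, targets = [], []
for sig, idx in zip(patterns, indices):
    x = np.zeros(N)
    for k in range(T - 1):
        u = np.array([sig[k], idx])
        x = np.tanh(W @ x + W_in @ u)
        if k > 100:                 # discard washout transient
            states.append(x.copy())
            targets.append(sig[k + 1])

# Ridge-regression readout: predict the next signal value from the state.
X = np.array(states)
y = np.array(targets)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print(f"normalized RMSE of indexed readout: {nrmse:.4f}")
```

A low normalized error indicates that a single readout, conditioned on the index channel, represents both stored patterns; closed-loop retrieval and attractor switching, as studied in the paper, require feeding the readout back as input.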
Affiliation(s)
- Ling-Wei Kong: Department of Computational Biology, Cornell University, Ithaca, New York, USA; School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona, USA
- Gene A Brewer: Department of Psychology, Arizona State University, Tempe, Arizona, USA
- Ying-Cheng Lai: School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona, USA; Department of Physics, Arizona State University, Tempe, Arizona, USA
6
Schmid D, Jarvers C, Neumann H. Canonical circuit computations for computer vision. Biol Cybern 2023; 117:299-329. PMID: 37306782; PMCID: PMC10600314; DOI: 10.1007/s00422-023-00966-9.
Abstract
Advanced computer vision mechanisms have been inspired by neuroscientific findings. However, with the focus on improving benchmark achievements, technical solutions have been shaped by application and engineering constraints. This includes the training of neural networks, which led to the development of feature detectors optimally suited to the application domain. However, the limitations of such approaches motivate the need to identify computational principles, or motifs, in biological vision that can enable further foundational advances in machine vision. We propose to utilize structural and functional principles of neural systems that have been largely overlooked. They potentially provide new inspiration for computer vision mechanisms and models. Recurrent feedforward, lateral, and feedback interactions characterize general principles underlying processing in mammals. We derive a formal specification of core computational motifs that utilize these principles. These are combined to define model mechanisms for visual shape and motion processing. We demonstrate how such a framework can be adopted to run on neuromorphic brain-inspired hardware platforms and can be extended to automatically adapt to environment statistics. We argue that the identified principles and their formalization inspire sophisticated computational mechanisms with improved explanatory scope. These and other elaborated, biologically inspired models can be employed to design computer vision solutions for different tasks, and they can be used to advance neural network architectures of learning.
Affiliation(s)
- Daniel Schmid: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Christian Jarvers: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
- Heiko Neumann: Institute for Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
7
Chen R, Vakilna YS, Lassers SB, Tang WC, Brewer G. Hippocampal network axons respond to patterned theta burst stimulation with lower activity of initially higher spike train similarity from EC to DG and later similarity of axons from CA1 to EC. J Neural Eng 2023; 20:056004. PMID: 37666242; DOI: 10.1088/1741-2552/acf68a.
Abstract
Objective. Decoding memory functions for each hippocampal subregion requires an extensive understanding of how each hippocampal subnetwork processes input stimuli. Theta burst stimulation (TBS) recapitulates natural brain stimuli and potentiates synapses in hippocampal circuits. TBS is typically applied to a bundle of axons to measure the immediate response in a downstream subregion such as the cornu ammonis 1 (CA1). Yet little is known about network processing in response to stimulation, especially because individual axonal transmission between subregions is not accessible. Approach. To address these limitations, we reverse engineered the hippocampal network on a micro-electrode array partitioned by a MEMS four-chambered device with interconnecting microfluidic tunnels. The micro-tunnels allowed monitoring of single-axon transmission, which is inaccessible in slices or in vivo. The four chambers were plated separately with entorhinal cortex (EC), dentate gyrus (DG), CA1, and CA3 neurons. The patterned TBS was delivered to the EC hippocampal gateway. Evoked spike-pattern similarity in each subregion was quantified with the Jaccard distance metric on spike timing. Main results. We found that each network subregion produced unique axonal responses to different stimulation patterns. Single-site and multi-site stimulation caused distinct information routing of axonal spikes in the network. The most spatially similar output, at axons from CA3 to CA1, reflected the auto-association within CA3 recurrent networks. Moreover, the spike-pattern similarities shifted from high levels for axons to and from DG at 0.2 s repeat stimuli to greater similarity in axons to and from CA1 for repetitions at 10 s intervals. This time-dependent response suggested that CA3 encoded temporal information and that axons transmitted the information to CA1. Significance. Our design and interrogation approach provide first insights into differences in information transmission between the four subregions of the structured hippocampal network, and into the dynamic pattern variations in response to stimulation at the subregional level that achieve probabilistic pattern separation and novelty detection.
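The Jaccard distance on spike timing mentioned above can be sketched as follows: spike times are binned into fixed-width windows, binarized, and compared as sets of active bins. The 1 ms bin width and the binarization step are assumptions for illustration; the paper's exact metric may differ.

```python
import numpy as np

def jaccard_spike_distance(spikes_a, spikes_b, t_max, bin_ms=1.0):
    """Jaccard distance between two spike trains after binning spike
    times (ms) into fixed-width windows: 0 = identical active bins,
    1 = no shared active bins."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    a = np.histogram(spikes_a, bins=edges)[0] > 0
    b = np.histogram(spikes_b, bins=edges)[0] > 0
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0                       # two empty trains
    inter = np.logical_and(a, b).sum()
    return 1.0 - inter / union

# Identical trains give distance 0; disjoint trains give distance 1.
print(jaccard_spike_distance([5, 20, 40], [5, 20, 40], t_max=50))  # → 0.0
print(jaccard_spike_distance([5, 20], [30, 45], t_max=50))         # → 1.0
```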
Affiliation(s)
- Ruiyi Chen: Department of Biomedical Engineering, University of California, Irvine, CA 92697, United States of America
- Yash Shashank Vakilna: Department of Biomedical Engineering, University of California, Irvine, CA 92697, United States of America; Texas Institute of Restorative Neurotechnologies (TIRN), The University of Texas Health Science Center (UTHealth), Houston, TX 77030, United States of America
- Samuel Brandon Lassers: Department of Biomedical Engineering, University of California, Irvine, CA 92697, United States of America
- William C Tang: Department of Biomedical Engineering, University of California, Irvine, CA 92697, United States of America; Department of Biomedical Engineering, National Taiwan University, Taipei 106319, Taiwan (ROC)
- Gregory Brewer: Department of Biomedical Engineering, University of California, Irvine, CA 92697, United States of America; Center for Neuroscience of Learning and Memory & MIND Center, University of California, Irvine, CA 92697, United States of America
8
Bell MK, Lee CT, Rangamani P. Spatiotemporal modelling reveals geometric dependence of AMPAR dynamics on dendritic spine morphology. J Physiol 2023; 601:3329-3350. PMID: 36326020; DOI: 10.1113/jp283407.
Abstract
The modification of neural circuits depends on the strengthening and weakening of synaptic connections. Synaptic strength is often correlated with the density of the ionotropic glutamatergic AMPA receptors (AMPARs; α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors) at the postsynaptic density (PSD). While AMPAR density is known to change through complex biological signalling cascades, the effects of geometric factors such as dendritic spine shape, size and curvature remain poorly understood. In this work, we developed a deterministic, spatiotemporal model to study the dynamics of AMPARs during long-term potentiation (LTP). This model includes a minimal set of biochemical events that represent the upstream signalling events, trafficking of AMPARs to and from the PSD, lateral diffusion in the plane of the spine membrane, and the presence of an extrasynaptic AMPAR pool. Using idealized and realistic spine geometries, we show that the dynamics and increase of bound AMPARs at the PSD depend on a combination of endo- and exocytosis, membrane diffusion, the availability of free AMPARs and intracellular signalling interactions. We also found non-monotonic relationships between spine volume and the change in AMPARs at the PSD, suggesting that spines restrict changes in AMPARs to optimize resources and prevent runaway potentiation. KEY POINTS: Synaptic plasticity involves dynamic biochemical and physical remodelling of small protrusions called dendritic spines along the dendrites of neurons. Proper synaptic functionality within these spines requires changes in receptor number at the synapse, which has implications for downstream neural functions, such as learning and memory formation. In addition to being signalling subcompartments, spines also have unique morphological features that can play a role in regulating receptor dynamics on the synaptic surface.
We have developed a spatiotemporal model that couples biochemical signalling and receptor trafficking modalities in idealized and realistic spine geometries to investigate the role of biochemical and biophysical factors in synaptic plasticity. Using this model, we highlight the importance of spine size and shape in regulating bound AMPA receptor dynamics that govern synaptic plasticity, and predict how spine shape might act to reset synaptic plasticity as a built-in resource optimization and regulation tool.
Affiliation(s)
- Miriam K Bell: Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California, USA
- Christopher T Lee: Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California, USA
- Padmini Rangamani: Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California, USA
9
Lamberti M, Tripathi S, van Putten MJAM, Marzen S, le Feber J. Prediction in cultured cortical neural networks. PNAS Nexus 2023; 2:pgad188. PMID: 37383023; PMCID: PMC10299080; DOI: 10.1093/pnasnexus/pgad188.
Abstract
Theory suggests that networks of neurons may predict their input. Prediction may underlie most aspects of information processing and is believed to be involved in motor and cognitive control and decision-making. Retinal cells have been shown to be capable of predicting visual stimuli, and there is some evidence for prediction of input in the visual cortex and hippocampus. However, there is no proof that the ability to predict is a generic feature of neural networks. We investigated whether random in vitro neuronal networks can predict stimulation, and how prediction is related to short- and long-term memory. To answer these questions, we applied two different stimulation modalities. Focal electrical stimulation has been shown to induce long-term memory traces, whereas global optogenetic stimulation did not. We used mutual information to quantify how much activity recorded from these networks reduces the uncertainty of upcoming stimuli (prediction) or recent past stimuli (short-term memory). Cortical neural networks did predict future stimuli, with the majority of all predictive information provided by the immediate network response to the stimulus. Interestingly, prediction strongly depended on short-term memory of recent sensory inputs during focal as well as global stimulation. However, prediction required less short-term memory during focal stimulation. Furthermore, the dependency on short-term memory decreased during 20 h of focal stimulation, when long-term connectivity changes were induced. These changes are fundamental for long-term memory formation, suggesting that besides short-term memory the formation of long-term memory traces may play a role in efficient prediction.
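The mutual-information quantity used above can be sketched for discrete stimulus and response sequences via their joint histogram. The toy binary sequences are illustrative; the study's estimators for spike data are more involved.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (bits) between two discrete sequences,
    estimated from their joint and marginal histograms."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
    px = {v: np.mean(x == v) for v in set(x.tolist())}
    py = {v: np.mean(y == v) for v in set(y.tolist())}
    mi = 0.0
    for (xi, yi), c in joint.items():
        p = c / n
        mi += p * np.log2(p / (px[xi] * py[yi]))
    return mi

# A response that copies a balanced binary stimulus carries 1 bit about it.
stim = [0, 1, 0, 1, 0, 1, 0, 1]
resp = stim[:]
print(round(mutual_information(stim, resp), 3))  # → 1.0
```

In the study's terms, high mutual information between activity and upcoming stimuli indicates prediction, and between activity and recent past stimuli indicates short-term memory.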
Affiliation(s)
- Martina Lamberti: Department of Clinical Neurophysiology, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
- Shiven Tripathi: Department of Electrical Engineering, Indian Institute of Technology, Kanpur 208016, India
- Michel J A M van Putten: Department of Clinical Neurophysiology, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
- Sarah Marzen: W. M. Keck Science Department, Pitzer, Scripps, and Claremont McKenna College, Claremont, CA 91711, USA
- Joost le Feber: Department of Clinical Neurophysiology, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
10
Chauhan K, Khaledi-Nasab A, Neiman AB, Tass PA. Dynamics of phase oscillator networks with synaptic weight and structural plasticity. Sci Rep 2022; 12:15003. PMID: 36056151; PMCID: PMC9440105; DOI: 10.1038/s41598-022-19417-9.
Abstract
We study the dynamics of Kuramoto oscillator networks with two distinct adaptation processes, one varying the coupling strengths and the other altering the network structure. Such systems model certain networks of oscillatory neurons where the neuronal dynamics, synaptic weights, and network structure interact with and shape each other. We model synaptic weight adaptation with spike-timing-dependent plasticity (STDP) that runs on a longer time scale than neuronal spiking. Structural changes that include addition and elimination of contacts occur at yet a longer time scale than the weight adaptations. First, we study the steady-state dynamics of Kuramoto networks that are bistable and can settle in synchronized or desynchronized states. To compare the impact of adding structural plasticity, we contrast the network with only STDP to one with a combination of STDP and structural plasticity. We show that the inclusion of structural plasticity optimizes the synchronized state of a network by allowing for synchronization with fewer links than a network with STDP alone. With non-identical units in the network, the addition of structural plasticity leads to the emergence of correlations between the oscillators' natural frequencies and node degrees. In the desynchronized regime, the structural plasticity decreases the number of contacts, leading to a sparse network. In this way, adding structural plasticity strengthens both synchronized and desynchronized states of a network. Second, we use desynchronizing coordinated reset stimulation and synchronizing periodic stimulation to induce desynchronized and synchronized states, respectively. Our findings indicate that a network with a combination of STDP and structural plasticity may require stronger and longer stimulation to switch between the states than a network with STDP only.
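The baseline dynamics the paper builds on can be sketched with a plain Kuramoto network (fixed all-to-all coupling, no STDP or structural plasticity): above a critical coupling strength, the oscillators synchronize, which the order parameter r makes visible. The network size, frequency spread, and coupling value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 100, 0.01, 5000
omega = rng.normal(0.0, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases
K = 4.0                                  # coupling, well above threshold

for _ in range(steps):
    # Kuramoto update: each phase is pulled toward the mean field,
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)

# Order parameter r in [0, 1]; r near 1 indicates synchronization.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```

In the paper, the coupling matrix is additionally reshaped by STDP (on a slower time scale) and by link addition/elimination (on a slower scale still), which shifts where the synchronized and desynchronized states sit.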
Affiliation(s)
- Kanishk Chauhan: Department of Physics and Astronomy, Ohio University, Athens, OH, 45701, USA; Neuroscience Program, Ohio University, Athens, OH, 45701, USA
- Ali Khaledi-Nasab: Department of Neurosurgery, Stanford University, Stanford, CA, 94305, USA
- Alexander B Neiman: Department of Physics and Astronomy, Ohio University, Athens, OH, 45701, USA; Neuroscience Program, Ohio University, Athens, OH, 45701, USA
- Peter A Tass: Department of Neurosurgery, Stanford University, Stanford, CA, 94305, USA
11
Bonilla-Quintana M, Rangamani P. Can biophysical models of dendritic spines be used to explore synaptic changes associated with addiction? Phys Biol 2022; 19. PMID: 35508164; DOI: 10.1088/1478-3975/ac6cbe.
Abstract
Effective treatments that prevent or reduce drug relapse vulnerability should be developed to relieve the high burden of drug addiction on society. This will only be possible by enhancing the understanding of the molecular mechanisms underlying the neurobiology of addiction. Recent experimental data have shown that dendritic spines, small protrusions from the dendrites that receive excitatory input, of spiny neurons in the nucleus accumbens exhibit morphological changes during drug exposure and withdrawal. Moreover, these changes relate to the characteristic drug-seeking behavior of addiction. However, due to the complexity of the dendritic spines, we do not yet fully understand the processes underlying their structural changes in response to different inputs. We propose that biophysical models can enhance the current understanding of these processes by incorporating different, and sometimes, discrepant experimental data to identify the shared underlying mechanisms and generate experimentally testable hypotheses. This review aims to give an up-to-date report on biophysical models of dendritic spines, focusing on those models that describe their shape changes, which are well-known to relate to learning and memory. Moreover, it examines how these models can enhance our understanding of the effect of the drugs and the synaptic changes during withdrawal, as well as during neurodegenerative disease progression such as Alzheimer's disease.
Affiliation(s)
- Mayte Bonilla-Quintana: Department of Mechanical and Aerospace Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, California, 92093-0021, United States
- Padmini Rangamani: Department of Mechanical and Aerospace Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, California, 92093-0021, United States
12
|
Triche A, Maida AS, Kumar A. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks. Neural Netw 2022; 151:16-33. [DOI: 10.1016/j.neunet.2022.03.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 03/08/2022] [Accepted: 03/14/2022] [Indexed: 10/18/2022]
13
Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. PMID: 35002599; PMCID: PMC8740282; DOI: 10.3389/fnins.2021.757790.
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the number resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. 
For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
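The mean- and variance-preserving discretization this abstract argues for can be sketched as follows (an illustrative paraphrase, not the authors' code; the weight distribution parameters and the four-level grid are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(loc=0.1, scale=0.01, size=10_000)  # reference high-resolution weights

def discretize_preserving_moments(w, levels):
    """Snap each weight to the nearest of `levels` grid values, then rescale
    so that the mean and variance of the weight distribution are preserved."""
    grid = np.linspace(w.min(), w.max(), levels)
    wq = grid[np.abs(w[:, None] - grid[None, :]).argmin(axis=1)]
    # Affine rescaling restores the first two moments of the original weights,
    # and hence of the summed synaptic input current to each neuron.
    return (wq - wq.mean()) / wq.std() * w.std() + w.mean()

wq = discretize_preserving_moments(w, levels=4)
```

The rescaling step is what distinguishes this from a naive discretization: it keeps the first two moments of the total synaptic input, which is the condition the abstract identifies as sufficient to leave the firing statistics unaffected.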
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

14
Eqlimi E, Bockstael A, De Coensel B, Schönwiesner M, Talsma D, Botteldooren D. EEG Correlates of Learning From Speech Presented in Environmental Noise. Front Psychol 2020; 11:1850. [PMID: 33250798 PMCID: PMC7676901 DOI: 10.3389/fpsyg.2020.01850] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 03/13/2020] [Accepted: 07/06/2020] [Indexed: 01/07/2023] Open
Abstract
How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted during listening to a target speech, especially in a learning context. This paper investigates the neural correlates of learning from the speech presented in a noisy environment using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to the lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length embedded in different types of realistic background noise were presented to participants who were asked to focus on the lectures. As background noise, multi-talker babble, continuous highway, and fluctuating traffic sounds were used. After the second task, a written exam was taken to quantify the amount of information that participants have acquired and retained from the lectures. In addition to various power spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce these dimensions, a principal component analysis (PCA) was applied to the different listening conditions resulting in the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of extracted principal components, showing their dependence on listening condition and type of background sound. 
Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores, using both linear fixed- and mixed-effect modeling. The results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several components over specific brain regions than by knowing the background noise type. These components were linked to deterioration of attention, speech envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
Affiliation(s)
- Ehsan Eqlimi
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Annelies Bockstael
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- Erasmushogeschool Brussel, Brussels, Belgium
- Bert De Coensel
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- ASAsense, Bruges, Belgium
- Marc Schönwiesner
- Faculty of Biosciences, Pharmacy and Psychology, Institute of Biology, University of Leipzig, Leipzig, Germany
- International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Montreal, QC, Canada
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Dick Botteldooren
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium

15
Giannakakis E, Han CE, Weber B, Hutchings F, Kaiser M. Towards simulations of long-term behavior of neural networks: Modeling synaptic plasticity of connections within and between human brain regions. Neurocomputing 2020; 416:38-44. [PMID: 33250573 PMCID: PMC7598092 DOI: 10.1016/j.neucom.2020.01.050] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 02/06/2023]
Abstract
Simulations of neural networks can be used to study the direct effect of internal or external changes on brain dynamics. However, some changes are not immediate but occur on the timescale of weeks, months, or years. Examples include effects of strokes, surgical tissue removal, or traumatic brain injury but also gradual changes during brain development. Simulating network activity over a long time, even for a small number of nodes, is a computational challenge. Here, we model a coupled network of human brain regions with a modified Wilson-Cowan model representing dynamics for each region and with synaptic plasticity adjusting connection weights within and between regions. Using strategies ranging from different models for plasticity, vectorization and a different differential equation solver setup, we achieved one second runtime for one second biological time.
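The kind of model this entry describes, a network of brain regions each governed by Wilson-Cowan dynamics, can be sketched with a vectorized Euler integration (the parameter values, random coupling matrix, and plain Euler solver here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan_step(E, I, W, dt=1e-3, tau_e=0.01, tau_i=0.02,
                      w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0, P=1.0):
    """One vectorized Euler step for all regions at once. E and I hold the
    excitatory/inhibitory firing rates per region; W couples the excitatory
    populations between regions (illustrative parameter values)."""
    ext = W @ E  # long-range input from other regions' excitatory populations
    dE = (-E + sigmoid(w_ee * E - w_ei * I + ext + P)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
    return E + dt * dE, I + dt * dI

# Simulate one second of activity for a small 5-region network.
n = 5
rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.5, size=(n, n))
np.fill_diagonal(W, 0.0)
E, I = np.full(n, 0.1), np.full(n, 0.1)
for _ in range(1000):
    E, I = wilson_cowan_step(E, I, W)
```

Updating all regions in one array operation per step is the kind of vectorization the abstract credits for reaching real-time performance; the remaining cost is the sequential time loop.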
Affiliation(s)
- Emmanouil Giannakakis
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Cheol E Han
- Department of Electronics and Information Engineering, Korea University, Sejong, Republic of Korea
- Bernd Weber
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Germany
- Frances Hutchings
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Marcus Kaiser
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Institute of Neuroscience, Newcastle University, the Henry Wellcome Building, Newcastle upon Tyne NE2 4HH, United Kingdom
- Department of Functional Neurosurgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China

16
Moreno A. Molecular mechanisms of forgetting. Eur J Neurosci 2020; 54:6912-6932. [DOI: 10.1111/ejn.14839] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 12/31/2019] [Revised: 04/23/2020] [Accepted: 05/18/2020] [Indexed: 11/30/2022]
Affiliation(s)
- Andrea Moreno
- Danish Institute of Translational Neuroscience (DANDRITE), Aarhus University, Aarhus C, Denmark

17
Krüppel S, Tetzlaff C. The self-organized learning of noisy environmental stimuli requires distinct phases of plasticity. Netw Neurosci 2020; 4:174-199. [PMID: 32166207 PMCID: PMC7055647 DOI: 10.1162/netn_a_00118] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 07/31/2019] [Accepted: 12/09/2019] [Indexed: 11/25/2022] Open
Abstract
Along sensory pathways, representations of environmental stimuli become increasingly sparse and expanded. If additionally the feed-forward synaptic weights are structured according to the inherent organization of stimuli, the increase in sparseness and expansion leads to a reduction of sensory noise. However, it is unknown how the synapses in the brain form the required structure, especially given the omnipresent noise of environmental stimuli. Here, we employ a combination of synaptic plasticity and intrinsic plasticity—adapting the excitability of each neuron individually—and present stimuli with an inherent organization to a feed-forward network. We observe that intrinsic plasticity maintains the sparseness of the neural code and thereby allows synaptic plasticity to learn the organization of stimuli in low-noise environments. Nevertheless, even high levels of noise can be handled after a subsequent phase of readaptation of the neuronal excitabilities by intrinsic plasticity. Interestingly, during this phase the synaptic structure has to be maintained. These results demonstrate that learning and recalling in the presence of noise requires the coordinated interplay between plasticity mechanisms adapting different properties of the neuronal circuit. Everyday life requires living beings to continuously recognize and categorize perceived stimuli from the environment. To master this task, the representations of these stimuli become increasingly sparse and expanded along the sensory pathways of the brain. In addition, the underlying neuronal network has to be structured according to the inherent organization of the environmental stimuli. However, how the neuronal network learns the required structure even in the presence of noise remains unknown. 
In this theoretical study, we show that the interplay between synaptic plasticity—controlling the synaptic efficacies—and intrinsic plasticity—adapting the neuronal excitabilities—enables the network to encode the organization of environmental stimuli. It thereby structures the network to correctly categorize stimuli even in the presence of noise. After having encoded the stimuli’s organization, consolidating the synaptic structure while keeping the neuronal excitabilities dynamic enables the neuronal system to readapt to arbitrary levels of noise resulting in a near-optimal classification performance for all noise levels. These results provide new insights into the interplay between different plasticity mechanisms and how this interplay enables sensory systems to reliably learn and categorize stimuli from the surrounding environment.
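The interplay of synaptic and intrinsic plasticity described above can be illustrated with a toy feed-forward sketch (all parameter values, the Hebbian rule with weight decay, and the threshold-adaptation rule are hypothetical stand-ins for the paper's mechanisms):

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 20, 10
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))  # feed-forward synaptic weights
theta = np.zeros(n_out)                        # per-neuron excitability (threshold)
target = 0.2                                   # desired mean activity (sparse code)
eta_w, eta_theta = 0.01, 0.05

for _ in range(2000):
    x = (rng.random(n_in) < 0.3).astype(float)     # noisy binary stimulus
    y = 1.0 / (1.0 + np.exp(-(W @ x - theta)))     # graded neuron responses
    # Synaptic plasticity: Hebbian growth with a decay term that bounds weights.
    W += eta_w * (np.outer(y, x) - y[:, None] * W)
    # Intrinsic plasticity: shift each neuron's threshold so that its mean
    # activity is steered toward the sparse target rate.
    theta += eta_theta * (y - target)
```

The second update rule is the "adapting the neuronal excitabilities" ingredient: it can keep running (readaptation to a new noise level) even after the synaptic structure is frozen, mirroring the consolidation phase the abstract describes.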
Affiliation(s)
- Steffen Krüppel
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany

18
Cerebellar Neurodynamics Predict Decision Timing and Outcome on the Single-Trial Level. Cell 2020; 180:536-551.e17. [PMID: 31955849 DOI: 10.1016/j.cell.2019.12.018] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Received: 03/29/2019] [Revised: 10/28/2019] [Accepted: 12/12/2019] [Indexed: 12/20/2022]
Abstract
Goal-directed behavior requires the interaction of multiple brain regions. How these regions and their interactions with brain-wide activity drive action selection is less understood. We investigated this question by combining whole-brain volumetric calcium imaging using light-field microscopy with an operant-conditioning task in larval zebrafish. We find that global, recurring dynamics of brain states exhibit pre-motor bifurcations toward mutually exclusive decision outcomes. These dynamics arise from a distributed network displaying trial-by-trial functional connectivity changes, especially between cerebellum and habenula, which correlate with decision outcome. Within this network the cerebellum shows particularly strong and predictive pre-motor activity (>10 s before movement initiation), mainly within the granule cells. Turn directions are determined by the difference in neural activity between the ipsilateral and contralateral hemispheres, while the rate of bi-hemispheric population ramping quantitatively predicts decision time on the trial-by-trial level. Our results highlight a cognitive role of the cerebellum and its importance in motor planning.
19
Zippo AG, Castiglioni I, Lin J, Borsa VM, Valente M, Biella GEM. Short-Term Classification Learning Promotes Rapid Global Improvements of Information Processing in Human Brain Functional Connectome. Front Hum Neurosci 2020; 13:462. [PMID: 32009918 PMCID: PMC6971211 DOI: 10.3389/fnhum.2019.00462] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Received: 07/11/2019] [Accepted: 12/17/2019] [Indexed: 01/21/2023] Open
Abstract
Classification learning is a preeminent human ability within the animal kingdom, but the key mechanisms of the brain networks regulating learning remain mostly elusive. Recent neuroimaging advances have depicted the human brain as a complex graph machinery in which brain regions are nodes and coherent activities among them represent the functional connections. While long-term motor memories have been found to alter functional connectivity in the resting human brain, the short-term effects of learning have not yet been widely investigated from a graph-topological perspective. For instance, classification learning is known to orchestrate rapid modulation of diverse memory systems such as short-term and visual working memory, but how the brain functional connectome accommodates such modulations is unclear. We used publicly available repositories (openfmri.org), selecting three experiments: two focused on short-term classification learning along two consecutive runs where learning was promoted by trial-by-trial feedback errors, while a further experiment was used as a supplementary control. We analyzed the functional connectivity extracted from BOLD fMRI signals and estimated the graph information processing in the cerebral networks. The information processing capability, characterized by complex network statistics, significantly improved over runs, together with the subjects' classification accuracy. By contrast, null-learning experiments, in which feedback came with poor consistency, did not provoke any significant change in functional connectivity over runs. We propose that learning induces fast modifications in the overall brain network dynamics, ameliorating the short-term potential of the brain to process and integrate information, a dynamic consistently orchestrated by modulations of the functional connections among specific brain regions.
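One complex-network statistic commonly used in such analyses is global efficiency, the mean inverse shortest-path length over node pairs. A self-contained sketch of computing it from a thresholded connectivity matrix might look like this (the 0.2 threshold and the surrogate correlation data are assumptions for illustration, not the paper's pipeline):

```python
import numpy as np

def global_efficiency(adj):
    """Global efficiency of a binary undirected graph: the mean inverse
    shortest-path length over all node pairs, a common summary of a
    network's capacity for parallel information transfer."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):  # Floyd-Warshall all-pairs shortest paths
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    inverse = 1.0 / dist[~np.eye(n, dtype=bool)]  # unreachable pairs contribute 0
    return float(inverse.mean())

# Hypothetical pipeline step: threshold a correlation matrix (a stand-in for
# BOLD functional connectivity) into a binary graph and score it.
rng = np.random.default_rng(3)
corr = np.corrcoef(rng.normal(size=(30, 200)))
adj = (np.abs(corr) > 0.2).astype(int)
np.fill_diagonal(adj, 0)
efficiency = global_efficiency(adj)
```

Comparing such a score between consecutive runs is one way the "improvement of information processing over runs" reported above could be quantified.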
Affiliation(s)
- Antonio G Zippo
- Institute of Molecular Bioimaging and Physiology, Consiglio Nazionale delle Ricerche, Milan, Italy
- Isabella Castiglioni
- Institute of Molecular Bioimaging and Physiology, Consiglio Nazionale delle Ricerche, Milan, Italy
- Jianyi Lin
- Department of Mathematics, Khalifa University, Abu Dhabi, United Arab Emirates
- Virginia M Borsa
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Maurizio Valente
- Institute of Molecular Bioimaging and Physiology, Consiglio Nazionale delle Ricerche, Milan, Italy
- Gabriele E M Biella
- Institute of Molecular Bioimaging and Physiology, Consiglio Nazionale delle Ricerche, Milan, Italy

20
Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Netw 2019; 122:253-272. [PMID: 31726331 DOI: 10.1016/j.neunet.2019.09.036] [Citation(s) in RCA: 104] [Impact Index Per Article: 17.3] [Received: 02/28/2019] [Revised: 09/17/2019] [Accepted: 09/23/2019] [Indexed: 11/30/2022]
Abstract
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in learning of spiking neurons is presented in this paper. First the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm such as the neuron model, synaptic plasticity, information encoding and SNN topologies are then presented. Then, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
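For readers new to the field, the neuron model underlying most of the learning algorithms reviewed above is the leaky integrate-and-fire (LIF) unit, which can be simulated in a few lines (parameter values here are illustrative):

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_reset=-0.07, v_thresh=-0.05, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane potential decays toward
    rest, integrates the input current, and emits a spike (then resets)
    whenever it crosses threshold. Returns spike times in seconds."""
    v, spikes = v_rest, []
    for step, i_t in enumerate(current):
        v += dt / tau * (v_rest - v + r_m * i_t)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 3 nA input drives regular, periodic firing over 0.5 s.
spike_times = simulate_lif(np.full(5000, 3e-9))
```

Because information is carried by spike times rather than continuous activations, SNN learning rules such as STDP operate on the timing relations between spike trains like the one produced here.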
Affiliation(s)
- Aboozar Taherkhani
- School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma
- Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK
- T M McGinnity
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK
- School of Science and Technology, Nottingham Trent University, Nottingham, UK

21
Habtegiorgis SW, Jarvers C, Rifai K, Neumann H, Wahl S. The Role of Bottom-Up and Top-Down Cortical Interactions in Adaptation to Natural Scene Statistics. Front Neural Circuits 2019; 13:9. [PMID: 30814934 PMCID: PMC6381060 DOI: 10.3389/fncir.2019.00009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 08/09/2018] [Accepted: 01/24/2019] [Indexed: 11/16/2022] Open
Abstract
Adaptation is a mechanism by which cortical neurons adjust their responses according to recently viewed stimuli. Visual information is processed in a circuit formed by feedforward (FF) and feedback (FB) synaptic connections of neurons in different cortical layers. Here, the functional role of FF-FB streams and their synaptic dynamics in adaptation to natural stimuli is assessed in psychophysics and in a neural model. We propose a cortical model which predicts psychophysically observed motion adaptation aftereffects (MAE) after exposure to geometrically distorted natural image sequences. The model comprises direction-selective neurons in V1 and MT connected by recurrent FF and FB dynamic synapses. Psychophysically plausible model MAEs were obtained from synaptic changes within neurons tuned to salient direction signals of the broadband natural input. It is conceived that motion disambiguation by FF-FB interactions is critical to encode this salient information. Moreover, only FF-FB dynamic synapses operating at distinct rates predicted psychophysical MAEs at different adaptation time scales, which could not be accounted for by single-rate dynamic synapses in either of the streams. Recurrent FF-FB pathways thereby play a role during adaptation in a natural environment, specifically in inducing multilevel cortical plasticity to salient information and in mediating adaptation at different time scales.
Affiliation(s)
- Christian Jarvers
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Heiko Neumann
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Faculty of Engineering, Computer Sciences and Psychology, Institute of Neural Information Processing, Ulm University, Ulm, Germany

22
Signorelli CM. Can Computers Become Conscious and Overcome Humans? Front Robot AI 2018; 5:121. [PMID: 33501000 PMCID: PMC7805878 DOI: 10.3389/frobt.2018.00121] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Received: 08/26/2017] [Accepted: 09/26/2018] [Indexed: 11/13/2022] Open
Abstract
The idea of machines overcoming humans can be intrinsically related to conscious machines. Surpassing humans would mean replicating, reaching and exceeding key distinctive properties of human beings, for example, high-level cognition associated with conscious perception. However, can computers be compared with humans? Can computers become conscious? Can computers outstrip human capabilities? These are paradoxical and controversial questions, particularly because there are many hidden assumptions and misconceptions about the understanding of the brain. In this sense, it is necessary to first explore these assumptions and then suggest how the specific information processing of brains would be replicated by machines. Therefore, this article will discuss a subset of human capabilities and the connection with conscious behavior, secondly, a prototype theory of consciousness will be explored and machines will be classified according to this framework. Finally, this analysis will show the paradoxical conclusion that trying to achieve conscious machines to beat humans implies that computers will never completely exceed human capabilities, or if the computer were to do it, the machine should not be considered a computer anymore.
Affiliation(s)
- Camilo Miguel Signorelli
- Department of Computer Science, University of Oxford, Oxford, United Kingdom
- Cognitive Neuroimaging Unit, INSERM U992, NeuroSpin, Gif-sur-Yvette, France
- Centre for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain

23
Habtegiorgis SW, Rifai K, Lappe M, Wahl S. Experience-dependent long-term facilitation of skew adaptation. J Vis 2018; 18:7. [DOI: 10.1167/18.9.7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Indexed: 11/24/2022] Open
Affiliation(s)
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Markus Lappe
- Institute of Psychology, University of Muenster, Muenster, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany

24
Martinolli M, Gerstner W, Gilra A. Multi-Timescale Memory Dynamics Extend Task Repertoire in a Reinforcement Learning Network With Attention-Gated Memory. Front Comput Neurosci 2018; 12:50. [PMID: 30061819 PMCID: PMC6055065 DOI: 10.3389/fncom.2018.00050] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 01/24/2018] [Accepted: 06/18/2018] [Indexed: 11/13/2022] Open
Abstract
The interplay of reinforcement learning and memory is at the core of several recent neural network models, such as the Attention-Gated MEmory Tagging (AuGMEnT) model. Although the model is successful at various animal learning tasks, we find that the AuGMEnT network is unable to cope with some hierarchical tasks, where higher-level stimuli have to be maintained over a long time while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce a hybrid AuGMEnT, with leaky (short-timescale) and non-leaky (long-timescale) memory units, that allows the exchange of low-level information while maintaining high-level information. We test the performance of the hybrid AuGMEnT network on two cognitive reference tasks, sequence prediction and 12AX.
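The hybrid memory idea, mixing leaky and non-leaky units in one layer, can be sketched as a toy class (a simplified illustration, not the AuGMEnT implementation; all names and parameter values are invented):

```python
import numpy as np

class HybridMemory:
    """Toy memory layer with non-leaky units that accumulate context
    indefinitely and leaky units that forget with decay factor `lam`."""
    def __init__(self, n_regular, n_leaky, n_in, lam=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_regular + n_leaky, n_in))
        # Per-unit retention factor: 1.0 keeps the trace forever, lam < 1 forgets.
        self.decay = np.concatenate([np.ones(n_regular),
                                     np.full(n_leaky, lam)])
        self.m = np.zeros(n_regular + n_leaky)

    def step(self, x):
        # Leaky units overwrite old low-level input; non-leaky units accumulate.
        self.m = self.decay * self.m + self.W @ x
        return self.m

mem = HybridMemory(n_regular=2, n_leaky=2, n_in=3)
for x in np.eye(3):          # present three one-hot "stimuli" in sequence
    state = mem.step(x)
```

The split lets one layer serve both timescales of a hierarchical task: the non-leaky traces hold the high-level context while the leaky traces track, and then release, the recent low-level stimuli.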
Affiliation(s)
- Marco Martinolli
- School of Computer and Communication Sciences, School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences, School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Aditya Gilra
- School of Computer and Communication Sciences, School of Life Sciences, Brain-Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

25
Quaglio P, Rostami V, Torre E, Grün S. Methods for identification of spike patterns in massively parallel spike trains. Biol Cybern 2018; 112:57-80. [PMID: 29651582 PMCID: PMC5908877 DOI: 10.1007/s00422-018-0755-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 06/09/2017] [Accepted: 03/26/2018] [Indexed: 06/08/2023]
Abstract
Temporally precise correlations between simultaneously recorded neurons have been interpreted as signatures of cell assemblies, i.e., groups of neurons that form processing units. Evidence for this hypothesis was found on the level of pairwise correlations in simultaneous recordings of few neurons. Increasing the number of simultaneously recorded neurons increases the chances of detecting cell assembly activity due to the larger sample size. Recent technological advances have enabled the recording of 100 or more neurons in parallel. However, these massively parallel spike train data require novel statistical tools to be analyzed for correlations, because they raise considerable combinatorial and multiple-testing issues. Recently, various such methods have been developed. First approaches were based on population or pairwise measures of synchronization and later led to methods for the detection of various types of higher-order synchronization and of spatio-temporal patterns. The latest techniques combine data mining with analysis of statistical significance. Here, we give a comparative overview of these methods, of their assumptions, and of the types of correlations they can detect.
Affiliation(s)
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
- Emiliano Torre
- Chair of Risk, Safety and Uncertainty Quantification, ETH Zürich, Zurich, Switzerland
- Risk Center, ETH Zürich, Zurich, Switzerland
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany

26
Faghihi F, Moustafa AA. Combined Computational Systems Biology and Computational Neuroscience Approaches Help Develop of Future "Cognitive Developmental Robotics". Front Neurorobot 2017; 11:63. [PMID: 29276486 PMCID: PMC5727420 DOI: 10.3389/fnbot.2017.00063] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Received: 01/09/2017] [Accepted: 10/24/2017] [Indexed: 11/13/2022] Open
Affiliation(s)
- Faramarz Faghihi
- Department for Cognitive Modeling, Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Ahmed A Moustafa
- School of Social Sciences and Psychology and Marcs Institute for Brain and Behavior, Western Sydney University, Sydney, NSW, Australia

27
Faghihi F, Moustafa AA. Sparse and burst spiking in artificial neural networks inspired by synaptic retrograde signaling. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.08.073] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Indexed: 01/01/2023]
28
Ruiz F, Castelletto ML, Gang SS, Hallem EA. Experience-dependent olfactory behaviors of the parasitic nematode Heligmosomoides polygyrus. PLoS Pathog 2017; 13:e1006709. [PMID: 29190282 PMCID: PMC5708605 DOI: 10.1371/journal.ppat.1006709] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Received: 02/07/2017] [Accepted: 10/24/2017] [Indexed: 12/26/2022] Open
Abstract
Parasitic nematodes of humans and livestock cause extensive disease and economic loss worldwide. Many parasitic nematodes infect hosts as third-stage larvae, called iL3s. iL3s vary in their infection route: some infect by skin penetration, others by passive ingestion. Skin-penetrating iL3s actively search for hosts using host-emitted olfactory cues, but the extent to which passively ingested iL3s respond to olfactory cues was largely unknown. Here, we examined the olfactory behaviors of the passively ingested murine gastrointestinal parasite Heligmosomoides polygyrus. H. polygyrus iL3s were thought to reside primarily on mouse feces, and infect when mice consume feces containing iL3s. However, iL3s can also adhere to mouse fur and infect orally during grooming. Here, we show that H. polygyrus iL3s are highly active and show robust attraction to host feces. Despite their attraction to feces, many iL3s migrate off feces to engage in environmental navigation. In addition, H. polygyrus iL3s are attracted to mammalian skin odorants, suggesting that they migrate toward hosts. The olfactory preferences of H. polygyrus are flexible: some odorants are repulsive for iL3s maintained on feces but attractive for iL3s maintained off feces. Experience-dependent modulation of olfactory behavior occurs over the course of days and is mediated by environmental carbon dioxide (CO2) levels. Similar experience-dependent olfactory plasticity occurs in the passively ingested ruminant-parasitic nematode Haemonchus contortus, a major veterinary parasite. Our results suggest that passively ingested iL3s migrate off their original fecal source and actively navigate toward hosts or new host fecal sources using olfactory cues. Olfactory plasticity may be a mechanism that enables iL3s to switch from dispersal behavior to host-seeking behavior. 
Together, our results demonstrate that passively ingested nematodes do not remain inactive waiting to be swallowed, but rather display complex sensory-driven behaviors to position themselves for host ingestion. Disrupting these behaviors may be a new avenue for preventing infections. Many parasitic nematodes infect by passive ingestion when the host consumes food, water, or feces containing infective third-stage larvae (iL3s). Passively ingested nematodes that infect humans cause severe gastrointestinal distress and death in endemic regions, and those that infect livestock are a major cause of production loss worldwide. Because these parasites do not actively invade hosts but instead rely on being swallowed by hosts, it has been assumed that they show only limited sensory responses and do not engage in host-seeking behaviors. Here, we investigate the olfactory behaviors of the passively ingested murine parasite Heligmosomoides polygyrus and show that this assumption is incorrect; H. polygyrus iL3s show robust attraction to a diverse array of odorants found in mammalian skin, sweat, and feces. Moreover, the olfactory responses of H. polygyrus iL3s are experience-dependent: some odorants are repulsive to iL3s cultured on feces but attractive to iL3s removed from feces. Olfactory plasticity is also observed in the ruminant parasite Haemonchus contortus, and may enable iL3s to disperse in search of new hosts or host fecal sources. Our results suggest that passively ingested nematodes use olfactory cues to navigate their environments and position themselves where they are likely to be swallowed. By providing new insights into the olfactory behaviors of these parasites, our results may enable the development of new strategies for preventing infections.
Affiliation(s)
- Felicitas Ruiz
- Department of Microbiology, Immunology, and Molecular Genetics, University of California, Los Angeles, Los Angeles, California, United States of America
- Michelle L. Castelletto
- Department of Microbiology, Immunology, and Molecular Genetics, University of California, Los Angeles, Los Angeles, California, United States of America
- Spencer S. Gang
- Molecular Biology Institute, University of California, Los Angeles, Los Angeles, California, United States of America
- Elissa A. Hallem
- Department of Microbiology, Immunology, and Molecular Genetics, University of California, Los Angeles, Los Angeles, California, United States of America
- Molecular Biology Institute, University of California, Los Angeles, Los Angeles, California, United States of America
|
29
|
Abstract
In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
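The core mechanism invoked here, multiplicative (log-additive) Hebbian updates producing lognormal weight distributions, can be illustrated with a toy simulation (a minimal sketch under assumed parameters, not the authors' model; `multiplicative_hebb` and all values are illustrative):

```python
import math
import random

random.seed(0)

def multiplicative_hebb(n_weights=5000, n_steps=200, lr=0.05):
    """Each step multiplies every weight by a random factor near 1.
    Log-weights then accumulate additive Gaussian increments, so by the
    central limit theorem they become normal and the weights lognormal."""
    weights = [1.0] * n_weights
    for _ in range(n_steps):
        weights = [w * math.exp(lr * random.gauss(0.0, 1.0)) for w in weights]
    return weights

weights = multiplicative_hebb()
mean_w = sum(weights) / len(weights)
median_w = sorted(weights)[len(weights) // 2]
```

A heavy right tail shows up as the mean exceeding the median, one of the simplest signatures of the lognormal shape discussed in the abstract.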
Affiliation(s)
- Gabriele Scheler
- Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA
|
30
|
Abstract
In this paper, we document lognormal distributions for spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears as a functional property that is present everywhere. Secondly, we created a generic neural model to show that Hebbian learning will create and maintain lognormal distributions. We could prove with the model that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This settles a long-standing question about the type of plasticity exhibited by intrinsic excitability.
Affiliation(s)
- Gabriele Scheler
- Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA
|
31
|
Park Y, Choi W, Paik SB. Symmetry of learning rate in synaptic plasticity modulates formation of flexible and stable memories. Sci Rep 2017; 7:5671. [PMID: 28720795 PMCID: PMC5516032 DOI: 10.1038/s41598-017-05929-2] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 06/06/2017] [Indexed: 01/06/2023] Open
Abstract
Spike-timing-dependent plasticity (STDP) is considered critical to learning and memory functions in the human brain. Across various types of synapse, STDP is observed as different profiles of Hebbian and anti-Hebbian learning rules. However, the specific roles of diverse STDP profiles in memory formation still remain elusive. Here, we show that the symmetry of the learning rate profile in STDP is crucial to determining the character of stored memory. Using computer simulations, we found that an asymmetric learning rate generates flexible memory that is volatile and easily overwritten by newly appended information. Moreover, a symmetric learning rate generates stable memory that can coexist with newly appended information. In addition, by combining these two conditions, we could realize a hybrid memory type that operates in a way intermediate between stable and flexible memory. Our results demonstrate that various attributes of memory functions may originate from differences in the synaptic stability.
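The contrast between the two rule families can be made concrete with the standard kernel shapes (a hedged sketch; amplitudes and time constants are arbitrary, and the paper's network simulations are not reproduced here):

```python
import math

def asymmetric_stdp(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Hebbian asymmetric kernel: potentiation when pre precedes post
    (dt > 0), depression when post precedes pre (dt < 0)."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def symmetric_stdp(dt, a=1.0, tau=20.0):
    """Symmetric kernel: the change depends only on the magnitude of the
    timing difference, not on spike order."""
    return a * math.exp(-abs(dt) / tau)
```

In the paper's terms, the asymmetric profile supports flexible, easily overwritten memory, while the symmetric profile supports stable memory that coexists with newly appended patterns.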
Affiliation(s)
- Youngjin Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Woochul Choi
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Se-Bum Paik
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
- Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Republic of Korea
|
32
|
Ozcan AS. Filopodia: A Rapid Structural Plasticity Substrate for Fast Learning. Front Synaptic Neurosci 2017; 9:12. [PMID: 28676753 PMCID: PMC5476769 DOI: 10.3389/fnsyn.2017.00012] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 06/06/2017] [Indexed: 11/17/2022] Open
Abstract
Formation of new synapses between neurons is an essential mechanism for learning and encoding memories. The vast majority of excitatory synapses occur on dendritic spines, therefore, the growth dynamics of spines is strongly related to the plasticity timescales. Especially in the early stages of the developing brain, there is an abundant number of long, thin and motile protrusions (i.e., filopodia), which develop in timescales of seconds and minutes. Because of their unique morphology and motility, it has been suggested that filopodia can have a dual role in both spinogenesis and environmental sampling of potential axonal partners. I propose that filopodia can lower the threshold and reduce the time to form new dendritic spines and synapses, providing a substrate for fast learning. Based on this proposition, the functional role of filopodia during brain development is discussed in relation to learning and memory. Specifically, it is hypothesized that the postnatal brain starts with a single-stage memory system with filopodia playing a significant role in rapid structural plasticity along with the stability provided by the mushroom-shaped spines. Following the maturation of the hippocampus, this highly-plastic unitary system transitions to a two-stage memory system, which consists of a plastic temporary store and a long-term stable store. In alignment with these architectural changes, it is posited that after brain maturation, filopodia-based structural plasticity will be preserved in specific areas, which are involved in fast learning (e.g., hippocampus in relation to episodic memory). These propositions aim to introduce a unifying framework for a diversity of phenomena in the brain such as synaptogenesis, pruning and memory consolidation.
Affiliation(s)
- Ahmet S Ozcan
- Machine Intelligence Laboratory, IBM Almaden Research Center, San Jose, CA, United States
|
33
|
Working Memory Requires a Combination of Transient and Attractor-Dominated Dynamics to Process Unreliably Timed Inputs. Sci Rep 2017; 7:2473. [PMID: 28559576 PMCID: PMC5449410 DOI: 10.1038/s41598-017-02471-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2016] [Accepted: 04/11/2017] [Indexed: 12/20/2022] Open
Abstract
Working memory stores and processes information received as a stream of continuously incoming stimuli. This requires accurate sequencing and it remains puzzling how this can be reliably achieved by the neuronal system as our perceptual inputs show a high degree of temporal variability. One hypothesis is that accurate timing is achieved by purely transient neuronal dynamics; by contrast a second hypothesis states that the underlying network dynamics are dominated by attractor states. In this study, we resolve this contradiction by theoretically investigating the performance of the system using stimuli with differently accurate timing. Interestingly, only the combination of attractor and transient dynamics enables the network to perform with a low error rate. Further analysis reveals that the transient dynamics of the system are used to process information, while the attractor states store it. The interaction between both types of dynamics yields experimentally testable predictions and we show that this way the system can reliably interact with a timing-unreliable Hebbian-network representing long-term memory. Thus, this study provides a potential solution to the long-standing problem of the basic neuronal dynamics underlying working memory.
|
34
|
Zhou J, Zou Y, Guan S, Liu Z, Boccaletti S. Synchronization in slowly switching networks of coupled oscillators. Sci Rep 2016; 6:35979. [PMID: 27779253 PMCID: PMC5078792 DOI: 10.1038/srep35979] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Accepted: 10/07/2016] [Indexed: 11/17/2022] Open
Abstract
Networks whose structure of connections evolves in time constitute a big challenge in the study of synchronization, in particular when the time scales for the evolution of the graph topology are comparable with (or even longer than) those pertinent to the units’ dynamics. We here focus on networks with a slow-switching structure, and show that the necessary conditions for synchronization, i.e. the conditions for which synchronization is locally stable, are determined by the time average of the largest Lyapunov exponents of transverse modes of the switching topologies. Comparison between fast- and slow-switching networks allows elucidating that slow-switching processes prompt synchronization in the cases where the Master Stability Function is concave, whereas fast-switching schemes facilitate synchronization for convex curves. Moreover, the condition of slow-switching enables the introduction of a control strategy for inducing synchronization in networks with arbitrary structure and coupling strength, which is of evident relevance for broad applications in real world systems.
Affiliation(s)
- Jie Zhou
- Department of Physics, East China Normal University, Shanghai 200241, China
- Yong Zou
- Department of Physics, East China Normal University, Shanghai 200241, China
- Shuguang Guan
- Department of Physics, East China Normal University, Shanghai 200241, China
- Zonghua Liu
- Department of Physics, East China Normal University, Shanghai 200241, China
- S Boccaletti
- CNR-Institute of Complex Systems, Via Madonna del Piano, 10, 50019 Sesto Fiorentino, Florence, Italy
- The Embassy of Italy in Tel Aviv, 25 Hamered street, 68125 Tel Aviv, Israel
|
35
|
Kato A, Morita K. Forgetting in Reinforcement Learning Links Sustained Dopamine Signals to Motivation. PLoS Comput Biol 2016; 12:e1005145. [PMID: 27736881 PMCID: PMC5063413 DOI: 10.1371/journal.pcbi.1005145] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Accepted: 09/14/2016] [Indexed: 12/12/2022] Open
Abstract
It has been suggested that dopamine (DA) represents reward-prediction-error (RPE) defined in reinforcement learning and therefore DA responds to unpredicted but not predicted reward. However, recent studies have found DA response sustained towards predictable reward in tasks involving self-paced behavior, and suggested that this response represents a motivational signal. We have previously shown that RPE can sustain if there is decay/forgetting of learned-values, which can be implemented as decay of synaptic strengths storing learned-values. This account, however, did not explain the suggested link between tonic/sustained DA and motivation. In the present work, we explored the motivational effects of the value-decay in self-paced approach behavior, modeled as a series of ‘Go’ or ‘No-Go’ selections towards a goal. Through simulations, we found that the value-decay can enhance motivation, specifically, facilitate fast goal-reaching, albeit counterintuitively. Mathematical analyses revealed that underlying potential mechanisms are twofold: (1) decay-induced sustained RPE creates a gradient of ‘Go’ values towards a goal, and (2) value-contrasts between ‘Go’ and ‘No-Go’ are generated because while chosen values are continually updated, unchosen values simply decay. Our model provides potential explanations for the key experimental findings that suggest DA's roles in motivation: (i) slowdown of behavior by post-training blockade of DA signaling, (ii) observations that DA blockade severely impairs effortful actions to obtain rewards while largely sparing seeking of easily obtainable rewards, and (iii) relationships between the reward amount, the level of motivation reflected in the speed of behavior, and the average level of DA. These results indicate that reinforcement learning with value-decay, or forgetting, provides a parsimonious mechanistic account for the DA's roles in value-learning and motivation. 
Our results also suggest that when biological systems for value-learning are active even though learning has apparently converged, the systems might be in a state of dynamic equilibrium, where learning and forgetting are balanced. Dopamine (DA) has been suggested to have two reward-related roles: (1) representing reward-prediction-error (RPE), and (2) providing motivational drive. Role(1) is based on the physiological results that DA responds to unpredicted but not predicted reward, whereas role(2) is supported by the pharmacological results that blockade of DA signaling causes motivational impairments such as slowdown of self-paced behavior. So far, these two roles are considered to be played by two different temporal patterns of DA signals: role(1) by phasic signals and role(2) by tonic/sustained signals. However, recent studies have found sustained DA signals with features indicative of both roles (1) and (2), complicating this picture. Meanwhile, whereas synaptic/circuit mechanisms for role(1), i.e., how RPE is calculated in the upstream of DA neurons and how RPE-dependent update of learned-values occurs through DA-dependent synaptic plasticity, have now become clarified, mechanisms for role(2) remain unclear. In this work, we modeled self-paced behavior by a series of ‘Go’ or ‘No-Go’ selections in the framework of reinforcement-learning assuming DA's role(1), and demonstrated that incorporation of decay/forgetting of learned-values, which is presumably implemented as decay of synaptic strengths storing learned-values, provides a potential unified mechanistic account for the DA's two roles, together with its various temporal patterns.
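The value-decay mechanism described above can be sketched with a minimal TD(0) learner on a linear 'Go' chain (illustrative only; `run_trials`, the state count, and all rates are assumptions, not the authors' exact model): without decay the reward-prediction error (RPE) at the fully predicted goal vanishes, while with decay it settles at a sustained positive value.

```python
def run_trials(n_states=5, n_trials=200, alpha=0.5, decay=0.0, reward=1.0):
    """TD(0) value learning on a linear 'Go' chain, with optional decay
    (forgetting) of all learned values between trials. Returns the
    reward-prediction error at the goal on the final trial."""
    v = [0.0] * (n_states + 1)            # v[n_states] is the terminal state
    rpe_at_goal = 0.0
    for _ in range(n_trials):
        for s in range(n_states):
            r = reward if s == n_states - 1 else 0.0
            rpe = r + v[s + 1] - v[s]     # no temporal discounting, for simplicity
            v[s] += alpha * rpe
            if s == n_states - 1:
                rpe_at_goal = rpe
        v = [x * (1.0 - decay) for x in v]   # forgetting step
    return rpe_at_goal

no_decay_rpe = run_trials(decay=0.0)
decay_rpe = run_trials(decay=0.1)
```

With decay, learning and forgetting balance at a dynamic equilibrium where the goal-predicting value stays below the reward magnitude, so a positive RPE persists even for a fully predictable reward, the signature the paper links to sustained dopamine signals.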
Affiliation(s)
- Ayaka Kato
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
- Kenji Morita
- Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
|
36
|
Szilágyi A, Zachar I, Fedor A, de Vladar HP, Szathmáry E. Breeding novel solutions in the brain: a model of Darwinian neurodynamics. F1000Res 2016; 5:2416. [PMID: 27990266 PMCID: PMC5130073 DOI: 10.12688/f1000research.9630.2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 06/21/2017] [Indexed: 01/03/2023] Open
Abstract
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
Affiliation(s)
- András Szilágyi
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- István Zachar
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Anna Fedor
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Harold P de Vladar
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Eörs Szathmáry
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Evolutionary Systems Research Group, MTA Ecological Research Centre, Tihany, Hungary
|
37
|
Szilágyi A, Zachar I, Fedor A, de Vladar HP, Szathmáry E. Breeding novel solutions in the brain: a model of Darwinian neurodynamics. F1000Res 2016; 5:2416. [PMID: 27990266 DOI: 10.12688/f1000research.9630.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/20/2016] [Indexed: 01/15/2023] Open
Abstract
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain - recurrent neural networks (acting as attractors), the action selection loop and implicit working memory - to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
Affiliation(s)
- András Szilágyi
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- István Zachar
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Anna Fedor
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Harold P de Vladar
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Eörs Szathmáry
- MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Budapest, H-1117, Hungary
- Department of Plant Systematics, Ecology and Theoretical Biology, Institute of Biology, Eötvös University, Budapest, H-1117, Hungary
- Parmenides Center for the Conceptual Foundations of Science, Munich/Pullach, 82049, Germany
- Institute of Advanced Studies, Kőszeg, H-9730, Hungary
- Evolutionary Systems Research Group, MTA Ecological Research Centre, Tihany, Hungary
|
38
|
Richard-Devantoy S, Berlim MT, Jollant F. Suicidal behaviour and memory: A systematic review and meta-analysis. World J Biol Psychiatry 2016; 16:544-66. [PMID: 25112792 DOI: 10.3109/15622975.2014.925584] [Citation(s) in RCA: 83] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVES Suicidal behaviour results from a complex interplay between stressful events and vulnerability factors, including cognitive deficits. It is not yet clear if memory impairment is part of this specific vulnerability. Therefore, the objective of this study was to examine the association between memory deficits and vulnerability to suicidal acts. METHODS A literature review was performed using Medline, Embase, and PsycInfo databases. Twenty-four studies (including 2,595 participants) met the selection criteria. Four different types of memory (i.e., working memory, short- and long-term memory, and autobiographical memory) were assessed in at least three different studies. RESULTS Autobiographical memory was significantly less specific and more general in patients with a history of suicide attempt relative to those without such a history (Hedges' g = 0.8 and 0.9, respectively). Long-term memory and working memory were both more impaired in suicide attempters than in patient and healthy controls. Only short-term memory did not differentiate suicide attempters from patient controls. CONCLUSIONS Memory may play a significant role in the risk of suicidal acts, perhaps by preventing these individuals from using past experiences to solve current problems and to envision the future, and by altering inhibitory processes. More studies are necessary to better clarify these relationships.
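For reference, the effect size reported above (Hedges' g) is Cohen's d with a small-sample bias correction; a minimal sketch (the function name and example numbers are mine, not drawn from the study's data):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: standardized mean difference (Cohen's d) scaled by the
    small-sample correction J = 1 - 3 / (4*df - 1), with df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / pooled_sd
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * d
```

Values of g around 0.8 to 0.9, as found here for autobiographical memory specificity, are conventionally read as large effects.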
Affiliation(s)
- Stephane Richard-Devantoy
- McGill University, Department of Psychiatry & Douglas Mental Health University Institute, McGill Group for Suicide Studies, Montréal, Québec, Canada
- Laboratoire de Psychologie des Pays de la Loire EA 4638, Université de Nantes et Angers, France
- Marcelo T Berlim
- McGill University, Department of Psychiatry & Douglas Mental Health University Institute, McGill Group for Suicide Studies, Montréal, Québec, Canada
- Fabrice Jollant
- McGill University, Department of Psychiatry & Douglas Mental Health University Institute, McGill Group for Suicide Studies, Montréal, Québec, Canada
|
39
|
Li Y, Kulvicius T, Tetzlaff C. Induction and Consolidation of Calcium-Based Homo- and Heterosynaptic Potentiation and Depression. PLoS One 2016; 11:e0161679. [PMID: 27560350 PMCID: PMC4999190 DOI: 10.1371/journal.pone.0161679] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2015] [Accepted: 08/10/2016] [Indexed: 11/19/2022] Open
Abstract
The adaptive mechanisms of homo- and heterosynaptic plasticity play an important role in learning and memory. In order to maintain plasticity-induced changes for longer time scales (up to several days), they have to be consolidated by transferring them from a short-lasting early-phase to a long-lasting late-phase state. The underlying processes of this synaptic consolidation are already well-known for homosynaptic plasticity, however, it is not clear whether the same processes also enable the induction and consolidation of heterosynaptic plasticity. In this study, by extending a generic calcium-based plasticity model with the processes of synaptic consolidation, we show in simulations that indeed heterosynaptic plasticity can be induced and, furthermore, consolidated by the same underlying processes as for homosynaptic plasticity. Furthermore, we show that by local diffusion processes the heterosynaptic effect can be restricted to a few synapses neighboring the homosynaptically changed ones. Taken together, this generic model reproduces many experimental results of synaptic tagging and consolidation, provides several predictions for heterosynaptic induction and consolidation, and yields insights into the complex interactions between homo- and heterosynaptic plasticity over a broad variety of time (minutes to days) and spatial scales (several micrometers).
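The generic calcium-based induction rule that this model family builds on can be sketched as a two-threshold function of postsynaptic calcium (a hedged illustration with arbitrary thresholds and rates, not the paper's exact parameterization): no change at low calcium, depression at intermediate levels, potentiation at high levels.

```python
def calcium_plasticity_drive(ca, theta_d=1.0, theta_p=1.3,
                             gamma_d=0.1, gamma_p=0.3):
    """Sign of synaptic change as a function of calcium level: below
    theta_d nothing happens, between theta_d and theta_p only the
    depression drive is active, and above theta_p both drives are active
    with potentiation dominating (gamma_p > gamma_d)."""
    drive = 0.0
    if ca >= theta_d:
        drive -= gamma_d
    if ca >= theta_p:
        drive += gamma_p
    return drive
```

In this picture, homosynaptic stimulation pushes calcium above theta_p at the stimulated synapse, while smaller calcium transients at neighboring synapses can cross only theta_d, yielding the heterosynaptic depression the study consolidates.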
Affiliation(s)
- Yinyun Li
- III. Institute of Physics – Biophysics, Georg-August-University, 37077 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, 37077 Göttingen, Germany
- School of System Science, Beijing Normal University, 100875 Beijing, China
- Tomas Kulvicius
- III. Institute of Physics – Biophysics, Georg-August-University, 37077 Göttingen, Germany
- Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230 Odense, Denmark
- Christian Tetzlaff
- Bernstein Center for Computational Neuroscience, Georg-August-University, 37077 Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Department of Neurobiology, Weizmann Institute of Science, 76100 Rehovot, Israel
|
40
|
Computational principles of memory. Nat Neurosci 2016; 19:394-403. [PMID: 26906506 DOI: 10.1038/nn.4237] [Citation(s) in RCA: 119] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 01/06/2016] [Indexed: 02/06/2023]
Abstract
The ability to store and later use information is essential for a variety of adaptive behaviors, including integration, learning, generalization, prediction and inference. In this Review, we survey theoretical principles that can allow the brain to construct persistent states for memory. We identify requirements that a memory system must satisfy and analyze existing models and hypothesized biological substrates in light of these requirements. We also highlight open questions, theoretical puzzles and problems shared with computer science and information theory.
Collapse
|
41
|
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016; 12:e1004954. [PMID: 27213810 PMCID: PMC4877102 DOI: 10.1371/journal.pcbi.1004954] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 04/28/2016] [Indexed: 11/25/2022] Open
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits.
Given that the structure of our actions along with the organization of the environment in which they are performed can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitute a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
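The core of the BCPNN rule named in this abstract is a log-odds weight computed from estimated activation probabilities. A minimal sketch using exponential moving averages in place of the paper's cascaded spike traces (the filtering scheme and parameter values here are simplifying assumptions):

```python
import math

def bcpnn_weight(pre, post, alpha=0.01, eps=1e-3):
    """Running estimates of firing probabilities p_i, p_j and coactivation
    p_ij from binary spike trains; the BCPNN weight is their log-odds
    w = log(p_ij / (p_i * p_j)). eps regularizes the silent case."""
    pi = pj = pij = eps
    for si, sj in zip(pre, post):
        pi += alpha * (si - pi)
        pj += alpha * (sj - pj)
        pij += alpha * (si * sj - pij)
    return math.log((pij + eps * eps) / ((pi + eps) * (pj + eps)))

# synthetic trains: one postsynaptic train coincides with the presynaptic
# spikes, the other fires in different bins
pre = [1 if t % 10 == 0 else 0 for t in range(1000)]
post_same = pre[:]
post_shifted = [1 if t % 10 == 5 else 0 for t in range(1000)]
```

Coincident trains yield a positive (excitatory) weight, non-coincident trains a negative one, which is the sign structure that lets the rule store asymmetric associations between network states.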
Collapse
|
42
|
Abstract
Dynamic remodeling of connectivity is a fundamental feature of neocortical circuits. Unraveling the principles underlying these dynamics is essential for the understanding of how neuronal circuits give rise to computations. Moreover, as complete descriptions of the wiring diagram in cortical tissues are becoming available, deciphering the dynamic elements in these diagrams is crucial for relating them to cortical function. Here, we used chronic in vivo two-photon imaging to longitudinally follow a few thousand dendritic spines in the mouse auditory cortex to study the determinants of these spines' lifetimes. We applied nonlinear regression to quantify the independent contribution of spine age and several morphological parameters to the prediction of the future survival of a spine. We show that spine age, size, and geometry are parameters that can provide independent contributions to the prediction of the longevity of a synaptic connection. In addition, we use this framework to emulate a serial sectioning electron microscopy experiment and demonstrate how incorporation of morphological information of dendritic spines from a single time-point allows estimation of future connectivity states. The distinction between predictable and nonpredictable connectivity changes may be used in the future to identify the specific adaptations of neuronal circuits to environmental changes. The full dataset is publicly available for further analysis. Significance statement: The neural architecture in the neocortex exhibits constant remodeling. The functional consequences of these modifications are poorly understood, in particular because the determinants of these changes are largely unknown. Here, we aimed to identify those modifications that are predictable from the current network state. To that end, we repeatedly imaged thousands of dendritic spines in the auditory cortex of mice to assess the morphology and lifetimes of synaptic connections.
We developed models based on morphological features of dendritic spines that allow us to predict the future turnover of synaptic connections. The dynamic models presented in this paper provide a quantitative framework for adding putative temporal dynamics to the static description of a neuronal circuit obtained from single time-point connectomics experiments.
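The prediction framework described here reduces to regressing a spine's future survival on its current features. A toy logistic-regression sketch on synthetic (age, size) pairs; the data, feature scaling, and training settings below are hypothetical and not the study's:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Batch gradient descent for logistic regression: predict whether a
    spine survives to the next imaging session from (age, size)."""
    w, b = [0.0] * len(features[0]), 0.0
    n = len(features)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for x, y in zip(features, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            gw = [g + err * xi for g, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * g / n for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# synthetic, hypothetical data: normalized (age, size); label 1 = survived
X = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.2), (0.8, 0.9), (0.9, 0.7), (1.0, 1.0)]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)

def survival_prob(x):
    """Predicted probability that a spine with features x survives."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

On this toy data the fitted model assigns higher survival probability to the older, larger spine, mirroring the qualitative finding that age and size independently predict longevity.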
Collapse
|
43
|
Tetzlaff C, Dasgupta S, Kulvicius T, Wörgötter F. The Use of Hebbian Cell Assemblies for Nonlinear Computation. Sci Rep 2015; 5:12866. [PMID: 26249242 PMCID: PMC4650703 DOI: 10.1038/srep12866] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2015] [Accepted: 07/10/2015] [Indexed: 11/25/2022] Open
Abstract
When learning a complex task, our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving the rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) an enhanced diversity of neural dynamics that facilitates the learning of complex calculations. Due to synaptic scaling, the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows a robot to execute a difficult six-degrees-of-freedom manipulation task, in which assemblies need to learn to compute complex non-linear transforms and, for execution, must cooperate with each other without interference. This mechanism thus permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.
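The interaction described above — fast Hebbian growth checked by slower synaptic scaling toward a target rate — can be illustrated with a single rate-based synapse. The quadratic weight dependence of the scaling term follows the authors' earlier scaling rule, but all parameter values below are illustrative:

```python
def hebb_with_scaling(rates, w=0.5, mu=0.01, gamma=0.001, f_target=0.1):
    """One rate-based synapse: the Hebbian term grows w with correlated
    pre/post activity; the slower scaling term pulls the postsynaptic
    rate toward f_target, with a w**2 weight dependence."""
    for f_pre in rates:
        f_post = w * f_pre                                   # linear neuron
        dw = mu * f_pre * f_post + gamma * (f_target - f_post) * w * w
        w = max(0.0, w + dw)
    return w
```

With scaling active, the weight settles at a stable fixed point; with the scaling term removed, the pure Hebbian loop diverges, which is the runaway that scaling is meant to prevent.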
Collapse
Affiliation(s)
- Christian Tetzlaff
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
| | - Sakyasingha Dasgupta
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
| | - Tomas Kulvicius
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
| | - Florentin Wörgötter
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
| |
Collapse
|
44
|
Cassanelli PM, Cladouchos ML, Fernández Macedo G, Sifonios L, Giaccardi LI, Gutiérrez ML, Gravielle MC, Wikinski S. Working memory training triggers delayed chromatin remodeling in the mouse corticostriatothalamic circuit. Prog Neuropsychopharmacol Biol Psychiatry 2015; 60:93-103. [PMID: 25724761 DOI: 10.1016/j.pnpbp.2015.02.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/16/2014] [Revised: 02/05/2015] [Accepted: 02/16/2015] [Indexed: 01/10/2023]
Abstract
Working memory is a cognitive function serving goal-oriented behavior. In the last decade, working memory training has been shown to improve performance and its efficacy for the treatment of several neuropsychiatric disorders has begun to be examined. Neuroimaging studies have contributed to elucidate the brain areas involved but little is known about the underlying cellular events. A growing body of evidence has provided a link between working memory and relatively long-lasting epigenetic changes. However, the effects elicited by working memory training at the epigenetic level remain unknown. In this study we establish an animal model of working memory training and explore the changes in histone H3 acetylation (H3K9,14Ac) and histone H3 dimethylation on lysine 27 (H3K27Me2) triggered by the procedure in the brain regions of the corticostriatothalamic circuit (prelimbic/infralimbic cortex (PrL/IL), dorsomedial striatum (DMSt) and dorsomedial thalamus (DMTh)). Mice trained on a spontaneous alternation task showed improved alternation scores when tested with a retention interval that disrupts the performance of untrained animals. We then determined the involvement of the brain areas of the corticostriatothalamic circuit in working memory training by measuring the marker of neuronal activation c-fos. We observed increased c-fos levels in PrL/IL and DMSt in trained mice 90 min after training. These animals also presented lower immunoreactivity for H3K9,14Ac in DMSt 24 h but not 90 min after the procedure. Increases in H3K27Me2, a repressive chromatin mark, were found in the DMSt and DMTh 24 h after the task. Altogether, we present a mouse model to study the cellular underpinnings of working memory training and provide evidence indicating delayed chromatin remodeling towards repression triggered by the procedure.
Collapse
Affiliation(s)
- Pablo Martín Cassanelli
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina.
| | - María Laura Cladouchos
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - Georgina Fernández Macedo
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - Laura Sifonios
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - Laura Inés Giaccardi
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - María Laura Gutiérrez
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - María Clara Gravielle
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina
| | - Silvia Wikinski
- Instituto de Investigaciones Farmacológicas (UBA-CONICET), Junín 956, 5th Floor, C1113AAD Ciudad Autónoma de Buenos Aires, Argentina; 1ª Cátedra de Farmacología, Facultad de Medicina, Universidad de Buenos Aires, Paraguay 2155, C1121ABG Ciudad Autónoma de Buenos Aires, Argentina
| |
Collapse
|
45
|
Zippo AG, Biella GEM. Quantifying the Number of Discriminable Coincident Dendritic Input Patterns through Dendritic Tree Morphology. Sci Rep 2015; 5:11543. [PMID: 26100354 PMCID: PMC4482401 DOI: 10.1038/srep11543] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2015] [Accepted: 05/13/2015] [Indexed: 11/09/2022] Open
Abstract
Current developments in neuronal physiology are unveiling novel roles for dendrites. Experiments have shown mechanisms of non-linear, NMDA-dependent synaptic activation that can discriminate input patterns through the waveforms of the excitatory postsynaptic potentials. Contextually, the synaptic clustering of inputs is the principal cellular strategy for separating groups of common correlated inputs. Dendritic branches appear to work as independent discriminating units of inputs, potentially reflecting an extraordinary repertoire of pattern memories. However, it is unclear how these observations could impact our comprehension of the structural correlates of memory at the cellular level. This work investigates the discrimination capabilities of neurons through computational biophysical models to extract a predictive law for the dendritic input discrimination capability (M). Using this rule, we compared neurons from a neuron reconstruction repository (neuromorpho.org). These comparisons showed that primate neurons do not stand out with a higher M and that M is not uniformly distributed among neuron types. Remarkably, neocortical neurons had substantially less memory capacity than those from non-cortical regions. In conclusion, the proposed rule predicts a neuron's inherent spatial memory capacity, gathering potentially relevant anatomical and evolutionary considerations about brain cytoarchitecture.
Collapse
Affiliation(s)
- Antonio G Zippo
- Institute of Biomedical Imaging and Physiology, Department of Biomedical Sciences, Consiglio Nazionale delle Ricerche, Segrate (Milan), Italy
| | - Gabriele E M Biella
- Institute of Biomedical Imaging and Physiology, Department of Biomedical Sciences, Consiglio Nazionale delle Ricerche, Segrate (Milan), Italy
| |
Collapse
|
46
|
Faghihi F, Moustafa AA. The dependence of neuronal encoding efficiency on Hebbian plasticity and homeostatic regulation of neurotransmitter release. Front Cell Neurosci 2015; 9:164. [PMID: 25972786 PMCID: PMC4412074 DOI: 10.3389/fncel.2015.00164] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2015] [Accepted: 04/14/2015] [Indexed: 11/26/2022] Open
Abstract
Synapses act as information filters through different molecular mechanisms, including retrograde messengers, that affect neuronal spiking activity. One well-known effect of retrograde messengers in presynaptic neurons is a change in the probability of neurotransmitter release. Hebbian learning describes the strengthening of a synapse from a presynaptic input onto a postsynaptic neuron when both pre- and postsynaptic neurons are coactive. In this work, a theory of the roles of homeostatic regulation of neurotransmitter release by retrograde messengers and of Hebbian plasticity in neuronal encoding is presented. Encoding efficiency was measured for different synaptic conditions. To achieve high encoding efficiency, the spiking pattern of a neuron should depend on the intensity of the input and show low levels of noise. Here, we represent spike trains as sequences of zeros and ones (non-spike or spike in a time bin, respectively) and group them into words of length three. The frequency of each of the eight possible words is then measured from the spike trains, and these frequencies are used to quantify neuronal encoding efficiency under different conditions and parameter values. Results show that neurons whose synapses act as band-pass filters show the highest efficiency in encoding their input when both the Hebbian mechanism and homeostatic regulation of neurotransmitter release are present at the synapses. Specifically, integrating homeostatic regulation of feedback inhibition with the Hebbian mechanism and homeostatic regulation of neurotransmitter release leads to even higher efficiency when high-intensity stimuli are presented to the neurons. However, neurons with synapses acting as high-pass filters show no remarkable increase in encoding efficiency for any of the simulated synaptic plasticity mechanisms. This study demonstrates the importance of cooperation between the Hebbian mechanism and the regulation of neurotransmitter release induced by a rapidly diffusing retrograde messenger in neurons whose synapses act as low- and band-pass filters, in order to achieve high encoding efficiency under different environmental and physiological conditions.
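The word-based measure described in this abstract — reading a binary spike train as words of length three and tallying the frequencies of the eight possible words — can be sketched directly. The stride-1 sliding window below is one convention and may differ from the paper's:

```python
from collections import Counter
import math

def word_distribution(spike_train, k=3):
    """Relative frequencies of the 2**k binary words of length k,
    read off with a stride-1 sliding window."""
    words = [tuple(spike_train[i:i + k]) for i in range(len(spike_train) - k + 1)]
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def word_entropy(dist):
    """Shannon entropy (bits) of the word distribution: one simple proxy
    for how richly a spike train uses its word repertoire."""
    return -sum(p * math.log2(p) for p in dist.values())
```

A silent train uses a single word (entropy 0 bits), while a strictly alternating train splits its mass between two words (entropy 1 bit); input-dependent, low-noise word statistics are what the efficiency measure rewards.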
Collapse
Affiliation(s)
- Faramarz Faghihi
- Center for Neural Informatics, Structures, and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
| | - Ahmed A Moustafa
- Department of Veterans Affairs, New Jersey Health Care System, East Orange, NJ, USA; School of Social Sciences and Psychology and Marcs Institute for Brain and Behavior, University of Western Sydney, Sydney, NSW, Australia
| |
Collapse
|
47
|
Fauth M, Wörgötter F, Tetzlaff C. The formation of multi-synaptic connections by the interaction of synaptic and structural plasticity and their functional consequences. PLoS Comput Biol 2015; 11:e1004031. [PMID: 25590330 PMCID: PMC4295841 DOI: 10.1371/journal.pcbi.1004031] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 11/06/2014] [Indexed: 11/19/2022] Open
Abstract
Cortical connectivity emerges from the permanent interaction between neuronal activity and synaptic as well as structural plasticity. An important experimentally observed feature of this connectivity is the distribution of the number of synapses from one neuron to another, which has been measured in several cortical layers. All of these distributions are bimodal, with one peak at zero and a second one at a small number (3–8) of synapses. In this study, using a probabilistic model of structural plasticity that depends on the synaptic weights, we explore how these distributions can emerge and which functional consequences they have. We find that bimodal distributions arise generically from the interaction of structural plasticity with synaptic plasticity rules that fulfill the following biologically realistic constraints: first, the synaptic weights have to grow with the postsynaptic activity; second, this growth curve and/or the input-output relation of the postsynaptic neuron has to change sub-linearly (negative curvature). As most neurons show such input-output relations, these constraints can be fulfilled by many biologically reasonable systems. Given such a system, we show that the different activity levels that can explain the layer-specific distributions correspond to experimentally observed activities. Considering these activities as the working point of the system and varying the pre- or postsynaptic stimulation reveals a hysteresis in the number of synapses. As a consequence, the connectivity between two neurons can be controlled by activity but is also safeguarded against overly fast changes. These results indicate that the complex dynamics between activity and plasticity will, already between a pair of neurons, induce a variety of possible stable synaptic distributions, which could support memory mechanisms. The connectivity between neurons is modified by different mechanisms.
On a time scale of minutes to hours one finds synaptic plasticity, whereas mechanisms for structural changes at axons or dendrites may take days. One main factor determining structural changes is the weight of a connection, which, in turn, is adapted by synaptic plasticity. Both mechanisms, synaptic and structural plasticity, are influenced and determined by the activity pattern in the network. Hence, it is important to understand how activity and the different plasticity mechanisms influence each other. In particular, how activity influences rewiring in adult networks is still an open question. We present a model that captures these complex interactions by abstracting structural plasticity with weight-dependent probabilities. This allows the distribution of the number of synapses between two neurons to be calculated analytically. We report that biologically realistic connection patterns for different cortical layers arise generically with synaptic plasticity rules in which the synaptic weights grow with postsynaptic activity. The connectivity patterns also lead to different activity levels resembling those found in the different cortical layers. Interestingly, such a system exhibits a hysteresis by which connections remain stable longer than expected, which may add to the stability of information storage in the network.
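The qualitative mechanism in this abstract — constant synapse formation plus weight-dependent removal, with weights that grow with (saturating) postsynaptic activity — can be captured in a birth-death chain whose stationary distribution follows from detailed balance. The exponential form of the deletion probability and all parameter values are illustrative assumptions, not the paper's model; with them, the stationary distribution is bimodal with peaks at zero and at a few synapses:

```python
import math

def stationary_distribution(p_build=0.02, p_del0=0.1, beta=0.8,
                            w_cap=4, s_max=30):
    """Stationary distribution of the synapse number S for a birth-death
    chain: synapses form at a constant rate, and each synapse is deleted
    with a probability that decreases with the activity-dependent,
    saturating synaptic weight, here taken as w = min(S, w_cap).
    Detailed balance gives pi(s)/pi(s-1) = p_build / (s * p_del(s))."""
    pi = [1.0]
    for s in range(1, s_max + 1):
        p_del = p_del0 * math.exp(-beta * min(s, w_cap))
        pi.append(pi[-1] * p_build / (s * p_del))
    z = sum(pi)
    return [p / z for p in pi]
```

Because weak, lone synapses are deleted faster than they form while several mutually stabilized synapses survive together, probability mass piles up both at S = 0 and around a small S, with a dip in between — the bimodal shape the study reports.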
Collapse
Affiliation(s)
- Michael Fauth
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany
| | - Florentin Wörgötter
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany
| | - Christian Tetzlaff
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany
| |
Collapse
|
48
|
Bernacchia A. The interplay of plasticity and adaptation in neural circuits: a generative model. Front Synaptic Neurosci 2014; 6:26. [PMID: 25400577 PMCID: PMC4214225 DOI: 10.3389/fnsyn.2014.00026] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2014] [Accepted: 10/09/2014] [Indexed: 11/13/2022] Open
Abstract
Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.
Collapse
Affiliation(s)
- Alberto Bernacchia
- School of Engineering and Science, Jacobs University Bremen, Bremen, Germany
| |
Collapse
|
49
|
Chary M, Kaplan E. Synchrony can destabilize reward-sensitive networks. Front Neural Circuits 2014; 8:44. [PMID: 24817842 PMCID: PMC4012213 DOI: 10.3389/fncir.2014.00044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 04/08/2014] [Indexed: 11/13/2022] Open
Abstract
When exposed to rewarding stimuli, only some animals develop persistent craving. Others are resilient and do not. How the activity of neural populations relates to the development of persistent craving behavior is not fully understood. Previous computational studies suggest that synchrony helps a network embed certain patterns of activity, although the role of synchrony in reward-dependent learning has been less studied. Increased synchrony has been reported as a marker for both susceptibility and resilience to developing persistent craving. Here we use computational simulations to study the effect of reward salience on the ability of synchronous input to embed a new pattern of activity into a neural population. Our main finding is that weak stimulus-reward correlations can facilitate the short-term repetition of a pattern of neural activity, while blocking long-term embedding of that pattern. Interestingly, synchrony did not have this dual effect on all patterns, which suggests that synchrony is more effective at embedding some patterns of activity than others. Our results demonstrate that synchrony can have opposing effects in networks sensitive to the correlation structure of their inputs, in this case the correlation between stimulus and reward. This work contributes to an understanding of the interplay between synchrony and reward-dependent plasticity.
Collapse
Affiliation(s)
- Michael Chary
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, Friedman Brain Institute, New York, NY, USA
| | - Ehud Kaplan
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, Friedman Brain Institute, New York, NY, USA
| |
Collapse
|
50
|
Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci 2014; 6:8. [PMID: 24782758 PMCID: PMC3986567 DOI: 10.3389/fnsyn.2014.00008] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 03/20/2014] [Indexed: 12/28/2022] Open
Abstract
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. 
The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose combined functional effects are only partially understood.
Collapse
Affiliation(s)
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
| |
Collapse
|