101
Lohmann J, D'Huys O, Haynes ND, Schöll E, Gauthier DJ. Transient dynamics and their control in time-delay autonomous Boolean ring networks. Phys Rev E 2017;95:022211. [PMID: 28297900] [DOI: 10.1103/physreve.95.022211]
Abstract
Biochemical systems with switch-like interactions, such as gene regulatory networks, are well modeled by autonomous Boolean networks. Specifically, the topology and logic of gene interactions can be described by systems of continuous piecewise-linear differential equations, enabling analytical predictions of the dynamics of specific networks. However, most models do not account for time delays along links associated with spatial transport, mRNA transcription, and translation. To address this issue, we have developed an experimental test bed to realize a time-delay autonomous Boolean network with three inhibitory nodes, known as a repressilator, and use it to study the dynamics that arise as time delays along the links vary. We observe various nearly periodic oscillatory transient patterns with extremely long lifetime, which emerge in small network motifs due to the delay, and which are distinct from the eventual asymptotically stable periodic attractors. For repeated experiments with a given network, we find that stochastic processes give rise to a broad distribution of transient times with an exponential tail. In some cases, the transients are so long that it is doubtful the attractors will ever be approached in a biological system that has a finite lifetime. To counteract the long transients, we show experimentally that small, occasional perturbations applied to the time delays can force the trajectories to rapidly approach the attractors.
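The ring dynamics described in this abstract can be caricatured in a few lines. The sketch below is a discrete-time Boolean simulation with integer link delays; the paper's test bed uses autonomous electronic logic gates with continuous, tunable delays, so the delay values and time base here are purely illustrative:

```python
import numpy as np

def simulate_delay_boolean_ring(taus=(7, 11, 13), steps=300, seed=0):
    """Three inhibitory Boolean nodes in a ring: node i is the logical
    NOT of node i-1, delayed by taus[i] time steps (illustrative values)."""
    rng = np.random.default_rng(seed)
    max_tau = max(taus)
    # history buffer: rows = time, cols = nodes; random initial history
    x = np.zeros((steps + max_tau, 3), dtype=int)
    x[:max_tau] = rng.integers(0, 2, size=(max_tau, 3))
    for t in range(max_tau, steps + max_tau):
        for i in range(3):
            # node i inverts the state of node i-1 from taus[i] steps ago
            x[t, i] = 1 - x[t - taus[i], (i - 1) % 3]
    return x[max_tau:]

traj = simulate_delay_boolean_ring()
```

Because an odd ring of inverters admits no consistent fixed point, the trajectory necessarily keeps switching; with unequal delays the transient patterns can become long and complex, which is the regime the paper studies.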
Affiliation(s)
- Johannes Lohmann
- Department of Physics, Duke University, Durham, North Carolina 27708, USA; Institut für Theoretische Physik, Technische Universität Berlin, 10623 Berlin, Germany
- Otti D'Huys
- Department of Physics, Duke University, Durham, North Carolina 27708, USA
- Nicholas D Haynes
- Department of Physics, Duke University, Durham, North Carolina 27708, USA
- Eckehard Schöll
- Institut für Theoretische Physik, Technische Universität Berlin, 10623 Berlin, Germany
- Daniel J Gauthier
- Department of Physics, Duke University, Durham, North Carolina 27708, USA; Department of Physics, The Ohio State University, Columbus, Ohio 43210, USA
102
Safonov DA, Klinshov VV, Vanag VK. Dynamical regimes of four oscillators with excitatory pulse coupling. Phys Chem Chem Phys 2017;19:12490-12501. [DOI: 10.1039/c7cp01177f]
Abstract
The dynamics of four almost identical chemical oscillators, pulse-coupled with time delays via excitatory coupling, are studied systematically.
Affiliation(s)
- Dmitry A. Safonov
- Centre for Nonlinear Chemistry, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Vladimir V. Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Vladimir K. Vanag
- Centre for Nonlinear Chemistry, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
103
Invariant Temporal Dynamics Underlie Perceptual Stability in Human Visual Cortex. Curr Biol 2017;27:155-165. [DOI: 10.1016/j.cub.2016.11.024]
104
Li G, Deng L, Wang D, Wang W, Zeng F, Zhang Z, Li H, Song S, Pei J, Shi L. Hierarchical Chunking of Sequential Memory on Neuromorphic Architecture with Reduced Synaptic Plasticity. Front Comput Neurosci 2016;10:136. [PMID: 28066223] [PMCID: PMC5168929] [DOI: 10.3389/fncom.2016.00136]
Abstract
Chunking refers to the phenomenon whereby individuals group items together when performing a memory task, improving sequential memory performance. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We find that a chunking mechanism reduces the demands on synaptic plasticity, since it allows a memory task to be performed with synapses of narrow dynamic range and low precision. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables its incorporation into the investigation of brain-inspired computing on neuromorphic architectures.
Affiliation(s)
- Guoqi Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Lei Deng
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Dong Wang
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Wei Wang
- School of Automation Science and Electric Engineering, Beihang University, Beijing, China
- Fei Zeng
- Department of Materials Science and Engineering, Tsinghua University, Beijing, China
- Ziyang Zhang
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Huanglong Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Sen Song
- School of Medicine, Tsinghua University, Beijing, China
- Jing Pei
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Luping Shi
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
105
Grollier J, Querlioz D, Stiles MD. Spintronic Nanodevices for Bioinspired Computing. Proc IEEE 2016;104:2024-2039. [PMID: 27881881] [PMCID: PMC5117478] [DOI: 10.1109/jproc.2016.2597152]
Abstract
Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to the control of biomedical prostheses. However, one of the major challenges of fabricating bioinspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von Neumann bottleneck that arises when memory and processors are located separately. Other features of spintronic devices that could benefit bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change function under different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the approaches that have been proposed, recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal-oxide-semiconductor (CMOS) bioinspired hardware.
Affiliation(s)
- Julie Grollier
- Unité Mixte de Physique CNRS, Thales, Univ. Paris-Sud, Université Paris-Saclay, 91767 Palaiseau, France
- Damien Querlioz
- Centre de Nanosciences et de Nanotechnologies, CNRS, Université Paris-Saclay, 91405 Orsay, France
- Mark D. Stiles
- Center for Nanoscale Science and Technology, National Institute of Standards and Technology, Gaithersburg, MD 20899-6202, USA
106
Daza A, Wagemakers A, Georgeot B, Guéry-Odelin D, Sanjuán MAF. Basin entropy: a new tool to analyze uncertainty in dynamical systems. Sci Rep 2016;6:31416. [PMID: 27514612] [PMCID: PMC4981859] [DOI: 10.1038/srep31416]
Abstract
In nonlinear dynamics, basins of attraction link a given set of initial conditions to its corresponding final states. This notion appears in a broad range of applications where several outcomes are possible, a common situation in neuroscience, economics, astronomy, ecology, and many other disciplines. Depending on the nature of the basins, prediction can be difficult even in systems that evolve under deterministic rules. In this respect, a proper classification of this unpredictability is clearly required. To address this issue, we introduce the basin entropy, a measure to quantify this uncertainty. Its application is illustrated with several paradigmatic examples that allow us to identify the ingredients that hinder the prediction of the final state. The basin entropy provides an efficient method to probe the behavior of a system when different parameters are varied. Additionally, we provide a sufficient condition for the existence of fractal basin boundaries: when the basin entropy of the boundaries is larger than log 2, the basin is fractal.
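As we read the abstract, the basin entropy subdivides a grid of initial conditions into small boxes, computes the Gibbs entropy of the attractor labels inside each box, and averages over boxes. A minimal sketch under that reading (the grid size, box size, and the two toy basins are our own illustrative choices):

```python
import numpy as np

def basin_entropy(labels, box=5):
    """Average Gibbs entropy of attractor labels over box x box cells of
    a 2-D grid of final states (one label per initial condition)."""
    H, W = labels.shape
    entropies = []
    for r in range(0, H - box + 1, box):
        for c in range(0, W - box + 1, box):
            _, counts = np.unique(labels[r:r + box, c:c + box], return_counts=True)
            p = counts / counts.sum()
            entropies.append(float(-(p * np.log(p)).sum()))
    return float(np.mean(entropies))

# smooth boundary: two half-plane basins -> every box is pure, entropy 0
smooth = np.zeros((20, 20), dtype=int)
smooth[:, 10:] = 1
# intermingled basins: random labels -> box entropy approaches log 2
rng = np.random.default_rng(1)
rough = rng.integers(0, 2, size=(20, 20))
S_smooth, S_rough = basin_entropy(smooth), basin_entropy(rough)
```

The fractality criterion quoted above compares the boundary boxes' entropy with log 2; in this toy example the intermingled grid sits near that value while the smooth split sits at zero.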
Affiliation(s)
- Alvar Daza
- Nonlinear Dynamics, Chaos and Complex Systems Group, Departamento de Física, Universidad Rey Juan Carlos, Tulipán s/n, 28933 Móstoles, Madrid, Spain
- Alexandre Wagemakers
- Nonlinear Dynamics, Chaos and Complex Systems Group, Departamento de Física, Universidad Rey Juan Carlos, Tulipán s/n, 28933 Móstoles, Madrid, Spain
- Bertrand Georgeot
- Laboratoire de Physique Théorique, IRSAMC, Université de Toulouse, CNRS, UPS, France
- David Guéry-Odelin
- Laboratoire Collisions, Agrégats, Réactivité, IRSAMC, Université de Toulouse, CNRS, UPS, France
- Miguel A. F. Sanjuán
- Nonlinear Dynamics, Chaos and Complex Systems Group, Departamento de Física, Universidad Rey Juan Carlos, Tulipán s/n, 28933 Móstoles, Madrid, Spain
107
Rolls ET, Deco G. Non-reward neural mechanisms in the orbitofrontal cortex. Cortex 2016;83:27-38. [PMID: 27474915] [DOI: 10.1016/j.cortex.2016.06.023]
Abstract
Single neurons in the primate orbitofrontal cortex respond when an expected reward is not obtained and behaviour must change. The human lateral orbitofrontal cortex is activated when non-reward, or loss, occurs. The neuronal computation of this negative reward prediction error is fundamental for the emotional changes associated with non-reward and with changing behaviour, yet little is known about the underlying neuronal mechanism. Here we propose a mechanism, formalized in a neuronal network model that is simulated so that its operation can be investigated. A single attractor network has a reward population (or pool) of neurons that is activated by expected reward and maintains its firing until, after a time, synaptic depression reduces the firing rate in this neuronal population. If a reward outcome is not received, the decreasing firing of the reward neurons releases the inhibition implemented by inhibitory neurons, allowing a second population of non-reward neurons to start and continue firing, encouraged by the spiking-related noise in the network. If a reward outcome is received, it keeps the reward attractor active, which, through the inhibitory neurons, prevents the non-reward attractor neurons from being activated. If an expected reward has been signalled and the reward attractor neurons are active, their firing can be directly inhibited by a non-reward outcome, and the non-reward neurons become activated because the inhibition on them is released. These orbitofrontal mechanisms for computing negative reward prediction error are important, for this system may be over-reactive in depression, under-reactive in impulsive behaviour, and may influence the dopaminergic 'prediction error' neurons.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK. http://www.oxcns.org
- Gustavo Deco
- Theoretical and Computational Neuroscience, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Spain
108
Miskovic V, Owens M, Kuntzelman K, Gibb BE. Charting moment-to-moment brain signal variability from early to late childhood. Cortex 2016;83:51-61. [PMID: 27479615] [DOI: 10.1016/j.cortex.2016.07.006]
Abstract
Large-scale brain signals exhibit rich intermittent patterning, reflecting the fact that the cortex actively eschews fixed points in favor of itinerant wandering with frequent state transitions. Fluctuations in endogenous cortical activity occur at multiple time scales and index a dynamic repertoire of network states that are continuously explored, even in the absence of external sensory inputs. Here, we quantified such moment-to-moment brain signal variability at rest in a large, cross-sectional sample of children ranging in age from seven to eleven years. Our findings revealed a monotonic rise in the complexity of electroencephalogram (EEG) signals as measured by sample entropy, from the youngest to the oldest age cohort, across a range of time scales and spatial regions. From year to year, the greatest changes in intraindividual brain signal variability were recorded at electrodes covering the anterior cortical zones. These results provide converging evidence concerning the age-dependent expansion of functional cortical network states during a critical developmental period ranging from early to late childhood.
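Sample entropy, the complexity measure used here, is the negative log of the conditional probability that runs matching for m points (within tolerance r) still match at m + 1 points. A self-contained sketch; the test signals, m, and r below are illustrative defaults, not the study's EEG settings:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -log(A/B), where B counts template pairs matching
    for m points and A those still matching for m + 1 points
    (Chebyshev distance, tolerance r = r_factor * std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    N = len(x)

    def pairs(mm):
        # all length-mm templates, pairwise Chebyshev distances
        T = np.array([x[i:i + mm] for i in range(N - mm)])
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        return (d <= r).sum() - len(T)   # exclude self-matches

    B, A = pairs(m), pairs(m + 1)
    return float(-np.log(A / B))

rng = np.random.default_rng(0)
se_noise = sample_entropy(rng.standard_normal(500))                # irregular
se_sine = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500)))  # regular
```

White noise scores high and a pure sine low, matching the paper's use of sample entropy as an index of brain-signal variability.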
Affiliation(s)
- Vladimir Miskovic
- Center for Affective Science, State University of New York at Binghamton, USA
- Max Owens
- Center for Affective Science, State University of New York at Binghamton, USA
- Karl Kuntzelman
- Center for Affective Science, State University of New York at Binghamton, USA
- Brandon E Gibb
- Center for Affective Science, State University of New York at Binghamton, USA
109
Xiao X, Deng H, Wei L, Huang Y, Wang Z. Neural activity of orbitofrontal cortex contributes to control of waiting. Eur J Neurosci 2016;44:2300-13. [PMID: 27336203] [DOI: 10.1111/ejn.13320]
Abstract
The willingness to wait for delayed reward and information is of fundamental importance for deliberative behaviours. The orbitofrontal cortex (OFC) is thought to be a core component of the neural circuitry underlying the capacity to control waiting. However, the neural correlates of active waiting and the causal role of the OFC in the control of waiting remain largely unknown. Here, we trained rats to perform a waiting task (waiting for a pseudorandom time to obtain a water reward) and recorded neuronal ensembles in the OFC throughout the task. We observed that a subset of OFC neurons exhibited ramping activity throughout the waiting process. Receiver operating characteristic analysis showed that neural activity during the waiting period predicted the trial outcome (patient vs. impatient) on a trial-by-trial basis. Furthermore, optogenetic activation of the OFC during the waiting period improved waiting performance but did not influence the rats' movement to obtain the reward. Taken together, these findings reveal that neural activity in the OFC contributes to the control of waiting.
Affiliation(s)
- Xiong Xiao
- Institute of Neuroscience, State Key Laboratory of Neuroscience and CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China; Graduate School of University of Chinese Academy of Sciences, Shanghai, China
- Hanfei Deng
- Institute of Neuroscience, State Key Laboratory of Neuroscience and CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China; Graduate School of University of Chinese Academy of Sciences, Shanghai, China
- Lei Wei
- Institute of Neuroscience, State Key Laboratory of Neuroscience and CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China
- Yanwang Huang
- Institute of Neuroscience, State Key Laboratory of Neuroscience and CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China
- Zuoren Wang
- Institute of Neuroscience, State Key Laboratory of Neuroscience and CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, University of Chinese Academy of Sciences, Shanghai, 200031, China
110
Wang J, Niebur E, Hu J, Li X. Suppressing epileptic activity in a neural mass model using a closed-loop proportional-integral controller. Sci Rep 2016;6:27344. [PMID: 27273563] [PMCID: PMC4895166] [DOI: 10.1038/srep27344]
Abstract
Closed-loop control is a promising deep brain stimulation (DBS) strategy that could be used to suppress high-amplitude epileptic activity. However, there are currently no analytical approaches for determining stimulation parameters that yield effective and safe treatment protocols. Proportional-integral (PI) control is the most extensively used closed-loop control scheme in control engineering because of its simple implementation and reliable performance. In this study, we took Jansen's neural mass model (NMM) as a test bed to develop a PI-type closed-loop controller for suppressing epileptic activity. A graphical stability analysis was employed to determine the stabilizing region of the PI controller in the control parameter space, providing a theoretical guideline for the choice of the PI control parameters. Furthermore, we established the relationship between the parameters of the PI controller and those of the NMM in the form of a stabilizing region, which offers insight into the mechanisms that may suppress epileptic activity in the NMM. The simulation results demonstrate the validity and effectiveness of the proposed closed-loop PI control scheme.
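The PI law itself is just two terms, u = Kp·e + Ki·∫e dt. The toy loop below applies it to an unstable scalar plant rather than to the Jansen neural mass model, and the gains are illustrative, not taken from the paper's stabilizing region:

```python
def pi_loop(kp=2.0, ki=1.0, setpoint=0.0, steps=600, dt=0.01):
    """Discrete PI control of a toy unstable plant x' = a*x + u.
    Without control (u = 0) the activity x grows; PI feedback drives
    it back toward the setpoint."""
    a = 1.0
    x, integ = 1.0, 0.0
    trace = []
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        u = kp * err + ki * integ   # proportional + integral action
        x += (a * x + u) * dt       # Euler step of the plant
        trace.append(x)
    return trace

trace = pi_loop()
```

With kp > a and ki > 0 the closed-loop eigenvalues have negative real part, which is the scalar analogue of choosing gains inside the stabilizing region the paper derives for the NMM.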
Affiliation(s)
- Junsong Wang
- School of Biomedical Engineering, Tianjin Medical University, Tianjin 300070, China
- Ernst Niebur
- Zanvyl Krieger Mind/Brain Institute and Solomon Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Jinyu Hu
- Division of Immunology and Rheumatology, Department of Medicine, Stanford University, Stanford, CA 94305, USA
- Xiaoli Li
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
111
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016;12:e1004954. [PMID: 27213810] [PMCID: PMC4877102] [DOI: 10.1371/journal.pcbi.1004954]
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction of several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
112
Tozzi A, Flå T, Peters JF. Building a minimum frustration framework for brain functions over long time scales. J Neurosci Res 2016;94:702-16. [PMID: 27114266] [DOI: 10.1002/jnr.23748]
Abstract
The minimum frustration principle (MFP) is a computational approach stating that, over the long time scales of evolution, proteins' free energy decreases more than expected from thermodynamic constraints as their amino acids assume conformations progressively closer to the lowest energetic state. This review shows that this general principle, borrowed from protein folding dynamics, can also be fruitfully applied to nervous function. Highlighting the foremost role of energetic requirements, macromolecular dynamics, and above all intertwined time scales in brain activity, the MFP elucidates a wide range of mental processes, from sensation to memory retrieval. Brain functions are compared with trajectories that, over long nervous time scales, are attracted toward the low-energy bottom of funnel-like structures characterized by both robustness and plasticity. We discuss how the principle, derived explicitly from evolution and the selection of a funneling structure from the microdynamics of contacts, differs from other brain models equipped with energy landscapes, such as the Bayesian and free-energy principles and Hopfield networks. In summary, we make available a novel approach to brain function cast in a biologically informed fashion, with the potential to be operationalized and assessed empirically.
Affiliation(s)
- Arturo Tozzi
- Center for Nonlinear Science, University of North Texas, Denton, Texas
- Tor Flå
- Department of Mathematics and Statistics, Centre for Theoretical and Computational Chemistry, UiT The Arctic University of Norway, Tromsø, Norway
- James F Peters
- Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, Canada; Department of Mathematics, Adıyaman University, Adıyaman, Turkey
113
Yada Y, Kanzaki R, Takahashi H. State-Dependent Propagation of Neuronal Sub-Population in Spontaneous Synchronized Bursts. Front Syst Neurosci 2016;10:28. [PMID: 27065820] [PMCID: PMC4815764] [DOI: 10.3389/fnsys.2016.00028]
Abstract
Repeating stable spatiotemporal patterns emerge in the synchronized spontaneous activity of neuronal networks. The repertoire of such patterns can serve as memory, or a reservoir of information, in a neuronal network; moreover, the variety of patterns may represent the network's memory capacity. However, the neuronal substrate that produces a repertoire of patterns in synchronization remains elusive. We hypothesize that state-dependent propagation of neuronal sub-populations is the key mechanism. By combining high-resolution measurement with a 4096-channel complementary metal-oxide-semiconductor (CMOS) microelectrode array (MEA) and dimensionality reduction with non-negative matrix factorization (NMF), we investigated synchronized bursts of dissociated rat cortical neurons at approximately 3 weeks in vitro. We found that bursts had a repertoire of repeating spatiotemporal patterns, and that different patterns shared a partially similar sequence of sub-populations, supporting the idea of a sequential sub-population structure during synchronized activity. We additionally found that similar spatiotemporal patterns tended to appear successively and periodically, suggesting a state-dependent fluctuation of propagation that has been overlooked in the existing literature. Such a state-dependent property within the sequential sub-population structure is a plausible neural substrate for producing a repertoire of stable patterns during synchronized activity.
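NMF, the dimensionality reduction used here, factorizes a non-negative channels × time matrix V into non-negative spatial modules W and temporal activations H. A minimal multiplicative-update sketch on a synthetic raster (the real data are 4096-channel CMOS MEA recordings; the toy matrix and rank are our own):

```python
import numpy as np

def nmf(V, k=2, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H, with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic "burst": sub-population A fires early, B fires late
V = np.zeros((20, 60))
V[:10, :30] = 1.0
V[10:, 30:] = 1.0
W, H = nmf(V, k=2)
rel_err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

The rows of H then give the order in which the spatial sub-populations activate, which is the sequential structure the paper tracks across bursts.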
Affiliation(s)
- Yuichiro Yada
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Ryohei Kanzaki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Hirokazu Takahashi
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
114
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting. Comput Intell Neurosci 2016;2016:3917892. [PMID: 26880876] [PMCID: PMC4735989] [DOI: 10.1155/2016/3917892]
Abstract
Hardware implementation of artificial neural networks (ANNs) allows the inherent parallelism of these systems to be exploited. Nevertheless, such implementations require a large amount of resources in terms of area and power dissipation. Recently, reservoir computing (RC) has arisen as a strategic technique for designing recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implementing RC systems with digital gates. The proposed method is based on probabilistic computing concepts, which reduce the hardware required to implement the various arithmetic operations. The result is a highly functional system with low hardware resource requirements. The presented methodology is applied to chaotic time-series forecasting.
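For orientation, a conventional floating-point echo state network is sketched below: a fixed random reservoir plus a ridge-trained linear readout for one-step-ahead forecasting. The paper's contribution is replacing the arithmetic with stochastic bit-stream computing on an FPGA; none of that hardware mapping is shown here, and all sizes and constants are illustrative:

```python
import numpy as np

def esn_one_step_rmse(series, n_res=100, washout=50, rho=0.9, seed=0):
    """Train a linear readout on reservoir states to predict the next
    sample of `series`; return the training RMSE."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    W_in = rng.uniform(-0.5, 0.5, n_res)
    x = np.zeros(n_res)
    states = []
    for u in series[:-1]:
        x = np.tanh(W @ x + W_in * u)   # reservoir update (no leak term)
        states.append(x.copy())
    X = np.array(states[washout:])
    y = series[washout + 1:]            # one-step-ahead targets
    # ridge regression readout
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    return float(np.sqrt(np.mean((X @ w_out - y) ** 2)))

rmse = esn_one_step_rmse(np.sin(np.linspace(0, 40 * np.pi, 2000)))
```

Only the readout weights are trained, which is what makes RC attractive for constrained hardware like the FPGA design in the paper.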
115
Rueckert E, Kappel D, Tanneberg D, Pecevski D, Peters J. Recurrent Spiking Networks Solve Planning Tasks. Sci Rep 2016;6:21142. [PMID: 26888174] [PMCID: PMC4758071] [DOI: 10.1038/srep21142]
Abstract
A recurrent spiking neural network is proposed that implements planning as probabilistic inference for finite- and infinite-horizon tasks. The architecture splits the problem into two parts: the stochastic transient firing of the network embodies the dynamics of the planning task, and with appropriately injected input these dynamics are shaped to generate high-reward state trajectories. A general class of reward-modulated plasticity rules for these afferent synapses is presented. The updates optimize the likelihood of obtaining a reward through a variant of an expectation-maximization algorithm, and learning is guaranteed to converge to a local maximum. We find that the network dynamics are qualitatively similar to transient firing patterns during planning and foraging in the hippocampus of awake behaving rats. The model extends classical attractor models and provides a testable prediction for identifying modulating contextual information. In a real robot-arm reaching and obstacle-avoidance task, the ability to represent multiple task solutions is investigated. The neural planning method, with its local update rules, provides a basis for future neuromorphic hardware implementations, with promising potential for large-scale data processing and for the early initiation of strategies to avoid dangerous situations in robot co-worker scenarios.
Affiliation(s)
- Elmar Rueckert
- Intelligent Autonomous Systems Lab, Technische Universität Darmstadt, 64289, Germany
- David Kappel
- Institute for Theoretical Computer Science, Technische Universität Graz, 8020, Austria
- Daniel Tanneberg
- Intelligent Autonomous Systems Lab, Technische Universität Darmstadt, 64289, Germany
- Dejan Pecevski
- Institute for Theoretical Computer Science, Technische Universität Graz, 8020, Austria
- Jan Peters
- Intelligent Autonomous Systems Lab, Technische Universität Darmstadt, 64289, Germany; Robot Learning Group, Max-Planck Institute for Intelligent Systems, Tübingen, 72076, Germany
116
Horikawa Y. Effects of self-coupling and asymmetric output on metastable dynamical transient firing patterns in arrays of neurons with bidirectional inhibitory coupling. Neural Netw 2016; 76:13-28. [PMID: 26829604 DOI: 10.1016/j.neunet.2015.12.014]
Abstract
Metastable dynamical transient patterns in arrays of bidirectionally coupled neurons with self-coupling and asymmetric output were studied. First, an array of asymmetric sigmoidal neurons with symmetric inhibitory bidirectional coupling and self-coupling was considered, and the bifurcations of its steady solutions were shown. Metastable dynamical transient spatially nonuniform states existed alongside a pair of spatially symmetric stable solutions as well as unstable spatially nonuniform solutions in a restricted range of the output gain of a neuron. The duration of the transients increased exponentially with the number of neurons, up to the maximum number at which the spatially nonuniform steady solutions were stabilized. The range of output gain for which they existed narrowed as asymmetry in the sigmoidal output function of a neuron increased, while it expanded as the strength of inhibitory self-coupling increased. Next, arrays of spiking neuron models with slow synaptic inhibitory bidirectional coupling and self-coupling were studied by computer simulation. In an array of Class 1 Hindmarsh-Rose type models, in which each neuron shows a graded firing rate, metastable dynamical transient firing patterns were observed in the presence of inhibitory self-coupling, in agreement with the condition for the existence of metastable dynamical transients in an array of sigmoidal neurons. In an array of Class 2 Bonhoeffer-van der Pol models, in which each neuron has a clear threshold between firing and resting, long-lasting transient firing patterns with bursting and irregular motion were observed.
Affiliation(s)
- Yo Horikawa
- Faculty of Engineering, Kagawa University, Takamatsu, 761-0396, Japan.
117
Tošić T, Sellers KK, Fröhlich F, Fedotenkova M, Beim Graben P, Hutt A. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots. Front Syst Neurosci 2016; 9:184. [PMID: 26834580 PMCID: PMC4712310 DOI: 10.3389/fnsys.2015.00184]
Abstract
For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibit recurrent metastable states connected by transients, which together encode fundamental neural information processing. Understanding the system's dynamics requires detecting such recurrence domains, but extracting them from experimental neuroscience datasets is challenging due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares recurrence plots of individual trials with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually evoked local field potentials in partially anesthetized ferrets, the methodology reveals recurrence structures of neural responses despite trial-to-trial variability. Across frequency bands, δ-band activity is much less recurrent than α-band activity; moreover, α-band activity is sensitive to pre-stimulus conditions, whereas δ-band activity is much less so. This difference in recurrence structure across frequency bands indicates distinct underlying information processing steps in the brain.
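The recurrence-plot step can be sketched in a few lines: given a scalar series of instantaneous spectral power in one band, mark every pair of time points whose values fall within a tolerance ε. This is a toy sketch with an arbitrary threshold and a synthetic stand-in signal, not the authors' full surrogate-testing pipeline:

```python
import numpy as np

def recurrence_plot(power, eps):
    """R[i, j] = 1 where |power[i] - power[j]| < eps, else 0."""
    d = np.abs(power[:, None] - power[None, :])
    return (d < eps).astype(int)

t = np.linspace(0, 20, 400)
power = np.sin(t) ** 2                   # stand-in for band-limited power
R = recurrence_plot(power, eps=0.05)     # symmetric 0/1 recurrence matrix
```

Recurrent metastable states show up as block-like patches of 1s; the paper's statistical step then compares such plots across trials against surrogate data to keep only significant structure.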
Affiliation(s)
- Tamara Tošić
- Team Neurosys, Inria, Villers-lès-Nancy, France; Loria, Centre National de la Recherche Scientifique, UMR no 7503, Villers-lès-Nancy, France; Université de Lorraine, Loria, UMR no 7503, Villers-lès-Nancy, France
- Kristin K Sellers
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Flavio Fröhlich
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Neurobiology Curriculum, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Mariia Fedotenkova
- Team Neurosys, Inria, Villers-lès-Nancy, France; Loria, Centre National de la Recherche Scientifique, UMR no 7503, Villers-lès-Nancy, France; Université de Lorraine, Loria, UMR no 7503, Villers-lès-Nancy, France
- Peter Beim Graben
- Department of German Studies and Linguistics, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Axel Hutt
- Team Neurosys, Inria, Villers-lès-Nancy, France; Loria, Centre National de la Recherche Scientifique, UMR no 7503, Villers-lès-Nancy, France; Université de Lorraine, Loria, UMR no 7503, Villers-lès-Nancy, France
118
Abstract
Low-level perception results from neural-based computations, which build a multimodal skeleton of unconscious or self-generated inferences about our environment. This review identifies bottleneck issues concerning the role of early primary sensory cortical areas, mostly in rodents and higher mammals (cats and non-human primates), where the substrates of perception can be sought at multiple scales of neural integration. We discuss the limitations of purely bottom-up approaches for providing realistic models of early sensory processing, and the need to identify fast adaptive processes operating within the time of a percept. Future progress will depend on the careful use of comparative neuroscience (guiding the choice of experimental models and species adapted to the questions under study), on the definition of agreed-upon benchmarks for sensory stimulation, on the simultaneous acquisition of neural data at multiple spatio-temporal scales, and on the in vivo identification of key generic integration and plasticity algorithms validated experimentally and in simulations.
119
Huys R, Jirsa VK, Darokhan Z, Valentiniene S, Roland PE. Visually Evoked Spiking Evolves While Spontaneous Ongoing Dynamics Persist. Front Syst Neurosci 2016; 9:183. [PMID: 26778982 PMCID: PMC4705305 DOI: 10.3389/fnsys.2015.00183]
Abstract
Neurons in the primary visual cortex spike spontaneously even when there are no visual stimuli. It is unknown whether the spiking evoked by visual stimuli is just a modification of the spontaneous ongoing cortical spiking dynamics, or whether the spontaneous spiking state disappears and is replaced by evoked spiking. This study of laminar recordings of spontaneous and visually evoked spiking of neurons in the ferret primary visual cortex shows that the spiking dynamics do not change: both spontaneous and evoked spiking are controlled by a stable, persistent fixed-point attractor, whose existence guarantees that evoked spiking returns to the spontaneous state. However, the spontaneous ongoing spiking state and the visually evoked spiking states are qualitatively different and are separated by a threshold (separatrix). The functional advantage of this organization is that it avoids the need for a system reorganization following visual stimulation, and impedes both the transition of spontaneous spiking into evoked spiking and the propagation of spontaneous spiking from layer 4 to layers 2-3.
Affiliation(s)
- Raoul Huys
- Centre National de la Recherche Scientifique, CerCo UMR 5549, Pavillon Baudot, CHU Purpan, Toulouse, France
- Viktor K Jirsa
- Faculté de Médecine, Institut de Neurosciences des Systèmes, Aix-Marseille Université, Marseille, France; INSERM UMR 1106, Aix-Marseille Université, Marseille, France
- Per E Roland
- Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
120
Reservoir computing and the Sooner-is-Better bottleneck. Behav Brain Sci 2016; 39:e73. [DOI: 10.1017/s0140525x15000783]
Abstract
Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”
121
Vanag VK, Smelov PS, Klinshov VV. Dynamical regimes of four almost identical chemical oscillators coupled via pulse inhibitory coupling with time delay. Phys Chem Chem Phys 2016; 18:5509-20. [DOI: 10.1039/c5cp06883e]
Abstract
The dynamics of four almost identical pulse coupled chemical oscillators with time delay are systematically studied.
Affiliation(s)
- Vladimir K. Vanag
- Centre for Nonlinear Chemistry, Chemical-Biological Institute, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Pavel S. Smelov
- Centre for Nonlinear Chemistry, Chemical-Biological Institute, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Vladimir V. Klinshov
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
122
Fonollosa J, Neftci E, Rabinovich M. Learning of Chunking Sequences in Cognition and Behavior. PLoS Comput Biol 2015; 11:e1004592. [PMID: 26584306 PMCID: PMC4652905 DOI: 10.1371/journal.pcbi.1004592]
Abstract
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks across a wide variety of tasks, but the dynamical principles by which this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences, using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as a trajectory along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables such trajectories to be learned in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes with each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses between chunk boundaries, their size, and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia. Because chunking is a hallmark of the brain's organization, efforts to understand its dynamics can provide valuable insights into the brain and its disorders. To identify the dynamical principles of chunking learning, we hypothesize that perceptual sequences can be learned and stored as a chain of metastable fixed points in a low-dimensional dynamical system, similar to the trajectory of a ball rolling down a pinball machine.
During a learning phase, the interactions in the network evolve such that the network learns a chunking representation of the sequence, as when memorizing a phone number in segments. In the example of the pinball machine, learning can be identified with the gradual placement of the pins. After learning, the pins are placed in a way that, at each run, the ball follows the same trajectory (recall of the same sequence) that encodes the perceptual sequence. Simulations show that the dynamics are endowed with the hallmarks of chunking observed in behavioral experiments, such as increased delays observed before loading new chunks.
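The "pinball" picture above corresponds to winnerless competition in a generalized Lotka-Volterra system: with asymmetric inhibition, the state lingers near one saddle (mode) and then hands over to the next in a fixed order. A minimal three-mode sketch (coefficients, noise level, and integration settings are illustrative, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
# Asymmetric inhibition: each mode is inhibited weakly (0.5) by one neighbor
# and strongly (2.0) by the other, producing a cyclic heteroclinic sequence.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])
x = np.array([0.9, 0.05, 0.05])
dt, steps = 0.01, 60_000
winners = []
for t in range(steps):
    dx = x * (1.0 - rho @ x)                  # generalized Lotka-Volterra
    # Euler step with small noise; clip keeps activities positive
    x = np.clip(x + dt * dx + 1e-6 * rng.normal(size=3), 1e-9, None)
    if t % 100 == 0:
        winners.append(int(np.argmax(x)))
# The dominant mode switches in a fixed cyclic order, pausing near each saddle.
```

The pauses near each saddle play the role of the delays at chunk boundaries; small noise sets how long the system lingers before the next mode takes over.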
Affiliation(s)
- Jordi Fonollosa
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America; Institute for Bioengineering of Catalonia, Barcelona, Spain
- Emre Neftci
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America; Department of Cognitive Sciences, University of California, Irvine, Irvine, California, United States of America
- Mikhail Rabinovich
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America
123
Tajima S, Yanagawa T, Fujii N, Toyoizumi T. Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding. PLoS Comput Biol 2015; 11:e1004537. [PMID: 26584045 PMCID: PMC4652869 DOI: 10.1371/journal.pcbi.1004537]
Abstract
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness.
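The embedding-theorem machinery behind cross-embedding can be illustrated with a toy cross-mapping: delay-embed one series and use its nearest neighbors to predict another series; prediction skill is high only when the second series is dynamically tied to the first. This is a simplified sketch in the spirit of such state-space methods, with arbitrary embedding parameters, and is not the authors' exact estimator:

```python
import numpy as np

def cross_map_skill(x, y, dim=3, tau=1):
    """Correlation between y and its prediction from x's delay embedding."""
    n = len(x) - (dim - 1) * tau
    Mx = np.column_stack([x[k * tau:k * tau + n] for k in range(dim)])
    y_t = y[(dim - 1) * tau:]
    pred = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(Mx - Mx[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nb = np.argsort(d)[:dim + 1]         # dim + 1 nearest neighbors
        w = np.exp(-d[nb] / (d[nb][0] + 1e-12))
        pred[i] = (w * y_t[nb]).sum() / w.sum()
    return np.corrcoef(pred, y_t)[0, 1]

rng = np.random.default_rng(0)
x = np.empty(300); x[0] = 0.4
for t in range(299):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])       # chaotic logistic map

s_tied = cross_map_skill(x, x.copy())        # target shares x's dynamics
s_unrelated = cross_map_skill(rng.random(300), x)
print(s_tied, s_unrelated)
```

High skill from one area's embedding to another area's activity is the kind of directed, model-free signature the paper uses to rank cortical areas.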
Affiliation(s)
- Satohiro Tajima
- RIKEN Brain Science Institute, Hirosawa, Wako, Saitama, Japan; Department of Neuroscience, University of Geneva, CMU, Genève, Switzerland
- Toru Yanagawa
- RIKEN Brain Science Institute, Hirosawa, Wako, Saitama, Japan
- Naotaka Fujii
- RIKEN Brain Science Institute, Hirosawa, Wako, Saitama, Japan
- Taro Toyoizumi
- RIKEN Brain Science Institute, Hirosawa, Wako, Saitama, Japan; Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Midori-ku, Yokohama, Kanagawa, Japan
124
Abstract
This paper considers communication in terms of inference about the behaviour of others (and our own behaviour). It is based on the premise that our sensations are largely generated by other agents like ourselves. This means we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent - who is modelling you - can be finessed if both agents possess the same model. In other words, the sensations caused by others and by oneself are generated by the same process. This leads to a view of communication based upon a narrative that is shared by agents who are exchanging sensory signals. Crucially, this narrative transcends agency and simply involves intermittently attending to and attenuating sensory input. Attending to sensations enables the shared narrative to predict the sensations generated by another (i.e. to listen), while attenuating sensory input enables one to articulate the narrative (i.e. to speak). This produces a reciprocal exchange of sensory signals that, formally, induces a generalised synchrony between the internal (neuronal) brain states generating predictions in both agents. We develop the arguments behind this perspective using an active (Bayesian) inference framework and offer some simulations (of birdsong) as proof of principle.
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
- Christopher Frith
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
125
Zou HL, Katori Y, Deng ZC, Aihara K, Lai YC. Controlled generation of switching dynamics among metastable states in pulse-coupled oscillator networks. Chaos 2015; 25:103109. [PMID: 26520075 DOI: 10.1063/1.4930840]
Abstract
Switching dynamics among saddles in a network of nonlinear oscillators can be exploited for information encoding and processing (hence computing), but stable attractors in the system can terminate the switching behavior. An effective control strategy is presented to sustain switching dynamics in networks of pulse-coupled oscillators. The support for the switching behavior is a set of saddles, i.e., unstable invariant sets in the phase space. We thus identify saddles with a common property, localize the system in their vicinity, and then guide the system from one metastable state to another to generate the desired switching dynamics. We demonstrate that the control method successfully generates persistent switching trajectories and prevents the system from entering stable attractors. In addition, a correspondence exists between the network structure and the switching dynamics, providing fundamental insights into the development of a computing paradigm based on switching dynamics.
Affiliation(s)
- Hai-Lin Zou
- School of Mechanics, Civil Engineering and Architecture, Northwestern Polytechnical University, Xi'an 710072, China
- Yuichi Katori
- Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- Zi-Chen Deng
- School of Mechanics, Civil Engineering and Architecture, Northwestern Polytechnical University, Xi'an 710072, China
- Kazuyuki Aihara
- Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- Ying-Cheng Lai
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85287, USA
126
Zhang X, Yi H, Bai W, Tian X. Dynamic trajectory of multiple single-unit activity during working memory task in rats. Front Comput Neurosci 2015; 9:117. [PMID: 26441626 PMCID: PMC4585230 DOI: 10.3389/fncom.2015.00117]
Abstract
Working memory plays an important role in complex cognitive tasks. A popular theoretical view is that transient properties of neuronal dynamics underlie cognitive processing. The question raised here is how these transient dynamics evolve during working memory. To address this issue, we investigated the dynamics of multiple single-unit activity in rat medial prefrontal cortex (mPFC) during a Y-maze working memory task. The approach reconstructs a state space from delays of the original single-unit firing-rate variables, which are then analyzed using kernel principal component analysis (KPCA) to obtain neural trajectories that visualize the multiple single-unit activity. Furthermore, the maximal Lyapunov exponent (MLE) was calculated to quantitatively evaluate the neural trajectories during the task. The results showed that neuronal activity produced stable and reproducible neural trajectories in correct trials but irregular trajectories in incorrect trials, which may establish a link between the neurocognitive process and behavioral performance in working memory. The MLEs significantly increased during working memory in correctly performed trials, indicating an increased divergence of the neural trajectories; in incorrect trials, the MLEs were nearly zero and remained unchanged during the task. Taken together, the trial-specific neural trajectory provides an effective way to track the instantaneous state of the neuronal population during the working memory task and offers valuable insights into working memory function, while the MLE describes the changes of neural dynamics and may reflect different neuronal population states in working memory.
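The reconstruction step described above can be sketched as follows: delay-embed a firing-rate series, then apply kernel PCA with an RBF kernel to obtain a low-dimensional neural trajectory. This is a self-contained sketch on a synthetic rate signal; the embedding dimension, lag, and kernel width are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Rows are delay vectors [x[t], x[t - tau], ..., x[t - (dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n]
                            for k in range(dim)])

def kpca(X, n_components=2, sigma=1.0):
    """Kernel PCA with an RBF kernel, via the centered Gram matrix."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(J @ K @ J)        # center, then diagonalize
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

t = np.linspace(0, 30, 300)
rate = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=300)
traj = kpca(delay_embed(rate))                    # 2-D neural trajectory
```

The paper's further step, the maximal Lyapunov exponent, would then be estimated from the divergence rate of initially nearby points along such a trajectory.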
Affiliation(s)
- Xiaofan Zhang
- Department of Biomedical Engineering, School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
- Hu Yi
- Department of Biomedical Engineering, School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
- Wenwen Bai
- Department of Biomedical Engineering, School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
- Xin Tian
- Department of Biomedical Engineering, School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin, China
127
Amphetamine Exerts Dose-Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory. J Neurosci 2015; 35:10172-87. [PMID: 26180194 DOI: 10.1523/jneurosci.2421-14.2015]
Abstract
Modulation of neural activity by monoamine neurotransmitters is thought to play an essential role in shaping computational neurodynamics in the neocortex, especially in prefrontal regions. Computational theories propose that monoamines may exert bidirectional (concentration-dependent) effects on cognition by altering prefrontal cortical attractor dynamics according to an inverted U-shaped function. To date, this hypothesis has not been addressed directly, in part because of the absence of appropriate statistical methods required to assess attractor-like behavior in vivo. The present study used a combination of advanced multivariate statistical, time series analysis, and machine learning methods to assess dynamic changes in network activity from multiple single-unit recordings from the medial prefrontal cortex (mPFC) of rats while the animals performed a foraging task guided by working memory after pretreatment with different doses of d-amphetamine (AMPH), which increases monoamine efflux in the mPFC. A dose-dependent, bidirectional effect of AMPH on neural dynamics in the mPFC was observed. Specifically, a 1.0 mg/kg dose of AMPH accentuated separation between task-epoch-specific population states and convergence toward these states. In contrast, a 3.3 mg/kg dose diminished separation and convergence toward task-epoch-specific population states, which was paralleled by deficits in cognitive performance. These results support the computationally derived hypothesis that moderate increases in monoamine efflux would enhance attractor stability, whereas high frontal monoamine levels would severely diminish it. Furthermore, they are consistent with the proposed inverted U-shaped and concentration-dependent modulation of cortical efficiency by monoamines.
128
Smith RX, Jann K, Ances B, Wang DJ. Wavelet-based regularity analysis reveals recurrent spatiotemporal behavior in resting-state fMRI. Hum Brain Mapp 2015; 36:3603-20. [PMID: 26096080 PMCID: PMC4635674 DOI: 10.1002/hbm.22865]
Abstract
One of the major findings from multimodal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting-state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain's spontaneous activity is far less understood, owing to the short and noisy nature of the rs-fMRI signal. Here, we develop a wavelet-based regularity analysis, built on the noise-estimation capabilities of the wavelet transform, to measure the stability of recurrent temporal patterns within the rs-fMRI signal across multiple temporal scales. The method consists of a stationary wavelet transform to preserve signal structure, the construction of "lagged" subsequences to adjust for correlated features, and the calculation of sample entropy across wavelet scales based on an "objective" estimate of the noise level at each scale. We found that the brain's default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than the rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared with standard multiscale entropy analysis. Wavelet-based regularity analysis is thus a promising technique for characterizing the dynamic structure of rs-fMRI as well as other biological signals.
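The final step of the pipeline, sample entropy at each wavelet scale, can be sketched with a standard estimator in plain NumPy. The wavelet decomposition and lagged-subsequence construction are omitted here, and `m` and `r` are conventional defaults rather than the paper's settings:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m + 1 points (tolerance r, Chebyshev norm)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - m)])
        d = np.abs(templ[:, None] - templ[None, :]).max(axis=2)
        return (d <= r).sum() - len(templ)        # drop self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

t = np.arange(500)
regular = np.sin(0.2 * t)                          # regular signal: low SampEn
noise = np.random.default_rng(0).normal(size=500)  # irregular signal: high SampEn
print(sample_entropy(regular), sample_entropy(noise))
```

Lower values mean more self-similar (regular) dynamics; applying this per wavelet scale, with r tied to the estimated noise level at that scale, is the core of the proposed regularity measure.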
Affiliation(s)
- Robert X. Smith
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, California
- Kay Jann
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, California
- Beau Ances
- Department of Neurology, School of Medicine, Washington University in Saint Louis, Saint Louis, Missouri
- Danny J.J. Wang
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, California
129
Abstract
Single-trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory, and decision-making, yet very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can be observed even in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states in which single neurons spontaneously attain several firing rates across different states. This single-neuron multistability represents a challenge to existing spiking network models, where typically each neuron is at most bistable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multistability of single-neuron firing rates. Each state results from the activation of neural clusters with potentiated intracluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single-neuron multistability into bistability and a reduction of trial-by-trial variability. Both predictions were confirmed in the data. Together, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model.
130
Rabinovich MI, Simmons AN, Varona P. Dynamical bridge between brain and mind. Trends Cogn Sci 2015; 19:453-61. [DOI: 10.1016/j.tics.2015.06.005]
131
Rabinovich MI, Tristan I, Varona P. Hierarchical nonlinear dynamics of human attention. Neurosci Biobehav Rev 2015; 55:18-35. [DOI: 10.1016/j.neubiorev.2015.04.001]
|
132
|
Abstract
Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain–machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs. In online experiments with monkeys the authors demonstrate, for the first time, that incorporating neural dynamics substantially improves brain–machine interface performance. This result is consistent with a framework hypothesizing that motor cortex is a dynamical machine that generates movement.
Collapse
|
133
|
Modulating conscious movement intention by noninvasive brain stimulation and the underlying neural mechanisms. J Neurosci 2015; 35:7239-55. [PMID: 25948272 DOI: 10.1523/jneurosci.4894-14.2015] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Conscious intention is a fundamental aspect of the human experience. Despite long-standing interest in the basis and implications of intention, its underlying neurobiological mechanisms remain poorly understood. Using high-definition transcranial DC stimulation (tDCS), we observed that enhancing spontaneous neuronal excitability in both the angular gyrus and the primary motor cortex caused the reported time of conscious movement intention to be ∼60-70 ms earlier. Slow brain waves recorded ∼2-3 s before movement onset, as well as hundreds of milliseconds after movement onset, independently correlated with the modulation of conscious intention by brain stimulation. These brain activities together accounted for 81% of interindividual variability in the modulation of movement intention by brain stimulation. A computational model using coupled leaky integrator units with biophysically plausible assumptions about the effect of tDCS captured the effects of stimulation on both neural activity and behavior. These results reveal a temporally extended brain process underlying conscious movement intention that spans seconds around movement commencement.
Collapse
|
134
|
Schwappach C, Hutt A, Beim Graben P. Metastable dynamics in heterogeneous neural fields. Front Syst Neurosci 2015; 9:97. [PMID: 26175671 PMCID: PMC4485166 DOI: 10.3389/fnsys.2015.00097] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2015] [Accepted: 06/15/2015] [Indexed: 11/13/2022] Open
Abstract
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations, which might therefore be regarded as a proof of concept for more advanced neural field models of metastable dynamics in neurophysiological data.
Collapse
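The Lotka-Volterra population dynamics underlying the kernel construction can be sketched with the classic three-population winnerless-competition (May-Leonard) system, whose attracting heteroclinic cycle produces exactly the kind of sequential metastable transients discussed above; the competition parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generalized Lotka-Volterra sketch (assumed May-Leonard parameters):
# asymmetric competition with a > 1 > b and a + b > 2 yields an attracting
# heteroclinic cycle, so the populations "win" in sequence.
a, b = 2.0, 0.5
rho = np.array([[1.0, a, b],
                [b, 1.0, a],
                [a, b, 1.0]])
x = np.array([0.6, 0.3, 0.1])
dt, winners = 1e-2, []
for _ in range(60000):
    x += dt * x * (1.0 - rho @ x)   # Euler step of dx_i/dt = x_i (1 - (rho x)_i)
    x = np.maximum(x, 1e-9)         # stay off the absorbing boundary
    winners.append(int(np.argmax(x)))

# Each population transiently dominates before ceding to the next.
print(sorted(set(winners)))
```

The floor at 1e-9 mimics small noise; without it, dwell times near the saddles grow without bound and the cycling stalls numerically.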
Affiliation(s)
- Cordula Schwappach
- Department of German Studies and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Physics, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Axel Hutt
- Team Neurosys, Inria, Villers-les-Nancy, France; Team Neurosys, Centre National de la Recherche Scientifique, UMR no. 7503, Loria, Villers-les-Nancy, France; Team Neurosys, UMR no. 7503, Loria, Université de Lorraine, Villers-les-Nancy, France
| | - Peter Beim Graben
- Department of German Studies and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
135
|
Friston KJ, Frith CD. Active inference, communication and hermeneutics. Cortex 2015; 68:129-43. [PMID: 25957007 PMCID: PMC4502445 DOI: 10.1016/j.cortex.2015.03.025] [Citation(s) in RCA: 136] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2014] [Revised: 12/06/2014] [Accepted: 03/27/2015] [Indexed: 11/16/2022]
Abstract
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.
Collapse
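The core idea that two agents adopting the same model can minimise their mutual prediction errors can be caricatured in a few lines (a deliberately toy sketch, not the paper's birdsong simulations): each agent descends the gradient of its own prediction error about the other, and the two generative models converge.

```python
# Toy caricature (not the paper's model): two agents each hold a
# one-parameter model of the other's utterances and update it by gradient
# descent on their own squared prediction error. Because both use the same
# model class, the errors vanish and the agents end up "singing from the
# same hymn sheet".
theta_a, theta_b = 0.0, 5.0   # initial (mismatched) generative models
lr = 0.1                      # learning rate (assumed)
for _ in range(200):
    err_a = theta_b - theta_a     # a's prediction error about b
    err_b = theta_a - theta_b     # b's prediction error about a
    theta_a += lr * err_a         # perceptual learning in agent a
    theta_b += lr * err_b         # perceptual learning in agent b

print(abs(theta_a - theta_b))     # mutual prediction error -> 0
```

The mismatch shrinks geometrically (by a factor 1 - 2·lr per round), which is the slow-timescale perceptual learning the abstract refers to.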
Affiliation(s)
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom.
| | - Christopher D Frith
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
| |
Collapse
|
136
|
Ueltzhöffer K, Armbruster-Genç DJN, Fiebach CJ. Stochastic Dynamics Underlying Cognitive Stability and Flexibility. PLoS Comput Biol 2015; 11:e1004331. [PMID: 26068119 PMCID: PMC4466596 DOI: 10.1371/journal.pcbi.1004331] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2014] [Accepted: 05/11/2015] [Indexed: 11/19/2022] Open
Abstract
Cognitive stability and flexibility are core functions in the successful pursuit of behavioral goals. While there is evidence for a common frontoparietal network underlying both functions and for a key role of dopamine in the modulation of flexible versus stable behavior, the exact neurocomputational mechanisms underlying those executive functions and their adaptation to environmental demands are still unclear. In this work we study the neurocomputational mechanisms underlying cue-based task switching (flexibility) and distractor inhibition (stability) in a paradigm specifically designed to probe both functions. We develop a physiologically plausible, explicit model of neural networks that maintain the currently active task rule in working memory and implement the decision process. We simplify the four-choice decision network to a nonlinear drift-diffusion process that we canonically derive from a generic winner-take-all network model. By fitting our model to the behavioral data of individual subjects, we can reproduce their full behavior in terms of decisions and reaction time distributions in baseline as well as distractor inhibition and switch conditions. Furthermore, we predict the individual hemodynamic response timecourse of the rule-representing network and localize it to a frontoparietal network including the inferior frontal junction area and the intraparietal sulcus, using functional magnetic resonance imaging. This refines the understanding of task-switch-related frontoparietal brain activity as reflecting attractor-like working memory representations of task rules. Finally, we estimate the subject-specific stability of the rule-representing attractor states in terms of the minimal action associated with a transition between different rule states in the phase-space of the fitted models. This stability measure correlates with switching-specific thalamocorticostriatal activation, i.e., with a system associated with flexible working memory updating and dopaminergic modulation of cognitive flexibility. These results show that stochastic dynamical systems can implement the basic computations underlying cognitive stability and flexibility and explain neurobiological bases of individual differences. In this work we develop a neurophysiologically inspired dynamical model that is capable of solving a complex behavioral task testing cognitive stability and flexibility. We can individually fit the behavior of each of 20 human subjects that conducted this stability-flexibility task during functional magnetic resonance imaging (fMRI). The physiological nature of our model allows us to estimate the energy consumption of the rule-representing module, which we use to predict the hemodynamic fMRI response. Through this model-based prediction, we localize the rule module to a frontoparietal network known to be required for cognitive stability and flexibility. In this way we both validate our model, which is based on noisy attractor dynamics, and specify the computational role of a cortical network that is well-established in human neuroimaging research. Additionally, we quantify the individual stability of the rule-representing states and relate this stability to individual differences in energy consumption during task switching versus distractor inhibition. We thereby show that the activation of a thalamocorticostriatal network involved in the dopaminergic modulation of cognitive stability is modulated by the model-derived stability of the frontoparietal rule-representing network. Altogether, we show that noisy dynamical systems are likely to implement the basic computations underlying cognitive stability and flexibility.
Collapse
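The reduction of a winner-take-all decision network to a nonlinear drift-diffusion process can be sketched as follows (a generic two-choice illustration with assumed parameters, not the authors' fitted four-choice model): the difference in pool activities diffuses in a double-well potential whose wells are the attracting decision states.

```python
import numpy as np

# Generic nonlinear drift-diffusion sketch (assumed parameters): x is the
# activity difference between two competing pools; the cubic drift term
# creates two attracting decision states, and the evidence tilts the wells.
rng = np.random.default_rng(1)

def decide(evidence, n_trials=200, dt=1e-3, sigma=0.6, bound=1.0):
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            drift = evidence + 2.0 * x - 2.0 * x ** 3   # double-well drift
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(x > 0)
        rts.append(t)
    return np.mean(choices), np.mean(rts)

acc, mean_rt = decide(evidence=0.5)
print(acc, mean_rt)   # stronger evidence biases choices toward the upper bound
```

Fitting such a process per subject yields full choice and reaction-time distributions, which is the strategy the abstract describes.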
Affiliation(s)
- Kai Ueltzhöffer
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Bernstein Center for Computational Neuroscience, Heidelberg University, Mannheim, Germany
- * E-mail:
| | - Diana J. N. Armbruster-Genç
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Bernstein Center for Computational Neuroscience, Heidelberg University, Mannheim, Germany
| | - Christian J. Fiebach
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Bernstein Center for Computational Neuroscience, Heidelberg University, Mannheim, Germany
- Department of Neuroradiology, Heidelberg University, Im Neuenheimer Feld, Heidelberg, Germany
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- IDeA Center for Individual Development and Adaptive Education, Frankfurt am Main, Germany
| |
Collapse
|
137
|
Soriano MC, Brunner D, Escalona-Morán M, Mirasso CR, Fischer I. Minimal approach to neuro-inspired information processing. Front Comput Neurosci 2015; 9:68. [PMID: 26082714 PMCID: PMC4451339 DOI: 10.3389/fncom.2015.00068] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2015] [Accepted: 05/19/2015] [Indexed: 01/24/2023] Open
Abstract
To learn and mimic how the brain processes information has been a major research challenge for decades. Despite the efforts, little is known about how we encode, maintain and retrieve information. One hypothesis is that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where the information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a self-feedback loop with delay. This design is not intended to represent an actual brain circuit, but aims at finding the minimum ingredients that allow the development of an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that nonlinear transient responses of the simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, by that, we learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of projection onto the required high-dimensional state space. Besides the relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
Collapse
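A discrete-time emulation of the minimal design can be sketched in a few lines (our own simplified delay-line/ring variant with assumed parameters, not the authors' hardware): a single nonlinearity whose time-multiplexed "virtual nodes" hold the delayed feedback, plus a linear readout trained by ridge regression.

```python
import numpy as np

# Discrete sketch of delay-based reservoir computing (assumed parameters):
# N virtual nodes in one delay loop; each node receives the delayed state
# of its neighbour plus a masked copy of the input. A ridge-regression
# readout is trained to recall the input from three steps earlier.
rng = np.random.default_rng(2)
N, eta, gamma = 50, 0.9, 0.1            # virtual nodes, feedback, input gain
mask = rng.uniform(-1.0, 1.0, N)        # fixed random input mask
u = rng.uniform(-0.5, 0.5, 3000)        # scalar input stream

x, states = np.zeros(N), []
for ut in u:
    x = np.tanh(eta * np.roll(x, 1) + gamma * mask * ut)  # one delay-loop pass
    states.append(x)
X = np.array(states)

target = np.roll(u, 3)                  # task: reproduce u(t - 3)
tr, te = slice(100, 2500), slice(2500, None)
W = np.linalg.solve(X[tr].T @ X[tr] + 1e-6 * np.eye(N), X[tr].T @ target[tr])
mse = np.mean((X[te] @ W - target[te]) ** 2)
print(mse)   # far below the input variance: the delay loop stores the past
```

The random mask breaks the symmetry between virtual nodes so the single readout can separate different delays, which is the role the mask plays in hardware implementations as well.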
Affiliation(s)
- Miguel C Soriano
- Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
| | - Daniel Brunner
- Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
| | - Miguel Escalona-Morán
- Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
| | - Claudio R Mirasso
- Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
| | - Ingo Fischer
- Instituto de Física Interdisciplinar y Sistemas Complejos (UIB-CSIC), Palma de Mallorca, Spain
| |
Collapse
|
138
|
Friederici AD, Singer W. Grounding language processing on basic neurophysiological principles. Trends Cogn Sci 2015; 19:329-38. [DOI: 10.1016/j.tics.2015.03.012] [Citation(s) in RCA: 74] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2015] [Revised: 03/19/2015] [Accepted: 03/24/2015] [Indexed: 01/02/2023]
|
139
|
Hasson U, Chen J, Honey CJ. Hierarchical process memory: memory as an integral component of information processing. Trends Cogn Sci 2015; 19:304-13. [PMID: 25980649 PMCID: PMC4457571 DOI: 10.1016/j.tics.2015.04.006] [Citation(s) in RCA: 402] [Impact Index Per Article: 40.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2015] [Revised: 04/07/2015] [Accepted: 04/10/2015] [Indexed: 11/28/2022]
Abstract
Models of working memory (WM) commonly focus on how information is encoded into and retrieved from storage at specific moments. However, in the majority of real-life processes, past information is used continuously to process incoming information across multiple timescales. Considering single-unit, electrocorticography, and functional imaging data, we argue that (i) virtually all cortical circuits can accumulate information over time, and (ii) the timescales of accumulation vary hierarchically, from early sensory areas with short processing timescales (tens to hundreds of milliseconds) to higher-order areas with long processing timescales (many seconds to minutes). In this hierarchical systems perspective, memory is not restricted to a few localized stores, but is intrinsic to information processing that unfolds throughout the brain on multiple timescales.
Collapse
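The timescale hierarchy can be illustrated with leaky accumulators (a toy sketch with assumed time constants, not a model from the paper): units with longer τ integrate the same input over longer windows, so their responses change more slowly and carry more history.

```python
import numpy as np

# Toy hierarchy of leaky accumulators (assumed time constants): the same
# noisy input stream is integrated over progressively longer windows,
# mimicking short process memory in sensory areas and longer memory in
# higher-order areas.
rng = np.random.default_rng(4)
taus = np.array([0.05, 0.5, 5.0])   # seconds: "sensory" -> "higher-order"
dt, steps = 1e-2, 2000
u = rng.standard_normal(steps)      # shared input stream
x = np.zeros(taus.size)
trace = []
for t in range(steps):
    x = x + dt / taus * (-x + u[t])  # leaky integration, one unit per tau
    trace.append(x)
trace = np.array(trace)

# Longer time constants -> smoother, more history-dependent responses.
smooth = np.abs(np.diff(trace, axis=0)).mean(axis=0)
print(smooth)
```

The mean step-to-step change falls by roughly a factor of ten per decade of τ here, a simple quantitative handle on "processing timescale".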
Affiliation(s)
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, NJ 08544-1010, USA.
| | - Janice Chen
- Department of Psychology and the Neuroscience Institute, Princeton University, NJ 08544-1010, USA
| | - Christopher J Honey
- Department of Psychology, University of Toronto, Toronto ON, M5S 3G3, Canada
| |
Collapse
|
140
|
Abstract
Soft machines have recently gained prominence due to their inherent softness and the resulting safety and resilience in applications. However, these machines also have disadvantages, as they respond with complex body dynamics when stimulated. These dynamics exhibit a variety of properties, including nonlinearity, memory, and potentially infinitely many degrees of freedom, which are often difficult to control. Here, we demonstrate that these seemingly undesirable properties can in fact be assets that can be exploited for real-time computation. Using body dynamics generated from a soft silicone arm, we show that they can be employed to emulate desired nonlinear dynamical systems. First, by using benchmark tasks, we demonstrate that the nonlinearity and memory within the body dynamics can increase the computational performance. Second, we characterize our system’s computational capability by comparing its task performance with a standard machine learning technique and identify its range of validity and limitation. Our results suggest that soft bodies are not only impressive in their deformability and flexibility but can also potentially be used as computational resources, essentially for free.
Collapse
|
141
|
Abstract
Traditional views separate cognitive processes from sensory-motor processes, seeing cognition as amodal, propositional, and compositional, and thus fundamentally different from the processes that underlie perceiving and acting. These were the ideas on which cognitive science was founded 30 years ago. However, advancing discoveries in neuroscience, cognitive neuroscience, and psychology suggest that cognition may be inseparable from processes of perceiving and acting. From this perspective, this study considers the future of cognitive science with respect to the study of cognitive development.
Collapse
Affiliation(s)
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington
| | | |
Collapse
|
142
|
Escalona-Moran MA, Soriano MC, Fischer I, Mirasso CR. Electrocardiogram Classification Using Reservoir Computing With Logistic Regression. IEEE J Biomed Health Inform 2015; 19:892-8. [DOI: 10.1109/jbhi.2014.2332001] [Citation(s) in RCA: 86] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
143
|
Extending Gurwitsch's field theory of consciousness. Conscious Cogn 2015; 34:104-23. [PMID: 25916764 DOI: 10.1016/j.concog.2015.03.017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Revised: 03/22/2015] [Accepted: 03/29/2015] [Indexed: 11/23/2022]
Abstract
Aron Gurwitsch's theory of the structure and dynamics of consciousness has much to offer contemporary theorizing about consciousness and its basis in the embodied brain. On Gurwitsch's account, as we develop it, the field of consciousness has a variable-sized focus or "theme" of attention surrounded by a structured periphery of inattentional contents. As the field evolves, its contents change their status, sometimes smoothly, sometimes abruptly. Inner thoughts, a sense of one's body, and the physical environment are dominant field contents. These ideas can be linked with (and help unify) contemporary theories about the neural correlates of consciousness, inattention, the small world structure of the brain, meta-stable dynamics, embodied cognition, and predictive coding in the brain.
Collapse
|
144
|
Abstract
The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon, and experimental designs are discussed.
Collapse
Affiliation(s)
- David Papo
- GISC and Laboratory of Biological Networks, Center for Biomedical Technology, Universidad Politécnica de Madrid, Madrid, Spain
| |
Collapse
|
145
|
Beim Graben P, Hutt A. Detecting event-related recurrences by symbolic analysis: applications to human language processing. Philos Trans A Math Phys Eng Sci 2015; 373:rsta.2014.0089. [PMID: 25548270 PMCID: PMC4281863 DOI: 10.1098/rsta.2014.0089] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Quasi-stationarity is ubiquitous in complex dynamical systems. In brain dynamics, there is ample evidence that event-related potentials (ERPs) reflect such quasi-stationary states. In order to detect them from time series, several segmentation techniques have been proposed. In this study, we elaborate a recent approach for detecting quasi-stationary states as recurrence domains by means of recurrence analysis and subsequent symbolization methods. We address two pertinent problems of contemporary recurrence analysis: optimizing the size of recurrence neighbourhoods and identifying symbols from different realizations for sequence alignment. As possible solutions for these problems, we suggest a maximum entropy criterion and a Hausdorff clustering algorithm. The resulting recurrence domains for single-subject ERPs are obtained as partition cells reflecting quasi-stationary brain states.
Collapse
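A stripped-down version of recurrence-domain symbolization can be sketched as follows (our simplification with an assumed toy signal and a fixed ball size; the paper instead selects the neighbourhood size by a maximum-entropy criterion and aligns symbols across realizations with Hausdorff clustering): each sample is rewritten to the earliest sample it recurs with, so quasi-stationary segments collapse onto a few symbols.

```python
import numpy as np

# Sketch of recurrence-domain detection (assumed toy signal and ball size
# eps; the paper optimizes eps via a maximum-entropy criterion). A sample
# is relabelled by the earliest sample it is recurrent with, so each
# quasi-stationary plateau maps onto (roughly) one symbol.
rng = np.random.default_rng(3)
levels = np.concatenate([np.zeros(60), np.ones(60), 0.4 * np.ones(60)])
x = levels + 0.05 * rng.standard_normal(levels.size)   # toy "ERP" segments

eps = 0.2
rec = np.abs(x[:, None] - x[None, :]) < eps            # recurrence matrix
labels = np.array([np.flatnonzero(rec[i])[0] for i in range(x.size)])

# Majority symbol within each plateau: the three quasi-stationary segments
# fall into distinct recurrence domains.
maj = [np.bincount(labels[k:k + 60]).argmax() for k in (0, 60, 120)]
print(maj)
```

The resulting partition cells are the "recurrence domains"; on real single-subject ERPs the segmentation is done in the reconstructed phase space rather than on a scalar signal.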
Affiliation(s)
- Peter Beim Graben
- Department of German Studies and Linguistics, Humboldt- Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
| | - Axel Hutt
- Team Neurosys, INRIA CR Nancy, 54602 Villers-les-Nancy Cedex, France
| |
Collapse
|
146
|
Hansen ECA, Battaglia D, Spiegler A, Deco G, Jirsa VK. Functional connectivity dynamics: Modeling the switching behavior of the resting state. Neuroimage 2015; 105:525-35. [PMID: 25462790 DOI: 10.1016/j.neuroimage.2014.11.001] [Citation(s) in RCA: 331] [Impact Index Per Article: 33.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2014] [Revised: 10/20/2014] [Accepted: 11/03/2014] [Indexed: 02/05/2023] Open
Affiliation(s)
- Enrique C A Hansen
- Université Aix-Marseille, INSERM UMR 1106, Institut de Neurosciences des Systèmes, 27 Bd Jean Moulin, 13005 Marseille, France.
| | - Demian Battaglia
- Université Aix-Marseille, INSERM UMR 1106, Institut de Neurosciences des Systèmes, 27 Bd Jean Moulin, 13005 Marseille, France; Bernstein Center for Computational Neuroscience, Am Faßberg 17, 37077 Göttingen, Germany.
| | - Andreas Spiegler
- Université Aix-Marseille, INSERM UMR 1106, Institut de Neurosciences des Systèmes, 27 Bd Jean Moulin, 13005 Marseille, France.
| | - Gustavo Deco
- Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain.
| | - Viktor K Jirsa
- Université Aix-Marseille, INSERM UMR 1106, Institut de Neurosciences des Systèmes, 27 Bd Jean Moulin, 13005 Marseille, France.
| |
Collapse
|
147
|
Xie Y, Izu LT, Bers DM, Sato D. Arrhythmogenic transient dynamics in cardiac myocytes. Biophys J 2014; 106:1391-7. [PMID: 24655514 DOI: 10.1016/j.bpj.2013.12.050] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2013] [Revised: 11/09/2013] [Accepted: 12/31/2013] [Indexed: 02/04/2023] Open
Abstract
Cardiac action potential alternans and early afterdepolarizations (EADs) are linked to cardiac arrhythmias. Action potentials that are periodic (period 1) under healthy conditions bifurcate to other states, such as period 2 or chaos, when alternans or EADs occur under pathological conditions. The mechanisms of alternans and EADs have been extensively studied under steady-state conditions, but lethal arrhythmias often occur during the transition between steady states. Why arrhythmias tend to develop during the transition is unclear. We used low-dimensional mathematical models to analyze dynamical mechanisms of transient alternans and EADs. We show that depending on the route from one state to another, action potential alternans and EADs may occur during the transition between two periodic steady states. The route taken depends on the time course of external perturbations or intrinsic signaling, such as β-adrenergic stimulation, which regulate cardiac calcium and potassium currents with differential kinetics.
Collapse
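The steady-state side of this story is commonly analysed with the one-dimensional APD restitution map; a sketch with assumed parameters (a generic low-dimensional model in the spirit of the paper, not the authors' transient analysis) shows the period-1 to period-2 (alternans) bifurcation as the pacing cycle length shortens.

```python
import numpy as np

# Classic APD restitution map (assumed parameters): APD_{n+1} = f(DI_n),
# with diastolic interval DI_n = BCL - APD_n. The period-1 rhythm loses
# stability to alternans when |f'| at the fixed point exceeds 1, which
# happens at short basic cycle lengths (BCL).
def iterate_apd(bcl, n=300, apd0=200.0):
    apd, seq = apd0, []
    for _ in range(n):
        di = max(bcl - apd, 1.0)                  # diastolic interval (ms)
        apd = 300.0 - 250.0 * np.exp(-di / 40.0)  # restitution curve f(DI)
        seq.append(apd)
    return np.array(seq)

alternans = lambda s: np.abs(np.diff(s[-50:])).mean()  # beat-to-beat swing
slow, fast = iterate_apd(bcl=400.0), iterate_apd(bcl=300.0)
print(alternans(slow), alternans(fast))   # ~0 at slow pacing; large at fast
```

The paper's point is that the transient between two such steady states, not just the steady states themselves, can pass through alternans or EAD regimes depending on the route taken.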
Affiliation(s)
- Yuanfang Xie
- Departments of Pharmacology, University of California Davis, Davis, California
| | - Leighton T Izu
- Departments of Pharmacology, University of California Davis, Davis, California
| | - Donald M Bers
- Departments of Pharmacology, University of California Davis, Davis, California
| | - Daisuke Sato
- Departments of Pharmacology, University of California Davis, Davis, California.
| |
Collapse
|
148
|
Nakajima K, Li T, Hauser H, Pfeifer R. Exploiting short-term memory in soft body dynamics as a computational resource. J R Soc Interface 2014; 11:20140437. [PMID: 25185579 PMCID: PMC4191087 DOI: 10.1098/rsif.2014.0437] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2014] [Accepted: 08/13/2014] [Indexed: 11/12/2022] Open
Abstract
Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems.
Collapse
Affiliation(s)
- K Nakajima
- The Hakubi Center for Advanced Research, Kyoto University, 606-8501 Kyoto, Japan; Department of Applied Analysis and Complex Dynamical Systems, Graduate School of Informatics, Kyoto University, 606-8501 Kyoto, Japan
| | - T Li
- Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, 8050 Zurich, Switzerland
| | - H Hauser
- Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, 8050 Zurich, Switzerland
| | - R Pfeifer
- Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, 8050 Zurich, Switzerland
| |
Collapse
|
149
|
Liu D, Gu X, Zhu J, Zhang X, Han Z, Yan W, Cheng Q, Hao J, Fan H, Hou R, Chen Z, Chen Y, Li CT. Medial prefrontal activity during delay period contributes to learning of a working memory task. Science 2014; 346:458-63. [PMID: 25342800 DOI: 10.1126/science.1256573] [Citation(s) in RCA: 139] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Cognitive processes require working memory (WM) that involves a brief period of memory retention known as the delay period. Elevated delay-period activity in the medial prefrontal cortex (mPFC) has been observed, but its functional role in WM tasks remains unclear. We optogenetically suppressed or enhanced activity of pyramidal neurons in mouse mPFC during the delay period. Behavioral performance was impaired during the learning phase but not after the mice were well trained. Delay-period mPFC activity appeared to be more important in memory retention than in inhibitory control, decision-making, or motor selection. Furthermore, endogenous delay-period mPFC activity showed more prominent modulation that correlated with memory retention and behavioral performance. Thus, properly regulated mPFC delay-period activity is critical for information retention during learning of a WM task.
Collapse
Affiliation(s)
- Ding Liu
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaowei Gu
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jia Zhu
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaoxing Zhang
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Zhe Han
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Wenjun Yan
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qi Cheng
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jiang Hao
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Hongmei Fan
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Ruiqing Hou
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Zhaoqin Chen
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Yulei Chen
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Chengyu T Li
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
|
150
|
Duarte RCF, Morrison A. Dynamic stability of sequential stimulus representations in adapting neuronal networks. Front Comput Neurosci 2014; 8:124. [PMID: 25374534 PMCID: PMC4205815 DOI: 10.3389/fncom.2014.00124] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Accepted: 09/16/2014] [Indexed: 12/16/2022] Open
Abstract
The ability to acquire and maintain appropriate representations of time-varying, sequential stimulus events is a fundamental feature of neocortical circuits and a necessary first step toward more specialized information processing. The dynamical properties of such representations depend on the current state of the circuit, which is determined primarily by the ongoing, internally generated activity, setting the ground state from which input-specific transformations emerge. Here, we begin by demonstrating that timing-dependent synaptic plasticity mechanisms have an important role to play in the active maintenance of an ongoing dynamics characterized by asynchronous and irregular firing, closely resembling cortical activity in vivo. Incoming stimuli, acting as perturbations of the local balance of excitation and inhibition, require fast adaptive responses to prevent the development of unstable activity regimes, such as those characterized by a high degree of population-wide synchrony. We establish a link between such pathological network activity, which is circumvented by the action of plasticity, and a reduced computational capacity. Additionally, we demonstrate that the action of plasticity shapes and stabilizes the transient network states exhibited in the presence of sequentially presented stimulus events, allowing the development of adequate and discernible stimulus representations. The main feature responsible for the increased discriminability of stimulus-driven population responses in plastic networks is shown to be the decorrelating action of inhibitory plasticity and the consequent maintenance of the asynchronous irregular dynamic regime both for ongoing activity and stimulus-driven responses, whereas excitatory plasticity is shown to play only a marginal role.
Affiliation(s)
- Renato C F Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; School of Informatics, Institute of Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
|