1
Díaz H, Bayones L, Alvarez M, Andrade-Ortega B, Valero S, Zainos A, Romo R, Rossi-Pool R. Contextual neural dynamics during time perception in the primate ventral premotor cortex. Proc Natl Acad Sci U S A 2025; 122:e2420356122. [PMID: 39913201] [PMCID: PMC11831118] [DOI: 10.1073/pnas.2420356122]
Abstract
Understanding how time perception adapts to cognitive demands remains a significant challenge. In some contexts, the brain encodes time categorically (as "long" or "short"), while in others, it encodes precise time intervals on a continuous scale. Although the ventral premotor cortex (VPC) is known for its role in complex temporal processes, such as speech, its specific involvement in time estimation remains underexplored. In this study, we investigated how the VPC processes temporal information during a time interval comparison task (TICT) and a time interval categorization task (TCT) in primates. We found a notable heterogeneity in neuronal responses associated with time perception across both tasks. While most neurons responded during time interval presentation, a smaller subset retained this information during the working memory periods. Population-level analysis revealed distinct dynamics between tasks: In the TICT, population activity exhibited a linear and parametric relationship with interval duration, whereas in the TCT, neuronal activity diverged into two distinct dynamics corresponding to the interval categories. During delay periods, these categorical or parametric representations remained consistent within each task context. This contextual shift underscores the VPC's adaptive role in interval estimation and highlights how temporal representations are modulated by cognitive demands.
Affiliation(s)
- Héctor Díaz
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Lucas Bayones
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Manuel Alvarez
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Bernardo Andrade-Ortega
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Sebastián Valero
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Antonio Zainos
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Román Rossi-Pool
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
2
Zemlianova K, Bose A, Rinzel J. Dynamical mechanisms of how an RNN keeps a beat, uncovered with a low-dimensional reduced model. Sci Rep 2024; 14:26388. [PMID: 39488649] [PMCID: PMC11531529] [DOI: 10.1038/s41598-024-77849-x]
Abstract
Despite music's omnipresence, the specific neural mechanisms responsible for perceiving and anticipating temporal patterns in music are unknown. To study potential mechanisms for keeping time in rhythmic contexts, we train a biologically constrained RNN, with excitatory (E) and inhibitory (I) units, on seven different stimulus tempos (2-8 Hz) in a synchronization and continuation task, a standard experimental paradigm. Our trained RNN generates a network oscillator that uses an input current (context parameter) to control oscillation frequency and replicates key features of neural dynamics observed in recordings of monkeys performing the same task. We develop a reduced three-variable rate model of the RNN and analyze its dynamic properties. Treating the mathematical structure of oscillations in the reduced model as predictive, we confirm that the same dynamical mechanisms are present in the RNN. Our neurally plausible reduced model reveals an E-I circuit with two distinct inhibitory sub-populations, one of which is tightly synchronized with the excitatory units.
Affiliation(s)
- Klavdia Zemlianova
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Amitabha Bose
- Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, NJ, 07102, USA
- John Rinzel
- Center for Neural Science and Courant Institute of Mathematical Sciences, New York University, New York, NY, 10003, USA.
3
Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. [PMID: 38918347] [DOI: 10.1007/978-3-031-60183-5_5]
Abstract
Converging experimental and computational evidence indicates that on the scale of seconds the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. A prototypical example of a neural population clock is the neural sequence, which has been observed across species, brain areas, and behavioral paradigms. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime for performing temporal computations.
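To make the population-clock idea concrete, here is a toy sketch (not the chapter's model; the population size, Gaussian tuning, and tuning width are all illustrative assumptions): neurons tile an interval with sequential activity bumps, and elapsed time is decoded from which neuron is currently most active.

```python
import numpy as np

def sequence_activity(n_neurons=20, n_steps=100, width=0.05):
    """Toy neural sequence: each neuron fires a Gaussian bump centered
    on its preferred time, tiling the normalized interval [0, 1]."""
    t = np.linspace(0.0, 1.0, n_steps)
    centers = np.linspace(0.0, 1.0, n_neurons)
    return np.exp(-((t[None, :] - centers[:, None]) ** 2) / (2.0 * width ** 2))

def decode_time(population_vector, n_neurons=20):
    """Read out elapsed time as the preferred time of the most active neuron."""
    centers = np.linspace(0.0, 1.0, n_neurons)
    return float(centers[int(np.argmax(population_vector))])
```

Decoding error is bounded by the spacing between adjacent preferred times (here 1/19 of the interval), which illustrates why a sequence is a usable, if discretized, time code.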
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA.
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA.
4
Balcı F, Simen P. Neurocomputational Models of Interval Timing: Seeing the Forest for the Trees. Adv Exp Med Biol 2024; 1455:51-78. [PMID: 38918346] [DOI: 10.1007/978-3-031-60183-5_4]
Abstract
Extracting temporal regularities and relations from experience and observation is critical for organisms' adaptiveness (communication, foraging, predation, prediction) in their ecological niches. It is therefore not surprising that the internal clock that enables the perception of seconds-to-minutes-long intervals (interval timing) is evolutionarily well preserved across many animal species. This comparative claim is primarily supported by the fact that the timing behavior of many vertebrates exhibits common statistical signatures (e.g., on-average accuracy, scalar variability, positive skew). These ubiquitous statistical features of timing behavior serve as empirical benchmarks for modelers in their efforts to unravel the processing dynamics of the internal clock (namely, how the internal clock "ticks"). In this chapter, we introduce prominent (neuro)computational approaches to modeling interval timing at a level that can be understood by a general audience. These models include Treisman's pacemaker-accumulator model, the information-processing variant of scalar expectancy theory, the striatal beat frequency model, behavioral expectancy theory, the learning-to-time model, the time-adaptive opponent Poisson drift-diffusion model, time cell models, and neural trajectory models. Crucially, we discuss these models within an overarching conceptual framework that categorizes them as threshold vs. clock-adaptive models and as dedicated clock/ramping vs. emergent time/population code models.
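As a flavor of the simplest model family above, here is a minimal pacemaker-accumulator sketch (the Poisson pacemaker and all parameter values are illustrative assumptions, not any published fit): pulses emitted during the interval are counted, and the duration estimate is the count divided by the pacemaker rate.

```python
import numpy as np

def estimate_durations(true_duration, rate=50.0, n_trials=1000, seed=0):
    """Pacemaker-accumulator sketch: a Poisson pacemaker emits pulses at
    `rate` Hz for the interval; the accumulator's pulse count, converted
    back into seconds, is the trial-by-trial duration estimate."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(rate * true_duration, size=n_trials)
    return counts / rate
```

Such a clock is accurate on average, but note that a pure Poisson pacemaker predicts a CoV shrinking as 1/sqrt(duration), so additional variance sources (e.g., trial-to-trial rate variability) are typically added to reproduce scalar variability.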
Affiliation(s)
- Fuat Balcı
- Department of Biological Sciences, University of Manitoba, Winnipeg, MB, Canada.
- Patrick Simen
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
5
Boven E, Cerminara NL. Cerebellar contributions across behavioural timescales: a review from the perspective of cerebro-cerebellar interactions. Front Syst Neurosci 2023; 17:1211530. [PMID: 37745783] [PMCID: PMC10512466] [DOI: 10.3389/fnsys.2023.1211530]
Abstract
Performing successful adaptive behaviour relies on our ability to process a wide range of temporal intervals with a certain precision. Studies on the role of the cerebellum in temporal information processing have adopted the dogma that the cerebellum is involved in sub-second processing. However, emerging evidence shows that the cerebellum might be involved in supra-second temporal processing as well. Here we review the reciprocal loops between the cerebellum and the cerebral cortex and provide a theoretical account of cerebro-cerebellar interactions, with a focus on how cerebellar output can modulate cerebral processing during learning of complex sequences. Finally, we propose that while the ability of the cerebellum to support millisecond timescales might be intrinsic to cerebellar circuitry, the ability to support supra-second timescales might result from cerebellar interactions with other brain regions, such as the prefrontal cortex.
Affiliation(s)
- Ellen Boven
- Sensory and Motor Systems Group, Faculty of Life Sciences, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, United Kingdom
- Neural and Machine Learning Group, Bristol Computational Neuroscience Unit, Intelligent Systems Labs, School of Engineering Mathematics and Technology, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Nadia L. Cerminara
- Sensory and Motor Systems Group, Faculty of Life Sciences, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, United Kingdom
6
Xue X, Wimmer RD, Halassa MM, Chen ZS. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cognit Comput 2023; 15:1167-1189. [PMID: 37771569] [PMCID: PMC10530699] [DOI: 10.1007/s12559-022-09994-2]
Abstract
Background: Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working memory-based decision making. Methods: Motivated by PFC recordings of task-performing mice, we developed an excitatory-inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints onto the SRNN, and adapted spike frequency adaptation (SFA) and SuperSpike gradient methods to train the SRNN efficiently. Results: The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under varying test conditions, we manipulated the SRNN parameters or configuration in computer simulations, and we investigated the impacts of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations. Conclusions: Overall, our modeling study provides a computational framework for understanding neuronal representations at a fine timescale during working memory and cognitive control, and generates new experimentally testable hypotheses for future experiments.
Affiliation(s)
- Xiaohe Xue
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
7
Zhou S, Seay M, Taxidis J, Golshani P, Buonomano DV. Multiplexing working memory and time in the trajectories of neural networks. Nat Hum Behav 2023; 7:1170-1184. [PMID: 37081099] [PMCID: PMC10913811] [DOI: 10.1038/s41562-023-01592-y]
Abstract
Working memory (WM) and timing are generally considered distinct cognitive functions, but similar neural signatures have been implicated in both. To explore the hypothesis that WM and timing may rely on shared neural mechanisms, we used psychophysical tasks that contained either task-irrelevant timing or WM components. In both cases, the task-irrelevant component influenced performance. We then developed recurrent neural network (RNN) simulations that revealed that cue-specific neural sequences, which multiplexed WM and time, emerged as the dominant regime that captured the behavioural findings. During training, RNN dynamics transitioned from low-dimensional ramps to high-dimensional neural sequences, and depending on task requirements, steady-state or ramping activity was also observed. Analysis of RNN structure revealed that neural sequences relied primarily on inhibitory connections, and could survive the deletion of all excitatory-to-excitatory connections. Our results indicate that in some instances WM is encoded in time-varying neural activity because of the importance of predicting when WM will be used.
Affiliation(s)
- Shanglin Zhou
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Michael Seay
- Department of Psychology, University of California, Los Angeles, CA, USA
- Jiannis Taxidis
- Program in Neurosciences and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Peyman Golshani
- Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Integrative Center for Learning and Memory, Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, USA
- UCLA Semel Institute for Neuroscience and Behavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- West Los Angeles VA Medical Center, Los Angeles, CA, USA
- Dean V Buonomano
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA.
- Department of Psychology, University of California, Los Angeles, CA, USA.
- Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA.
8
Rozells J, Gavornik JP. Optogenetic manipulation of inhibitory interneurons can be used to validate a model of spatiotemporal sequence learning. Front Comput Neurosci 2023; 17:1198128. [PMID: 37362060] [PMCID: PMC10288026] [DOI: 10.3389/fncom.2023.1198128]
Abstract
The brain uses temporal information to link discrete events into memory structures supporting recognition, prediction, and a wide variety of complex behaviors. It is still an open question how experience-dependent synaptic plasticity creates memories including temporal and ordinal information. Various models have been proposed to explain how this could work, but these are often difficult to validate in a living brain. A recent model developed to explain sequence learning in the visual cortex encodes intervals in recurrent excitatory synapses and uses a learned offset between excitation and inhibition to generate precisely timed "messenger" cells that signal the end of an instance of time. This mechanism suggests that the recall of stored temporal intervals should be particularly sensitive to the activity of inhibitory interneurons that can be easily targeted in vivo with standard optogenetic tools. In this work we examined how simulated optogenetic manipulations of inhibitory cells modifies temporal learning and recall based on these mechanisms. We show that disinhibition and excess inhibition during learning or testing cause characteristic errors in recalled timing that could be used to validate the model in vivo using either physiological or behavioral measurements.
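A drastically simplified caricature of the readout described above (the exponential decay, the threshold mechanism, and all parameters are our own assumptions, not the model's actual circuitry): recurrent excitation decays after the cue, and a "messenger" signal marks the end of the stored interval when excitation first drops below the inhibition level, so that excess inhibition shortens the recalled interval.

```python
import numpy as np

def recall_time(g_inh=0.3, tau=1.0, dt=0.001, t_max=10.0):
    """Toy recall: excitation decays as exp(-t/tau); the messenger
    fires at the first time step where excitation < inhibition level,
    which is the recalled end of the interval."""
    t = np.arange(0.0, t_max, dt)
    excitation = np.exp(-t / tau)
    below = np.nonzero(excitation < g_inh)[0]
    return float(t[below[0]])
```

Raising the inhibition level moves the crossing earlier, the direction of "characteristic error" one would probe with simulated excess inhibition in such a scheme.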
Affiliation(s)
- Jeffrey P. Gavornik
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA, United States
9
Cognitive and plastic recurrent neural network clock model for the judgment of time and its variations. Sci Rep 2023; 13:3852. [PMID: 36890223] [PMCID: PMC9995505] [DOI: 10.1038/s41598-023-30894-4]
Abstract
The aim of this study in the field of computational neuroscience was to simulate and predict inter-individual variability in time judgments with different neuropsychological properties. We propose and test a Simple Recurrent Neural Network-based clock model that accounts for inter-individual variability in time judgment by adding four new components to the clock system: the first relates to the plasticity of the neural system, the second to the attention allocated to time, the third to the memory of durations, and the fourth to the learning of durations by iteration. A simulation with this model explored its fit with participants' time estimates in a temporal reproduction task undertaken by both children and adults, whose varied cognitive abilities were assessed with neuropsychological tests. The simulation successfully predicted 90% of temporal errors. Our Cognitive and Plastic RNN-Clock model (CP-RNN-Clock), which takes into account the interference arising from a clock system grounded in cognition, was thus validated.
10
Akdoğan B, Wanar A, Gersten BK, Gallistel CR, Balsam PD. Temporal encoding: Relative and absolute representations of time guide behavior. J Exp Psychol Anim Learn Cogn 2023; 49:46-61. [PMID: 36795422] [PMCID: PMC10472319] [DOI: 10.1037/xan0000345]
Abstract
Temporal information processing is critical for adaptive behavior and goal-directed action. It is thus crucial to understand how the temporal distance between behaviorally relevant events is encoded to guide behavior. However, research on temporal representations has yielded mixed findings as to whether organisms utilize relative versus absolute judgments of time intervals. To address this fundamental question about the timing mechanism, we tested mice in a duration discrimination procedure in which they learned to correctly categorize tones of different durations as short or long. After being trained on a pair of target intervals, the mice were transferred to conditions in which cue durations and corresponding response locations were systematically manipulated so that either the relative or the absolute mapping remained constant. The findings indicate that transfer occurred most readily when the relative relationships of durations and response locations were preserved. In contrast, when subjects had to re-map these relative relations, even when positive transfer initially occurred based on absolute mappings, their temporal discrimination performance was impaired, and they required extensive training to re-establish temporal control. These results demonstrate that mice can represent experienced durations both as having a certain magnitude (an absolute representation) and as being the shorter or longer of two durations (an ordinal relation to other cue durations), with relational control having a more enduring influence on temporal discriminations.
Affiliation(s)
- Başak Akdoğan
- Department of Psychology, Columbia University
- New York State Psychiatric Institute
- Peter D Balsam
- Department of Psychology, Columbia University
- New York State Psychiatric Institute
- Department of Psychology, Barnard College
11
Gallistel CR, Johansson F, Jirenhed DA, Rasmussen A, Ricci M, Hesslow G. Quantitative properties of the creation and activation of a cell-intrinsic duration-encoding engram. Front Comput Neurosci 2022; 16:1019812. [PMID: 36405788] [PMCID: PMC9669310] [DOI: 10.3389/fncom.2022.1019812]
Abstract
The engram encoding the interval between the conditional stimulus (CS) and the unconditional stimulus (US) in eyeblink conditioning resides within a small population of cerebellar Purkinje cells. CSs activate this engram to produce a pause in the spontaneous firing rate of the cell, which times the CS-conditional blink. We developed a Bayesian algorithm that finds pause onsets and offsets in the records from individual CS-alone trials. We find that the pause consists of a single unusually long interspike interval. Its onset and offset latencies and their trial-to-trial variability are proportional to the CS-US interval. The coefficients of variation (CoV = σ/μ) are comparable to the CoVs for the conditional eyeblink. The average trial-to-trial correlation between the onset latencies and the offset latencies is close to 0, implying that the onsets and offsets are mediated by two stochastically independent readings of the engram. The onset of the pause is step-like; there is no decline in firing rate between the onset of the CS and the onset of the pause. A single presynaptic spike volley suffices to trigger the reading of the engram, and the pause parameters are unaffected by subsequent volleys. The Fano factors for trial-to-trial variations in the distribution of interspike intervals within the intertrial intervals indicate pronounced non-stationarity in the endogenous spontaneous spiking rate, on which the CS-triggered firing pause supervenes. These properties of the spontaneous firing and of the engram readout may prove useful in finding the cell-intrinsic, molecular-level structure that encodes the CS-US interval.
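The detection step can be caricatured without the Bayesian machinery: since the abstract reports that the pause consists of a single unusually long interspike interval, a simplified stand-in (an assumption-laden approximation, not the paper's algorithm) is to take the longest ISI in each trial and then compute the CoV of the resulting latencies across trials.

```python
import numpy as np

def detect_pause(spike_times):
    """Locate the pause in one trial as the longest interspike interval;
    returns its (onset, offset) latencies."""
    st = np.sort(np.asarray(spike_times, dtype=float))
    isis = np.diff(st)
    k = int(np.argmax(isis))
    return float(st[k]), float(st[k + 1])

def cov(latencies):
    """Coefficient of variation (CoV = sigma / mu) across trials."""
    x = np.asarray(latencies, dtype=float)
    return float(x.std() / x.mean())
```

On real data this heuristic would need a guard against long ISIs in the spontaneous firing, which is exactly what a Bayesian onset/offset finder handles more gracefully.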
Affiliation(s)
- Fredrik Johansson
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
- Dan-Anders Jirenhed
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
- Anders Rasmussen
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
- Matthew Ricci
- Carney Institute for Brain Sciences, Brown University, Providence, RI, United States
- Germund Hesslow
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
12
Basgol H, Ayhan I, Ugur E. Time Perception: A Review on Psychological, Computational, and Robotic Models. IEEE Trans Cogn Dev Syst 2022. [DOI: 10.1109/tcds.2021.3059045]
Affiliation(s)
- Hamit Basgol
- Department of Cognitive Science, Bogazici University, Istanbul, Turkey
- Inci Ayhan
- Department of Psychology, Bogazici University, Istanbul, Turkey
- Emre Ugur
- Department of Computer Engineering, Bogazici University, Istanbul, Turkey
13
Li K, Wang J, Hu Z, Deng B, Yu H. Gating attractor dynamics of frontal cortex under acupuncture via recurrent neural network. IEEE J Biomed Health Inform 2022; 26:3836-3847. [PMID: 35290193] [DOI: 10.1109/jbhi.2022.3158963]
Abstract
Acupuncture can regulate the functions of the human body and improve cognition. However, the mechanism of acupuncture manipulations remains unclear. Here, we hypothesize that the frontal cortex plays a gating role in the information routing of the brain network under acupuncture. To that end, the gating effect of the frontal cortex under acupuncture is analyzed in combination with EEG data from acupuncture at the Zusanli acupoint. In addition, a recurrent neural network (RNN) is used to reproduce the dynamics of the frontal cortex under the normal state and the acupuncture state. From a low-dimensional view, it is shown that the brain network under the acupuncture state can exhibit stable attractor-cycle dynamics, which may explain the regulatory effect of acupuncture. Comparing different manipulations, we find that the attractor of the low-dimensional trajectory varies with the frequency of acupuncture. Moreover, a strip-shaped gated band of neural dynamics is found by changing the stimulation frequency and the excitatory-inhibitory balance of the network. The attractor state is found to migrate within the gating area under different stimulation frequencies, and the probability of attractor migration differs across acupuncture manipulations. This reverse engineering of the brain network indicates that the differences among acupuncture manipulations arise from interaction and separation, in the neural activity space, between attractors that encode acupuncture function. Consequently, our results may aid the quantitative analysis of acupuncture and benefit the clinical guidance of acupuncture clinicians.
14
Encoding time in neural dynamic regimes with distinct computational tradeoffs. PLoS Comput Biol 2022; 18:e1009271. [PMID: 35239644] [PMCID: PMC8893702] [DOI: 10.1371/journal.pcbi.1009271]
Abstract
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements, including the need to exhibit temporal scaling, generalize to novel contexts, or be robust to noise. It is not known how neural circuits can encode time and satisfy these distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework, we quantified whether RNNs encoded two intervals using any of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for either generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons, and that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.
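The scaling-versus-absolute distinction above can be made concrete with a toy quantification (an illustration under our own assumptions, not the paper's actual framework): compare a short-interval trajectory against a time-rescaled versus a truncated version of the long-interval trajectory.

```python
import numpy as np

def scaling_similarity(traj_short, traj_long):
    """Scaling code: the long trajectory should match the short one
    after linear time-rescaling to the same number of samples."""
    n = len(traj_short)
    rescaled = np.interp(np.linspace(0.0, 1.0, n),
                         np.linspace(0.0, 1.0, len(traj_long)), traj_long)
    return float(np.corrcoef(traj_short, rescaled)[0, 1])

def absolute_similarity(traj_short, traj_long):
    """Absolute code: the long trajectory should simply contain the
    short one as its initial segment, with no rescaling."""
    n = len(traj_short)
    return float(np.corrcoef(traj_short, traj_long[:n])[0, 1])
```

Here each trajectory is a single 1-D trace (e.g., one unit or one principal component); a population version would apply the same comparison across dimensions.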
15
Rajakumar A, Rinzel J, Chen ZS. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput 2021; 33:2603-2645. [PMID: 34530451] [PMCID: PMC8750453] [DOI: 10.1162/neco_a_01418]
Abstract
Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle helps elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was sufficient to generate a limit cycle attractor. We further examined the stability of the dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
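One common way to impose Dale's principle on a trainable recurrent weight matrix, sketched here as an illustration (not the authors' implementation; the 4:1 split and the rectify-then-sign scheme are assumptions), is to rectify an unconstrained parameter and fix each column's sign by presynaptic cell type:

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20               # assumed 4:1 excitatory/inhibitory split
n = n_exc + n_inh

w_raw = rng.normal(scale=0.1, size=(n, n))   # unconstrained parameter

# Dale's principle: a column's sign is fixed by its presynaptic cell
# type, so rectify the parameter and apply the sign column-wise.
signs = np.concatenate([np.ones(n_exc), -np.ones(n_inh)])
w_eff = np.maximum(w_raw, 0.0) * signs[None, :]
```

In gradient-based training, `w_raw` is the trainable parameter and `w_eff` is what enters the network dynamics, so every excitatory column stays non-negative and every inhibitory column stays non-positive throughout learning.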
Affiliation(s)
- Alfred Rajakumar
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
- John Rinzel
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA
- Zhe S Chen
- Department of Psychiatry and Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA
16
Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. [PMID: 33764970 PMCID: PMC8023498 DOI: 10.1371/journal.pcbi.1008866] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 04/06/2021] [Accepted: 03/08/2021] [Indexed: 11/17/2022] Open
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: individual elements that develop on short time scales (motifs) are combined into longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be exploited for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, neuronal-network models of temporal learning have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
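The motif/syntax decomposition can be sketched in a few lines (purely illustrative; the motif arrays and the `compose` helper are hypothetical): the fast time scale lives in the motifs, the slow time scale in the syntax that strings them together:

```python
import numpy as np

# Hypothetical motifs: short activity snippets on the fast time scale.
motifs = {
    "A": np.array([1, 0, 0, 1]),
    "B": np.array([0, 1, 1, 0]),
    "C": np.array([1, 1, 0, 0]),
}

def compose(syntax, motifs):
    """The slow time scale (syntax) strings fast-time-scale motifs into
    one long sequence; the two levels can be learned independently."""
    return np.concatenate([motifs[s] for s in syntax])

song = compose(["A", "B", "A", "C"], motifs)
```

Because the motif dictionary and the syntax list are separate objects, relearning a single motif or reordering the syntax leaves the other level untouched, which is the source of the flexibility the abstract describes.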
Affiliation(s)
- Amadeus Maes
- Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
17
Khanna P, Totten D, Novik L, Roberts J, Morecraft RJ, Ganguly K. Low-frequency stimulation enhances ensemble co-firing and dexterity after stroke. Cell 2021; 184:912-930.e20. [PMID: 33571430 DOI: 10.1016/j.cell.2021.01.023] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Revised: 09/08/2020] [Accepted: 01/15/2021] [Indexed: 12/31/2022]
Abstract
Electrical stimulation is a promising tool for modulating brain networks. However, it is unclear how stimulation interacts with neural patterns underlying behavior. Specifically, how might external stimulation that is not sensitive to the state of ongoing neural dynamics reliably augment neural processing and improve function? Here, we tested how low-frequency epidural alternating current stimulation (ACS) in non-human primates recovering from stroke interacted with task-related activity in perilesional cortex and affected grasping. We found that ACS increased co-firing within task-related ensembles and improved dexterity. Using a neural network model, we found that simulated ACS drove ensemble co-firing and enhanced propagation of neural activity through parts of the network with impaired connectivity, suggesting a mechanism to link increased co-firing to enhanced dexterity. Together, our results demonstrate that ACS restores neural processing in impaired networks and improves dexterity following stroke. More broadly, these results demonstrate approaches to optimize stimulation to target neural dynamics.
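The co-firing effect has a simple statistical analogue (an illustrative toy, not the study's analysis; the rates, modulation depth, and drive frequency are arbitrary): adding a common low-frequency rate modulation to otherwise independent spike trains raises their mean pairwise correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, dt = 12, 0.01
t = np.arange(0.0, 50.0, dt)

def spike_trains(p):
    """Binary spike trains (cells x bins) drawn independently per cell
    from a per-bin spike probability shared by all cells."""
    return (rng.random((n_cells, t.size)) < p).astype(float)

def mean_pairwise_corr(spikes):
    """Average correlation over all distinct cell pairs."""
    c = np.corrcoef(spikes)
    return float(c[np.triu_indices(n_cells, k=1)].mean())

p_base = np.full(t.size, 0.2)                            # independent firing
p_acs = 0.2 * (1.0 + 0.8 * np.sin(2 * np.pi * 3.0 * t))  # common 3 Hz drive

corr_off = mean_pairwise_corr(spike_trains(p_base))
corr_on = mean_pairwise_corr(spike_trains(p_acs))
```

With the common sinusoidal drive, the shared rate fluctuation induces positive covariance between every pair of cells, so `corr_on` exceeds `corr_off`; this is only the statistical skeleton of the co-firing measure, not a model of the stimulation itself.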
Affiliation(s)
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA; California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Douglas Totten
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA; California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Lisa Novik
- California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Jeffrey Roberts
- California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Robert J Morecraft
- Laboratory of Neurological Sciences, Division of Basic Biomedical Sciences, Sanford School of Medicine, The University of South Dakota, Vermillion, SD 57069, USA
- Karunesh Ganguly
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA; California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
18
Nápoles G, Jastrzębska A, Salgueiro Y. Pattern classification with Evolving Long-term Cognitive Networks. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.08.058] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
19
Li W, Li M, Qiao J, Guo X. A feature clustering-based adaptive modular neural network for nonlinear system modeling. ISA TRANSACTIONS 2020; 100:185-197. [PMID: 31767196 DOI: 10.1016/j.isatra.2019.11.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 08/27/2019] [Accepted: 11/08/2019] [Indexed: 06/10/2023]
Abstract
To improve the performance of nonlinear system modeling, this study proposes a feature clustering-based adaptive modular neural network (FC-AMNN) that mimics the information-processing mechanism of the human brain, in which different information is processed in parallel by different modules. First, features are clustered using an adaptive feature clustering algorithm, and the number of modules in the FC-AMNN is determined automatically by the number of feature clusters. The features in each cluster are then allocated to the corresponding module. A self-constructive RBF neural network based on an Error Correction algorithm is adopted as the subnetwork for learning the allocated features. All modules work in parallel and are finally integrated using a Bayesian method to obtain the output. To demonstrate the effectiveness of the proposed model, the FC-AMNN is tested on several UCI benchmark problems as well as a practical problem in the wastewater treatment process. The experimental results show that the FC-AMNN achieves better generalization performance and more accurate results for nonlinear system modeling than other modular neural networks.
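A minimal sketch of the module-count idea (not the FC-AMNN algorithm itself; the greedy threshold rule below is a stand-in for the paper's adaptive clustering): correlated features land in the same cluster, and the number of clusters fixes the number of modules:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=(n, 6))
# Make features 0/1 and 2/3 strongly correlated (two latent factors).
x[:, 1] = x[:, 0] + 0.1 * rng.normal(size=n)
x[:, 3] = x[:, 2] + 0.1 * rng.normal(size=n)

def cluster_features(x, threshold=0.8):
    """Greedy correlation clustering: each unassigned feature seeds a
    cluster and absorbs all features correlated above the threshold.
    The number of clusters (= number of modules) falls out automatically."""
    corr = np.abs(np.corrcoef(x, rowvar=False))
    clusters, assigned = [], set()
    for i in range(x.shape[1]):
        if i in assigned:
            continue
        members = [j for j in range(x.shape[1])
                   if j not in assigned and corr[i, j] > threshold]
        clusters.append(members)
        assigned.update(members)
    return clusters

clusters = cluster_features(x)   # one subnetwork would be built per cluster
```

Each cluster's features would then be routed to its own subnetwork, with the module outputs combined downstream; that combination step (Bayesian in the paper) is omitted here.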
Affiliation(s)
- Wenjing Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Meng Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Junfei Qiao
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Xin Guo
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
20
Abstract
Perceiving, maintaining, and using time intervals in working memory are crucial for animals to anticipate events and act at the right time in an ever-changing world. Here, we systematically study the underlying neural mechanisms by training recurrent neural networks to perform temporal tasks, alone or in combination with spatial information processing and decision making. We found that neural networks perceive time through state evolution along stereotypical trajectories and produce time intervals by scaling evolution speed. Temporal and nontemporal information is jointly coded in a way that facilitates decoding generalizability. We also identified potential sources for the temporal signals observed in nontiming tasks. Our study reveals the computational principles behind a number of experimental phenomena and provides several predictions.

To maximize future rewards in this ever-changing world, animals must discover the temporal structure of stimuli and then anticipate or act at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems through supervised training of recurrent neural network models. We found that networks perceive elapsed time through state evolution along stereotypical trajectories, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and nontemporal information is coded in mutually orthogonal subspaces, and the state trajectories over time for different nontemporal information are quasi-parallel and isomorphic. This coding geometry facilitates the decoding generalizability of temporal and nontemporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit each other depending on whether their preferences for nontemporal information are similar. We identified four factors that facilitate strong temporal signals in nontiming tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing; it is supported by, and makes predictions about, a number of experimental phenomena.
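The speed-scaling mechanism described above can be sketched as follows (illustrative only; `produce_interval` is a hypothetical helper): a fixed trajectory traversed at a speed inversely proportional to the target interval yields the desired produced duration:

```python
def produce_interval(target_s, dt=0.001):
    """Move along a fixed trajectory (phase 0 -> 1) at a speed scaled
    inversely with the target interval; the elapsed time at arrival is
    the produced interval."""
    speed = 1.0 / target_s          # phase units per second
    phase, elapsed = 0.0, 0.0
    while phase < 1.0:
        phase += speed * dt
        elapsed += dt
    return elapsed

t_short = produce_interval(0.5)     # ~0.5 s produced
t_long = produce_interval(2.0)      # ~2.0 s produced, same trajectory
```

The key property is that both intervals traverse the same sequence of states; only the evolution speed differs, which is why a single trajectory can support a continuum of produced intervals.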
21
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853 PMCID: PMC7028299 DOI: 10.1371/journal.pcbi.1007606] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/18/2020] [Accepted: 12/13/2019] [Indexed: 12/15/2022] Open
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve, and the same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on behaviourally relevant time scales, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
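A minimal sketch of the driver/read-out scheme (not the spiking implementation; the one-hot clock and the Hebbian outer-product rule are simplifying assumptions): a recurrent "clock" discretizes time, and Hebbian weights map each time bin to the target pattern:

```python
import numpy as np

n_time, n_out = 20, 5
clock = np.eye(n_time)      # driver network: one-hot state per time bin

rng = np.random.default_rng(3)
target = (rng.random((n_out, n_time)) < 0.3).astype(float)  # target pattern

# Hebbian rule: strengthen a read-out weight whenever the read-out
# neuron should be active while the corresponding clock unit fires.
w = target @ clock.T

# Replay: driving the read-out with the clock reproduces the pattern.
replay = w @ clock
```

Because the clock states are non-overlapping, the Hebbian outer products do not interfere, so several different `target` patterns could be stored in separate read-out populations driven by the same clock.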
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
22
Raphan T, Dorokhin E, Delamater AR. Modeling Interval Timing by Recurrent Neural Nets. Front Integr Neurosci 2019; 13:46. [PMID: 31555104 PMCID: PMC6724642 DOI: 10.3389/fnint.2019.00046] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Accepted: 08/07/2019] [Indexed: 11/19/2022] Open
Abstract
The purpose of this study was to take a new approach to showing how the central nervous system might encode time at the supra-second level using recurrent neural nets (RNNs). This approach utilizes units with delayed feedback, whose feedback weight determines the temporal properties of specific neurons in the network architecture. When these feedback neurons are coupled, they form a multilayered dynamical system that can be used to model temporal responses to steps of input in multidimensional systems. The timing network was implemented using separate recurrent “Go” and “No-Go” neural processing units to process an individual stimulus indicating the time of reward availability. Outputs from these distinct units on each time step are converted to a pulse reflecting a weighted sum of the separate Go and No-Go signals. This output pulse then drives an integrator unit, whose feedback and input weights shape the pulse distribution. The system was used to model empirical data from rodents performing an instrumental “peak interval timing” task with two stimuli, Tone and Flash. For each stimulus, reward availability was signaled after a different delay from stimulus onset during training. Rodent performance was assessed on non-rewarded trials, following training, with each stimulus tested individually and in a simultaneous stimulus compound. The weights in the Go/No-Go network were trained using experimental data showing the mean distribution of bar-press rates across an 80 s period in which a tone stimulus signaled reward after 5 s and a flash stimulus after 30 s from stimulus onset. Different Go/No-Go systems were used for each stimulus, but the weighted output of each fed into a final recurrent integrator unit whose weights were unmodifiable. The model was implemented in MATLAB, and MATLAB's machine learning tools were used to train the network on the data from non-rewarded trials. The neural net output accurately fit the temporal distribution of tone- and flash-initiated bar-press data. Furthermore, a “Temporal Averaging” effect was obtained when the flash and tone stimuli were combined. These results indicated that tone and flash responses were not superposed as in a linear system; rather, a non-linearity mediated the interaction between tone and flash. To achieve an accurate fit to the empirical averaging data, it was necessary to implement non-linear “saliency functions” that limited the output signal of each stimulus to the final integrator when the other was co-present. The model suggests that the central nervous system encodes timing as a dynamical system whose temporal properties are embedded in the connection weights of the system. In this way, event timing is coded much as in other sensory-motor systems, such as the vestibulo-ocular and optokinetic systems, which combine vestibular and visual inputs to generate the temporal aspects of compensatory eye movements.
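The delayed-feedback unit at the heart of this class of model can be sketched as a discrete-time leaky accumulator (illustrative; the weights are arbitrary): the feedback weight alone sets how long a pulse persists:

```python
import numpy as np

def feedback_unit(inputs, w_fb):
    """Discrete-time unit with delayed self-feedback:
    y[t] = w_fb * y[t-1] + x[t]; w_fb sets the effective time constant."""
    y = np.zeros(len(inputs))
    for i, x in enumerate(inputs):
        y[i] = (w_fb * y[i - 1] if i > 0 else 0.0) + x
    return y

pulse = np.zeros(100)
pulse[0] = 1.0
fast = feedback_unit(pulse, 0.80)   # response decays within tens of steps
slow = feedback_unit(pulse, 0.99)   # response persists much longer
```

Coupling several such units, and feeding their weighted outputs into a final integrator, is the structural idea the abstract describes; here only the single-unit temporal property is shown.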
Affiliation(s)
- Theodore Raphan
- Institute for Neural and Intelligent Systems, Department of Computer and Information Science, Brooklyn College of City University of New York, Brooklyn, NY, United States; Ph.D. Program in Computer Science, Graduate Center of City University of New York, New York, NY, United States; Ph.D. Program in Psychology and Neuroscience, Graduate Center of City University of New York, New York, NY, United States
- Eugene Dorokhin
- Institute for Neural and Intelligent Systems, Department of Computer and Information Science, Brooklyn College of City University of New York, Brooklyn, NY, United States
- Andrew R Delamater
- Ph.D. Program in Psychology and Neuroscience, Graduate Center of City University of New York, New York, NY, United States; Department of Psychology, Brooklyn College of City University of New York, Brooklyn, NY, United States
23
A model for the peak-interval task based on neural oscillation-delimited states. Behav Processes 2019; 168:103941. [PMID: 31550668 DOI: 10.1016/j.beproc.2019.103941] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2019] [Revised: 08/16/2019] [Accepted: 08/23/2019] [Indexed: 11/24/2022]
Abstract
Specific mechanisms underlying how the brain keeps track of time are largely unknown. Several existing computational models of timing reproduce behavioral results obtained with experimental psychophysical tasks, but only a few tackle the underlying biological mechanisms, such as the synchronized neural activity that occurs throughout brain areas. In this paper, we introduce a model for the peak-interval task based on neuronal network properties. We consider that Local Field Potential (LFP) oscillation cycles specify a sequence of states, represented as neuronal ensembles. Repeated presentation of time intervals during training reinforces the connections of specific ensembles to downstream networks - sets of neurons connected to the sequence of states. Later, during the peak-interval procedure, these downstream networks are reactivated by previously experienced neuronal ensembles, triggering behavioral responses at the learned time intervals. The model reproduces experimental response patterns from individual rats in the peak-interval procedure, satisfying relevant properties such as the Weber law. Finally, we provide a biological interpretation of the parameters of the model.
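The core idea, oscillation cycles delimiting a sequence of states, one of which is tagged at the reinforced time, can be sketched as follows (a toy stand-in for the model, with arbitrary oscillation frequency and reward time):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
lfp = np.sin(2 * np.pi * 8.0 * t)      # simulated 8 Hz LFP oscillation

# Each oscillation cycle delimits one state: count upward zero
# crossings to index the state active at every time step.
rising = (lfp[:-1] < 0) & (lfp[1:] >= 0)
state = np.cumsum(np.concatenate([[0], rising.astype(int)]))

# "Training" tags the state active when reinforcement arrives; a
# response is triggered whenever the tagged state becomes active.
reward_time = 3.0
learned_state = state[int(reward_time / dt)]
response_onset = float(t[state == learned_state][0])
```

The temporal resolution of the response is bounded by one oscillation cycle (here 125 ms), which is the sense in which LFP cycles discretize time in this family of models.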
24
Gámez J, Mendoza G, Prado L, Betancourt A, Merchant H. The amplitude in periodic neural state trajectories underlies the tempo of rhythmic tapping. PLoS Biol 2019; 17:e3000054. [PMID: 30958818 PMCID: PMC6472824 DOI: 10.1371/journal.pbio.3000054] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 04/18/2019] [Accepted: 03/19/2019] [Indexed: 01/03/2023] Open
Abstract
Our motor commands can be exquisitely timed according to the demands of the environment, and the ability to generate rhythms of different tempos is a hallmark of musical cognition. Yet the neuronal underpinnings of rhythmic tapping remain elusive. Here, we found that the activity of hundreds of neurons in the primate medial premotor cortex (MPC; pre-supplementary motor area [preSMA] and supplementary motor area [SMA]) shows a strong periodic pattern that becomes evident when their responses are projected into a state space using dimensionality reduction. We show that different tapping tempos are encoded by circular trajectories that travel at a constant speed but with different radii, and that this neuronal code is highly resilient to the number of participating neurons. Crucially, the changes in the amplitude of the oscillatory dynamics in neuronal state space are a signature of duration encoding during rhythmic timing, regardless of whether timing is guided by an external metronome or internally controlled, and are not the result of repetitive motor commands. This dynamic state signal predicted the duration of the rhythmically produced intervals on a trial-by-trial basis. Furthermore, the increase in the variability of the neural trajectories accounted for the scalar property, a hallmark of temporal processing across tasks and species. Finally, we found that the interval-dependent increase in the radius of the periodic neural trajectories results from a larger number of neurons being engaged in the production of longer intervals. Our results support the notion that rhythmic timing during tapping is encoded in the radial curvature of periodic MPC neural population trajectories.
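The constant-speed/variable-radius geometry can be reproduced in a toy state space (illustrative; `periodic_trajectory` and its parameters are hypothetical, not fit to the recordings): for a fixed traversal speed, the radius must grow linearly with the produced period:

```python
import numpy as np

def periodic_trajectory(period, speed=2.0, dt=0.001):
    """Circular state-space orbit traversed at a fixed tangential speed;
    producing a longer period therefore forces a larger radius."""
    radius = speed * period / (2 * np.pi)
    t = np.arange(0.0, period, dt)
    angle = (speed / radius) * t           # angular velocity = speed / radius
    return radius, np.column_stack([radius * np.cos(angle),
                                    radius * np.sin(angle)])

r_fast, traj_fast = periodic_trajectory(0.45)   # shorter tapping tempo
r_slow, traj_slow = periodic_trajectory(0.85)   # longer tapping tempo
```

Since `radius = speed * period / (2π)`, the radii scale exactly with the produced intervals while the tangential speed stays constant, which is the signature reported in the abstract.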
Affiliation(s)
- Jorge Gámez
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Querétaro, México
- Germán Mendoza
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Querétaro, México
- Luis Prado
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Querétaro, México
- Abraham Betancourt
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Querétaro, México
- Hugo Merchant
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Campus Juriquilla, Querétaro, México
25
The Synaptic Properties of Cells Define the Hallmarks of Interval Timing in a Recurrent Neural Network. J Neurosci 2018; 38:4186-4199. [PMID: 29615484 DOI: 10.1523/jneurosci.2651-17.2018] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Revised: 03/06/2018] [Accepted: 03/11/2018] [Indexed: 11/21/2022] Open
Abstract
Extensive research has described two key features of interval timing. The bias property is associated with accuracy and implies that time is overestimated for short intervals and underestimated for long intervals. The scalar property is linked to precision and states that the variability of interval estimates increases as a function of interval duration. The neural mechanisms behind these properties are not well understood. Here we implemented a recurrent neural network that mimics a cortical ensemble and includes cells that show paired-pulse facilitation and slow inhibitory synaptic currents. The network produces interval-selective responses and reproduces both the bias and scalar properties when a Bayesian decoder reads its activity. Notably, the interval selectivity, timing accuracy, and precision of the network showed complex changes as a function of the decay time constants of the modeled synaptic properties and the level of background activity of the cells. These findings suggest that physiological values of the time constants for paired-pulse facilitation and GABAb, as well as the internal state of the network, determine the bias and scalar properties of interval timing.

SIGNIFICANCE STATEMENT: Timing is a fundamental element of complex behavior, including music and language. Temporal processing in a wide variety of contexts shows two primary features: time estimates exhibit a shift toward the mean (the bias property) and are more variable for longer intervals (the scalar property). We implemented a recurrent neural network that includes long-lasting synaptic currents and that can not only produce interval-selective responses but also follow the bias and scalar properties. Interestingly, only physiological values of the time constants for paired-pulse facilitation and GABAb, together with intermediate background activity within the network, can reproduce these two key features of interval timing.
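The bias property falls out of a simple Bayesian read-out with duration-scaled noise (a generic illustration, not the paper's decoder; the interval grid and Weber fraction are arbitrary): short intervals are overestimated and long ones underestimated:

```python
import numpy as np

intervals = np.arange(400.0, 1001.0, 50.0)   # trained intervals (ms)
weber = 0.15                                 # scalar property: sd grows with t

def bayes_estimate(measured):
    """Posterior mean over the trained intervals, given one measurement
    whose noise scales with duration (flat prior over the grid)."""
    sigma = weber * intervals
    like = np.exp(-0.5 * ((measured - intervals) / sigma) ** 2) / sigma
    post = like / like.sum()
    return float((post * intervals).sum())

short_est = bayes_estimate(450.0)   # pulled up, toward the grid's center
long_est = bayes_estimate(950.0)    # pulled down, toward the grid's center
```

Because the likelihood spreads posterior mass over neighboring trained intervals, the posterior mean regresses toward the center of the interval range, reproducing the overestimation of short and underestimation of long intervals in one mechanism.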