1. Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. PMID: 39756119. DOI: 10.1016/j.neunet.2024.107079.
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
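For orientation, here is a minimal sketch of the kind of chaotic rate network this literature builds on (a generic Sompolinsky-style random network, not code from the review; all parameter values are illustrative): the recurrent gain g controls the transition to self-sustained, irregular activity, with g > 1 giving chaos in large networks.

```python
import numpy as np

# Minimal random rate network; the gain g controls the onset of chaos (illustrative values).
rng = np.random.default_rng(0)
N, g, dt, tau, T = 500, 1.5, 1e-3, 10e-3, 2.0      # g > 1: chaotic regime in large networks

J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights
x = rng.normal(0.0, 0.5, size=N)                          # initial state

rates = []
for _ in range(int(T / dt)):
    r = np.tanh(x)                       # firing rates
    x += dt / tau * (-x + J @ r)         # leaky rate dynamics, no external input
    rates.append(r[:5].copy())           # store a few example units

rates = np.array(rates)
print(rates.std(axis=0))                 # nonzero variability = spontaneous, self-generated activity
```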
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy.
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2. Çatal Y, Keskin K, Wolman A, Klar P, Smith D, Northoff G. Flexibility of intrinsic neural timescales during distinct behavioral states. Commun Biol 2024; 7:1667. PMID: 39702547. DOI: 10.1038/s42003-024-07349-1.
Abstract
Recent neuroimaging studies demonstrate a heterogeneity of timescales prevalent in the brain's ongoing spontaneous activity, labeled intrinsic neural timescales (INT). At the same time, neural timescales also reflect stimulus- or task-related activity. How the INT observed during the brain's spontaneous activity relate to their involvement in task states, including behavior, remains unclear. To address this question, we combined calcium imaging data from spontaneously behaving mice and human electroencephalography (EEG) recorded during rest and task states with computational modeling. We obtained four primary findings: (i) distinct behavioral states can be accurately predicted from INT; (ii) INT become longer during behavioral states than during rest; (iii) the change in INT from rest to task correlates negatively with the variability of INT during rest; and (iv) neural mass modeling shows a key role of recurrent connections in mediating the rest-task change in INT. Extending current findings, our results show that the brain's INT are dynamic, reflecting continuous behavior through a flexible rest-task modulation that is possibly mediated by recurrent connections.
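As a point of reference, one common way to quantify an intrinsic neural timescale is the area under the signal's autocorrelation function up to its first zero crossing; the sketch below implements that generic estimator (it is not the study's specific pipeline, and the AR(1) test signal is only an assumption for the demo).

```python
import numpy as np

def intrinsic_timescale(signal: np.ndarray, dt: float, max_lag: int = 200) -> float:
    """Estimate INT as the area under the autocorrelation function (ACF)
    up to its first zero crossing (a common, generic estimator)."""
    x = signal - signal.mean()
    acf = np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)])
    crossing = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    return acf[:crossing].sum() * dt     # in seconds

# Example: an AR(1) process whose true timescale is set by its decay constant tau.
rng = np.random.default_rng(1)
dt, tau = 0.01, 0.3
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = np.exp(-dt / tau) * x[t - 1] + rng.normal(0, 1)
print(intrinsic_timescale(x, dt))        # roughly on the order of tau
```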
Affiliation(s)
- Yasir Çatal
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada.
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada.
- Kaan Keskin
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Department of Psychiatry, Ege University, Izmir, Turkey
- SoCAT Lab, Ege University, Izmir, Turkey
- Angelika Wolman
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Philipp Klar
- Faculty of Mathematics and Natural Sciences, Institute of Experimental Psychology, Heinrich Heine University of Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- David Smith
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Georg Northoff
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
3. Li Y, Yin W, Wang X, Li J, Zhou S, Ma C, Yuan P, Li B. Stable sequential dynamics in prefrontal cortex represents subjective estimation of time. eLife 2024; 13:RP96603. PMID: 39660591. PMCID: PMC11634065. DOI: 10.7554/elife.96603.
Abstract
Time estimation is an essential prerequisite underlying various cognitive functions. Previous studies identified 'sequential firing' and 'activity ramps' as the primary neuronal activity patterns in the medial prefrontal cortex (mPFC) that could convey information regarding time. However, the relationship between these patterns and timing behavior has not been fully understood. In this study, we utilized in vivo calcium imaging of mPFC in rats performing a timing task. We observed cells that showed selective activation at trial start, end, or during the timing interval. By aligning long-term time-lapse datasets, we discovered that sequential patterns of time coding were stable over weeks, while cells coding for trial start or end showed constant dynamism. Furthermore, with a novel behavioral design that allowed the animal to determine each trial's interval, we were able to demonstrate that real-time adjustments in sequence progression speed closely tracked trial-to-trial interval variations, and that errors in the rats' timing behavior could be attributed primarily to premature ending of the time sequence. Together, our data suggest that sequential activity may be a stable neural substrate that represents time under physiological conditions. Furthermore, our results imply the existence of a unique cell type in the mPFC that participates in the time-related sequences. Future characterization of this cell type could provide important insights into the neural mechanisms of timing and related cognitive functions.
Affiliation(s)
- Yiting Li
- Institute of Biomedical Innovation, Jiangxi Medical College, Nanchang University, Nanchang, China
- Department of Rehabilitation Medicine, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Institute for Translational Brain Research, MOE Frontiers Center for Brain Science, MOE Innovative Center for New Drug Development of Immune Inflammatory Diseases, Fudan University, Shanghai, China
- Wenqu Yin
- Institute of Biomedical Innovation, Jiangxi Medical College, Nanchang University, Nanchang, China
- Xin Wang
- Department of Rehabilitation Medicine, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Institute for Translational Brain Research, MOE Frontiers Center for Brain Science, MOE Innovative Center for New Drug Development of Immune Inflammatory Diseases, Fudan University, Shanghai, China
- Jiawen Li
- Institute of Biomedical Innovation, Jiangxi Medical College, Nanchang University, Nanchang, China
- The Second Clinical Medicine School of Nanchang University, Nanchang, China
- Shanglin Zhou
- State Key Laboratory of Medical Neurobiology, Institute for Translational Brain Research, MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Chaolin Ma
- Institute of Biomedical Innovation, Jiangxi Medical College, Nanchang University, Nanchang, China
- Peng Yuan
- Department of Rehabilitation Medicine, Huashan Hospital, State Key Laboratory of Medical Neurobiology, Institute for Translational Brain Research, MOE Frontiers Center for Brain Science, MOE Innovative Center for New Drug Development of Immune Inflammatory Diseases, Fudan University, Shanghai, China
- Baoming Li
- Institute of Biomedical Innovation, Jiangxi Medical College, Nanchang University, Nanchang, China
- Institute of Brain Science and Department of Physiology, School of Basic Medical Science, Hangzhou Normal University, Hangzhou, China
4. Silva AD, Laje R. Perturbation context in paced finger tapping tunes the error-correction mechanism. Sci Rep 2024; 14:27473. PMID: 39523377. PMCID: PMC11551152. DOI: 10.1038/s41598-024-78786-5.
Abstract
Sensorimotor synchronization (SMS) is the largely human-specific ability to move in sync with a periodic external stimulus, as in keeping pace with music. The most common experimental paradigm to study its largely unknown underlying mechanism is the paced finger-tapping task, in which a participant taps to a periodic sequence of brief stimuli. Unlike a reaction-time task, this task involves temporal prediction: the participant needs to trigger the motor action in advance for the tap and the stimulus to occur simultaneously, and an error-correction mechanism then takes past performance as input to adjust the following prediction. In a different, simpler task, it has been shown that exposure to a distribution of individual temporal intervals creates a "temporal context" that can bias the estimation/production of a single target interval. As temporal estimation and production are also involved in SMS, we asked whether a paced finger-tapping task with period perturbations would show any time-related context effect. In this work we show that a perturbation context can indeed be generated by exposure to period perturbations during paced finger tapping, affecting the shape and size of the resynchronization curve. Response asymmetry is also affected, revealing an interplay between context and intrinsic nonlinearities of the correction mechanism. We conclude that the perturbation context calibrates the underlying error-correction mechanism in SMS.
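To make the error-correction idea concrete, here is a sketch of a textbook first-order linear phase-correction model for paced tapping (not the authors' model); for simplicity the perturbation is modeled as a single phase shift of the stimulus rather than the period change used in the study, and alpha is an assumed correction gain.

```python
import numpy as np

def resynchronization(alpha=0.4, n_taps=40, perturb_at=15, shift=0.05, noise=0.005):
    """Textbook first-order phase correction for paced tapping:
    a step shift of the stimulus onset adds `shift` to the asynchrony once,
    after which the error decays geometrically at rate (1 - alpha) per tap."""
    rng = np.random.default_rng(2)
    asyn = np.zeros(n_taps)                   # tap-minus-stimulus asynchrony
    for n in range(n_taps - 1):
        asyn[n + 1] = (1 - alpha) * asyn[n] + rng.normal(0.0, noise)
        if n + 1 == perturb_at:
            asyn[n + 1] -= shift              # stimulus arrives `shift` s later than predicted
    return asyn                                # the post-perturbation segment is the resync curve

print(resynchronization()[12:25].round(3))     # asynchrony jumps, then recovers tap by tap
```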
Affiliation(s)
- Ariel D Silva
- Sensorimotor Dynamics Lab, Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina
- CONICET, Buenos Aires, Argentina
- Rodrigo Laje
- Sensorimotor Dynamics Lab, Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina.
- CONICET, Buenos Aires, Argentina.
- Departamento de Computación, Universidad de Buenos Aires, Buenos Aires, Argentina.
5. Pang S, Ding S, Peng C, Chen Y. Temporal context modulates cross-modality time discrimination: Electrophysiological evidence for supramodal temporal representation. Cortex 2024; 179:143-156. PMID: 39173580. DOI: 10.1016/j.cortex.2024.07.011.
Abstract
Although the peripheral nervous system lacks a dedicated receptor for time, the brain processes temporal information through different sensory channels. A critical question is whether temporal information from different sensory modalities at different times forms modality-specific representations or is integrated into a common representation in a supramodal manner. Behavioral studies on temporal memory mixing and the central tendency effect have provided evidence for supramodal temporal representations. We aimed to provide electrophysiological evidence for this proposal by employing a cross-modality time discrimination task combined with electroencephalogram (EEG) recordings. The task maintained a fixed auditory standard duration, whereas the visual comparison duration was randomly selected from the short and long ranges, creating two different audio-visual temporal contexts. The behavioral results showed that the point of subjective equality (PSE) in the short context was significantly lower than that in the long context. The EEG results revealed that the amplitude of the contingent negative variation (CNV) in the short context was significantly higher (more negative) than in the long context in the early stage, while it was lower (more positive) in the later stage. These results suggest that the audiovisual temporal context is integrated with the auditory standard duration to generate a subjective time criterion. Compared with the long context, the subjective time criterion in the short context was shorter, resulting in earlier decision-making and an earlier decrease in the CNV. Our study provides electrophysiological evidence that temporal information from different modalities, entering the brain at different times, can form a supramodal temporal representation.
Affiliation(s)
- Shufang Pang
- Key Laboratory of Cognition and Personality (Ministry of Education), Time Psychology Research Center, Center of Studies for Psychology and Social Development, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Shaofan Ding
- Key Laboratory of Cognition and Personality (Ministry of Education), Time Psychology Research Center, Center of Studies for Psychology and Social Development, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Chunhua Peng
- Chongqing Key Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing 402160, China
- Youguo Chen
- Key Laboratory of Cognition and Personality (Ministry of Education), Time Psychology Research Center, Center of Studies for Psychology and Social Development, Faculty of Psychology, Southwest University, Chongqing 400715, China.
6. Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024; 10:eadk7257. PMID: 38701208. DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories-handwritten digits-with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
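A rough illustration of the mechanism being invoked (a generic Tsodyks-Markram-style synapse, not the trained model from the paper): multiplying the STP time constants by a hypothetical neuromodulatory factor stretches or compresses the temporal profile of synaptic efficacy, which is the kind of temporal scaling described above. All parameter values are illustrative.

```python
import numpy as np

def stp_efficacy(spike_times, mod=1.0, U=0.2, tau_d=0.2, tau_f=0.6, dt=1e-3, T=2.0):
    """Tsodyks-Markram-style short-term plasticity.
    `mod` mimics a neuromodulator that scales the STP time constants,
    temporally stretching (mod > 1) or compressing (mod < 1) the efficacy profile."""
    tau_d, tau_f = mod * tau_d, mod * tau_f
    x, u = 1.0, U                           # available resources, release probability
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    eff = np.zeros(int(T / dt))
    for t in range(len(eff)):
        x += dt * (1.0 - x) / tau_d         # recovery of resources (depression variable)
        u += dt * (U - u) / tau_f           # decay of facilitation back to baseline U
        if t in spikes:
            u += U * (1.0 - u)              # facilitation jump on each spike
            eff[t] = u * x                  # efficacy of the transmitted spike
            x -= u * x                      # depression: resources consumed by the spike
        else:
            eff[t] = u * x                  # efficacy a spike arriving now would have
    return eff

train = np.arange(0.1, 1.1, 0.1)            # 10 Hz spike train
fast, slow = stp_efficacy(train, mod=1.0), stp_efficacy(train, mod=2.0)
print(fast[::200].round(3), slow[::200].round(3), sep="\n")   # slow profile is a stretched version
```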
Affiliation(s)
- Shanglin Zhou
- Institute for Translational Brain Research, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
7. Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. PMID: 38918347. DOI: 10.1007/978-3-031-60183-5_5.
Abstract
Converging experimental and computational evidence indicates that, on the scale of seconds, the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. A prototypical example of a neural population clock is the neural sequence, which has been observed across species, brain areas, and behavioral paradigms. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime to perform temporal computations.
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA.
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA.
8. Baykan C, Zhu X, Allenmark F, Shi Z. Influences of temporal order in temporal reproduction. Psychon Bull Rev 2023; 30:2210-2218. PMID: 37291447. PMCID: PMC10728249. DOI: 10.3758/s13423-023-02310-5.
Abstract
Despite the crucial role of complex temporal sequences, such as speech and music, in our everyday lives, our ability to acquire and reproduce these patterns is prone to various contextual biases. In this study, we examined how the temporal order of auditory sequences affects temporal reproduction. Participants were asked to reproduce accelerating, decelerating, or random sequences, each consisting of four intervals, by tapping their fingers. Our results showed that both the reproduced intervals and their variability were influenced by the sequential structure and the order of the intervals. The mean reproduced interval was assimilated toward the first interval of the sequence, with the lowest mean for decelerating and the highest for accelerating sequences. Additionally, the central tendency bias was affected by the volatility and the last interval of the sequence, resulting in a stronger central tendency in the random and decelerating sequences than in the accelerating sequence. Using Bayesian integration between the ensemble mean of the sequence and the individual durations, and considering the perceptual uncertainty associated with the sequential structure and position, we were able to accurately predict the behavioral results. The findings highlight the critical role of the temporal order of a sequence in temporal pattern reproduction, with the first interval exerting greater influence on the mean reproduction, and the volatility and the last interval contributing to the perceptual uncertainty of individual intervals and the central tendency bias.
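The Bayesian-integration account can be illustrated with a standard Gaussian observer (a generic sketch, not the authors' fitted model): the reproduced duration is a precision-weighted average of the noisy measurement and the ensemble mean, so noisier measurements are pulled more strongly toward the mean, which is the central tendency bias.

```python
def bayes_reproduction(sample: float, ensemble_mean: float,
                       sigma_measure: float, sigma_prior: float) -> float:
    """Precision-weighted fusion of a noisy duration measurement with the
    ensemble (prior) mean: the classic account of the central tendency bias."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_measure**2)  # weight on the measurement
    return w * sample + (1.0 - w) * ensemble_mean

# Example: a 0.9 s interval embedded in a context whose mean interval is 0.6 s.
print(bayes_reproduction(0.9, 0.6, sigma_measure=0.10, sigma_prior=0.15))  # pulled toward 0.6
print(bayes_reproduction(0.9, 0.6, sigma_measure=0.20, sigma_prior=0.15))  # noisier measure: stronger pull
```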
Affiliation(s)
- Cemre Baykan
- General and Experimental Psychology, Department of Psychology, Ludwig Maximilian University of Munich, 80802, Munich, Germany.
- Xiuna Zhu
- General and Experimental Psychology, Department of Psychology, Ludwig Maximilian University of Munich, 80802, Munich, Germany
- Fredrik Allenmark
- General and Experimental Psychology, Department of Psychology, Ludwig Maximilian University of Munich, 80802, Munich, Germany
- Zhuanghua Shi
- General and Experimental Psychology, Department of Psychology, Ludwig Maximilian University of Munich, 80802, Munich, Germany
9. Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. PMID: 36640766. PMCID: PMC9992137. DOI: 10.1016/j.neuron.2022.12.016.
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
Affiliation(s)
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France.
10. De Corte BJ, Akdoğan B, Balsam PD. Temporal scaling and computing time in neural circuits: Should we stop watching the clock and look for its gears? Front Behav Neurosci 2022; 16:1022713. PMID: 36570701. PMCID: PMC9773401. DOI: 10.3389/fnbeh.2022.1022713.
Abstract
Timing underlies a variety of functions, from walking to perceiving causality. Neural timing models typically fall into one of two categories-"ramping" and "population-clock" theories. According to ramping models, individual neurons track time by gradually increasing or decreasing their activity as an event approaches. To time different intervals, ramping neurons adjust their slopes, ramping steeply for short intervals and vice versa. In contrast, according to "population-clock" models, multiple neurons track time as a group, and each neuron can fire nonlinearly. As each neuron changes its rate at each point in time, a distinct pattern of activity emerges across the population. To time different intervals, the brain learns the population patterns that coincide with key events. Both model categories have empirical support. However, they often differ in plausibility when applied to certain behavioral effects. Specifically, behavioral data indicate that the timing system has a rich computational capacity, allowing observers to spontaneously compute novel intervals from previously learned ones. In population-clock theories, population patterns map to time arbitrarily, making it difficult to explain how different patterns can be computationally combined. Ramping models are viewed as more plausible, assuming upstream circuits can set the slope of ramping neurons according to a given computation. Critically, recent studies suggest that neurons with nonlinear firing profiles often scale to time different intervals-compressing for shorter intervals and stretching for longer ones. This "temporal scaling" effect has led to a hybrid-theory where, like a population-clock model, population patterns encode time, yet like a ramping neuron adjusting its slope, the speed of each neuron's firing adapts to different intervals. Here, we argue that these "relative" population-clock models are as computationally plausible as ramping theories, viewing population-speed and ramp-slope adjustments as equivalent. Therefore, we view identifying these "speed-control" circuits as a key direction for evaluating how the timing system performs computations. Furthermore, temporal scaling highlights that a key distinction between different neural models is whether they propose an absolute or relative time-representation. However, we note that several behavioral studies suggest the brain processes both scales, cautioning against a dichotomy.
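The claimed equivalence between adjusting a ramp's slope and adjusting a population clock's speed can be seen in a toy example (illustrative only, not from the article): a ramp times an interval T by reaching threshold exactly at T, while a "relative" population clock evaluates fixed tuning curves at the normalized time t/T, so both time a new interval by rescaling a single speed or slope parameter.

```python
import numpy as np

def ramp(t: np.ndarray, T: float) -> np.ndarray:
    """Ramping code: slope 1/T, so the (unit) threshold is reached exactly at time T."""
    return t / T

def population_clock(t: np.ndarray, T: float, n_units: int = 8, width: float = 0.08):
    """Relative population clock: Gaussian 'time fields' tiling normalized time t/T,
    so changing T stretches or compresses every unit's firing profile."""
    centers = np.linspace(0.05, 0.95, n_units)
    phase = (t / T)[:, None]                        # normalized elapsed time
    return np.exp(-0.5 * ((phase - centers) / width) ** 2)

for T in (0.5, 1.0):                                # timing two different intervals
    t = np.linspace(0, T, 6)
    print(f"T={T}: ramp={ramp(t, T).round(2)}, active unit={population_clock(t, T).argmax(1)}")
    # The sequence of active units is identical for both T: pure temporal scaling.
```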
Affiliation(s)
- Benjamin J. De Corte
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Başak Akdoğan
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Peter D. Balsam
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Department of Neuroscience and Behavior, Barnard College, New York, NY, United States
11. Chinoy RB, Tanwar A, Buonomano DV. A Recurrent Neural Network Model Accounts for Both Timing and Working Memory Components of an Interval Discrimination Task. Timing Time Percept 2022. DOI: 10.1163/22134468-bja10058.
Abstract
Interval discrimination is of fundamental importance to many forms of sensory processing, including speech and music. Standard interval discrimination tasks require comparing two intervals separated in time, and thus include both working memory (WM) and timing components. Models of interval discrimination invoke separate circuits for the timing and WM components. Here we examine if, in principle, the same recurrent neural network can implement both. Using human psychophysics, we first explored the role of the WM component by varying the interstimulus delay. Consistent with previous studies, discrimination was significantly worse for a 250 ms delay, compared to 750 and 1500 ms delays, suggesting that the first interval is stably stored in WM for longer delays. We next successfully trained a recurrent neural network (RNN) on the task, demonstrating that the same network can implement both the timing and WM components. Many units in the RNN were tuned to specific intervals during the sensory epoch, and others encoded the first interval during the delay period. Overall, the encoding strategy was consistent with the notion of mixed selectivity. Units generally encoded more interval information during the sensory epoch than in the delay period, reflecting categorical encoding of short versus long in WM, rather than encoding of the specific interval. Our results demonstrate that, in contrast to standard models of interval discrimination that invoke a separate memory module, the same network can, in principle, solve the timing, WM, and comparison components of an interval discrimination task.
Affiliation(s)
- Rehan B. Chinoy
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
- Ashita Tanwar
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
- Dean V. Buonomano
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
12. Zhou S, Buonomano DV. Neural population clocks: Encoding time in dynamic patterns of neural activity. Behav Neurosci 2022; 136:374-382. PMID: 35446093. PMCID: PMC9561006. DOI: 10.1037/bne0000515.
Abstract
The ability to predict and prepare for near- and far-future events is among the most fundamental computations the brain performs. Because of the importance of time for prediction and sensorimotor processing, the brain has evolved multiple mechanisms to tell and encode time across scales ranging from microseconds to days and beyond. Converging experimental and computational data indicate that, on the scale of seconds, timing relies on diverse neural mechanisms distributed across different brain areas. Among the different encoding mechanisms on the scale of seconds, we distinguish between neural population clocks and ramping activity as distinct strategies to encode time. One instance of neural population clocks, neural sequences, represents in some ways an optimal and flexible dynamic regime for the encoding of time. Specifically, neural sequences comprise a high-dimensional representation that can be used by downstream areas to flexibly generate arbitrarily simple and complex output patterns using biologically plausible learning rules. We propose that high-level integration areas may use high-dimensional dynamics such as neural sequences to encode time, providing downstream areas information to build low-dimensional ramp-like activity that can drive movements and temporal expectation.
Affiliation(s)
- Shanglin Zhou
- Department of Neurobiology, University of California, Los Angeles, CA 90095, USA
- Dean V. Buonomano
- Department of Neurobiology, University of California, Los Angeles, CA 90095, USA
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
13. Tsao A, Yousefzadeh SA, Meck WH, Moser MB, Moser EI. The neural bases for timing of durations. Nat Rev Neurosci 2022; 23:646-665. PMID: 36097049. DOI: 10.1038/s41583-022-00623-3.
Abstract
Durations are defined by a beginning and an end, and a major distinction is drawn between durations that start in the present and end in the future ('prospective timing') and durations that start in the past and end either in the past or the present ('retrospective timing'). Different psychological processes are thought to be engaged in each of these cases. The former is thought to engage a clock-like mechanism that accurately tracks the continuing passage of time, whereas the latter is thought to engage a reconstructive process that utilizes both temporal and non-temporal information from the memory of past events. We propose that, from a biological perspective, these two forms of duration 'estimation' are supported by computational processes that are both reliant on population state dynamics but are nevertheless distinct. Prospective timing is effectively carried out in a single step where the ongoing dynamics of population activity directly serve as the computation of duration, whereas retrospective timing is carried out in two steps: the initial generation of population state dynamics through the process of event segmentation and the subsequent computation of duration utilizing the memory of those dynamics.
Affiliation(s)
- Albert Tsao
- Department of Biology, Stanford University, Stanford, CA, USA.
- Warren H Meck
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- May-Britt Moser
- Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway.
14. Yin B, Shi Z, Wang Y, Meck WH. Oscillation/Coincidence-Detection Models of Reward-Related Timing in Corticostriatal Circuits. Timing Time Percept 2022. DOI: 10.1163/22134468-bja10057.
Abstract
The major tenets of beat-frequency/coincidence-detection models of reward-related timing are reviewed in light of recent behavioral and neurobiological findings. This includes the emphasis on a core timing network embedded in the motor system that is comprised of a corticothalamic-basal ganglia circuit. Therein, a central hub provides timing pulses (i.e., predictive signals) to the entire brain, including a set of distributed satellite regions in the cerebellum, cortex, amygdala, and hippocampus that are selectively engaged in timing in a manner that is more dependent upon the specific sensory, behavioral, and contextual requirements of the task. Oscillation/coincidence-detection models also emphasize the importance of a tuned ‘perception’ learning and memory system whereby target durations are detected by striatal networks of medium spiny neurons (MSNs) through the coincidental activation of different neural populations, typically utilizing patterns of oscillatory input from the cortex and thalamus or derivations thereof (e.g., population coding) as a time base. The measure of success of beat-frequency/coincidence-detection accounts, such as the Striatal Beat-Frequency model of reward-related timing (SBF), is their ability to accommodate new experimental findings while maintaining their original framework, thereby making testable experimental predictions concerning diagnosis and treatment of issues related to a variety of dopamine-dependent basal ganglia disorders, including Huntington’s and Parkinson’s disease.
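A bare-bones caricature of the beat-frequency/coincidence-detection idea (illustrative only, far simpler than the full SBF model): cortical oscillators at different, here deliberately incommensurate, frequencies define a phase pattern that is effectively unique within their joint beat period; a striatal unit that stores the pattern present at the reinforced time responds maximally again at that duration. All frequencies and durations are arbitrary choices for the example.

```python
import numpy as np

freqs = np.array([5.3, 7.1, 11.9, 13.7])      # Hz; incommensurate so the pattern does not repeat in the window
dt, T, target = 0.001, 2.0, 1.2               # target = reinforced duration (s)
t = np.arange(0.0, T, dt)

# Represent each oscillator by its phase (cosine and sine components).
phase_vec = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
                       np.sin(2 * np.pi * freqs * t[:, None])])

weights = phase_vec[int(round(target / dt))]   # striatal unit "stores" the pattern active at the target time
match = phase_vec @ weights                    # coincidence detection: similarity to the stored pattern
print(t[match.argmax()])                       # ~1.2 s: the detector responds at the learned duration
```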
Affiliation(s)
- Bin Yin
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- School of Psychology, Fujian Normal University, Fuzhou, 350117, Fujian, China
- Zhuanghua Shi
- Department of Psychology, Ludwig Maximilian University of Munich, 80802 Munich, Germany
- Yaxin Wang
- School of Psychology, Fujian Normal University, Fuzhou, 350117, Fujian, China
- Warren H. Meck
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
15. Zemlianova K, Bose A, Rinzel J. A biophysical counting mechanism for keeping time. Biol Cybern 2022; 116:205-218. PMID: 35031845. DOI: 10.1007/s00422-021-00915-4.
Abstract
The ability to estimate and produce appropriately timed responses is central to many behaviors including speaking, dancing, and playing a musical instrument. A classical framework for estimating or producing a time interval is the pacemaker-accumulator model in which pulses of a pacemaker are counted and compared to a stored representation. However, the neural mechanisms for how these pulses are counted remain an open question. The presence of noise and stochasticity further complicates the picture. We present a biophysical model of how to keep count of a pacemaker in the presence of various forms of stochasticity using a system of bistable Wilson-Cowan units asymmetrically connected in a one-dimensional array; all units receive the same input pulses from a central clock but only one unit is active at any point in time. With each pulse from the clock, the position of the activated unit changes thereby encoding the total number of pulses emitted by the clock. This neural architecture maps the counting problem into the spatial domain, which in turn translates count to a time estimate. We further extend the model to a hierarchical structure to be able to robustly achieve higher counts.
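The core idea, advancing a single active unit along a chain so that the position of activity encodes the pulse count, can be caricatured without the full Wilson-Cowan dynamics; the sketch below is a simplified discrete stand-in under that assumption, not the authors' biophysical model.

```python
import numpy as np

def count_pulses(n_units: int, pulse_times, miss_prob: float = 0.0, seed: int = 0):
    """Discrete caricature of a counting chain: one unit is active at a time,
    and each clock pulse hands activity to the next unit, so the index of the
    active unit encodes how many pulses have occurred (and hence elapsed time)."""
    rng = np.random.default_rng(seed)
    active = 0                                   # index of the currently active unit
    trajectory = [active]
    for _ in pulse_times:
        if rng.random() >= miss_prob:            # optional stochastic pulse failure
            active = min(active + 1, n_units - 1)
        trajectory.append(active)
    return trajectory

pulses = np.arange(10)                           # ten clock pulses
print(count_pulses(20, pulses))                  # active-unit index climbs 0, 1, 2, ..., 10
```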
Affiliation(s)
- Amitabha Bose
- Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, NJ, USA
- John Rinzel
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
16. Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. PMID: 35298779. PMCID: PMC9276910. DOI: 10.1007/s12264-022-00832-x.
Abstract
In contrast to traditional representational perspectives in which the motor cortex is involved in motor control via neuronal preference for kinetics and kinematics, a dynamical system perspective emerging in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and controversy from both empirical and computational points of view. Here, we aim to reconcile the above perspectives, and evaluate their theoretical impact, future direction, and potential applications in brain-machine interfaces.
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
17. Encoding time in neural dynamic regimes with distinct computational tradeoffs. PLoS Comput Biol 2022; 18:e1009271. PMID: 35239644. PMCID: PMC8893702. DOI: 10.1371/journal.pcbi.1009271.
Abstract
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements including the need to exhibit temporal scaling, generalize to novel contexts, or robustness to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework we quantified whether RNNs encoded two intervals using either of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons. We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.

The ability to tell time and anticipate when external events will occur are among the most fundamental computations the brain performs. Converging evidence suggests the brain encodes time through changing patterns of neural activity. Different temporal tasks, however, have distinct computational requirements, such as the need to flexibly scale temporal patterns or generalize to novel inputs. To understand how networks can encode time and satisfy different computational requirements we trained recurrent neural networks (RNNs) on two timing tasks that have previously been used in behavioral studies. Both tasks required producing identically timed output patterns. Using a novel framework to quantify how networks encode different intervals, we found that similar patterns of neural activity (neural sequences) were associated with fundamentally different underlying mechanisms, including the connectivity patterns of the RNNs. Critically, depending on the task the RNNs were trained on, they were better suited for generalization or robustness to noise. Our results predict that similar patterns of neural activity can be produced by distinct RNN configurations, which in turn have fundamentally different computational tradeoffs. Our results also predict that differences in task structure account for some of the experimentally observed variability in how networks encode time.
18. Linear vector models of time perception account for saccade and stimulus novelty interactions. Heliyon 2022; 8:e09036. PMID: 35265767. PMCID: PMC8899236. DOI: 10.1016/j.heliyon.2022.e09036.
Abstract
Various models (e.g., scalar, state-dependent network, and vector models) have been proposed to explain the global aspects of time perception, but they have not been tested against specific visual phenomena like perisaccadic time compression and novel stimulus time dilation. Here, in two separate experiments (N = 31), we tested how the perceived duration of a novel stimulus is influenced by 1) a simultaneous saccade, in combination with 2) a prior series of repeated stimuli in human participants. This yielded a novel behavioral interaction: pre-saccadic stimulus repetition neutralizes perisaccadic time compression. We then tested these results against simulations of the above models. Our data yielded low correlations against scalar model simulations, high but non-specific correlations for our feedforward neural network, and correlations that were both high and specific for a vector model based on identity of objective and subjective time. These results demonstrate the power of global time perception models in explaining disparate empirical phenomena and suggest that subjective time has a similar essence to time's physical vector.
19. Calderon CB, Verguts T, Frank MJ. Thunderstruck: The ACDC model of flexible sequences and rhythms in recurrent neural circuits. PLoS Comput Biol 2022; 18:e1009854. PMID: 35108283. PMCID: PMC8843237. DOI: 10.1371/journal.pcbi.1009854.
Abstract
Adaptive sequential behavior is a hallmark of human cognition. In particular, humans can learn to produce precise spatiotemporal sequences given a certain context. For instance, musicians can not only reproduce learned action sequences in a context-dependent manner, they can also quickly and flexibly reapply them in any desired tempo or rhythm without overwriting previous learning. Existing neural network models fail to account for these properties. We argue that this limitation emerges from the fact that sequence information (i.e., the position of the action) and timing (i.e., the moment of response execution) are typically stored in the same neural network weights. Here, we augment a biologically plausible recurrent neural network of cortical dynamics to include a basal ganglia-thalamic module which uses reinforcement learning to dynamically modulate action. This “associative cluster-dependent chain” (ACDC) model modularly stores sequence and timing information in distinct loci of the network. This feature increases computational power and allows ACDC to display a wide range of temporal properties (e.g., multiple sequences, temporal shifting, rescaling, and compositionality), while still accounting for several behavioral and neurophysiological empirical observations. Finally, we apply this ACDC network to show how it can learn the famous “Thunderstruck” song intro and then flexibly play it in a “bossa nova” rhythm without further training.

How do humans flexibly adapt action sequences? For instance, musicians can learn a song and quickly speed up or slow down the tempo, or even play the song following a completely different rhythm (e.g., a rock song using a bossa nova rhythm). In this work, we build a biologically plausible network of cortico-basal ganglia interactions that explains how this temporal flexibility may emerge in the brain. Crucially, our model factorizes sequence order and action timing, respectively represented in cortical and basal ganglia dynamics. This factorization allows full temporal flexibility, i.e., the timing of a learned action sequence can be recomposed without interfering with the order of the sequence. As such, our model is capable of learning asynchronous action sequences, and of flexibly shifting, rescaling, and recomposing them, while accounting for biological data.
Affiliation(s)
- Cristian Buc Calderon
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Tom Verguts
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Michael J. Frank
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
20. Rajakumar A, Rinzel J, Chen ZS. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput 2021; 33:2603-2645. PMID: 34530451. PMCID: PMC8750453. DOI: 10.1162/neco_a_01418.
Abstract
Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle will help elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated a time-warped input for sequence representation. Interestingly, a learned sequence could repeat periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was adequate to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in the excitatory-inhibitory RNN.
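One common way to impose Dale's principle when training such networks is to write the recurrent weights as a non-negative magnitude matrix multiplied by a fixed diagonal sign matrix (+1 for excitatory, -1 for inhibitory neurons); the sketch below shows that construction as a generic illustration, not the exact parameterization used in the paper.

```python
import numpy as np

def dale_weights(n_exc: int, n_inh: int, seed: int = 0) -> np.ndarray:
    """Recurrent weight matrix obeying Dale's principle:
    W = |M| @ D, where D is a fixed diagonal of +1 (excitatory) and -1 (inhibitory),
    so every neuron's outgoing weights share a single sign."""
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    magnitudes = np.abs(rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n)))  # trainable, kept >= 0
    signs = np.diag([1.0] * n_exc + [-1.0] * n_inh)                       # fixed cell identities
    return magnitudes @ signs

W = dale_weights(80, 20)
print((W[:, :80] >= 0).all(), (W[:, 80:] <= 0).all())   # True True: each column keeps its sign
```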
Affiliation(s)
- Alfred Rajakumar
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
- John Rinzel
- Courant Institute of Mathematical Sciences and Center for Neural Science, New York University, New York, NY 10012, USA.
- Zhe S Chen
- Department of Psychiatry and Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA
21. Meirhaeghe N, Sohn H, Jazayeri M. A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron 2021; 109:2995-3011.e5. PMID: 34534456. PMCID: PMC9737059. DOI: 10.1016/j.neuron.2021.08.025.
Abstract
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
22. Rose D, Ott L, Guérin SMR, Annett LE, Lovatt P, Delevoye-Turrell YN. A general procedure to measure the pacing of body movements timed to music and metronome in younger and older adults. Sci Rep 2021; 11:3264. PMID: 33547366. PMCID: PMC7864905. DOI: 10.1038/s41598-021-82283-4.
Abstract
Finger-tapping tasks are classically used to investigate sensorimotor synchronization in relation to neutral auditory cues, such as metronomes. However, music is more commonly associated with an entrained bodily response, such as toe tapping or dancing. Here we report an experimental procedure designed to bridge the gap between timing and intervention studies by directly comparing the effects of metronome and musical cue types on motor timing abilities across three naturalistic voluntary actions: finger tapping, toe tapping, and stepping on the spot as a simplified case of whole-body movement. Both pacing cues were presented at slow, medium, and fast tempi. The findings suggested that stepping on the spot enabled better timing performance than tapping in both younger and older (75+) adults. Timing performance followed an inverted-U shape, with the best performance observed at the medium tempi, which were set close to the spontaneous motor tempo of each movement type. Finally, music provided an entrainment effect in addition to pace setting that enabled better motor timing and greater stability than classically reported using a metronome. By applying time-stamp analyses to kinetic data, we demonstrate that tapping and stepping engage different timing modes. This work details the importance of translational research for a better understanding of motor timing. It offers a simple procedure that strengthens the validity of applying academic work in practice and contributes knowledge relevant to a wide range of therapeutic interventions.
Collapse
Affiliation(s)
- Dawn Rose
- Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Department of Psychology and Sport Sciences, University of Hertfordshire, Hatfield, UK
| | - Laurent Ott
- Univ. Lille, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, 59000, Lille, France
| | - Ségolène M R Guérin
- Univ. Lille, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, 59000, Lille, France
| | - Lucy E Annett
- Department of Psychology and Sport Sciences, University of Hertfordshire, Hatfield, UK
| | | | | |
Collapse
|
23
|
Heys JG, Wu Z, Allegra Mascaro AL, Dombeck DA. Inactivation of the Medial Entorhinal Cortex Selectively Disrupts Learning of Interval Timing. Cell Rep 2020; 32:108163. [PMID: 32966784 PMCID: PMC8719477 DOI: 10.1016/j.celrep.2020.108163] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Revised: 03/06/2020] [Accepted: 08/26/2020] [Indexed: 11/25/2022] Open
Abstract
The entorhinal-hippocampal circuit can encode features of elapsed time, but nearly all previous research focused on neural encoding of "implicit time." Recent research has revealed encoding of "explicit time" in the medial entorhinal cortex (MEC) as mice are actively engaged in an interval timing task. However, it is unclear whether the MEC is required for temporal perception and/or learning during such explicit timing tasks. We therefore optogenetically inactivated the MEC as mice learned an interval timing "door stop" task that engaged mice in immobile interval timing behavior and locomotion-dependent navigation behavior. We find that the MEC is critically involved in learning of interval timing but not necessary for estimating temporal duration after learning. Together with our previous research, these results suggest that activity of a subcircuit in the MEC that encodes elapsed time during immobility is necessary for learning interval timing behaviors.
Collapse
Affiliation(s)
- James G Heys
- Department of Neurobiology, Northwestern University, Evanston, IL, USA
| | - Zihan Wu
- Department of Neurobiology, Northwestern University, Evanston, IL, USA
| | | | - Daniel A Dombeck
- Department of Neurobiology, Northwestern University, Evanston, IL, USA.
| |
Collapse
|
24
|
Vincent-Lamarre P, Calderini M, Thivierge JP. Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations. Front Comput Neurosci 2020; 14:78. [PMID: 33013342 PMCID: PMC7505196 DOI: 10.3389/fncom.2020.00078] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 07/24/2020] [Indexed: 11/13/2022] Open
Abstract
Many cognitive and behavioral tasks, such as interval timing, spatial navigation, motor control, and speech, require the execution of precisely-timed sequences of neural activation that cannot be fully explained by a succession of external stimuli. We show how repeatable and reliable patterns of spatiotemporal activity can be generated in chaotic and noisy spiking recurrent neural networks. We propose a general solution for networks to autonomously produce rich patterns of activity by providing a multi-periodic oscillatory signal as input. We show that the model accurately learns a variety of tasks, including speech generation, motor control, and spatial navigation. Further, the model performs temporal rescaling of natural spoken words and exhibits sequential neural activity commonly found in experimental data involving temporal processing. In the context of spatial navigation, the model learns and replays compressed sequences of place cells and captures features of neural activity such as the emergence of ripples and theta phase precession. Together, our findings suggest that combining oscillatory neuronal inputs with different frequencies provides a key mechanism to generate precisely timed sequences of activity in recurrent circuits of the brain.
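A minimal sketch of the multiplexing idea, assuming a rate-based random recurrent network in place of the paper's spiking model: several sinusoids at different frequencies are summed and injected as a common drive, and the resulting network activity then provides a reproducible basis for downstream learning. The network size, gain, time constant, and frequency set are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, dt = 200, 2.0, 1e-3            # neurons, duration (s), time step (s)
g = 1.5                              # recurrent gain (chaotic regime for g > 1)
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
w_in = rng.standard_normal(N)

freqs = np.array([1.0, 2.3, 3.7, 5.1])   # Hz; illustrative multi-periodic set
times = np.arange(0.0, T, dt)
drive = np.sin(2 * np.pi * np.outer(times, freqs)).sum(axis=1)

x = rng.standard_normal(N) * 0.1
tau = 0.01                               # unit time constant (s)
rates = np.empty((times.size, N))
for i, u in enumerate(drive):
    r = np.tanh(x)
    x += dt / tau * (-x + W @ r + w_in * u)
    rates[i] = r

# The oscillatory drive entrains the network so that similar spatiotemporal
# patterns recur across repetitions, which is what makes the activity usable
# as a basis for learning long, precisely timed sequences.
print("rate matrix shape:", rates.shape)
```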
Collapse
|
25
|
Abstract
Humans and animals can effortlessly coordinate their movements with external stimuli. This capacity indicates that sensory inputs can rapidly and flexibly reconfigure the ongoing dynamics in the neural circuits that control movements. Here, we develop a circuit-level model that coordinates movement times with expected and unexpected temporal events. The model consists of two interacting modules, a motor planning module that controls movement times and a sensory anticipation module that anticipates external events. Both modules harbor a reservoir of latent dynamics, and their interaction forms a control system whose output is adjusted adaptively to minimize timing errors. We show that the model’s output matches human behavior in a range of tasks including time interval production, periodic production, synchronization/continuation, and Bayesian time interval reproduction. These results demonstrate how recurrent interactions in a simple and modular neural circuit could create the dynamics needed to control timing behavior. We can flexibly coordinate our movements with external stimuli, but no circuit-level model exists to explain this ability. Inspired by fundamental concepts in control theory, the authors construct a modular neural circuit that captures human behavior in a wide range of temporal coordination tasks.
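One way to illustrate the control-theoretic intuition, with the two interacting modules collapsed into a single scalar speed command, is an error-corrective loop that adjusts production speed in proportion to the timing error on each trial. The update rule, gain, and noise level below are a hypothetical simplification, not the authors' reservoir-based model.

```python
import numpy as np

def produce_interval(speed, threshold=1.0):
    """Produced interval from a ramp-to-threshold process paced by `speed`."""
    return threshold / speed

target = 0.8          # target interval (s)
speed = 1.0           # initial internal speed command (arbitrary units)
gain = 0.5            # error-corrective gain (illustrative)
rng = np.random.default_rng(1)

for trial in range(10):
    produced = produce_interval(speed) + rng.normal(0, 0.02)  # motor noise
    error = produced - target
    # Negative feedback: speed up after producing too long an interval,
    # slow down after producing too short an interval.
    speed += gain * error
    print(f"trial {trial:2d}: produced {produced:.3f}s, error {error:+.3f}s")
```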
Collapse
|
26
|
Pollock E, Jazayeri M. Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Comput Biol 2020; 16:e1008128. [PMID: 32785228 PMCID: PMC7446915 DOI: 10.1371/journal.pcbi.1008128] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 08/24/2020] [Accepted: 07/08/2020] [Indexed: 12/11/2022] Open
Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.
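A much-simplified sketch of the general recipe, assuming only that states sampled from a ring-shaped manifold should be fixed points of a rate network: embed the manifold in neural space and solve a least-squares problem for the recurrent weights. The paper's method additionally specifies dynamics along the manifold (e.g., drift-diffusion); the embedding, network form, and solver details below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 100, 64                      # neurons, sample points on the ring

theta = np.linspace(0, 2 * np.pi, K, endpoint=False)
latent = np.stack([np.cos(theta), np.sin(theta)])        # 2 x K ring manifold

E = rng.standard_normal((N, 2))     # random embedding of the latent space
X = np.tanh(E @ latent)             # N x K desired firing-rate states

# For fixed points of r = tanh(W r) we need W X = artanh(X); solve for W by
# least squares over the sampled manifold states.
targets = np.arctanh(np.clip(X, -0.999, 0.999))
W = targets @ np.linalg.pinv(X)

# Check: states on the ring should be (nearly) self-consistent.
residual = np.max(np.abs(np.tanh(W @ X) - X))
print(f"max fixed-point residual on the ring: {residual:.3e}")
```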
Collapse
Affiliation(s)
- Eli Pollock
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - Mehrdad Jazayeri
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
27
|
Liang Q, Zeng Y, Xu B. Temporal-Sequential Learning With a Brain-Inspired Spiking Neural Network and Its Application to Musical Memory. Front Comput Neurosci 2020; 14:51. [PMID: 32714173 PMCID: PMC7343962 DOI: 10.3389/fncom.2020.00051] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Accepted: 05/11/2020] [Indexed: 11/13/2022] Open
Abstract
Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and memorized are not dealt with satisfactorily by existing models. To overcome this deficiency, this paper introduces a spiking neural network based on psychological and neurobiological findings at multiple scales. Compared with existing methods, our model has four novel features: (1) It contains several collaborative subnetworks similar to those in brain regions with different cognitive functions. The individual building blocks of the simulated areas are neural functional minicolumns composed of biologically plausible neurons. Both excitatory and inhibitory connections between neurons are modulated dynamically using a spike-timing-dependent plasticity learning rule. (2) Inspired by the mechanisms of the brain's cortical-striatal loop, a dependent timing module is constructed to encode temporal information, which is essential in sequence learning but has not been processed well by traditional algorithms. (3) Goal-based and episodic retrievals can be achieved at different time scales. (4) Musical memory is used as an application to validate the model. Experiments show that the model can store a large amount of melodic data and recall it with high accuracy. In addition, it can recall an entire melody when cued with only an episode of it, or when the melody is played at a different pace.
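For reference, a minimal pair-based form of the spike-timing-dependent plasticity rule mentioned above is sketched below; the exponential windows are standard, but the amplitudes and time constants are illustrative rather than those used in the model.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP weight change for a spike-time difference
    delta_t = t_post - t_pre (seconds): potentiation when the presynaptic
    spike precedes the postsynaptic spike, depression otherwise."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

for dt_ms in (-40, -10, 10, 40):
    dw = float(stdp_dw(dt_ms / 1000.0))
    print(f"delta_t = {dt_ms:+d} ms -> delta_w = {dw:+.5f}")
```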
Collapse
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| | - Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| | - Bo Xu
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
28
|
Bellmund JLS, Polti I, Doeller CF. Sequence Memory in the Hippocampal-Entorhinal Region. J Cogn Neurosci 2020; 32:2056-2070. [PMID: 32530378 DOI: 10.1162/jocn_a_01592] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Episodic memories are constructed from sequences of events. When recalling such a memory, we not only recall individual events, but we also retrieve information about how the sequence of events unfolded. Here, we focus on the role of the hippocampal-entorhinal region in processing and remembering sequences of events, which are thought to be stored in relational networks. We summarize evidence that temporal relations are a central organizational principle for memories in the hippocampus. Importantly, we incorporate novel insights from recent studies about the role of the adjacent entorhinal cortex in sequence memory. In rodents, the lateral entorhinal subregion carries temporal information during ongoing behavior. The human homologue is recruited during memory recall where its representations reflect the temporal relationships between events encountered in a sequence. We further introduce the idea that the hippocampal-entorhinal region might enable temporal scaling of sequence representations. Flexible changes of sequence progression speed could underlie the traversal of episodic memories and mental simulations at different paces. In conclusion, we describe how the entorhinal cortex and hippocampus contribute to remembering event sequences, a core component of episodic memory.
Collapse
Affiliation(s)
- Jacob L S Bellmund
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Ignacio Polti
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| | - Christian F Doeller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
29
|
Slayton MA, Romero-Sosa JL, Shore K, Buonomano DV, Viskontas IV. Musical expertise generalizes to superior temporal scaling in a Morse code tapping task. PLoS One 2020; 15:e0221000. [PMID: 31905200 PMCID: PMC6944339 DOI: 10.1371/journal.pone.0221000] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 12/10/2019] [Indexed: 11/26/2022] Open
Abstract
A key feature of the brain’s ability to tell time and generate complex temporal patterns is its capacity to produce similar temporal patterns at different speeds. For example, humans can tie a shoe, type, or play an instrument at different speeds or tempi—a phenomenon referred to as temporal scaling. While it is well established that training improves timing precision and accuracy, it is not known whether expertise improves temporal scaling, and if so, whether it generalizes across skill domains. We quantified temporal scaling and timing precision in musicians and non-musicians as they learned to tap a Morse code sequence. We found that non-musicians improved significantly over the course of days of training at the standard speed. In contrast, musicians exhibited a high level of temporal precision on the first day, which did not improve significantly with training. Although there was no significant difference in performance at the end of training at the standard speed, musicians were significantly better at temporal scaling—i.e., at reproducing the learned Morse code pattern at faster and slower speeds. Interestingly, both musicians and non-musicians exhibited a Weber-speed effect, where temporal precision at the same absolute time was higher when producing patterns at the faster speed. These results are the first to establish that the ability to generate the same motor patterns at different speeds improves with extensive training and generalizes to non-musical domains.
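A simple way to quantify the two measures discussed above is sketched below: temporal scaling as the deviation of produced intervals from the ideally rescaled learned pattern, and timing precision as a Weber-like coefficient of variation. The interval pattern, noise level, and metric definitions are hypothetical and generic rather than the exact ones used in the study.

```python
import numpy as np

def scaling_error(learned_intervals, produced_intervals, speed_factor):
    """RMS deviation between produced intervals and the learned pattern
    ideally rescaled by `speed_factor` (e.g. 0.5 = twice as fast)."""
    ideal = np.asarray(learned_intervals) * speed_factor
    return np.sqrt(np.mean((np.asarray(produced_intervals) - ideal) ** 2))

def weber_fraction(produced_intervals):
    """Coefficient of variation of the produced intervals."""
    produced_intervals = np.asarray(produced_intervals)
    return produced_intervals.std(ddof=1) / produced_intervals.mean()

# Hypothetical Morse-like pattern (seconds) learned at the standard speed.
learned = np.array([0.2, 0.2, 0.6, 0.2, 0.6, 0.6])
rng = np.random.default_rng(3)
produced_fast = learned * 0.5 + rng.normal(0, 0.01, learned.size)

print(f"scaling error at 2x speed: {scaling_error(learned, produced_fast, 0.5):.3f} s")
print(f"Weber fraction (fast):     {weber_fraction(produced_fast):.3f}")
```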
Collapse
Affiliation(s)
- Matthew A. Slayton
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Juan L. Romero-Sosa
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Katrina Shore
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Dean V. Buonomano
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, United States of America
- * E-mail: (DVB); (IVV)
| | - Indre V. Viskontas
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
- Department of Psychology, University of San Francisco, San Francisco, CA, United States of America
- * E-mail: (DVB); (IVV)
| |
Collapse
|
30
|
Sohn H, Narain D, Meirhaeghe N, Jazayeri M. Bayesian Computation through Cortical Latent Dynamics. Neuron 2019; 103:934-947.e5. [PMID: 31320220 DOI: 10.1016/j.neuron.2019.06.012] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 04/15/2019] [Accepted: 06/13/2019] [Indexed: 10/26/2022]
Abstract
Statistical regularities in the environment create prior beliefs that we rely on to optimize our behavior when sensory information is uncertain. Bayesian theory formalizes how prior beliefs can be leveraged and has had a major impact on models of perception, sensorimotor function, and cognition. However, it is not known how recurrent interactions among neurons mediate Bayesian integration. By using a time-interval reproduction task in monkeys, we found that prior statistics warp neural representations in the frontal cortex, allowing the mapping of sensory inputs to motor outputs to incorporate prior statistics in accordance with Bayesian inference. Analysis of recurrent neural network models performing the task revealed that this warping was enabled by a low-dimensional curved manifold and allowed us to further probe the potential causal underpinnings of this computational strategy. These results uncover a simple and general principle whereby prior beliefs exert their influence on behavior by sculpting cortical latent dynamics.
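The normative benchmark commonly used for this kind of time-interval reproduction task is the Bayes least-squares estimator: the posterior mean of the interval given a measurement corrupted by scalar (Weber-like) noise and a prior over the sampled interval range. A numerical sketch follows, with the prior range, Weber fraction, and grid resolution chosen purely for illustration.

```python
import numpy as np

def bls_estimate(measurement, prior_lo=0.6, prior_hi=1.0, weber=0.1, n=2001):
    """Bayes least-squares estimate of an interval from a noisy measurement,
    assuming a uniform prior on [prior_lo, prior_hi] and Gaussian measurement
    noise whose SD scales with the true interval (scalar variability)."""
    ts = np.linspace(prior_lo, prior_hi, n)              # support of the prior
    sigma = weber * ts
    likelihood = np.exp(-0.5 * ((measurement - ts) / sigma) ** 2) / sigma
    posterior = likelihood / likelihood.sum()            # uniform prior cancels
    return float(np.sum(ts * posterior))                 # posterior mean

for m in (0.6, 0.8, 1.0):
    print(f"measured {m:.2f}s -> BLS estimate {bls_estimate(m):.3f}s "
          "(biased toward the prior mean)")
```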
Collapse
Affiliation(s)
- Hansem Sohn
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Devika Narain
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Erasmus Medical Center, Rotterdam 3015CN, the Netherlands
| | - Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA 02139, USA
| | - Mehrdad Jazayeri
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| |
Collapse
|
31
|
Stroud JP, Porter MA, Hennequin G, Vogels TP. Motor primitives in space and time via targeted gain modulation in cortical networks. Nat Neurosci 2018; 21:1774-1783. [PMID: 30482949 PMCID: PMC6276991 DOI: 10.1038/s41593-018-0276-0] [Citation(s) in RCA: 63] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 10/09/2018] [Indexed: 02/08/2023]
Abstract
Motor cortex (M1) exhibits a rich repertoire of neuronal activities to support the generation of complex movements. Although recent neuronal-network models capture many qualitative aspects of M1 dynamics, they can generate only a few distinct movements. Additionally, it is unclear how M1 efficiently controls movements over a wide range of shapes and speeds. We demonstrate that modulation of neuronal input-output gains in recurrent neuronal-network models with a fixed architecture can dramatically reorganize neuronal activity and thus downstream muscle outputs. Consistent with the observation of diffuse neuromodulatory projections to M1, a relatively small number of modulatory control units provide sufficient flexibility to adjust high-dimensional network activity using a simple reward-based learning rule. Furthermore, it is possible to assemble novel movements from previously learned primitives, and one can separately change movement speed while preserving movement shape. Our results provide a new perspective on the role of modulatory systems in controlling recurrent cortical activity.
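The core manipulation can be illustrated with a small rate network in which each unit's firing rate is scaled by a gain factor while the recurrent weights stay fixed; changing only the gain vector reshapes the trajectory that a downstream readout sees. The sketch below is a generic illustration under these assumptions, not the trained model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, steps, dt, tau = 150, 500, 1e-3, 0.02
W = 1.4 * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed recurrent weights
w_out = rng.standard_normal(N) / np.sqrt(N)          # fixed linear readout

def run(gains):
    """Simulate the network with per-unit gains applied to the nonlinearity."""
    x = np.full(N, 0.1)
    out = np.empty(steps)
    for t in range(steps):
        r = gains * np.tanh(x)            # gain-modulated firing rates
        x += dt / tau * (-x + W @ r)
        out[t] = w_out @ r
    return out

baseline = run(np.ones(N))
modulated = run(1.0 + 0.3 * rng.standard_normal(N))   # illustrative gain pattern

# With identical weights and initial conditions, the gain pattern alone
# reshapes the output trajectory read out by w_out.
print("mean output divergence:", np.abs(baseline - modulated).mean())
```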
Collapse
Affiliation(s)
- Jake P Stroud
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.
| | - Mason A Porter
- Department of Mathematics, University of California Los Angeles, Los Angeles, CA, USA
- Mathematical Institute, University of Oxford, Oxford, UK
- CABDyN Complexity Centre, University of Oxford, Oxford, UK
| | - Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
| | - Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
| |
Collapse
|