1. Braun J, Hurtak F, Wang-Chen S, Ramdya P. Descending networks transform command signals into population motor control. Nature 2024. PMID: 38839968. DOI: 10.1038/s41586-024-07523-9.
Abstract
To convert intentions into actions, movement instructions must pass from the brain to downstream motor circuits through descending neurons (DNs). These include small sets of command-like neurons that are sufficient to drive behaviours [1], although the circuit mechanisms remain unclear. Here we show that command-like DNs in Drosophila directly recruit networks of additional DNs to orchestrate behaviours that require the active control of numerous body parts. Specifically, we found that command-like DNs previously thought to drive behaviours alone [2-4] in fact co-activate larger populations of DNs. Connectome analyses and experimental manipulations revealed that this functional recruitment can be explained by direct excitatory connections between command-like DNs and networks of interconnected DNs in the brain. Descending population recruitment is necessary for behavioural control: DNs with many downstream descending partners require network co-activation to drive complete behaviours, and without it they drive only simple, stereotyped movements. These DN networks reside within behaviour-specific clusters that inhibit one another. These results support a mechanism for command-like descending control in which behaviours are generated through the recruitment of increasingly large DN networks that compose behaviours by combining multiple motor subroutines.
Affiliation(s)
- Jonas Braun
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Femke Hurtak
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Sibo Wang-Chen
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Pavan Ramdya
- Neuroengineering Laboratory, Brain Mind Institute & Interfaculty Institute of Bioengineering, EPFL, Lausanne, Switzerland

2. Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. bioRxiv [Preprint] 2024. PMID: 37292834. PMCID: PMC10246015. DOI: 10.1101/2023.05.26.542452.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
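The coding scheme summarized in this abstract (movement direction selects a fixed neural trajectory; speed only sets how quickly it is traversed) can be illustrated with a toy sketch. The circular latent path, function name, and parameter values below are illustrative assumptions, not the study's actual model:

```python
import numpy as np

def latent_trajectory(direction, speed, n_steps=100):
    """Toy model of independent spatial and temporal coding.

    `direction` (radians) selects the trajectory's fixed shape by
    rotating a template loop in a 2-D latent plane; `speed` only sets
    how far along that loop the state has travelled at each step.
    """
    phase = speed * np.arange(n_steps)   # temporal parameter: traversal rate
    x = np.cos(phase + direction)        # spatial parameter: path shape
    y = np.sin(phase + direction)
    return np.stack([x, y], axis=1)

slow = latent_trajectory(direction=0.0, speed=0.01)
fast = latent_trajectory(direction=0.0, speed=0.03)
# Both traverse the same unit-circle path; the fast trajectory simply
# covers three times the arc length in the same number of steps.
```

Because the two parameters map onto separate quantities (path shape versus phase rate), either can be changed without affecting the other, which is the independence the study tests.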
Affiliation(s)
- Matthew G. Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada
- Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller
- Northwestern University, Department of Biomedical Engineering, Chicago, USA
- Mark D. Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom

3. Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024; 10:eadk7257. PMID: 38701208. DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories-handwritten digits-with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
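The core idea of this abstract (a neuromodulator rescaling the temporal profile of STP, and with it the timescale of the synaptic output) can be sketched with a minimal Tsodyks-Markram-style depressing synapse. The function name and all parameter values are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

def depressing_synapse(spikes, tau_rec, dt=0.001, use_frac=0.5):
    """Minimal short-term depression model: each spike releases a
    fraction of the available synaptic resources, which then recover
    with time constant `tau_rec` (s). Neuromodulation of `tau_rec`
    rescales the temporal profile of the synaptic output."""
    x = 1.0                              # available resources
    released = np.zeros(len(spikes))
    for t, s in enumerate(spikes):
        x += dt * (1.0 - x) / tau_rec    # recovery toward full
        r = use_frac * x * s             # release on a spike
        x -= r
        released[t] = r
    return released

spikes = np.zeros(200)
spikes[[0, 100]] = 1.0                           # two spikes, 100 ms apart
slow = depressing_synapse(spikes, tau_rec=0.5)   # slow recovery
fast = depressing_synapse(spikes, tau_rec=0.05)  # neuromodulated: faster recovery
# With the shorter time constant the synapse recovers more between the
# two spikes, so the same input yields a temporally rescaled output.
```

Changing `tau_rec` stretches or compresses the depression dynamics in time without retraining any synaptic weights, which is the mechanism the model exploits for temporal scaling.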
Affiliation(s)
- Shanglin Zhou
- Institute for Translational Brain Research, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA

4. Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024; S1364-6613(24)00053-6. PMID: 38580528. DOI: 10.1016/j.tics.2024.02.011.
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activity. However, recent evidence shows that neural population activity during WM maintenance undergoes dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons why classical models do not exhibit dynamic coding while task-optimized models naturally do. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary

5. Banerjee A, Chen F, Druckmann S, Long MA. Temporal scaling of motor cortical dynamics reveals hierarchical control of vocal production. Nat Neurosci 2024; 27:527-535. PMID: 38291282. DOI: 10.1038/s41593-023-01556-5.
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the male Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (~100 ms), probably representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (~10 s). Using computational modeling, we demonstrate that such temporal scaling, acting through downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
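The temporal scaling described in this abstract (premotor firing patterns that stretch or compress with song duration) can be illustrated by rescaling the time axis of a firing-rate template. This resampling sketch is an illustration of the concept, not the authors' model:

```python
import numpy as np

def scale_template(rate, duration_scale):
    """Stretch (scale > 1) or compress (scale < 1) a premotor
    firing-rate template in time while preserving its shape: the same
    trajectory is simply traversed at a different speed."""
    n_out = max(2, round(len(rate) * duration_scale))
    t_old = np.linspace(0.0, 1.0, len(rate))
    t_new = np.linspace(0.0, 1.0, n_out)
    # Linear interpolation onto the rescaled time axis.
    return np.interp(t_new, t_old, rate)

template = np.sin(np.linspace(0.0, np.pi, 100))  # a firing-rate bump
long_song = scale_template(template, 1.5)        # stretched: longer song
short_song = scale_template(template, 0.5)       # compressed: shorter song
# Durations differ, but the peak rate and overall shape are preserved.
```

The point of the sketch is that a single stored pattern plus one scaling parameter suffices to control duration, mirroring the hierarchical scheme in which a slow control signal sets the tempo of downstream production circuits.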
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.

6. Jiang LP, Rao RPN. Dynamic predictive coding: A model of hierarchical sequence learning and prediction in the neocortex. PLoS Comput Biol 2024; 20:e1011801. PMID: 38330098. PMCID: PMC10880975. DOI: 10.1371/journal.pcbi.1011801.
Abstract
We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions of dynamics using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step) while higher levels form representations that encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where the top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, the lower-level model neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex while the higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network's hierarchical sequence representation exhibited both predictive and postdictive effects resembling those observed in visual motion processing in humans (e.g., in the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of an input sequence similar to activity recall in the visual cortex. When extended to three hierarchical levels, the model learned progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical processing and learning of sequences can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the visual world.
Affiliation(s)
- Linxing Preston Jiang
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
- Rajesh P. N. Rao
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America

7. Rao RPN, Gklezakos DC, Sathish V. Active Predictive Coding: A Unifying Neural Model for Active Perception, Compositional Learning, and Hierarchical Planning. Neural Comput 2023; 36:1-32. PMID: 38052084. DOI: 10.1162/neco_a_01627.
Abstract
There is growing interest in predictive coding as a model of how the brain learns through predictions and prediction errors. Predictive coding models have traditionally focused on sensory coding and perception. Here we introduce active predictive coding (APC) as a unifying model for perception, action, and cognition. The APC model addresses important open problems in cognitive science and AI, including (1) how we learn compositional representations (e.g., part-whole hierarchies for equivariant vision) and (2) how we solve large-scale planning problems, which are hard for traditional reinforcement learning, by composing complex state dynamics and abstract actions from simpler dynamics and primitive actions. By using hypernetworks, self-supervised learning, and reinforcement learning, APC learns hierarchical world models by combining task-invariant state transition networks and task-dependent policy networks at multiple abstraction levels. We illustrate the applicability of the APC model to active visual perception and hierarchical planning. Our results represent, to our knowledge, the first proof-of-concept demonstration of a unified approach to addressing the part-whole learning problem in vision, the nested reference frames learning problem in cognition, and the integrated state-action hierarchy learning problem in reinforcement learning.
Affiliation(s)
- Rajesh P N Rao
- Paul G. Allen School of Computer Science and Engineering and Center for Neurotechnology, University of Washington, Seattle, WA 98195, U.S.A.
- Dimitrios C Gklezakos
- Paul G. Allen School of Computer Science and Engineering and Center for Neurotechnology, University of Washington, Seattle, WA 98195, U.S.A.
- Vishwas Sathish
- Paul G. Allen School of Computer Science and Engineering and Center for Neurotechnology, University of Washington, Seattle, WA 98195, U.S.A.

8. Friedenberger Z, Harkin E, Tóth K, Naud R. Silences, spikes and bursts: Three-part knot of the neural code. J Physiol 2023; 601:5165-5193. PMID: 37889516. DOI: 10.1113/jp281510.
Abstract
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labelling action potentials emitted at a particularly high frequency with a metonym - bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. The burst coding hypothesis advances that the neural code has three syllables: silences, spikes and bursts. We review evidence supporting this ternary code in terms of devoted mechanisms for burst generation, synaptic transmission and synaptic plasticity. We also review the learning and attention theories for which such a triad is beneficial.
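The ternary code proposed in this abstract can be made concrete with a toy readout that segments a spike train into isolated spikes and bursts by thresholding inter-spike intervals; the threshold value and names below are illustrative assumptions, not definitions from the review:

```python
import numpy as np

def ternary_readout(spike_times, burst_isi=0.006):
    """Label each spike 'burst' if it is preceded or followed by an
    inter-spike interval shorter than `burst_isi` (s), else 'spike'.
    Silence is the third symbol: the stretches with no spikes at all."""
    t = np.asarray(spike_times, dtype=float)
    isi_before = np.diff(t, prepend=-np.inf)   # gap to previous spike
    isi_after = np.diff(t, append=np.inf)      # gap to next spike
    in_burst = (isi_before < burst_isi) | (isi_after < burst_isi)
    return ["burst" if b else "spike" for b in in_burst]

labels = ternary_readout([0.000, 0.004, 0.008, 0.500])
# → ['burst', 'burst', 'burst', 'spike']
```

Under this reading, a downstream circuit equipped with burst-sensitive synapses could decode three distinct symbols from the same axon, which is the premise the review examines mechanistically.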
Affiliation(s)
- Zachary Friedenberger
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Emerson Harkin
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Katalin Tóth
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Richard Naud
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada

9. Stroud JP, Watanabe K, Suzuki T, Stokes MG, Lengyel M. Optimal information loading into working memory explains dynamic coding in the prefrontal cortex. Proc Natl Acad Sci U S A 2023; 120:e2307991120. PMID: 37983510. PMCID: PMC10691340. DOI: 10.1073/pnas.2307991120.
Abstract
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. By mathematical analysis, numerical simulations, and using recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal loading of information into working memory involves inputs that are largely orthogonal, rather than similar, to the late delay activities observed during memory maintenance, naturally leading to the widely observed phenomenon of dynamic coding in PFC. Using a theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading. We also find that optimal information loading emerges as a general dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics and reveals a normative principle underlying dynamic coding.
Affiliation(s)
- Jake P. Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Kei Watanabe
- Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan
- Takafumi Suzuki
- Center for Information and Neural Networks, National Institute of Communication and Information Technology, Osaka 565-0871, Japan
- Mark G. Stokes
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 9DU, United Kingdom
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest H-1051, Hungary

10. Fisher A, Rao RPN. Recursive neural programs: A differentiable framework for learning compositional part-whole hierarchies and image grammars. PNAS Nexus 2023; 2:pgad337. PMID: 37954157. PMCID: PMC10637337. DOI: 10.1093/pnasnexus/pgad337.
Abstract
Human vision, thought, and planning involve parsing and representing objects and scenes using structured representations based on part-whole hierarchies. Computer vision and machine learning researchers have recently sought to emulate this capability using neural networks, but a generative model formulation has been lacking. Generative models that leverage compositionality, recursion, and part-whole hierarchies are thought to underlie human concept learning and the ability to construct and represent flexible mental concepts. We introduce Recursive Neural Programs (RNPs), a neural generative model that addresses the part-whole hierarchy learning problem by modeling images as hierarchical trees of probabilistic sensory-motor programs. These programs recursively reuse learned sensory-motor primitives to model an image within different spatial reference frames, enabling hierarchical composition of objects from parts and implementing a grammar for images. We show that RNPs can learn part-whole hierarchies for a variety of image datasets, allowing rich compositionality and intuitive parts-based explanations of objects. Our model also suggests a cognitive framework for understanding how human brains can potentially learn and represent concepts in terms of recursively defined primitives and their relations with each other.
Affiliation(s)
- Ares Fisher
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
- Rajesh P N Rao
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA

11. Boucher PO, Wang T, Carceroni L, Kane G, Shenoy KV, Chandrasekaran C. Initial conditions combine with sensory evidence to induce decision-related dynamics in premotor cortex. Nat Commun 2023; 14:6510. PMID: 37845221. PMCID: PMC10579235. DOI: 10.1038/s41467-023-41752-2.
Abstract
We used a dynamical systems perspective to understand decision-related neural activity, a fundamentally unresolved problem. This perspective posits that time-varying neural activity is described by a state equation with an initial condition and evolves in time by combining at each time step, recurrent activity and inputs. We hypothesized various dynamical mechanisms of decisions, simulated them in models to derive predictions, and evaluated these predictions by examining firing rates of neurons in the dorsal premotor cortex (PMd) of monkeys performing a perceptual decision-making task. Prestimulus neural activity (i.e., the initial condition) predicted poststimulus neural trajectories, covaried with RT and the outcome of the previous trial, but not with choice. Poststimulus dynamics depended on both the sensory evidence and initial condition, with easier stimuli and fast initial conditions leading to the fastest choice-related dynamics. Together, these results suggest that initial conditions combine with sensory evidence to induce decision-related dynamics in PMd.
Affiliation(s)
- Pierre O Boucher
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Tian Wang
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Laura Carceroni
- Undergraduate Program in Neuroscience, Boston University, Boston, MA 02115, USA
- Gary Kane
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02115, USA
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Howard Hughes Medical Institute, HHMI, Chevy Chase, MD 20815-6789, USA
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA
- Bio-X Program, Stanford University, Stanford, CA 94305, USA
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Chandramouli Chandrasekaran
- Department of Biomedical Engineering, Boston University, Boston, MA 02115, USA
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02115, USA
- Center for Systems Neuroscience, Boston University, Boston, MA 02115, USA
- Department of Anatomy & Neurobiology, Boston University, Boston, MA 02118, USA

12. Schmidt MD, Glasmachers T, Iossifidis I. The concepts of muscle activity generation driven by upper limb kinematics. Biomed Eng Online 2023; 22:63. PMID: 37355651. DOI: 10.1186/s12938-023-01116-9.
Abstract
BACKGROUND: The underlying motivation of this work is to demonstrate that artificial muscle activity for known and unknown motions can be generated from motion parameters, such as the angular position, acceleration, and velocity of each joint (or of the end-effector instead), which are similarly represented in our brains. This model is motivated by the known motion-planning process in the central nervous system. That process incorporates the current body state from sensory systems and previous experiences, which might be represented as pre-learned inverse dynamics that generate the associated muscle activity.
METHODS: We develop a novel approach utilizing recurrent neural networks that predict the muscle activity of the upper limbs associated with complex 3D human arm motions. Motion parameters such as joint angle, velocity, acceleration, hand position, and orientation serve as input to the models. In addition, these models are trained on multiple subjects (n = 5, including 3 males, aged 26 ± 2 years) and can therefore generalize across individuals. In particular, we distinguish between a general model trained on several subjects, a subject-specific model, and a fine-tuned model that uses transfer learning to adapt to a new subject. Estimators such as the mean squared error (MSE), correlation coefficient r, and coefficient of determination R² are used to evaluate the goodness of fit. We additionally assess performance with a new score called the zero-line score. The present approach was compared with multiple other architectures.
RESULTS: The presented approach predicts muscle activity across different subjects with remarkably high precision and generalizes well to new motions that were not included in training. In an exhaustive comparison, our recurrent network outperformed all other architectures. In addition, the high inter-subject variation of the recorded muscle activity was successfully handled using a transfer-learning approach, resulting in a good fit of the muscle activity for a new subject.
CONCLUSIONS: The ability of this approach to efficiently predict muscle activity contributes to the fundamental understanding of motion control. Furthermore, this approach has great potential for use in rehabilitation contexts, both as a therapeutic approach and as an assistive device. The predicted muscle activity can be utilized to guide functional electrical stimulation, allowing specific muscles to be targeted and potentially improving overall rehabilitation outcomes.
Affiliation(s)
- Marie D Schmidt
- Faculty of Electrical Engineering and Information Technology, Ruhr-University Bochum, Bochum, Germany.
- Institute of Computer Science, University of Applied Science Ruhr West, Mülheim an der Ruhr, Germany.
- Ioannis Iossifidis
- Institute of Computer Science, University of Applied Science Ruhr West, Mülheim an der Ruhr, Germany

13. Shine JM. Neuromodulatory control of complex adaptive dynamics in the brain. Interface Focus 2023; 13:20220079. PMID: 37065268. PMCID: PMC10102735. DOI: 10.1098/rsfs.2022.0079.
Abstract
How is the massive dimensionality and complexity of the microscopic constituents of the nervous system brought under sufficiently tight control so as to coordinate adaptive behaviour? A powerful means for striking this balance is to poise neurons close to the critical point of a phase transition, at which a small change in neuronal excitability can manifest a nonlinear augmentation in neuronal activity. How the brain could mediate this critical transition is a key open question in neuroscience. Here, I propose that the different arms of the ascending arousal system provide the brain with a diverse set of heterogeneous control parameters that can be used to modulate the excitability and receptivity of target neurons-in other words, to act as control parameters for mediating critical neuronal order. Through a series of worked examples, I demonstrate how the neuromodulatory arousal system can interact with the inherent topological complexity of neuronal subsystems in the brain to mediate complex adaptive behaviour.
Affiliation(s)
- James M. Shine
- Brain and Mind Center, The University of Sydney, Sydney, Australia

14. Wen S, Yin A, Furlanello T, Perich MG, Miller LE, Itti L. Rapid adaptation of brain-computer interfaces to new neuronal ensembles or participants via generative modelling. Nat Biomed Eng 2023; 7:546-558. PMID: 34795394. PMCID: PMC9114171. DOI: 10.1038/s41551-021-00811-z.
Abstract
For brain-computer interfaces (BCIs), obtaining sufficient training data for algorithms that map neural signals onto actions can be difficult, expensive or even impossible. Here we report the development and use of a generative model-a model that synthesizes a virtually unlimited number of new data distributions from a learned data distribution-that learns mappings between hand kinematics and the associated neural spike trains. The generative spike-train synthesizer is trained on data from one recording session with a monkey performing a reaching task and can be rapidly adapted to new sessions or monkeys by using limited additional neural data. We show that the model can be adapted to synthesize new spike trains, accelerating the training and improving the generalization of BCI decoders. The approach is fully data-driven, and hence, applicable to applications of BCIs beyond motor control.
Affiliation(s)
- Shixian Wen
- University of Southern California, Los Angeles, CA, USA.
- M G Perich
- University of Geneva, Geneva, Switzerland
- L E Miller
- Northwestern University, Chicago, IL, USA
- Laurent Itti
- University of Southern California, Los Angeles, CA, USA.
15
Banerjee A, Chen F, Druckmann S, Long MA. Neural dynamics in the rodent motor cortex enables flexible control of vocal timing. bioRxiv 2023:2023.01.23.525252. [PMID: 36747850 PMCID: PMC9900850 DOI: 10.1101/2023.01.23.525252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (approx. 100 ms), likely representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (approx. 10 s). Using computational modeling, we demonstrate that such temporal scaling, acting via downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
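The temporal-scaling idea in this abstract can be sketched as resampling a canonical premotor rate template onto songs of different durations, so the same pattern stretches or compresses. A toy illustration with assumed template and time step, not the authors' model:

```python
import numpy as np

def scale_pattern(template, duration, dt=0.01):
    """Temporally rescale a firing-rate template to a target song duration.

    template: (K,) rate samples spanning a canonical duration of 1.0 s.
    Returns the template linearly resampled onto a grid of duration / dt
    points, so the whole pattern is stretched or compressed in time.
    """
    t_canonical = np.linspace(0.0, 1.0, len(template))
    t_target = np.linspace(0.0, 1.0, int(round(duration / dt)))
    return np.interp(t_target, t_canonical, template)

template = np.sin(np.linspace(0, np.pi, 50)) ** 2  # one burst over the canonical song
short = scale_pattern(template, duration=0.5)       # compressed song: 50 samples
long = scale_pattern(template, duration=2.0)        # stretched song: 200 samples
```

The burst's peak rate is preserved while its timing dilates with song duration, which is the signature the abstract describes for OMC premotor patterns.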
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Shaul Druckmann
- Department of Neuroscience, Stanford University, Stanford, CA 94304, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
16
Heald JB, Lengyel M, Wolpert DM. Contextual inference in learning and memory. Trends Cogn Sci 2023; 27:43-64. [PMID: 36435674 PMCID: PMC9789331 DOI: 10.1016/j.tics.2022.10.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 10/11/2022] [Accepted: 10/12/2022] [Indexed: 11/25/2022]
Abstract
Context is widely regarded as a major determinant of learning and memory across numerous domains, including classical and instrumental conditioning, episodic memory, economic decision-making, and motor learning. However, studies across these domains remain disconnected due to the lack of a unifying framework formalizing the concept of context and its role in learning. Here, we develop a unified vernacular allowing direct comparisons between different domains of contextual learning. This leads to a Bayesian model positing that context is unobserved and needs to be inferred. Contextual inference then controls the creation, expression, and updating of memories. This theoretical approach reveals two distinct components that underlie adaptation, proper and apparent learning, respectively referring to the creation and updating of memories versus time-varying adjustments in their expression. We review a number of extensions of the basic Bayesian model that allow it to account for increasingly complex forms of contextual learning.
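The core computation this framework posits, inferring an unobserved context and expressing a posterior-weighted blend of memories, reduces to a few lines of Bayes' rule. The two-context setup and the 4:1 likelihood ratio below are illustrative assumptions:

```python
import numpy as np

def update_context_posterior(prior, likelihoods):
    """One step of contextual inference: posterior is proportional to likelihood * prior."""
    post = prior * likelihoods
    return post / post.sum()

def expressed_memory(posterior, memories):
    """Apparent learning: expression is a posterior-weighted blend of stored memories."""
    return float(posterior @ memories)

# Two hypothetical contexts storing different motor memories (adaptation
# gains of 0 and 1). Start from a uniform prior over contexts.
memories = np.array([0.0, 1.0])
posterior = np.array([0.5, 0.5])
# Sensory evidence repeatedly favours context 2 with a 4:1 likelihood ratio.
for _ in range(3):
    posterior = update_context_posterior(posterior, np.array([0.2, 0.8]))
expressed = expressed_memory(posterior, memories)
```

After three observations the posterior odds are 4^3 = 64:1, so the expressed behaviour is 64/65 of the way to memory 2 even though neither stored memory changed: expression shifted ("apparent learning") without memory updating ("proper learning").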
Affiliation(s)
- James B Heald
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA.
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary.
- Daniel M Wolpert
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
17
Ujfalussy BB, Orbán G. Sampling motion trajectories during hippocampal theta sequences. eLife 2022; 11:74058. [DOI: 10.7554/elife.74058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 09/28/2022] [Indexed: 11/06/2022] Open
Abstract
Efficient planning in complex environments requires that uncertainty associated with current inferences and possible consequences of forthcoming actions is represented. Representation of uncertainty has been established in sensory systems during simple perceptual decision making tasks, but it remains unclear whether complex cognitive computations such as planning and navigation are also supported by probabilistic neural representations. Here, we capitalized on gradually changing uncertainty along planned motion trajectories during hippocampal theta sequences to capture signatures of uncertainty representation in population responses. In contrast with prominent theories, we found no evidence of encoding parameters of probability distributions in the momentary population activity recorded in an open-field navigation task in rats. Instead, uncertainty was encoded sequentially by sampling motion trajectories randomly and efficiently in subsequent theta cycles from the distribution of potential trajectories. Our analysis is the first to demonstrate that the hippocampus is well equipped to contribute to optimal planning by representing uncertainty.
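The sequential-sampling account can be sketched as follows: each theta cycle encodes one trajectory drawn from the distribution of potential future paths, and uncertainty is read out from the spread across cycles rather than from the momentary activity. The Gaussian trajectory distribution and its widths below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_theta_trajectories(mean_path, spread, n_cycles):
    """Draw one candidate trajectory per theta cycle.

    mean_path: (T, 2) planned positions; spread: (T,) per-step s.d. that grows
    with look-ahead, mimicking increasing uncertainty along the plan.
    Returns (n_cycles, T, 2) sampled trajectories.
    """
    noise = rng.standard_normal((n_cycles, *mean_path.shape)) * spread[None, :, None]
    return mean_path[None] + noise

T = 10
mean_path = np.stack([np.linspace(0, 1, T), np.zeros(T)], axis=1)
spread = np.linspace(0.0, 0.3, T)                # uncertainty grows along the plan
samples = sample_theta_trajectories(mean_path, spread, n_cycles=2000)
across_cycle_sd = samples[:, :, 0].std(axis=0)   # empirical spread per planning step
```

The spread measured across theta cycles recovers the underlying trajectory distribution, while any single cycle carries no explicit uncertainty parameter, matching the abstract's contrast with parameter-encoding theories.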
Affiliation(s)
- Balazs B Ujfalussy
- Laboratory of Biological Computation, Institute of Experimental Medicine
- Laboratory of Neuronal Signalling, Institute of Experimental Medicine, Budapest
- Gergő Orbán
- Computational Systems Neuroscience Lab, Wigner Research Center for Physics, Budapest
18
Polykretis I, Michmizos KP. The role of astrocytes in place cell formation: A computational modeling study. J Comput Neurosci 2022; 50:505-518. [PMID: 35840871 PMCID: PMC9671849 DOI: 10.1007/s10827-022-00828-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 05/20/2022] [Accepted: 07/12/2022] [Indexed: 11/30/2022]
Abstract
Place cells develop spatially-tuned receptive fields during the early stages of novel environment exploration. The generative mechanism underlying these spatially-selective responses remains largely elusive, but has been associated with theta rhythmicity. An important factor implicated in the transformation of silent cells into place cells is a spatially-uniform depolarization that is mediated by a persistent sodium current. This neuronal current is modulated by extracellular calcium concentration, which, in turn, is actively controlled by astrocytes. However, there is no established relationship between the neuronal depolarization and astrocytic activity. To examine this link, we designed a biologically plausible computational model of a neuronal-astrocytic network, where astrocytes induced the transient emergence of place fields in silent cells, and accelerated the plasticity-induced consolidation of place cells. Interestingly, theta oscillations emerged naturally at the network level, resulting from the astrocytic modulation of subcellular neuronal properties. Our results suggest that astrocytes participate in spatial mapping and exploration, and further highlight the computational roles of these cells in the brain.
Affiliation(s)
- Ioannis Polykretis
- Computational Brain Lab, Department of Computer Science, Rutgers University, New Brunswick, New Jersey, USA
- Konstantinos P Michmizos
- Computational Brain Lab, Department of Computer Science, Rutgers University, New Brunswick, New Jersey, USA.
19
Movement is governed by rotational neural dynamics in spinal motor networks. Nature 2022; 610:526-531. [PMID: 36224394 DOI: 10.1038/s41586-022-05293-w] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 08/30/2022] [Indexed: 11/08/2022]
Abstract
Although the generation of movements is a fundamental function of the nervous system, the underlying neural principles remain unclear. As flexor and extensor muscle activities alternate during rhythmic movements such as walking, it is often assumed that the responsible neural circuitry similarly exhibits alternating activity1. Here we present ensemble recordings of neurons in the lumbar spinal cord that indicate that, rather than alternating, the population is performing a low-dimensional 'rotation' in neural space, in which the neural activity is cycling through all phases continuously during the rhythmic behaviour. The radius of rotation correlates with the intended muscle force, and a perturbation of the low-dimensional trajectory can modify the motor behaviour. As existing models of spinal motor control do not offer an adequate explanation of rotation1,2, we propose a theory of neural generation of movements from which this and other unresolved issues, such as speed regulation, force control and multifunctionalism, are readily explained.
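The rotation the abstract describes can be sketched with a minimal 2-D linear system: activity cycles through all phases continuously, and the rotation radius (standing in for intended force) scales with the drive. The dynamics matrix and parameters are illustrative assumptions, not the paper's fitted dynamics:

```python
import numpy as np

def rotational_population(radius, omega=2 * np.pi, dt=0.001, T=1.0):
    """Integrate dx/dt = omega * [[0, -1], [1, 0]] x with forward Euler.

    A toy stand-in for low-dimensional rotational population dynamics: the
    state traces a circle of the given radius, visiting every phase once
    per cycle rather than alternating between two poles.
    """
    A = omega * np.array([[0.0, -1.0], [1.0, 0.0]])
    x = np.array([radius, 0.0])
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x)
        traj.append(x.copy())
    return np.array(traj)

traj = rotational_population(radius=1.0)
phases = np.arctan2(traj[:, 1], traj[:, 0])            # phase visited at each step
final_radius = float(np.linalg.norm(traj[-1]))
phase_range = float(phases.max() - phases.min())       # nearly the full 2*pi
big_radius = float(np.linalg.norm(rotational_population(radius=2.0)[-1]))
```

Doubling the initial drive doubles the rotation radius while leaving the cycle structure unchanged, mirroring the reported radius-force correlation.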
20
Connectivity concepts in neuronal network modeling. PLoS Comput Biol 2022; 18:e1010086. [PMID: 36074778 PMCID: PMC9455883 DOI: 10.1371/journal.pcbi.1010086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 04/07/2022] [Indexed: 11/19/2022] Open
Abstract
Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent and one reason is a lack of readily applicable standards and tools for model description. Our work aims to advance complete and concise descriptions of network connectivity but also to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Beside these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
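Two of the connectivity concepts this work aims to disambiguate, pairwise Bernoulli connections versus a fixed in-degree, can be stated unambiguously in a few lines. The network size, connection probability and in-degree below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def bernoulli_connectivity(n, p):
    """Pairwise Bernoulli rule: each directed pair connects independently with prob p."""
    A = rng.random((n, n)) < p
    np.fill_diagonal(A, False)            # exclude self-connections (autapses)
    return A

def fixed_indegree_connectivity(n, k):
    """Fixed in-degree rule: each target draws exactly k distinct sources."""
    A = np.zeros((n, n), dtype=bool)
    for target in range(n):
        sources = [s for s in range(n) if s != target]
        A[target, rng.choice(sources, size=k, replace=False)] = True  # row = target
    return A

A_bern = bernoulli_connectivity(200, 0.1)
A_fix = fixed_indegree_connectivity(200, 20)
```

Both rules give the same mean in-degree here (about 20), but only the Bernoulli rule produces in-degree variability across neurons, exactly the kind of distinction that an ambiguous verbal description ("10% connectivity") fails to pin down.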
21
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. [PMID: 36056006 PMCID: PMC9440011 DOI: 10.1038/s41467-022-32646-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 08/08/2022] [Indexed: 11/08/2022] Open
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly, given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity is due to Hlocal only requiring small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
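The key observation, that a small correlated (here rank-one) weight change barely moves the neural covariance, can be reproduced in a linear toy network. This is a sketch under assumed dynamics and scales, not the paper's modular RNN:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 200
W = rng.standard_normal((N, N)) * (0.3 / np.sqrt(N))   # stable random recurrence
U = rng.standard_normal((N, trials))                    # inputs across trials

def responses(W, U):
    """Steady states of the linear rate network x = W x + u."""
    return np.linalg.solve(np.eye(len(W)) - W, U)

def cov_similarity(A, B):
    """Cosine similarity between the neural covariance matrices of two datasets."""
    Ca, Cb = np.cov(A), np.cov(B)
    return float((Ca * Cb).sum() / (np.linalg.norm(Ca) * np.linalg.norm(Cb)))

X0 = responses(W, U)
# Hlocal-style perturbation: a small, correlated (rank-one) connectivity change.
dW = 0.01 * np.outer(rng.standard_normal(N), rng.standard_normal(N)) / np.sqrt(N)
X1 = responses(W + dW, U)
sim = cov_similarity(X0, X1)
```

Despite rewiring every synapse by a small correlated amount, the population covariance stays close to its original structure, so preserved covariance alone cannot rule out local plasticity.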
22
Regimes and mechanisms of transient amplification in abstract and biological neural networks. PLoS Comput Biol 2022; 18:e1010365. [PMID: 35969604 PMCID: PMC9377633 DOI: 10.1371/journal.pcbi.1010365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 07/06/2022] [Indexed: 11/24/2022] Open
Abstract
Neuronal networks encode information through patterns of activity that define the networks’ function. The neurons’ activity relies on specific connectivity structures, yet the link between structure and function is not fully understood. Here, we tackle this structure-function problem with a new conceptual approach. Instead of manipulating the connectivity directly, we focus on upper triangular matrices, which represent the network dynamics in a given orthonormal basis obtained by the Schur decomposition. This abstraction allows us to independently manipulate the eigenspectrum and feedforward structures of a connectivity matrix. Using this method, we describe a diverse repertoire of non-normal transient amplification, and to complement the analysis of the dynamical regimes, we quantify the geometry of output trajectories through the effective rank of both the eigenvector and the dynamics matrices. Counter-intuitively, we find that shrinking the eigenspectrum’s imaginary distribution leads to highly amplifying regimes in linear and long-lasting dynamics in nonlinear networks. We also find a trade-off between amplification and dimensionality of neuronal dynamics, i.e., trajectories in neuronal state-space. Networks that can amplify a large number of orthogonal initial conditions produce neuronal trajectories that lie in the same subspace of the neuronal state-space. Finally, we examine networks of excitatory and inhibitory neurons. We find that the strength of global inhibition is directly linked with the amplitude of amplification, such that weakening inhibitory weights also decreases amplification, and that the eigenspectrum’s imaginary distribution grows with an increase in the ratio between excitatory-to-inhibitory and excitatory-to-excitatory connectivity strengths. Consequently, the strength of global inhibition reveals itself as a strong signature for amplification and a potential control mechanism to switch dynamical regimes. 
Our results shed light on how biological networks, i.e., networks constrained by Dale’s law, may be optimised for specific dynamical regimes. The architecture of a neuronal network lies at the heart of its dynamic behaviour, or in other words, the function of the system. However, the relationship between changes in the architecture and their effect on the dynamics, a structure-function problem, is still poorly understood. Here, we approach this problem by studying a rotated connectivity matrix that is easier to manipulate and interpret. We focus our analysis on a dynamical regime that arises from the biological property that neurons are usually not connected symmetrically, which may result in a non-normal connectivity matrix. Our techniques unveil distinct expressions of the dynamical regime of non-normal amplification. Moreover, we devise a way to analyse the geometry of the dynamics: we assign a single number to a network that quantifies how dissimilar its repertoire of behaviours can be. Finally, using our approach, we can close the loop back to the original neuronal architecture and find that biologically plausible networks use the strength of inhibition and excitatory-to-inhibitory connectivity strength to navigate the different dynamical regimes of non-normal amplification.
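The paper's central construction, manipulating an upper-triangular matrix in an orthonormal (Schur) basis to get non-normal transient amplification, fits in a short sketch. The template values below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Schur-style construction: pick a stable upper-triangular template T and an
# orthonormal basis Q, then form W = Q T Q^T. The eigenvalues come from T's
# diagonal; the off-diagonal entry is a purely feedforward link invisible to
# the eigenspectrum.
T = np.array([[-1.0, 8.0],
              [0.0, -1.0]])
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
W = Q @ T @ Q.T

assert np.linalg.eigvals(W).real.max() < 0   # linearly stable: all eigenvalues at -1

# Simulate dx/dt = W x (forward Euler) from the direction that feeds the
# feedforward link: the state norm transiently grows well above its initial
# value before the stable eigenvalues pull it back to zero.
dt, steps = 0.01, 500
x = Q[:, 1].copy()                            # unit-norm initial condition
norms = [1.0]
for _ in range(steps):
    x = x + dt * (W @ x)
    norms.append(float(np.linalg.norm(x)))
peak, final = max(norms), norms[-1]
```

Analytically the norm follows e^(-t) * sqrt(1 + 64 t^2), peaking near t = 1 at roughly three times the initial value: amplification generated entirely by the feedforward (non-normal) structure, with no unstable eigenvalue.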
23
Mazzucato L. Neural mechanisms underlying the temporal organization of naturalistic animal behavior. eLife 2022; 11:76577. [PMID: 35792884 PMCID: PMC9259028 DOI: 10.7554/elife.76577] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 06/07/2022] [Indexed: 12/17/2022] Open
Abstract
Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, with variability arising from at least three sources: hierarchical, contextual, and stochastic. What neural mechanisms and computational principles underlie such intricate temporal features? In this review, we provide a critical assessment of the existing behavioral and neurophysiological evidence for these sources of temporal variability in naturalistic behavior. Recent research converges on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising via coordinated interactions between mesoscopic neural circuits. We highlight the crucial role played by structural heterogeneities as well as noise from mesoscopic feedback loops in regulating flexible behavior. We assess the shortcomings and missing links in the current theoretical and experimental literature and propose new directions of investigation to fill these gaps.
Affiliation(s)
- Luca Mazzucato
- Institute of Neuroscience, Departments of Biology, Mathematics and Physics, University of Oregon
24
Yang S, Wang J, Hao X, Li H, Wei X, Deng B, Loparo KA. BiCoSS: Toward Large-Scale Cognition Brain With Multigranular Neuromorphic Architecture. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:2801-2815. [PMID: 33428574 DOI: 10.1109/tnnls.2020.3045492] [Citation(s) in RCA: 45] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The further exploration of the neural mechanisms underlying the biological activities of the human brain depends on the development of large-scale spiking neural networks (SNNs) with different categories at different levels, as well as the corresponding computing platforms. Neuromorphic engineering provides approaches to high-performance biologically plausible computational paradigms inspired by neural systems. In this article, we present a biological-inspired cognitive supercomputing system (BiCoSS) that integrates multiple granules (GRs) of SNNs to realize a hybrid compatible neuromorphic platform. A scalable hierarchical heterogeneous multicore architecture is presented, and a synergistic routing scheme for hybrid neural information is proposed. The BiCoSS system can accommodate different levels of GRs and biological plausibility of SNN models in an efficient and scalable manner. Over four million neurons can be realized on BiCoSS with a power efficiency 2.8k times greater than that of the GPU platform, and the average latency of BiCoSS is 3.62 and 2.49 times higher than that of conventional digital neuromorphic architectures. For the verification, BiCoSS is used to replicate various biological cognitive activities, including motor learning, action selection, context-dependent learning, and movement disorders. Comprehensively considering the programmability, biological plausibility, learning capability, computational power, and scalability, BiCoSS is shown to outperform the alternative state-of-the-art works for large-scale SNN, while its real-time computational capability enables a wide range of potential applications.
25
Affiliation(s)
- Siyan Zhou
- Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Kanaka Rajan
- Icahn School of Medicine at Mount Sinai, New York, NY, USA.
26
Dubreuil A, Valente A, Beiran M, Mastrogiuseppe F, Ostojic S. The role of population structure in computations through neural dynamics. Nat Neurosci 2022; 25:783-794. [PMID: 35668174 DOI: 10.1038/s41593-022-01088-4] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 04/28/2022] [Indexed: 11/09/2022]
Abstract
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input-output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
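The gain-modulation mechanism the abstract identifies can be sketched in a rank-one network split into two subpopulations: changing per-population gains rescales each population's contribution to the effective latent coupling without touching a single synapse. This construction and all its numbers are illustrative, not the paper's trained networks:

```python
import numpy as np

N = 100
half = N // 2
m = np.ones(N)                                            # right connectivity vector
n_vec = np.concatenate([np.ones(half), -np.ones(half)])   # left vector, split by population
W = np.outer(m, n_vec) / N                                # rank-one connectivity

def effective_coupling(gains):
    """Linearized latent coupling of the gain-modulated rank-one network.

    With neuron gains g, the latent variable kappa obeys
    d(kappa)/dt = -kappa + (n^T diag(g) m / N) * kappa.
    """
    return float(n_vec @ (gains * m) / N)

coupling_equal = effective_coupling(np.ones(N))                       # populations cancel
coupling_mod = effective_coupling(np.concatenate([2 * np.ones(half),  # boost population 1
                                                  np.ones(half)]))
```

With equal gains the two subpopulations' opposite-sign loadings cancel and the recurrent loop is silent; boosting one population's gain unmasks a positive coupling, flexibly reshaping the collective dynamics, which is only possible because the population structure is non-random.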
Affiliation(s)
- Alexis Dubreuil
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France; Université de Bordeaux, CNRS, IMN, UMR, Bordeaux, France
- Adrian Valente
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France.
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France; Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Francesca Mastrogiuseppe
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Champalimaud Research, Lisbon, Portugal
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France.
27
Naumann LB, Keijser J, Sprekeler H. Invariant neural subspaces maintained by feedback modulation. eLife 2022; 11:76096. [PMID: 35442191 PMCID: PMC9106332 DOI: 10.7554/elife.76096] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 04/06/2022] [Indexed: 11/13/2022] Open
Abstract
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to the extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
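A slow, spatially diffuse gain that restores an invariant population response can be sketched with a single shared gain adapted by feedback. The network sizes, contexts and learning rate below are illustrative assumptions, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((20, 5)) / np.sqrt(5)   # fixed feedforward weights
s = rng.standard_normal(5)                       # underlying stimulus
target = float(np.linalg.norm(W @ s))            # population response in the reference context
gain = 1.0                                       # single shared ("spatially diffuse") gain

def feedback_step(gain, response_norm, target, lr=0.2):
    """Slow multiplicative feedback nudging the shared gain toward the target energy."""
    return gain * (1.0 + lr * (target - response_norm) / target)

for context in [1.0, 3.0, 0.5]:                  # context rescales the stimulus (e.g. loudness)
    for _ in range(200):                          # slow feedback settles within each context
        r = gain * (W @ (context * s))            # feedforward pass under the current gain
        gain = feedback_step(gain, float(np.linalg.norm(r)), target)
invariance_error = abs(float(np.linalg.norm(r)) - target)
```

After each context switch the gain converges to 1/context, so the population response returns to the same subspace and energy even though every individual neuron's drive changed, which is the population-level invariance the abstract describes.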
28
Greenhouse I. Inhibition for gain modulation in the motor system. Exp Brain Res 2022; 240:1295-1302. [DOI: 10.1007/s00221-022-06351-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 03/15/2022] [Indexed: 01/10/2023]
29
Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives in which the motor cortex is involved in motor control via neuronal preference for kinetics and kinematics, a dynamical system perspective emerging in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and controversy from both empirical and computational points of view. Here, we aim to reconcile the above perspectives, and evaluate their theoretical impact, future direction, and potential applications in brain-machine interfaces.
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
30
Curreli S, Bonato J, Romanzi S, Panzeri S, Fellin T. Complementary encoding of spatial information in hippocampal astrocytes. PLoS Biol 2022; 20:e3001530. [PMID: 35239646 PMCID: PMC8893713 DOI: 10.1371/journal.pbio.3001530] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 01/05/2022] [Indexed: 01/28/2023] Open
Abstract
Calcium dynamics in astrocytes influence the activity of nearby neuronal structures. However, because previous reports show that astrocytic calcium signals largely mirror neighboring neuronal activity, current information coding models neglect astrocytes. Using simultaneous two-photon calcium imaging of astrocytes and neurons in the hippocampus of mice navigating a virtual environment, we demonstrate that astrocytic calcium signals encode (i.e., statistically reflect) spatial information that could not be explained by visual cue information. Calcium events carrying spatial information occurred in topographically organized astrocytic subregions. Importantly, astrocytes encoded spatial information that was complementary and synergistic to that carried by neurons, improving spatial position decoding when astrocytic signals were considered alongside neuronal ones. These results suggest that the complementary place dependence of localized astrocytic calcium signals may regulate clusters of nearby synapses, enabling dynamic, context-dependent variations in population coding within brain circuits.
Affiliation(s)
- Sebastiano Curreli
- Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Jacopo Bonato
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Sara Romanzi
- Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- University of Genova, Genova, Italy
- Stefano Panzeri
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Tommaso Fellin
- Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
31
Robson DN, Li JM. A dynamical systems view of neuroethology: Uncovering stateful computation in natural behaviors. Curr Opin Neurobiol 2022; 73:102517. [PMID: 35217311 DOI: 10.1016/j.conb.2022.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 01/06/2022] [Accepted: 01/11/2022] [Indexed: 11/03/2022]
Abstract
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine with theoretically infinite computational capacity in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz also began to implicitly embed rudimentary forms of state-dependent computation to create qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states.
Affiliation(s)
- Drew N Robson
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
- Jennifer M Li
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
32
Kao TC, Sadabadi MS, Hennequin G. Optimal anticipatory control as a theory of motor preparation: A thalamo-cortical circuit model. Neuron 2021;109:1567-1581.e12. PMID: 33789082; PMCID: PMC8111422; DOI: 10.1016/j.neuron.2021.03.009.
Abstract
Across a range of motor and cognitive tasks, cortical activity can be accurately described by low-dimensional dynamics unfolding from specific initial conditions on every trial. These "preparatory states" largely determine the subsequent evolution of both neural activity and behavior, and their importance raises questions regarding how they are, or ought to be, set. Here, we formulate motor preparation as optimal anticipatory control of future movements and show that the solution requires a form of internal feedback control of cortical circuit dynamics. In contrast to a simple feedforward strategy, feedback control enables fast movement preparation by selectively controlling the cortical state in the small subspace that matters for the upcoming movement. Feedback but not feedforward control explains the orthogonality between preparatory and movement activity observed in reaching monkeys. We propose a circuit model in which optimal preparatory control is implemented as a thalamo-cortical loop gated by the basal ganglia.
Affiliation(s)
- Ta-Chu Kao
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
- Mahdieh S Sadabadi
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
33
Zenke F, Vogels TP. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Comput 2021;33:899-925. PMID: 33513328; DOI: 10.1162/neco_a_01367.
Abstract
Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
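The core trick this abstract describes can be sketched in a few lines (an illustrative sketch, not the paper's code; the fast-sigmoid surrogate and the scale parameter `beta` below are one common choice, with `beta` standing in for the derivative scale the study finds most consequential):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike nonlinearity."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: smooth pseudo-derivative used in place of the
    Heaviside's zero/undefined true derivative. This is the fast-sigmoid
    form 1 / (beta * |v - threshold| + 1)^2; `beta` sets the surrogate's
    scale."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.0, 0.9, 1.0, 1.1, 2.0])  # membrane potentials
spikes = spike(v)          # exact 0/1 spikes for the forward pass
grads = surrogate_grad(v)  # smooth gradients, peaked at threshold
```

In training, the surrogate replaces the spike derivative only in the backward pass; the forward pass still emits binary spikes, which is what makes the resulting network a genuine spiking network.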
Affiliation(s)
- Friedemann Zenke
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Institute of Science and Technology Austria, 3400 Klosterneuburg, Austria
34
Bondanelli G, Deneux T, Bathellier B, Ostojic S. Network dynamics underlying OFF responses in the auditory cortex. eLife 2021;10:e53151. PMID: 33759763; PMCID: PMC8057817; DOI: 10.7554/eLife.53151.
Abstract
Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.
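A toy version of the network hypothesis (illustrative numbers, not the paper's fitted model): in a linear rate network dx/dt = (−I + W)x, a non-normal, effectively feedforward W transiently amplifies the state left behind at stimulus offset before it decays, whereas without recurrence the response only decays:

```python
import numpy as np

def offset_response(W, x0, dt=0.01, steps=600):
    """Euler-integrate dx/dt = (-I + W) x from the offset state x0
    and return the population activity norm ||x(t)|| over time."""
    A = -np.eye(len(x0)) + W
    x = x0.astype(float)
    norms = []
    for _ in range(steps):
        x = x + dt * (A @ x)
        norms.append(np.linalg.norm(x))
    return np.array(norms)

x0 = np.array([1.0, 0.0])             # state at stimulus offset
W_ff = np.array([[0.0, 0.0],          # non-normal: unit 1 drives unit 2
                 [4.0, 0.0]])
W_none = np.zeros((2, 2))             # no recurrence: pure leak

amplified = offset_response(W_ff, x0)   # rises above ||x0||, then decays
decaying = offset_response(W_none, x0)  # monotonic decay
```

The transient surge in the non-normal case is the single-cell "OFF response" signature the network model attributes to recurrent interactions rather than to intrinsic cellular adaptation.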
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’études cognitives, ENS, PSL University, INSERM, Paris, France
- Neural Computation Laboratory, Center for Human Technologies, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- Thomas Deneux
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Brice Bathellier
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris-Sud, Gif-sur-Yvette, France
- Institut Pasteur, INSERM, Institut de l’Audition, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’études cognitives, ENS, PSL University, INSERM, Paris, France
35
Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021;17:e1008866. PMID: 33764970; PMCID: PMC8023498; DOI: 10.1371/journal.pcbi.1008866.
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Affiliation(s)
- Amadeus Maes
- Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
36
A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping. Proc Natl Acad Sci U S A 2020;117:32124-32135. PMID: 33257539; DOI: 10.1073/pnas.2005087117.
Abstract
One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. However, no comprehensive model exists that links all steps of processing from vision to action. We hypothesized that a recurrent neural network mimicking the modular structure of the anatomical circuit and trained to use visual features of objects to generate the required muscle dynamics used by primates to grasp objects would give insight into the computations of the grasping circuit. Internal activity of modular networks trained with these constraints strongly resembled neural activity recorded from the grasping circuit during grasping and paralleled the similarities between brain regions. Network activity during the different phases of the task could be explained by linear dynamics for maintaining a distributed movement plan across the network in the absence of visual stimulus and then generating the required muscle kinematics based on these initial conditions in a module-specific way. These modular models also outperformed alternative models at explaining neural data, despite the absence of neural data during training, suggesting that the inputs, outputs, and architectural constraints imposed were sufficient for recapitulating processing in the grasping circuit. Finally, targeted lesioning of modules produced deficits similar to those observed in lesion studies of the grasping circuit, providing a potential model for how brain regions may coordinate during the visually guided grasping of objects.
37
Inoue K, Nakajima K, Kuniyoshi Y. Designing spontaneous behavioral switching via chaotic itinerancy. Sci Adv 2020;6(46):eabb3989. PMID: 33177080; PMCID: PMC7673744; DOI: 10.1126/sciadv.abb3989.
Abstract
Chaotic itinerancy is a frequently observed phenomenon in high-dimensional nonlinear dynamical systems and is characterized by itinerant transitions among multiple quasi-attractors. Several studies have pointed out that high-dimensional activity in animal brains can be observed to exhibit chaotic itinerancy, which is considered to play a critical role in the spontaneous behavior generation of animals. Thus, how to design desired chaotic itinerancy is a topic of great interest, particularly for neurorobotics researchers who wish to understand and implement autonomous behavioral controls. However, it is generally difficult to gain control over high-dimensional nonlinear dynamical systems. In this study, we propose a method for implementing chaotic itinerancy reproducibly in a high-dimensional chaotic neural network. We demonstrate that our method enables us to easily design both the trajectories of quasi-attractors and the transition rules among them simply by adjusting the limited number of system parameters and by using the intrinsic high-dimensional chaos.
Affiliation(s)
- Katsuma Inoue
- Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
- Kohei Nakajima
- Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
- Yasuo Kuniyoshi
- Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
38
Stokes MG, Muhle-Karbe PS, Myers NE. Theoretical distinction between functional states in working memory and their corresponding neural states. Visual Cognition 2020;28:420-432. PMID: 33223922; PMCID: PMC7655036; DOI: 10.1080/13506285.2020.1825141.
Abstract
Working memory (WM) is important for guiding behaviour, but not always for the next possible action. Here we define a WM item that is currently relevant for guiding behaviour as the functionally "active" item; whereas items maintained in WM, but not immediately relevant to behaviour, are defined as functionally "latent". Traditional neurophysiological theories of WM proposed that content is maintained via persistent neural activity (e.g., stable attractors); however, more recent theories have highlighted the potential role for "activity-silent" mechanisms (e.g., short-term synaptic plasticity). Given these somewhat parallel dichotomies, functionally active and latent cognitive states of WM have been associated with storage based on persistent-activity and activity-silent neural mechanisms, respectively. However, in this article we caution against a one-to-one correspondence between functional and activity states. We argue that the principal theoretical requirement for active and latent WM is that the corresponding neural states play qualitatively different functional roles. We consider a number of candidate solutions, and conclude that the neurophysiological mechanisms for functionally active and latent WM items are theoretically independent of the distinction between persistent activity-based and activity-silent forms of WM storage.
Affiliation(s)
- Mark G. Stokes
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
- Paul S. Muhle-Karbe
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
- Nicholas E. Myers
- Wellcome Centre for Integrative Neuroimaging and Department of Experimental Psychology, University of Oxford, Oxford, UK
39
Pollock E, Jazayeri M. Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Comput Biol 2020;16:e1008128. PMID: 32785228; PMCID: PMC7446915; DOI: 10.1371/journal.pcbi.1008128.
Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.
Affiliation(s)
- Eli Pollock
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Mehrdad Jazayeri
- Department of Brain & Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
40
Kozachkov L, Lundqvist M, Slotine JJ, Miller EK. Achieving stable dynamics in neural circuits. PLoS Comput Biol 2020;16:e1007659. PMID: 32764745; PMCID: PMC7446801; DOI: 10.1371/journal.pcbi.1007659.
Abstract
The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to analyzing the stability of fixed geometric objects in state space (e.g. points, lines, planes), but rather the stability of state trajectories, which may be complex and time-varying.
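The contraction criterion applied here has a compact special case worth sketching (an illustration under the identity metric, not the authors' code): dx/dt = (−I + W)x is contracting if the symmetric part of A = −I + W is negative definite. Note that this is stronger than eigenvalue stability, which still permits transient growth:

```python
import numpy as np

def is_contracting_linear(W):
    """Contraction test for dx/dt = (-I + W) x under the identity metric:
    all eigenvalues of the symmetric part of A = -I + W must be negative,
    which guarantees any two trajectories converge exponentially."""
    A = -np.eye(W.shape[0]) + W
    sym = 0.5 * (A + A.T)
    return bool(np.max(np.linalg.eigvalsh(sym)) < 0)

W_weak = np.array([[0.0, 0.5],   # weak symmetric coupling: contracting
                   [0.5, 0.0]])
W_ff = np.array([[0.0, 4.0],     # strong feedforward link: eigenvalue-stable
                 [0.0, 0.0]])    # (both eigenvalues of A are -1) but NOT
                                 # contracting in this metric; transients grow
```

The paper's contribution is identifying metrics and plasticity rules under which richer, time-varying networks remain contracting; this sketch shows only the simplest constant-metric check.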
Affiliation(s)
- Leo Kozachkov
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Mikael Lundqvist
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Jean-Jacques Slotine
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Earl K. Miller
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
41
Abstract
Behavior is readily classified into patterns of movements with inferred common goals-actions. Goals may be discrete; movements are continuous. Through the careful study of isolated movements in laboratory settings, or via introspection, it has become clear that animals can exhibit exquisite graded specification to their movements. Moreover, graded control can be as fundamental to success as the selection of which action to perform under many naturalistic scenarios: a predator adjusting its speed to intercept moving prey, or a tool-user exerting the perfect amount of force to complete a delicate task. The basal ganglia are a collection of nuclei in vertebrates that extend from the forebrain (telencephalon) to the midbrain (mesencephalon), constituting a major descending extrapyramidal pathway for control over midbrain and brainstem premotor structures. Here we discuss how this pathway contributes to the continuous specification of movements that endows our voluntary actions with vigor and grace.
Affiliation(s)
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Luke T Coddington
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Joshua T Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
42
Köksal Ersöz E, Aguilar C, Chossat P, Krupa M, Lavigne F. Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability. PLoS One 2020;15:e0231165. PMID: 32298290; PMCID: PMC7161983; DOI: 10.1371/journal.pone.0231165.
Abstract
In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular we show that synaptic depression and noise drive the transitions from one memory item to the next and neuronal gain controls the switching between regular and irregular (random) activation.
Affiliation(s)
- Carlos Aguilar
- Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France
- Pascal Chossat
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
- Martin Krupa
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
43
On Primitives in Motor Control. Motor Control 2020;24:318-346. DOI: 10.1123/mc.2019-0099.
Abstract
The concept of primitives has been used in motor control both as a theoretical construct and as a means of describing the results of experimental studies involving multiple moving elements. This concept is close to Bernstein’s notion of engrams and level of synergies. Performance primitives have been explored in spaces of peripheral variables but interpreted in terms of neural control primitives. Performance primitives reflect a variety of mechanisms ranging from body mechanics to spinal mechanisms and to supraspinal circuitry. This review suggests that primitives originate at the task level as preferred time functions of spatial referent coordinates or at mappings from higher level referent coordinates to lower level, frequently abundant, referent coordinate sets. Different patterns of performance primitives can emerge depending, in particular, on the external force field.
44
Abstract
The full functionality of the brain is determined by its molecular, cellular and circuit structure. Modern neuroscience now prioritizes the mapping of whole brain connectomes by detecting all direct neuron to neuron synaptic connections, a feat first accomplished for C. elegans, a full reconstruction of a 302-neuron nervous system. Efforts at Janelia Research Campus will soon reconstruct the whole brain connectomes of a larval and an adult Drosophila. These connectomes will provide a framework for incorporating detailed neural circuit information that Drosophila neuroscientists have gathered over decades. But when viewed in the context of a whole brain, it becomes difficult to isolate the contributions of distinct circuits, whether sensory systems or higher brain regions. The complete wiring diagram tells us that sensory information is not only processed in separate channels, but that even the earliest sensory layers are strongly synaptically interconnected. In the higher brain, long-range projections densely interconnect major brain regions and convergence centers that integrate input from different sensory systems. Furthermore, we also need to understand the impact of neuronal communication beyond direct synaptic modulation. Nevertheless, all of this can be pursued with Drosophila, combining connectomics with a diverse array of genetic tools and behavioral paradigms that provide effective approaches to entire brain function.
Affiliation(s)
- Katrin Vogt
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
45
Kevrekidis PG, Cuevas-Maraver J, Saxena A. Nonlinearity + Networks: A 2020 Vision. Emerging Frontiers in Nonlinear Science. 2020. PMCID: PMC7258850; DOI: 10.1007/978-3-030-44992-6_6.
Affiliation(s)
- Jesús Cuevas-Maraver
- Grupo de Fisica No Lineal, Departamento de Fisica Aplicada I, Escuela Politécnica Superior, Universidad de Sevilla, Seville, Spain
- Avadh Saxena
- Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA
46
Kao JC. Considerations in using recurrent neural networks to probe neural dynamics. J Neurophysiol 2019;122:2504-2521. PMID: 31619125; DOI: 10.1152/jn.00467.2018.
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics. NEW & NOTEWORTHY: Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms. RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
Affiliation(s)
- Jonathan C Kao
- Department of Electrical and Computer Engineering, University of California, Los Angeles, California
- Neurosciences Program, University of California, Los Angeles, California
47
Heeger DJ, Mackey WE. Oscillatory recurrent gated neural integrator circuits (ORGaNICs), a unifying theoretical framework for neural dynamics. Proc Natl Acad Sci U S A 2019;116:22783-22794. PMID: 31636212; PMCID: PMC6842604; DOI: 10.1073/pnas.1911633116.
Abstract
Working memory is an example of a cognitive and neural process that is not static but evolves dynamically with changing sensory inputs; another example is motor preparation and execution. We introduce a theoretical framework for neural dynamics, based on oscillatory recurrent gated neural integrator circuits (ORGaNICs), and apply it to simulate key phenomena of working memory and motor control. The model circuits simulate neural activity with complex dynamics, including sequential activity and traveling waves of activity, that manipulate (as well as maintain) information during working memory. The same circuits convert spatial patterns of premotor activity to temporal profiles of motor control activity and manipulate (e.g., time warp) the dynamics. Derivative-like recurrent connectivity, in particular, serves to manipulate and update internal models, an essential feature of working memory and motor execution. In addition, these circuits incorporate recurrent normalization, to ensure stability over time and robustness with respect to perturbations of synaptic weights.
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
- Wayne E Mackey
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
48
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics. Curr Opin Neurobiol 2019; 58:122-129. [DOI: 10.1016/j.conb.2019.09.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 07/16/2019] [Accepted: 09/03/2019] [Indexed: 12/19/2022]
49
Constraining computational models using electron microscopy wiring diagrams. Curr Opin Neurobiol 2019; 58:94-100. [PMID: 31470252 DOI: 10.1016/j.conb.2019.07.007] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Accepted: 07/25/2019] [Indexed: 12/18/2022]
Abstract
Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.
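One common approach the survey covers is to constrain a model's weights with the wiring diagram: absent synapses are clamped to zero and each connection's sign follows its presynaptic cell type (Dale's law). A sketch under assumed data (the mask and cell-type signs below are synthetic stand-ins for an EM-derived connectome):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# Synthetic stand-in for an EM wiring diagram: which synapses exist at all.
mask = (rng.random((N, N)) < 0.1).astype(float)
np.fill_diagonal(mask, 0.0)
# Synthetic cell types: each presynaptic neuron is excitatory (+1) or inhibitory (-1).
signs = np.where(rng.random(N) < 0.8, 1.0, -1.0)

def project(W_free):
    """Map unconstrained parameters onto the wiring diagram: zero out absent
    synapses and force each column's sign to its presynaptic cell type."""
    return mask * np.abs(W_free) * signs[np.newaxis, :]

W = project(rng.normal(0.0, 1.0, (N, N)))
```

In a training loop one would re-apply `project` after every gradient step (projected gradient descent), so the learned model never leaves the set of connectome-consistent weights.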
50
Stimberg M, Brette R, Goodman DFM. Brian 2, an intuitive and efficient neural simulator. eLife 2019; 8:e47314. [PMID: 31429824 PMCID: PMC6786860 DOI: 10.7554/elife.47314] [Citation(s) in RCA: 181] [Impact Index Per Article: 36.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 08/19/2019] [Indexed: 01/20/2023] Open
Abstract
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write simple and concise high-level descriptions, and Brian transforms these into efficient low-level code that runs interleaved with the scientists' own code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
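The key move, runtime code generation, can be caricatured in a few lines of plain Python. This is a toy of the general idea only: `make_stepper` is our invention for illustration and bears no resemblance to Brian's actual machinery or API.

```python
import numpy as np

def make_stepper(equation, dt=0.001):
    """Turn a high-level model description -- a string giving dv/dt as a
    Python expression in v and t (np is also in scope) -- into a compiled
    update function, built once at runtime."""
    src = f"def step(v, t):\n    return v + {dt} * ({equation})"
    namespace = {"np": np}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

# A leaky integrator, dv/dt = -v / tau with tau = 10 ms, written as a string:
step = make_stepper("-v / 0.01")
v = 1.0
for i in range(1000):
    v = step(v, i * 0.001)
# After 1 s (100 time constants) the voltage has decayed essentially to zero.
```

Because the generated `step` is an ordinary Python function, it can be called interleaved with arbitrary user code, which is the property that lets a high-level description coexist with complex, programmatic stimulation protocols.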
Affiliation(s)
- Marcel Stimberg
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Romain Brette
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Dan FM Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom