1
Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. Neuron 2025; 113:1151-1168.e13. [PMID: 40081364] [PMCID: PMC12006907] [DOI: 10.1016/j.neuron.2025.02.006] [Received: 02/02/2024] [Revised: 11/06/2024] [Accepted: 02/09/2025]
Abstract
The widespread adoption of deep learning to model neural activity often relies on "black-box" approaches that lack an interpretable connection between neural activity and network parameters. Here, we propose using algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We introduce our method, deconvolutional unrolled neural learning (DUNL), and demonstrate its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. We uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and across striatum during unstructured, naturalistic experiments. Our work leverages advances in interpretable deep learning to provide a mechanistic understanding of neural activity.
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA; Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI 02906, USA
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA
- Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Psychology, McGill University, Montréal, QC H3A 1G1, Canada; Mila - Quebec Artificial Intelligence Institute, Montréal, QC H2S 3H1, Canada
- Demba Ba
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA
2
Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. bioRxiv 2024:2024.01.05.574379. [PMID: 38260512] [PMCID: PMC10802267] [DOI: 10.1101/2024.01.05.574379]
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI 02906
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Venkatesh N. Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Department of Psychology, McGill University, Montréal, QC H3A 1G1
- Demba Ba
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138
3
Xiao G, Cai Y, Zhang Y, Xie J, Wu L, Xie H, Wu J, Dai Q. Mesoscale neuronal granular trial variability in vivo illustrated by nonlinear recurrent network in silico. Nat Commun 2024; 15:9894. [PMID: 39548098] [PMCID: PMC11567969] [DOI: 10.1038/s41467-024-54346-3] [Received: 04/23/2024] [Accepted: 11/06/2024]
Abstract
Large-scale neural recording with single-neuron resolution has revealed the functional complexity of neural systems. However, even under well-designed task conditions, the cortex-wide network exhibits highly dynamic trial variability, posing challenges to conventional trial-averaged analysis. To study mesoscale trial variability, we conducted a comparative study between fluorescence imaging of layer-2/3 neurons in vivo and network simulation in silico. We imaged the responses of up to 40,000 cortical neurons triggered by deep brain stimulation (DBS) and built an in silico network to reproduce the biological phenomena we observed in vivo. We demonstrated the existence of ineluctable trial variability and found it to be influenced by input amplitude and range. Moreover, we showed that a spatially heterogeneous coding community accounts for more reliable inter-trial coding despite single-unit trial variability. A deeper understanding of trial variability from the perspective of a dynamical system may lead to uncovering intellectual abilities such as parallel coding and creativity.
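The in vivo/in silico comparison rests on a simple phenomenon: even a fixed stimulus yields variable single-trial responses in a nonlinear recurrent network. A toy rate-network sketch of that phenomenon (purely illustrative; the network size, gain, and noise level are invented and this is not the authors' model):

```python
import numpy as np

def simulate_trials(n_trials=20, noise=0.2, n_neurons=100, n_steps=200, seed=0):
    """Toy rate network x <- x + dt * (-x + tanh(W x + u_t) + private noise).
    The stimulus pulse u is identical on every trial; only the per-trial
    noise differs, producing trial-to-trial variability."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.2 / np.sqrt(n_neurons), size=(n_neurons, n_neurons))
    u = np.zeros(n_steps)
    u[50:60] = 1.0                                  # shared stimulus pulse
    dt = 0.1
    responses = np.empty((n_trials, n_neurons))
    for k in range(n_trials):
        x = np.zeros(n_neurons)                     # same initial state each trial
        for t in range(n_steps):
            x = x + dt * (-x + np.tanh(W @ x + u[t])
                          + noise * rng.normal(size=n_neurons))
        responses[k] = x                            # end-of-trial state
    return responses

trial_var = simulate_trials(noise=0.2).var(axis=0).mean()
control_var = simulate_trials(noise=0.0).var(axis=0).mean()
```

Setting the private noise to zero removes all trial-to-trial variability, the kind of control that separates ineluctable variability from deterministic dynamics.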
Affiliation(s)
- Guihua Xiao
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yeyi Cai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Lifan Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
4
Kay K, Prince JS, Gebhart T, Tuckute G, Zhou J, Naselaris T, Schutt H. Disentangling signal and noise in neural responses through generative modeling. bioRxiv 2024:2024.04.22.590510. [PMID: 38712051] [PMCID: PMC11071385] [DOI: 10.1101/2024.04.22.590510]
Abstract
Measurements of neural responses to identically repeated experimental events often exhibit large amounts of variability. This noise is distinct from signal, operationally defined as the average expected response across repeated trials for each given event. Accurately distinguishing signal from noise is important, as each is a target that is worthy of study (many believe noise reflects important aspects of brain function) and it is important not to confuse one for the other. Here, we describe a principled modeling approach in which response measurements are explicitly modeled as the sum of samples from multivariate signal and noise distributions. In our proposed method, termed Generative Modeling of Signal and Noise (GSN), the signal distribution is estimated by subtracting the estimated noise distribution from the estimated data distribution. Importantly, GSN improves estimates of the signal distribution, but does not provide improved estimates of responses to individual events. We validate GSN using ground-truth simulations and show that it compares favorably with related methods. We also demonstrate the application of GSN to empirical fMRI data to illustrate a simple consequence of GSN: by disentangling signal and noise components in neural responses, GSN denoises principal components analysis and improves estimates of dimensionality. We end by discussing other situations that may benefit from GSN's characterization of signal and noise, such as estimation of noise ceilings for computational models of neural activity. A code toolbox for GSN is provided with both MATLAB and Python implementations.
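The decomposition at the heart of this approach can be illustrated in a few lines: across repeated presentations, the covariance of trial-averaged responses contains the signal covariance plus a 1/n-scaled noise covariance, so subtracting an estimate of the latter recovers the former. A simplified NumPy sketch on simulated data (illustrative only; the released GSN toolbox uses more careful estimators than this naive subtraction):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cond, n_rep, n_units = 2000, 20, 3

# Ground truth: signal drawn once per condition, noise fresh on every trial.
signal_cov = np.array([[1.0, 0.6, 0.0],
                       [0.6, 1.0, 0.0],
                       [0.0, 0.0, 0.5]])
noise_var = 0.8
signal = rng.multivariate_normal(np.zeros(n_units), signal_cov, size=n_cond)
resp = signal[:, None, :] + rng.normal(0.0, np.sqrt(noise_var),
                                       size=(n_cond, n_rep, n_units))

# Noise covariance: pooled within-condition covariance across repeats.
resid = resp - resp.mean(axis=1, keepdims=True)
noise_cov_hat = np.einsum('cri,crj->ij', resid, resid) / (n_cond * (n_rep - 1))

# Trial-averaged responses still carry noise_cov / n_rep; subtract it out.
means = resp.mean(axis=1)
data_cov_hat = np.cov(means.T)
signal_cov_hat = data_cov_hat - noise_cov_hat / n_rep
```

Because raw subtraction can leave the estimate non-positive-definite in small samples, methods of this kind typically add regularization or project the result back onto the positive semidefinite cone.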
Affiliation(s)
- Kendrick Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota
- Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Jingyang Zhou
- Center for Computational Neuroscience (CCN), Flatiron Institute
- Thomas Naselaris
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota
- Department of Neuroscience, University of Minnesota
- Heiko Schutt
- Department of Behavioural and Cognitive Sciences, Université du Luxembourg
5
Tlaie A, Shapcott K, van der Plas TL, Rowland J, Lees R, Keeling J, Packer A, Tiesinga P, Schölvinck ML, Havenith MN. What does the mean mean? A simple test for neuroscience. PLoS Comput Biol 2024; 20:e1012000. [PMID: 38640119] [PMCID: PMC11062559] [DOI: 10.1371/journal.pcbi.1012000] [Received: 10/03/2023] [Revised: 05/01/2024] [Accepted: 03/12/2024]
Abstract
Trial-averaged metrics, e.g. tuning curves or population response vectors, are a ubiquitous way of characterizing neuronal activity. But how relevant are such trial-averaged responses to neuronal computation itself? Here we present a simple test to estimate whether average responses reflect aspects of neuronal activity that contribute to neuronal processing. The test probes two assumptions implicitly made whenever average metrics are treated as meaningful representations of neuronal activity: (1) Reliability: neuronal responses repeat consistently enough across trials that they convey a recognizable reflection of the average response to downstream regions. (2) Behavioural relevance: if a single-trial response is more similar to the average template, it is more likely to evoke correct behavioural responses. We apply this test to two data sets: (1) two-photon recordings in primary and secondary somatosensory cortices (S1 and S2) of mice trained to detect optogenetic stimulation in S1; and (2) electrophysiological recordings from 71 brain areas in mice performing a contrast discrimination task. Under the highly controlled settings of Data set 1, both assumptions were largely fulfilled. In contrast, the less restrictive paradigm of Data set 2 met neither assumption. Simulations predict that the larger diversity of neuronal response preferences, rather than higher cross-trial reliability, drives the better performance of Data set 1. We conclude that when behaviour is less tightly restricted, average responses do not seem particularly relevant to neuronal computation, potentially because information is encoded more dynamically. Most importantly, we encourage researchers to apply this simple test of computational relevance whenever using trial-averaged neuronal metrics, in order to gauge how representative cross-trial averages are in a given context.
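The reliability assumption lends itself to a simple template-matching check: compare each single trial against every condition's trial-averaged template and ask how often its own condition wins. A hypothetical sketch on synthetic data (the function name, data shapes, and noise levels are invented for illustration; the paper's test additionally assesses behavioural relevance, which is not modeled here):

```python
import numpy as np

def template_match_accuracy(trials):
    """trials: (n_cond, n_trials, n_units) array. Each single trial is
    correlated with every condition's average template (leave-one-out for
    its own condition); returns the fraction assigned to the correct one."""
    n_cond, n_trials, _ = trials.shape
    sums = trials.sum(axis=1)
    correct = 0
    for c in range(n_cond):
        for t in range(n_trials):
            trial = trials[c, t]
            templates = sums / n_trials
            templates[c] = (sums[c] - trial) / (n_trials - 1)  # leave-one-out
            r = [np.corrcoef(trial, tpl)[0, 1] for tpl in templates]
            correct += int(np.argmax(r) == c)
    return correct / (n_cond * n_trials)

rng = np.random.default_rng(1)
tuning = rng.normal(size=(4, 50))                  # 4 conditions, 50 'units'
reliable = tuning[:, None, :] + 0.3 * rng.normal(size=(4, 20, 50))
noisy = tuning[:, None, :] + 3.0 * rng.normal(size=(4, 20, 50))
acc_reliable = template_match_accuracy(reliable)
acc_noisy = template_match_accuracy(noisy)
```

High accuracy indicates that single trials carry a recognizable reflection of the average template; accuracy falling toward chance is the warning sign the authors' test is designed to raise.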
Affiliation(s)
- Alejandro Tlaie
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Technical University of Madrid, Madrid, Spain
- Thijs L. van der Plas
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- James Rowland
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Robert Lees
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Joshua Keeling
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Adam Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Paul Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- Martha N. Havenith
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
6
Casartelli L, Maronati C, Cavallo A. From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability. Phys Life Rev 2023; 47:245-263. [PMID: 37976727] [DOI: 10.1016/j.plrev.2023.10.036] [Received: 10/25/2023] [Accepted: 10/27/2023]
Abstract
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social)fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.
Affiliation(s)
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Italy
- Camilla Maronati
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy
- Andrea Cavallo
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
7
Roland PE. How far neuroscience is from understanding brains. Front Syst Neurosci 2023; 17:1147896. [PMID: 37867627] [PMCID: PMC10585277] [DOI: 10.3389/fnsys.2023.1147896] [Received: 01/19/2023] [Accepted: 07/31/2023]
Abstract
The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete. Without prior assumptions about the brain mechanisms, I attempt here to identify major obstacles to progress in neuroscientific understanding of brains and central nervous systems. Most of the obstacles to our understanding are conceptual. Neuroscience lacks concepts and models rooted in experimental results explaining how neurons interact at all scales. The cerebral cortex is thought to control awake activities, which contrasts with recent experimental results. There is ambiguity distinguishing task-related brain activities from spontaneous activities and organized intrinsic activities. Brains are regarded as driven by external and internal stimuli in contrast to their considerable autonomy. Experimental results are explained by sensory inputs, behavior, and psychological concepts. Time and space are regarded as mutually independent variables for spiking, post-synaptic events, and other measured variables, in contrast to experimental results. Dynamical systems theory and models describing evolution of variables with time as the independent variable are insufficient to account for central nervous system activities. Spatial dynamics may be a practical solution. The general hypothesis that measurements of changes in fundamental brain variables, action potentials, transmitter releases, post-synaptic transmembrane currents, etc., propagating in central nervous systems reveal how they work, carries no additional assumptions. Combinations of current techniques could reveal many aspects of spatial dynamics of spiking, post-synaptic processing, and plasticity in insects and rodents to start with. But problems defining baseline and reference conditions hinder interpretations of the results. 
Furthermore, the fact that pooling and averaging of data destroy the underlying dynamics implies that single-trial designs and statistics are necessary.
Affiliation(s)
- Per E. Roland
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
8
Aery Jones EA, Giocomo LM. Neural ensembles in navigation: From single cells to population codes. Curr Opin Neurobiol 2023; 78:102665. [PMID: 36542882] [PMCID: PMC9845194] [DOI: 10.1016/j.conb.2022.102665] [Received: 08/23/2022] [Revised: 10/27/2022] [Accepted: 11/21/2022]
Abstract
The brain can represent behaviorally relevant information through the firing of individual neurons as well as the coordinated firing of ensembles of neurons. Neurons in the hippocampus and associated cortical regions participate in a variety of types of ensembles to support navigation. These ensemble types include single cell codes, population codes, time-compressed sequences, behavioral sequences, and engrams. We present the physiological basis and behavioral relevance of ensemble firing. We discuss how these traditional definitions of ensembles can constrain or expand potential analyses due to the underlying assumptions and abstractions made. We highlight how coding can change at the ensemble level while underlying single cell codes remain intact. Finally, we present how ensemble definitions could be broadened to better understand the full complexity of the brain.
Affiliation(s)
- Emily A Aery Jones
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
9
Beron CC, Neufeld SQ, Linderman SW, Sabatini BL. Mice exhibit stochastic and efficient action switching during probabilistic decision making. Proc Natl Acad Sci U S A 2022; 119:e2113961119. [PMID: 35385355] [PMCID: PMC9169659] [DOI: 10.1073/pnas.2113961119] [Received: 07/28/2021] [Accepted: 03/03/2022]
Abstract
In probabilistic and nonstationary environments, individuals must use internal and external cues to flexibly make decisions that lead to desirable outcomes. To gain insight into the process by which animals choose between actions, we trained mice in a task with time-varying reward probabilities. In our implementation of such a two-armed bandit task, thirsty mice use information about recent action and action–outcome histories to choose between two ports that deliver water probabilistically. Here we comprehensively modeled choice behavior in this task, including the trial-to-trial changes in port selection, i.e., action switching behavior. We find that mouse behavior is, at times, deterministic and, at others, apparently stochastic. The behavior deviates from that of a theoretically optimal agent performing Bayesian inference in a hidden Markov model (HMM). We formulate a set of models based on logistic regression, reinforcement learning, and sticky Bayesian inference that we demonstrate are mathematically equivalent and that accurately describe mouse behavior. The switching behavior of mice in the task is captured in each model by a stochastic action policy, a history-dependent representation of action value, and a tendency to repeat actions despite incoming evidence. The models parsimoniously capture behavior across different environmental conditions by varying the stickiness parameter, and like the mice, they achieve nearly maximal reward rates. These results indicate that mouse behavior reaches near-maximal performance with reduced action switching and can be described by a set of equivalent models with a small number of relatively fixed parameters.
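The "sticky" Bayesian observer referred to above can be written in a few lines: a two-state hidden Markov belief over which port is currently better, combined with a tendency to repeat the previous action. A schematic sketch with made-up parameter values (not the paper's fitted model):

```python
P_HIGH, P_LOW, HAZARD = 0.8, 0.2, 0.05   # hypothetical task parameters

def update_belief(belief, choice, rewarded):
    """belief: P(state = 'right port is better'). One Bayesian step in a
    two-state HMM with a symmetric switching hazard."""
    # Transition: the world may have switched since the last trial.
    prior = belief * (1 - HAZARD) + (1 - belief) * HAZARD
    # Likelihood of the observed outcome under each hidden state.
    p_r = P_HIGH if choice == 'right' else P_LOW     # if right is better
    p_l = P_LOW if choice == 'right' else P_HIGH     # if left is better
    like_r = p_r if rewarded else 1 - p_r
    like_l = p_l if rewarded else 1 - p_l
    return prior * like_r / (prior * like_r + (1 - prior) * like_l)

def sticky_choice(belief, prev_choice, threshold=0.7):
    # Repeat the previous action unless the belief clearly favors a switch.
    if prev_choice == 'right' and belief < 1 - threshold:
        return 'left'
    if prev_choice == 'left' and belief > threshold:
        return 'right'
    return prev_choice

belief = 0.5
for _ in range(8):                       # a run of rewarded right choices
    belief = update_belief(belief, 'right', True)
```

Lowering `threshold` makes the agent switch as soon as the evidence tips, while raising it reproduces the sticky, reduced-switching regime the equivalent models capture.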
Affiliation(s)
- Celia C. Beron
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- HHMI, Harvard Medical School, Boston, MA 02115
- Shay Q. Neufeld
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- HHMI, Harvard Medical School, Boston, MA 02115
- Scott W. Linderman
- Department of Statistics, Stanford University, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Bernardo L. Sabatini
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- HHMI, Harvard Medical School, Boston, MA 02115
10
Xu D, Dong M, Chen Y, Delgado AM, Hughes NC, Zhang L, O'Connor DH. Cortical processing of flexible and context-dependent sensorimotor sequences. Nature 2022; 603:464-469. [PMID: 35264793] [PMCID: PMC9109820] [DOI: 10.1038/s41586-022-04478-7] [Received: 01/10/2020] [Accepted: 01/26/2022]
Abstract
The brain generates complex sequences of movements that can be flexibly configured based on behavioural context or real-time sensory feedback[1], but how this occurs is not fully understood. Here we developed a 'sequence licking' task in which mice directed their tongue to a target that moved through a series of locations. Mice could rapidly branch the sequence online based on tactile feedback. Closed-loop optogenetics and electrophysiology revealed that the tongue and jaw regions of the primary somatosensory (S1TJ) and motor (M1TJ) cortices[2] encoded and controlled tongue kinematics at the level of individual licks. By contrast, the tongue 'premotor' (anterolateral motor) cortex[3-10] encoded latent variables including intended lick angle, sequence identity and progress towards the reward that marked successful sequence execution. Movement-nonspecific sequence branching signals occurred in the anterolateral motor cortex and M1TJ. Our results reveal a set of key cortical areas for flexible and context-informed sequence generation.
Affiliation(s)
- Duo Xu
- The Solomon H. Snyder Department of Neuroscience, Krieger Mind/Brain Institute, Kavli Neuroscience Discovery Institute, Brain Science Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Mingyuan Dong
- The Solomon H. Snyder Department of Neuroscience, Krieger Mind/Brain Institute, Kavli Neuroscience Discovery Institute, Brain Science Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Yuxi Chen
- Undergraduate Studies, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
- Angel M Delgado
- Undergraduate Studies, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
- Natasha C Hughes
- Undergraduate Studies, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
- Linghua Zhang
- The Solomon H. Snyder Department of Neuroscience, Krieger Mind/Brain Institute, Kavli Neuroscience Discovery Institute, Brain Science Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Daniel H O'Connor
- The Solomon H. Snyder Department of Neuroscience, Krieger Mind/Brain Institute, Kavli Neuroscience Discovery Institute, Brain Science Institute, The Johns Hopkins University School of Medicine, Baltimore, MD, USA