1. Saiki-Ishikawa A, Agrios M, Savya S, Forrest A, Sroussi H, Hsu S, Basrai D, Xu F, Miri A. Hierarchy between forelimb premotor and primary motor cortices and its manifestation in their firing patterns. bioRxiv 2024:2023.09.23.559136. PMID: 38798685; PMCID: PMC11118350; DOI: 10.1101/2023.09.23.559136.
Abstract
Though hierarchy is commonly invoked in descriptions of motor cortical function, its presence and manifestation in firing patterns remain poorly resolved. Here we use optogenetic inactivation to show that short-latency influence between forelimb premotor and primary motor cortices is asymmetric during reaching in mice, demonstrating a partial hierarchy between the endogenous activity in each region. Multi-region recordings revealed that some activity is captured by similar but delayed patterns in which either region's activity leads, with premotor activity leading more. Yet firing in each region is dominated by patterns shared between regions and is equally predictive of firing in the other region at the single-neuron level. In dual-region network models fit to data, regions differed in their dependence on across-region input, rather than in the amount of such input they received. Our results indicate that motor cortical hierarchy, while present, may not be exposed when inferring interactions between populations from firing patterns alone.
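One common way to probe the lead/lag structure this abstract describes is lagged cross-region prediction: regress one population's firing on the other's past activity and see which lag predicts best. A minimal sketch on hypothetical data (all sizes, the embedded lag, and the least-squares decoder are illustrative assumptions, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rates (time x neurons) for two regions, where
# region A leads region B by a fixed lag -- a toy stand-in for PM/M1.
T, nA, nB, true_lag = 500, 20, 15, 5
latent = rng.standard_normal((T + true_lag, 3))
A = latent[true_lag:] @ rng.standard_normal((3, nA)) + 0.1 * rng.standard_normal((T, nA))
B = latent[:T] @ rng.standard_normal((3, nB)) + 0.1 * rng.standard_normal((T, nB))

def lagged_r2(X, Y, lag):
    """R^2 of least-squares prediction of Y(t + lag) from X(t)."""
    Xl, Yl = X[:T - lag], Y[lag:]
    W, *_ = np.linalg.lstsq(Xl, Yl, rcond=None)
    resid = Yl - Xl @ W
    return 1 - resid.var() / Yl.var()

# Prediction quality peaks at the built-in lag when A leads B.
scores = {lag: lagged_r2(A, B, lag) for lag in range(10)}
best = max(scores, key=scores.get)
```

Here `best` recovers the built-in lag; the paper's point is that such firing-pattern analyses can still miss the asymmetry that inactivation reveals.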
2. Pellegrino A, Stein H, Cayco-Gajic NA. Dimensionality reduction beyond neural subspaces with slice tensor component analysis. Nat Neurosci 2024. PMID: 38710876; DOI: 10.1038/s41593-024-01626-2.
Abstract
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
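A sliceTCA component is an outer product of a loading vector along one tensor mode and a matrix "slice" over the other two modes. The paper's algorithm demixes several such component classes jointly; the sketch below (toy dimensions, single component) shows only the core object, and uses the fact that a single trial-sliced component is rank-1 in the trial unfolding:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy neural data tensor (trials x neurons x time) built from a single
# trial-type slice component: each trial is a scaling of one shared
# neuron-by-time pattern -- the kind of structure sliceTCA targets.
K, N, T = 30, 12, 40
w_true = rng.uniform(0.5, 1.5, K)           # trial loadings
S_true = rng.standard_normal((N, T))        # shared neuron x time slice
X = np.einsum('k,nt->knt', w_true, S_true)

# Such a component is rank-1 in the (trials x neurons*time) unfolding,
# so a truncated SVD recovers the loading vector and the slice.
U, s, Vt = np.linalg.svd(X.reshape(K, N * T), full_matrices=False)
w_hat = U[:, 0] * s[0]
S_hat = Vt[0].reshape(N, T)
X_hat = np.einsum('k,nt->knt', w_hat, S_hat)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The reconstruction error is numerically zero here because the toy tensor contains exactly one component; real data mix neuron-, trial- and time-sliced components, which is what sliceTCA is built to separate.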
Affiliation(s)
- Arthur Pellegrino
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France.
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK.
- Heike Stein
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France.
- N Alex Cayco-Gajic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France.
3. Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
4. Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612; DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
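The core modeling point, that ignoring a temporally structured measured input biases the estimate of intrinsic dynamics, can be shown with a tiny linear dynamical system. This is a sketch, not the paper's learning method (which works from neural-behavioral observations); here the latent state is treated as observed purely to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear dynamical system with a measured input:
#   x[t+1] = A x[t] + B u[t] + noise.
# With an autocorrelated input u, omitting u biases the least-squares
# estimate of the intrinsic dynamics A; including u recovers it.
A = np.array([[0.9, -0.2], [0.2, 0.9]]) * 0.95
B = np.array([[1.0], [0.5]])
T = 2000
u = np.zeros((T, 1))
for t in range(1, T):                       # slow, temporally structured input
    u[t] = 0.98 * u[t - 1] + 0.2 * rng.standard_normal()
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t] + 0.05 * rng.standard_normal(2)

X0, X1, U0 = x[:-1], x[1:], u[:-1]
# Fit ignoring the input: x[t+1] ~ A x[t]
A_no_u, *_ = np.linalg.lstsq(X0, X1, rcond=None)
# Fit accounting for the input: x[t+1] ~ A x[t] + B u[t]
W, *_ = np.linalg.lstsq(np.hstack([X0, U0]), X1, rcond=None)
A_with_u = W[:2]

bias_no_u = np.linalg.norm(A_no_u.T - A)
bias_with_u = np.linalg.norm(A_with_u.T - A)
```

The omitted-variable bias in `A_no_u` is exactly the misinterpretation the abstract warns about: structured input masquerading as intrinsic dynamics.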
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
5. Verhein JR, Vyas S, Shenoy KV. Methylphenidate modulates motor cortical dynamics and behavior. bioRxiv 2023:2023.10.15.562405. PMID: 37905157; PMCID: PMC10614820; DOI: 10.1101/2023.10.15.562405.
Abstract
Methylphenidate (MPH, brand: Ritalin) is a common stimulant used both medically and non-medically. Though typically prescribed for its cognitive effects, MPH also affects movement. While it is known that MPH noncompetitively blocks the reuptake of catecholamines through inhibition of dopamine and norepinephrine transporters, a critical step in exploring how it affects behavior is to understand how MPH directly affects neural activity. This would establish an electrophysiological mechanism of action for MPH. Since we now have biologically-grounded network-level hypotheses regarding how populations of motor cortical neurons plan and execute movements, there is a unique opportunity to make testable predictions regarding how systemic MPH administration - a pharmacological perturbation - might affect neural activity in motor cortex. To that end, we administered clinically-relevant doses of MPH to Rhesus monkeys as they performed an instructed-delay reaching task. Concomitantly, we measured neural activity from dorsal premotor and primary motor cortex. Consistent with our predictions, we found dose-dependent and significant effects on reaction time, trial-by-trial variability, and movement speed. We confirmed our hypotheses that changes in reaction time and variability were accompanied by previously established population-level changes in motor cortical preparatory activity and the condition-independent signal that precedes movements. We expected changes in speed to be a result of changes in the amplitude of motor cortical dynamics and/or a translation of those dynamics in activity space. Instead, our data are consistent with a mechanism whereby the neuromodulatory effect of MPH is to increase the gain and/or the signal-to-noise of motor cortical dynamics during reaching. Continued work in this domain to better understand the brain-wide electrophysiological mechanism of action of MPH and other psychoactive drugs could facilitate more targeted treatments for a host of cognitive-motor disorders.
Affiliation(s)
- Jessica R Verhein
- Medical Scientist Training Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Neurosciences Graduate Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Current affiliations: Psychiatry Research Residency Training Program, University of California, San Francisco, San Francisco, CA
- Saurabh Vyas
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Department of Bioengineering, Stanford University, Stanford, CA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY
- Krishna V Shenoy
- Neurosciences Graduate Program, Stanford School of Medicine, Stanford University, Stanford, CA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA
- Department of Bioengineering, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA
- Department of Neurobiology, Stanford University, Stanford, CA
- Bio-X Program, Stanford University, Stanford, CA
6. Stephen EP, Li Y, Metzger S, Oganian Y, Chang EF. Latent neural dynamics encode temporal context in speech. Hear Res 2023; 437:108838. PMID: 37441880; DOI: 10.1016/j.heares.2023.108838.
Abstract
Direct neural recordings from human auditory cortex have demonstrated encoding for acoustic-phonetic features of consonants and vowels. Neural responses also encode distinct acoustic amplitude cues related to timing, such as those that occur at the onset of a sentence after a silent period or the onset of the vowel in each syllable. Here, we used a group reduced rank regression model to show that distributed cortical responses support a low-dimensional latent state representation of temporal context in speech. The timing cues each capture more unique variance than all other phonetic features and exhibit rotational or cyclical dynamics in latent space from activity that is widespread over the superior temporal gyrus. We propose that these spatially distributed timing signals could serve to provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, to compose higher-order phonological (e.g. word-level) representations.
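The paper uses a *group* reduced-rank regression across subjects; the plain (single-group) version is short enough to sketch. Reduced-rank regression fits ordinary least squares and then projects the fitted values onto their top-r principal subspace, forcing predictions through a low-dimensional latent state (all sizes below are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression problem: many output channels (e.g. electrodes)
# driven by stimulus features through a rank-r bottleneck.
T, p, q, r = 1000, 8, 25, 2
X = rng.standard_normal((T, p))                        # stimulus features
W_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
Y = X @ W_true + 0.5 * rng.standard_normal((T, q))

# OLS, then rank constraint via SVD of the fitted values.
W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
_, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
V_r = Vt[:r].T                                         # q x r latent readout
W_rrr = W_ols @ V_r @ V_r.T                            # rank-r coefficients

rank = np.linalg.matrix_rank(W_rrr)
err = np.linalg.norm(W_rrr - W_true) / np.linalg.norm(W_true)
```

The columns of `V_r` play the role of the shared low-dimensional latent state the abstract describes; the group variant additionally ties that subspace across subjects.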
Affiliation(s)
- Emily P Stephen
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Department of Mathematics and Statistics, Boston University, Boston, MA 02215, United States
- Yuanning Li
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Sean Metzger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States
- Yulia Oganian
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Edward F Chang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States.
7. Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. bioRxiv 2023:2023.03.14.532554. PMID: 36993213; PMCID: PMC10055042; DOI: 10.1101/2023.03.14.532554.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
8. Latimer KW, Freedman DJ. Low-dimensional encoding of decisions in parietal cortex reflects long-term training history. Nat Commun 2023; 14:1010. PMID: 36823109; PMCID: PMC9950136; DOI: 10.1038/s41467-023-36554-5.
Abstract
Neurons in parietal cortex exhibit task-related activity during decision-making tasks. However, it remains unclear how long-term training to perform different tasks over months or even years shapes neural computations and representations. We examine lateral intraparietal area (LIP) responses during a visual motion delayed-match-to-category task. We consider two pairs of male macaque monkeys with different training histories: one pair trained only on the categorization task, and another first trained to perform fine motion-direction discrimination (i.e., pretrained). We introduce a novel analytical approach, generalized multilinear models, to quantify low-dimensional, task-relevant components in population activity. During the categorization task, we found stronger cosine-like motion-direction tuning in the pretrained monkeys than in the category-only monkeys, and that the pretrained monkeys' performance depended more heavily on fine discrimination between sample and test stimuli. These results suggest that sensory representations in LIP depend on the sequence of tasks that the animals have learned, underscoring the importance of considering training history in studies with complex behavioral tasks.
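Generalized multilinear models are beyond a short sketch, but the "cosine-like motion-direction tuning" they quantify has a classic linear-regression form: r(theta) = b0 + a*cos(theta) + b*sin(theta), with preferred direction atan2(b, a). A toy fit (made-up rates and noise level):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate one neuron's rate across motion directions,
# r = b0 + m*cos(theta - pref), then recover pref with a linear fit.
pref, base, depth = np.deg2rad(130), 10.0, 6.0
theta = np.deg2rad(np.arange(0, 360, 15, dtype=float))
rates = base + depth * np.cos(theta - pref) + 0.3 * rng.standard_normal(theta.size)

# m*cos(t - pref) = (m*cos pref)*cos t + (m*sin pref)*sin t,
# so [1, cos, sin] regressors make the fit linear in its parameters.
D = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, a, b = np.linalg.lstsq(D, rates, rcond=None)[0]
pref_hat = np.rad2deg(np.arctan2(b, a))   # recovered preferred direction
depth_hat = np.hypot(a, b)                # recovered modulation depth

err_deg = abs(pref_hat - 130.0)
```

Stronger tuning in the pretrained animals corresponds to a larger fitted `depth_hat` relative to noise.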
Affiliation(s)
- Kenneth W Latimer
- Department of Neurobiology, University of Chicago, Chicago, IL, USA.
- David J Freedman
- Department of Neurobiology, University of Chicago, Chicago, IL, USA.
9. Cortico-cortical drive in a coupled premotor-primary motor cortex dynamical system. Cell Rep 2022; 41:111849. PMID: 36543147; DOI: 10.1016/j.celrep.2022.111849.
Abstract
In the conventional view of sensorimotor control, the premotor cortex (PM) plans actions that are executed by the primary motor cortex (M1). This notion arises in part from many experiments that have imposed a preparatory "planning" period, during which PM becomes active without M1. But during many natural movements, PM and M1 are co-activated, making it difficult to distinguish their functional roles. We leverage coupled dynamical systems models (cDSMs) to uncover interactions between PM and M1 during movements performed with no preparatory period. We build cDSMs using neural and behavioral data recorded from two non-human primates as they performed a reach-grasp-manipulate task. PM and M1 interact dynamically throughout these movements. Whereas PM drives M1 activity in some situations, in others M1 drives PM activity, contrary to the conventional assumption. Our cDSM framework provides additional predictions differentiating the roles of PM and M1 in controlling movement.
10. Saxena S, Russo AA, Cunningham J, Churchland MM. Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity. eLife 2022; 11:e67620. PMID: 35621264; PMCID: PMC9197394; DOI: 10.7554/elife.67620.
Abstract
Learned movements can be skillfully performed at different paces. What neural strategies produce this flexibility? Can they be predicted and understood by network modeling? We trained monkeys to perform a cycling task at different speeds, and trained artificial recurrent networks to generate the empirical muscle-activity patterns. Network solutions reflected the principle that smooth well-behaved dynamics require low trajectory tangling. Network solutions had a consistent form, which yielded quantitative and qualitative predictions. To evaluate predictions, we analyzed motor cortex activity recorded during the same task. Responses supported the hypothesis that the dominant neural signals reflect not muscle activity, but network-level strategies for generating muscle activity. Single-neuron responses were better accounted for by network activity than by muscle activity. Similarly, neural population trajectories shared their organization not with muscle trajectories, but with network solutions. Thus, cortical activity could be understood based on the need to generate muscle activity via dynamics that allow smooth, robust control over movement speed.
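The "low trajectory tangling" principle invoked here has a concrete metric: Q(t) = max over t' of ||dx(t) - dx(t')||^2 / (||x(t) - x(t')||^2 + eps). High tangling means similar states with very different derivatives, which smooth dynamics cannot produce. A sketch on toy trajectories (the circle/figure-eight comparison and the eps choice are illustrative assumptions):

```python
import numpy as np

def tangling(X, dt):
    """Per-timepoint tangling Q(t) for a trajectory X (time x dims)."""
    dX = np.gradient(X, dt, axis=0)
    eps = 0.1 * X.var(axis=0).sum()            # small constant, scale-matched
    num = ((dX[:, None] - dX[None, :]) ** 2).sum(-1)
    den = ((X[:, None] - X[None, :]) ** 2).sum(-1) + eps
    return (num / den).max(axis=1)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
dt = float(t[1] - t[0])
circle = np.column_stack([np.cos(t), np.sin(t)])              # never crosses
eight = np.column_stack([np.sin(t), np.sin(t) * np.cos(t)])   # self-crossing

q_circle = tangling(circle, dt).max()
q_eight = tangling(eight, dt).max()    # crossing point => high tangling
```

The figure-eight's self-crossing pairs nearly identical states with opposing derivatives, so its peak tangling far exceeds the circle's; the paper argues motor cortex trajectories look more like the former case's complement, i.e., low-tangling.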
Affiliation(s)
- Shreya Saxena
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Abigail A Russo
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Department of Neuroscience, Columbia University, New York, United States
- John Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Department of Statistics, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Grossman Center for the Statistics of Mind, Columbia University, New York, United States
- Department of Neuroscience, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University, New York, United States
11. Schroeder KE, Perkins SM, Wang Q, Churchland MM. Cortical Control of Virtual Self-Motion Using Task-Specific Subspaces. J Neurosci 2022; 42:220-239. PMID: 34716229; PMCID: PMC8802935; DOI: 10.1523/jneurosci.2687-20.2021.
Abstract
Brain-machine interfaces (BMIs) for reaching have enjoyed continued performance improvements, yet there remains significant need for BMIs that control other movement classes. Recent scientific findings suggest that the intrinsic covariance structure of neural activity depends strongly on movement class, potentially necessitating different decode algorithms across classes. To address this possibility, we developed a self-motion BMI based on cortical activity as monkeys cycled a hand-held pedal to progress along a virtual track. Unlike during reaching, we found no high-variance dimensions that directly correlated with to-be-decoded variables. This was because no neurons had consistent correlations between their responses and kinematic variables. Yet we could decode a single variable, self-motion, by nonlinearly leveraging structure that spanned multiple high-variance neural dimensions. Resulting online BMI-control success rates approached those during manual control. These findings make two broad points regarding how to build decode algorithms that harmonize with the empirical structure of neural activity in motor cortex. First, even when decoding from the same cortical region (e.g., arm-related motor cortex), different movement classes may need to employ very different strategies. Although correlations between neural activity and hand velocity are prominent during reaching tasks, they are not a fundamental property of motor cortex and cannot be counted on to be present in general. Second, although one generally desires a low-dimensional readout, it can be beneficial to leverage a multidimensional high-variance subspace. Fully embracing this approach requires highly nonlinear approaches tailored to the task at hand, but can produce near-native levels of performance.
Significance statement: Many brain-machine interface decoders have been constructed for controlling movements normally performed with the arm. Yet it is unclear how these will function beyond the reach-like scenarios where they were developed. Existing decoders implicitly assume that neural covariance structure, and correlations with to-be-decoded kinematic variables, will be largely preserved across tasks. We find that the correlation between neural activity and hand kinematics, a feature typically exploited when decoding reach-like movements, is essentially absent during another task performed with the arm: cycling through a virtual environment. Nevertheless, the use of a different strategy, one focused on leveraging the highest-variance neural signals, supported high performance real-time brain-machine interface control.
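The broad point, that a variable with no linear single-neuron correlate can still be read out nonlinearly from a multidimensional high-variance subspace, can be illustrated with toy data (this is not the paper's decoder; the rotating-latent construction, sizes, and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy population whose high-variance activity rotates in a plane;
# the decoded variable (self-motion speed) is the phase velocity,
# which no single neuron encodes linearly.
T, dt = 1000, 0.1
speed = 1.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, T))   # time-varying
phase = np.cumsum(speed) * dt
latent = np.column_stack([np.cos(phase), np.sin(phase)])
M = rng.standard_normal((2, 40))                            # 40 neurons
X = latent @ M + 0.05 * rng.standard_normal((T, 40))

# Nonlinear readout of the top-2 high-variance PCA plane: take the
# phase angle and differentiate it (sign-ambiguous, so use |.|).
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T
ang = np.unwrap(np.arctan2(pcs[:, 1], pcs[:, 0]))
speed_hat = np.abs(np.diff(ang)) / dt
speed_sm = np.convolve(speed_hat, np.ones(9) / 9, mode='valid')

r = np.corrcoef(speed_sm, speed[5:-4])[0, 1]   # aligned true speed
```

The angle-then-differentiate step is the nonlinearity: a linear readout of any single dimension cannot recover speed from this geometry, yet the two-dimensional subspace carries it completely.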
Affiliation(s)
- Karen E Schroeder
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, New York
- Department of Biomedical Engineering, Columbia University, New York, New York
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York
12. Sohn H, Narain D. Neural implementations of Bayesian inference. Curr Opin Neurobiol 2021; 70:121-129. PMID: 34678599; DOI: 10.1016/j.conb.2021.09.008.
Abstract
Bayesian inference has emerged as a general framework that captures how organisms make decisions under uncertainty. Recent experimental findings reveal disparate mechanisms for how the brain generates behaviors predicted by normative Bayesian theories. Here, we identify two broad classes of neural implementations for Bayesian inference: a modular class, where each probabilistic component of Bayesian computation is independently encoded, and a transform class, where uncertain measurements are converted to Bayesian estimates through latent processes. Many recent experimental neuroscience findings studying probabilistic inference broadly fall into these classes. We identify potential avenues for synthesis across these two classes and the disparities that, at present, cannot be reconciled. We conclude that to distinguish among implementation hypotheses for Bayesian inference, we require greater engagement among theoretical and experimental neuroscientists in an effort that spans different scales of analysis, circuits, tasks, and species.
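The computation that both implementation classes must realize is, in its simplest Gaussian form, a reliability-weighted average of a prior and a measurement. A minimal worked example (the timing numbers are toy values, not from any of the reviewed studies):

```python
def bayes_estimate(m, sigma_m, mu0, sigma0):
    """Posterior mean for a Gaussian prior N(mu0, sigma0^2) and a
    Gaussian measurement m with noise sigma_m: a weighted average
    where the weight on the measurement grows with its reliability."""
    w = sigma0 ** 2 / (sigma0 ** 2 + sigma_m ** 2)
    return w * m + (1 - w) * mu0

# Equally reliable prior and measurement: estimate lands halfway.
est = bayes_estimate(0.8, sigma_m=0.1, mu0=0.6, sigma0=0.1)   # -> 0.7

# Near-noiseless measurement: the prior is essentially ignored.
est_precise = bayes_estimate(0.8, sigma_m=1e-6, mu0=0.6, sigma0=0.1)
```

A "modular" implementation would encode prior, likelihood, and posterior separately; a "transform" implementation would map `m` to `est` directly through latent dynamics without explicitly representing the intermediate quantities.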
Affiliation(s)
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Devika Narain
- Department of Neuroscience, Erasmus University Medical Center, Rotterdam, 3015 CN, the Netherlands.
13. Williams AH, Linderman SW. Statistical neuroscience in the single trial limit. Curr Opin Neurobiol 2021; 70:193-205. PMID: 34861596; DOI: 10.1016/j.conb.2021.10.008.
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
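One of the simplifying structures reviewed here, temporal smoothness in firing rates, already helps in the extreme single-trial case: a smoothness assumption (below, Gaussian-kernel smoothing of one spike train) beats raw binned rates by a wide margin. All numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# One simulated trial: Bernoulli spikes from a slowly varying rate.
dt = 0.001
t = np.arange(0, 2, dt)
rate = 20 + 15 * np.sin(2 * np.pi * t)                # Hz, smooth by design
spikes = (rng.random(t.size) < rate * dt).astype(float)

raw = spikes / dt                                      # raw single-trial "rate"

# Gaussian kernel (50 ms), the smoothness prior in its simplest form.
sigma = 0.05
k = np.exp(-0.5 * (np.arange(-250, 251) * dt / sigma) ** 2)
k /= k.sum()
smooth = np.convolve(spikes, k, mode='same') / dt

mse_raw = np.mean((raw - rate) ** 2)
mse_smooth = np.mean((smooth - rate) ** 2)
```

The reviewed methods go much further (shared gain factors, Gaussian-process rates, condition correlations), but all exploit the same idea: structure substitutes for the repeated trials that naturalistic behavior does not provide.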
Affiliation(s)
- Alex H Williams
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
- Scott W Linderman
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA.
| |
Collapse
|
14
Vyas S, Golub MD, Sussillo D, Shenoy KV. Computation Through Neural Population Dynamics. Annu Rev Neurosci 2020; 43:249-275. [DOI: 10.1146/annurev-neuro-092619-094115]
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California 94305, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
15
Low IIC, Williams AH, Campbell MG, Linderman SW, Giocomo LM. Dynamic and reversible remapping of network representations in an unchanging environment. Neuron 2021; 109:2967-2980.e11. [PMID: 34363753 DOI: 10.1016/j.neuron.2021.07.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 02/26/2021] [Accepted: 07/06/2021] [Indexed: 12/14/2022]
Abstract
Neurons in the medial entorhinal cortex alter their firing properties in response to environmental changes. This flexibility in neural coding is hypothesized to support navigation and memory by dividing sensory experience into unique episodes. However, it is unknown how the entorhinal circuit as a whole transitions between different representations when sensory information is not delineated into discrete contexts. Here we describe rapid and reversible transitions between multiple spatial maps of an unchanging task and environment. These remapping events were synchronized across hundreds of neurons, differentially affected navigational cell types, and correlated with changes in running speed. Despite widespread changes in spatial coding, remapping comprised a translation along a single dimension in population-level activity space, enabling simple decoding strategies. These findings provoke reconsideration of how the medial entorhinal cortex dynamically represents space and suggest a remarkable capacity of cortical circuits to rapidly and substantially reorganize their neural representations.
Affiliation(s)
- Isabel I C Low
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.
- Alex H Williams
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Malcolm G Campbell
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Scott W Linderman
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.
16
Kao TC, Sadabadi MS, Hennequin G. Optimal anticipatory control as a theory of motor preparation: A thalamo-cortical circuit model. Neuron 2021; 109:1567-1581.e12. [PMID: 33789082 PMCID: PMC8111422 DOI: 10.1016/j.neuron.2021.03.009] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 10/09/2020] [Accepted: 03/05/2021] [Indexed: 11/21/2022]
Abstract
Across a range of motor and cognitive tasks, cortical activity can be accurately described by low-dimensional dynamics unfolding from specific initial conditions on every trial. These "preparatory states" largely determine the subsequent evolution of both neural activity and behavior, and their importance raises questions regarding how they are, or ought to be, set. Here, we formulate motor preparation as optimal anticipatory control of future movements and show that the solution requires a form of internal feedback control of cortical circuit dynamics. In contrast to a simple feedforward strategy, feedback control enables fast movement preparation by selectively controlling the cortical state in the small subspace that matters for the upcoming movement. Feedback but not feedforward control explains the orthogonality between preparatory and movement activity observed in reaching monkeys. We propose a circuit model in which optimal preparatory control is implemented as a thalamo-cortical loop gated by the basal ganglia.
Affiliation(s)
- Ta-Chu Kao
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
- Mahdieh S Sadabadi
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
17
Bondanelli G, Deneux T, Bathellier B, Ostojic S. Network dynamics underlying OFF responses in the auditory cortex. eLife 2021; 10:e53151. [PMID: 33759763 PMCID: PMC8057817 DOI: 10.7554/elife.53151] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Accepted: 03/19/2021] [Indexed: 11/13/2022] Open
Abstract
Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared them to linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organization of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’études cognitives, ENS, PSL University, INSERM, Paris, France
- Neural Computation Laboratory, Center for Human Technologies, Istituto Italiano di Tecnologia (IIT), Genoa, Italy
- Thomas Deneux
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris Sud, Gif-sur-Yvette, France
- Brice Bathellier
- Département de Neurosciences Intégratives et Computationnelles (ICN), Institut des Neurosciences Paris-Saclay (NeuroPSI), UMR 9197 CNRS, Université Paris Sud, Gif-sur-Yvette, France
- Institut Pasteur, INSERM, Institut de l’Audition, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’études cognitives, ENS, PSL University, INSERM, Paris, France
18
Jin C, Chen W, Cao Y, Xu Z, Tan Z, Zhang X, Deng L, Zheng C, Zhou J, Shi H, Feng J. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat Commun 2020; 11:5088. [PMID: 33037212 DOI: 10.1101/823377] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 09/04/2020] [Indexed: 05/22/2023] Open
Abstract
Early detection of COVID-19 based on chest CT enables timely treatment of patients and helps control the spread of the disease. We propose an artificial intelligence (AI) system for rapid COVID-19 detection and perform extensive statistical analysis of COVID-19 CTs based on this system. We developed and evaluated the system on a large dataset of more than 10,000 CT volumes from subjects with COVID-19, influenza-A/B, non-viral community-acquired pneumonia (CAP), and no pneumonia. On this difficult multi-class diagnosis task, our deep convolutional neural network-based system achieves an area under the receiver operating characteristic curve (AUC) of 97.81% for multi-way classification on a test cohort of 3,199 scans, and AUCs of 92.99% and 93.25% on two publicly available datasets, CC-CCII and MosMedData, respectively. In a reader study involving five radiologists, the AI system outperformed all of the radiologists on the more challenging tasks, at a speed two orders of magnitude faster. The diagnostic performance of chest X-ray (CXR) is compared with that of CT, and a detailed interpretation of the deep network is performed to relate system outputs to CT presentations. The code is available at https://github.com/ChenWWWeixiang/diagnosis_covid19 .
Affiliation(s)
- Cheng Jin
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Weixiang Chen
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Yukun Cao
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Zhanwei Xu
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Zimeng Tan
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Xin Zhang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Lei Deng
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Jie Zhou
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Heshui Shi
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China.
- Jianjiang Feng
- Department of Automation, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
19
Russo AA, Khajeh R, Bittner SR, Perkins SM, Cunningham JP, Abbott LF, Churchland MM. Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation. Neuron 2020; 107:745-758.e6. [PMID: 32516573 DOI: 10.1016/j.neuron.2020.05.020] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 12/25/2019] [Accepted: 05/11/2020] [Indexed: 12/21/2022]
Abstract
The supplementary motor area (SMA) is believed to contribute to higher order aspects of motor control. We considered a key higher order role: tracking progress throughout an action. We propose that doing so requires population activity to display low "trajectory divergence": situations with different future motor outputs should be distinct, even when present motor output is identical. We examined neural activity in SMA and primary motor cortex (M1) as monkeys cycled various distances through a virtual environment. SMA exhibited multiple response features that were absent in M1. At the single-neuron level, these included ramping firing rates and cycle-specific responses. At the population level, they included a helical population-trajectory geometry with shifts in the occupied subspace as movement unfolded. These diverse features all served to reduce trajectory divergence, which was much lower in SMA versus M1. Analogous population-trajectory geometry, also with low divergence, naturally arose in networks trained to internally guide multi-cycle movement.
Affiliation(s)
- Abigail A Russo
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Ramin Khajeh
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
- Sean R Bittner
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
- Sean M Perkins
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA
- John P Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY 10032, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
20
Bartolo R, Saunders RC, Mitz AR, Averbeck BB. Dimensionality, information and learning in prefrontal cortex. PLoS Comput Biol 2020; 16:e1007514. [PMID: 32330126 PMCID: PMC7202668 DOI: 10.1371/journal.pcbi.1007514] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Revised: 05/06/2020] [Accepted: 03/11/2020] [Indexed: 01/12/2023] Open
Abstract
Learning leads to changes in population patterns of neural activity. In this study we wanted to examine how these changes in patterns of activity affect the dimensionality of neural responses and information about choices. We addressed these questions by carrying out high channel count recordings in dorsal-lateral prefrontal cortex (dlPFC; 768 electrodes) while monkeys performed a two-armed bandit reinforcement learning task. The high channel count recordings allowed us to study population coding while monkeys learned choices between actions or objects. We found that the dimensionality of neural population activity was higher across blocks in which animals learned the values of novel pairs of objects, than across blocks in which they learned the values of actions. The increase in dimensionality with learning in object blocks was related to less shared information across blocks, and therefore patterns of neural activity that were less similar, when compared to learning in action blocks. Furthermore, these differences emerged with learning, and were not a simple function of the choice of a visual image or action. Therefore, learning the values of novel objects increases the dimensionality of neural representations in dlPFC. In this study we found that learning to choose rewarding objects increased the diversity of patterns of activity, measured as the dimensionality of the response, observed in dorsal-lateral prefrontal cortex. The dimensionality increase for learning to choose rewarding objects was larger than the dimensionality increase for learning to choose rewarding actions. The dimensionality increase was not a simple function of the diverse set of images used, as the patterns of activity only appeared after learning.
Affiliation(s)
- Ramon Bartolo
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Richard C. Saunders
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Andrew R. Mitz
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
- Bruno B. Averbeck
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, United States of America
21
Baldwin E, Han J, Luo W, Zhou J, An L, Liu J, Zhang HH, Li H. On fusion methods for knowledge discovery from multi-omics datasets. Comput Struct Biotechnol J 2020; 18:509-517. [PMID: 32206210 PMCID: PMC7078495 DOI: 10.1016/j.csbj.2020.02.011] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 01/25/2020] [Accepted: 02/19/2020] [Indexed: 12/22/2022] Open
Abstract
Recent years have witnessed the tendency of measuring a biological sample on multiple omics scales for a comprehensive understanding of how biological activities on varying levels are perturbed by genetic variants, environments, and their interactions. This new trend raises substantial challenges to data integration and fusion, of which the latter is a specific type of integration that applies a uniform method in a scalable manner, to solve biological problems which the multi-omics measurements target. Fusion-based analysis has advanced rapidly in the past decade, thanks to application drivers and theoretical breakthroughs in mathematics, statistics, and computer science. We will briefly address these methods from methodological and mathematical perspectives and categorize them into three types of approaches: data fusion (a narrowed definition as compared to the general data fusion concept), model fusion, and mixed fusion. We will demonstrate at least one typical example in each specific category to exemplify the characteristics, principles, and applications of the methods in general, as well as discuss the gaps and potential issues for future studies.
Affiliation(s)
- Edwin Baldwin
- Department of Biosystems Engineering, University of Arizona, United States
- Jiali Han
- Department of Systems and Industrial Engineering, University of Arizona, United States
- Wenting Luo
- Department of Biosystems Engineering, University of Arizona, United States
- Jin Zhou
- Department of Epidemiology and Biostatistics, University of Arizona, United States
- Lingling An
- Department of Biosystems Engineering, University of Arizona, United States; Department of Epidemiology and Biostatistics, University of Arizona, United States
- Jian Liu
- Department of Systems and Industrial Engineering, University of Arizona, United States
- Hao Helen Zhang
- Department of Mathematics, University of Arizona, United States
- Haiquan Li
- Department of Biosystems Engineering, University of Arizona, United States
22
Adesnik H, Naka A. Cracking the Function of Layers in the Sensory Cortex. Neuron 2018; 100:1028-1043. [PMID: 30521778 DOI: 10.1016/j.neuron.2018.10.032] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Revised: 08/08/2018] [Accepted: 10/18/2018] [Indexed: 12/24/2022]
Abstract
Understanding how cortical activity generates sensory perceptions requires a detailed dissection of the function of cortical layers. Despite our relatively extensive knowledge of their anatomy and wiring, we have a limited grasp of what each layer contributes to cortical computation. We need to develop a theory of cortical function that is rooted solidly in each layer's component cell types and fine circuit architecture and produces predictions that can be validated by specific perturbations. Here we briefly review the progress toward such a theory and suggest an experimental road map toward this goal. We discuss new methods for the all-optical interrogation of cortical layers, for correlating in vivo function with precise identification of transcriptional cell type, and for mapping local and long-range activity in vivo with synaptic resolution. The new technologies that can crack the function of cortical layers are finally on the immediate horizon.
Affiliation(s)
- Hillel Adesnik
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
- Alexander Naka
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
23
Whiteway MR, Butts DA. The quest for interpretable models of neural population activity. Curr Opin Neurobiol 2019; 58:86-93. [PMID: 31426024 DOI: 10.1016/j.conb.2019.07.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 07/14/2019] [Indexed: 11/24/2022]
Abstract
Many aspects of brain function arise from the coordinated activity of large populations of neurons. Recent developments in neural recording technologies are providing unprecedented access to the activity of such populations during increasingly complex experimental contexts; however, extracting scientific insights from such recordings requires the concurrent development of analytical tools that relate this population activity to system-level function. This is a primary motivation for latent variable models, which seek to provide a low-dimensional description of population activity that can be related to experimentally controlled variables, as well as uncontrolled variables such as internal states (e.g. attention and arousal) and elements of behavior. While deriving an understanding of function from traditional latent variable methods relies on low-dimensional visualizations, new approaches are targeting more interpretable descriptions of the components underlying system-level function.
Affiliation(s)
- Matthew R Whiteway
- Zuckerman Mind Brain Behavior Institute, Jerome L Greene Science Center, Columbia University, 3227 Broadway, 5th Floor, Quad D, New York, NY 10027, USA
- Daniel A Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, 1210 Biology-Psychology Bldg. #144, College Park, MD 20742, USA.
24
Remington ED, Narain D, Hosseini EA, Jazayeri M. Flexible Sensorimotor Computations through Rapid Reconfiguration of Cortical Dynamics. Neuron 2018; 98:1005-1019.e5. [PMID: 29879384 DOI: 10.1016/j.neuron.2018.05.020] [Citation(s) in RCA: 139] [Impact Index Per Article: 27.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Revised: 03/19/2018] [Accepted: 05/11/2018] [Indexed: 10/14/2022]
Abstract
Neural mechanisms that support flexible sensorimotor computations are not well understood. In a dynamical system whose state is determined by interactions among neurons, computations can be rapidly reconfigured by controlling the system's inputs and initial conditions. To investigate whether the brain employs such control mechanisms, we recorded from the dorsomedial frontal cortex of monkeys trained to measure and produce time intervals in two sensorimotor contexts. The geometry of neural trajectories during the production epoch was consistent with a mechanism wherein the measured interval and sensorimotor context exerted control over cortical dynamics by adjusting the system's initial condition and input, respectively. These adjustments, in turn, set the speed at which activity evolved in the production epoch, allowing the animal to flexibly produce different time intervals. These results provide evidence that the language of dynamical systems can be used to parsimoniously link brain activity to sensorimotor computations.
Affiliation(s)
- Evan D Remington
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Devika Narain
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Netherlands Institute for Neuroscience, Amsterdam, the Netherlands; Erasmus Medical Center, Rotterdam, the Netherlands
- Eghbal A Hosseini
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
25
Kalaska JF. Emerging ideas and tools to study the emergent properties of the cortical neural circuits for voluntary motor control in non-human primates. F1000Res 2019; 8. [PMID: 31275561 PMCID: PMC6544130 DOI: 10.12688/f1000research.17161.1] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 05/22/2019] [Indexed: 12/22/2022] Open
Abstract
For years, neurophysiological studies of the cerebral cortical mechanisms of voluntary motor control were limited to single-electrode recordings of the activity of one or a few neurons at a time. This approach was supported by the widely accepted belief that single neurons were the fundamental computational units of the brain (the “neuron doctrine”). Experiments were guided by motor-control models that proposed that the motor system attempted to plan and control specific parameters of a desired action, such as the direction, speed or causal forces of a reaching movement in specific coordinate frameworks, and that assumed that the controlled parameters would be expressed in the task-related activity of single neurons. The advent of chronically implanted multi-electrode arrays about 20 years ago permitted the simultaneous recording of the activity of many neurons. This greatly enhanced the ability to study neural control mechanisms at the population level. It has also shifted the focus of the analysis of neural activity from quantifying single-neuron correlates with different movement parameters to probing the structure of multi-neuron activity patterns to identify the emergent computational properties of cortical neural circuits. In particular, recent advances in “dimension reduction” algorithms have attempted to identify specific covariance patterns in multi-neuron activity which are presumed to reflect the underlying computational processes by which neural circuits convert the intention to perform a particular movement into the required causal descending motor commands. These analyses have led to many new perspectives and insights on how cortical motor circuits covertly plan and prepare to initiate a movement without causing muscle contractions, transition from preparation to overt execution of the desired movement, generate muscle-centered motor output commands, and learn new motor skills. 
Progress is also being made to import optical-imaging and optogenetic toolboxes from rodents to non-human primates to overcome some technical limitations of multi-electrode recording technology.
Affiliation(s)
- John F Kalaska
- Groupe de recherche sur le système nerveux central (GRSNC), Département de Neurosciences, Faculté de Médecine, Université de Montréal, C.P. 6128, Succ. Centre-ville, Montréal (Québec), H3C 3J7, Canada
26
Aoi MC, Pillow JW. Model-based targeted dimensionality reduction for neuronal population data. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2018; 31:6690-6699. [PMID: 31274967 PMCID: PMC6605062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Summarizing high-dimensional data using a small number of parameters is a ubiquitous first step in the analysis of neuronal population activity. Recently developed methods use "targeted" approaches that work by identifying multiple, distinct low-dimensional subspaces of activity that capture the population response to individual experimental task variables, such as the value of a presented stimulus or the behavior of the animal. These methods have gained attention because they decompose total neural activity into what are ostensibly different parts of a neuronal computation. However, existing targeted methods have been developed outside of the confines of probabilistic modeling, making some aspects of the procedures ad hoc, or limited in flexibility or interpretability. Here we propose a new model-based method for targeted dimensionality reduction based on a probabilistic generative model of the population response data. The low-dimensional structure of our model is expressed as a low-rank factorization of a linear regression model. We perform efficient inference using a combination of expectation maximization and direct maximization of the marginal likelihood. We also develop an efficient method for estimating the dimensionality of each subspace. We show that our approach outperforms alternative methods in both mean squared error of the parameter estimates, and in identifying the correct dimensionality of encoding using simulated data. We also show that our method provides more accurate inference of low-dimensional subspaces of activity than a competing algorithm, demixed PCA.
Affiliation(s)
- Mikio C Aoi
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
27
Remington ED, Egger SW, Narain D, Wang J, Jazayeri M. A Dynamical Systems Perspective on Flexible Motor Timing. Trends Cogn Sci 2018; 22:938-952. [PMID: 30266152] [PMCID: PMC6166486] [DOI: 10.1016/j.tics.2018.07.010]
Abstract
A hallmark of higher brain function is the ability to rapidly and flexibly adjust behavioral responses based on internal and external cues. Here, we examine the computational principles that allow decisions and actions to unfold flexibly in time. We adopt a dynamical systems perspective and outline how temporal flexibility in such a system can be achieved through manipulations of inputs and initial conditions. We then review evidence from experiments in nonhuman primates that support this interpretation. Finally, we explore the broader utility and limitations of the dynamical systems perspective as a general framework for addressing open questions related to the temporal control of movements, as well as in the domains of learning and sequence generation.
Affiliation(s)
- Evan D Remington
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; These authors contributed equally to this work
- Seth W Egger
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; These authors contributed equally to this work
- Devika Narain
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Netherlands Institute for Neuroscience, Amsterdam, BA 1105, The Netherlands; Erasmus Medical Center, Rotterdam, The Netherlands
- Jing Wang
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Bioengineering, University of Missouri, Columbia, MO 65201, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
28
Lara AH, Cunningham JP, Churchland MM. Different population dynamics in the supplementary motor area and motor cortex during reaching. Nat Commun 2018; 9:2754. [PMID: 30013188] [PMCID: PMC6048147] [DOI: 10.1038/s41467-018-05146-z]
Abstract
Neural populations perform computations through their collective activity. Different computations likely require different population-level dynamics. We leverage this assumption to examine neural responses recorded from the supplementary motor area (SMA) and motor cortex. During visually guided reaching, the respective roles of these areas remain unclear; neurons in both areas exhibit preparation-related activity and complex patterns of movement-related activity. To explore population dynamics, we employ a novel "hypothesis-guided" dimensionality reduction approach. This approach reveals commonalities but also stark differences: linear population dynamics, dominated by rotations, are prominent in motor cortex but largely absent in SMA. In motor cortex, the observed dynamics produce patterns resembling muscle activity. Conversely, the non-rotational patterns in SMA co-vary with cues regarding when movement should be initiated. Thus, while SMA and motor cortex display superficially similar single-neuron responses during visually guided reaching, their different population dynamics indicate they are likely performing quite different computations.
Affiliation(s)
- A H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
- J P Cunningham
- Department of Statistics, Columbia University, New York, NY, 10027, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
- M M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA.
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, 10032, USA.
29
Williams AH, Kim TH, Wang F, Vyas S, Ryu SI, Shenoy KV, Schnitzer M, Kolda TG, Ganguli S. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis. Neuron 2018; 98:1099-1115.e8. [PMID: 29887338] [DOI: 10.1016/j.neuron.2018.05.015]
Abstract
Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain machine interface learning.
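The three-factor decomposition described above can be sketched as a minimal CP decomposition of a neurons x time x trials tensor, fit by plain alternating least squares. This is an illustration, not the authors' implementation; the unfolding conventions and the unregularized ALS loop are assumptions:

```python
import numpy as np

def khatri_rao(P, Q):
    # Column-wise Kronecker product: (I x R), (J x R) -> (I*J x R)
    return (P[:, None, :] * Q[None, :, :]).reshape(-1, P.shape[1])

def tca_als(tensor, rank, n_iter=200, seed=0):
    # Minimal CP decomposition of a neurons x time x trials tensor,
    # yielding neuron, temporal, and trial factor matrices as in TCA.
    rng = np.random.default_rng(seed)
    N, T, K = tensor.shape
    A = rng.standard_normal((N, rank))                 # neuron factors
    B = rng.standard_normal((T, rank))                 # temporal factors
    C = rng.standard_normal((K, rank))                 # trial factors
    X0 = tensor.reshape(N, T * K)                      # mode-1 unfolding
    X1 = tensor.transpose(1, 0, 2).reshape(T, N * K)   # mode-2 unfolding
    X2 = tensor.transpose(2, 0, 1).reshape(K, N * T)   # mode-3 unfolding
    for _ in range(n_iter):
        # Each update is a least-squares solve with the other two fixed
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover known rank-2 structure from a synthetic noiseless tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (12, 15, 8))
data = np.einsum('nr,tr,kr->ntk', A0, B0, C0)
A, B, C = tca_als(data, rank=2)
recon = np.einsum('nr,tr,kr->ntk', A, B, C)
```

On a noiseless low-rank tensor, the alternating updates recover the planted structure up to the usual CP scaling and permutation ambiguities.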
Affiliation(s)
- Alex H Williams
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA.
- Tony Hyun Kim
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA
- Forea Wang
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA
- Saurabh Vyas
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bioengineering Department, Stanford University, Stanford, CA 94305, USA
- Stephen I Ryu
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Krishna V Shenoy
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bioengineering Department, Stanford University, Stanford, CA 94305, USA; Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Mark Schnitzer
- Applied Physics Department, Stanford University, Stanford, CA 94305, USA; Biology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA
- Surya Ganguli
- Applied Physics Department, Stanford University, Stanford, CA 94305, USA; Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA.
30
O’Shea DJ, Shenoy KV. ERAASR: an algorithm for removing electrical stimulation artifacts from multielectrode array recordings. J Neural Eng 2018; 15:026020. [PMID: 29265009] [PMCID: PMC5833982] [DOI: 10.1088/1741-2552/aaa365]
Abstract
OBJECTIVE Electrical stimulation is a widely used and effective tool in systems neuroscience, neural prosthetics, and clinical neurostimulation. However, electrical artifacts evoked by stimulation prevent the detection of spiking activity on nearby recording electrodes, which obscures the neural population response evoked by stimulation. We sought to develop a method to clean artifact-corrupted electrode signals recorded on multielectrode arrays in order to recover the underlying neural spiking activity. APPROACH We created an algorithm that performs estimation and removal of array artifacts via sequential principal components regression (ERAASR). This approach leverages the similar structure of artifact transients, but not spiking activity, across simultaneously recorded channels on the array, across pulses within a train, and across trials. The ERAASR algorithm requires no special hardware, imposes no requirements on the shape of the artifact or the multielectrode array geometry, and comprises sequential application of straightforward linear methods with intuitive parameters. The approach should be readily applicable to most datasets where stimulation does not saturate the recording amplifier. MAIN RESULTS The effectiveness of the algorithm is demonstrated in macaque dorsal premotor cortex using acute linear multielectrode array recordings and single electrode stimulation. Large electrical artifacts appeared on all channels during stimulation. After application of ERAASR, the cleaned signals were quiescent on channels with no spontaneous spiking activity, whereas spontaneously active channels exhibited evoked spikes which closely resembled spontaneously occurring spiking waveforms. SIGNIFICANCE We hope that enabling simultaneous electrical stimulation and multielectrode array recording will help elucidate the causal links between neural activity and cognition and facilitate naturalistic sensory prostheses.
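The core step — exploiting artifact structure shared across channels — can be caricatured with a single principal components regression. This is a toy sketch, not the published pipeline: the actual ERAASR excludes each channel's own data from its artifact estimate and repeats the procedure across pulses and trials, which this omits:

```python
import numpy as np

def remove_shared_artifact(signals, n_pcs=2):
    # Toy version of the idea: estimate the stimulation artifact as the
    # top principal components shared across channels, then regress
    # those components out of every channel.
    # signals: (n_samples, n_channels)
    X = signals - signals.mean(axis=0)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    artifact = U[:, :n_pcs] * s[:n_pcs]                # shared artifact components
    beta, *_ = np.linalg.lstsq(artifact, X, rcond=None)
    return X - artifact @ beta                         # per-channel residual

# Simulated array: one large shared transient plus channel noise
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
transient = np.exp(-20.0 * t) * np.sin(2 * np.pi * 60 * t)
gains = rng.uniform(5.0, 10.0, size=16)                # per-channel artifact gain
raw = transient[:, None] * gains[None, :] + 0.05 * rng.standard_normal((1000, 16))
clean = remove_shared_artifact(raw, n_pcs=2)
```

Because the simulated artifact dominates the shared variance, the top components absorb it and the cleaned channels are left essentially uncorrelated with the transient.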
Affiliation(s)
- Daniel J. O’Shea
- Neurosciences Program, Stanford University, Stanford, CA, U.S.A
- Departments of Electrical Engineering, Bioengineering, and Neurobiology, Stanford University, Stanford, CA, U.S.A
- Krishna V. Shenoy
- Neurosciences Program, Stanford University, Stanford, CA, U.S.A
- Departments of Electrical Engineering, Bioengineering, and Neurobiology, Stanford University, Stanford, CA, U.S.A
- Bio-X Program, Stanford Neurosciences Institute, Stanford University, Stanford, CA, U.S.A
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, U.S.A
31
Russo AA, Bittner SR, Perkins SM, Seely JS, London BM, Lara AH, Miri A, Marshall NJ, Kohn A, Jessell TM, Abbott LF, Cunningham JP, Churchland MM. Motor Cortex Embeds Muscle-like Commands in an Untangled Population Response. Neuron 2018; 97:953-966.e8. [PMID: 29398358] [DOI: 10.1016/j.neuron.2018.01.004]
Abstract
Primate motor cortex projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid "tangling": moments where similar activity patterns led to dissimilar future patterns. Avoidance of tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Finally, we were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling.
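The notion of "tangling" in this abstract can be made concrete. A common formulation (after Russo et al.; the finite-difference derivative estimate and the choice of the softening constant eps here are assumptions) scores each time point by Q(t) = max over t' of ||x'(t) - x'(t')||^2 / (||x(t) - x(t')||^2 + eps), so Q is large when similar states lead to dissimilar future states:

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    # Trajectory tangling in the spirit of Russo et al. 2018:
    # Q(t) = max_{t'} ||x'(t) - x'(t')||^2 / (||x(t) - x(t')||^2 + eps)
    # X: (timepoints, dimensions) state trajectory.
    dX = np.gradient(X, dt, axis=0)                    # finite-difference derivative
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()                # scale-dependent softening
    num = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    den = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) + eps
    return (num / den).max(axis=1)

# A circle never revisits a state with a different velocity (low tangling);
# a figure-eight crosses itself with different velocities (high tangling).
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight = np.stack([np.cos(t), np.sin(t) * np.cos(t)], axis=1)
q_circle = tangling(circle, dt=t[1] - t[0])
q_eight = tangling(eight, dt=t[1] - t[0])
```

The self-crossing trajectory scores far higher at its crossing point, matching the intuition that low tangling means the current state reliably determines the future state.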
Affiliation(s)
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Sean R Bittner
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, NY 10027, USA; Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA
- Jeffrey S Seely
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Andrew Miri
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Departments of Biochemistry and Molecular Biophysics, Columbia University Medical Center, New York, NY 10032, USA
- Najja J Marshall
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Adam Kohn
- Department of Ophthalmology and Visual Sciences, Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Yeshiva University, Bronx, NY 10461, USA
- Thomas M Jessell
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA; Departments of Biochemistry and Molecular Biophysics, Columbia University Medical Center, New York, NY 10032, USA
- Laurence F Abbott
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY 10032, USA; Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY 10032, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA.
32
Abstract
The diversity and volume of omics data have brought biological and biomedical research into a big data era, much as happened across society a decade earlier. This shift raises a new challenge: moving from horizontal data ensembles (similar types of data collected by different labs or companies) to vertical data ensembles (different types of data collected for a matched group of individuals). It requires integrative analysis in biology and biomedicine and urgent development of data-integration methods to address the change from population-guided to individual-guided investigation. Data integration is an effective approach to solving complex problems and understanding complicated systems. Several benchmark studies have revealed the heterogeneity and trade-offs inherent in the analysis of omics data. Integrative analysis can combine and investigate many datasets in a cost-effective, reproducible way. Current integration approaches for biological data operate in two modes: "bottom-up integration" with follow-up manual integration, and "top-down integration" with follow-up in silico integration. This paper first summarizes combinatory analysis approaches, offering a candidate protocol for designing biological experiments for effective integrative studies in genomics, and then surveys data fusion approaches, offering guidance for developing computational models that detect biological significance. These approaches have also provided new data resources and analysis tools supporting precision medicine based on big biomedical data. Finally, problems and future directions for the integrative analysis of omics big data are highlighted.
Affiliation(s)
- Xiang-Tian Yu
- Key Laboratory of Systems Biology, Institute of Biochemistry and Cell Biology, Chinese Academy Science, Shanghai, China
- Tao Zeng
- Key Laboratory of Systems Biology, Institute of Biochemistry and Cell Biology, Chinese Academy Science, Shanghai, China.