1
Biderman D, Whiteway MR, Hurwitz C, Greenspan N, Lee RS, Vishnubhotla A, Warren R, Pedraja F, Noone D, Schartner M, Huntenburg JM, Khanal A, Meijer GT, Noel JP, Pan-Vazquez A, Socha KZ, Urai AE, Cunningham JP, Sawtell NB, Paninski L. Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools. bioRxiv 2024:2023.04.28.538703. PMID: 37162966; PMCID: PMC10168383; DOI: 10.1101/2023.04.28.538703.
Abstract
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce "Lightning Pose," an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry, and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post-hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We release a cloud application that allows users to label data, train networks, and predict new videos directly from the browser.
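The motion-continuity penalty described in the abstract can be illustrated with a toy version: on unlabeled video, frame-to-frame keypoint jumps beyond a plausible speed are penalized, while smooth trajectories cost nothing. This is a minimal NumPy sketch under assumed conventions (the function name, the hinge form, and the `(T, K, 2)` layout are illustrative; Lightning Pose's actual losses differ):

```python
import numpy as np

def temporal_continuity_penalty(preds, max_speed):
    """Toy motion-continuity penalty on predicted keypoints.

    preds: (T, K, 2) array of (x, y) positions for K keypoints over T frames.
    max_speed: largest plausible per-frame displacement, in pixels.
    Returns the mean hinge penalty over all implausibly large jumps.
    """
    # Frame-to-frame displacement magnitude of each keypoint: (T-1, K).
    speeds = np.linalg.norm(np.diff(preds, axis=0), axis=-1)
    # Penalize only the portion of each jump beyond the plausible limit.
    excess = np.maximum(speeds - max_speed, 0.0)
    return float(excess.mean())

# A smooth trajectory incurs no penalty; a single teleporting frame does.
t = np.linspace(0.0, 1.0, 100)
smooth = np.stack([t, t], axis=-1).reshape(100, 1, 2)
jumpy = smooth.copy()
jumpy[50] += 100.0  # implausible jump at frame 50
```

In the semi-supervised setting, a penalty of this kind is evaluated on many unlabeled frames and added to the supervised loss on the few labeled ones.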
Affiliation(s)
- Anup Khanal
- University of California Los Angeles, Los Angeles, USA
2
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. PMID: 38370650; PMCID: PMC10871230; DOI: 10.1101/2024.02.05.578988.
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
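The two constraints SCA imposes (factors sparse in time, occupying orthogonal dimensions) can be made concrete with a schematic objective: reconstruct the population activity from latent factors Z = XW, where W has orthonormal columns, while an L1 term encourages each factor to be active only briefly. This is an illustrative NumPy sketch, not the paper's exact formulation or optimizer:

```python
import numpy as np

def sca_objective(X, W, lam):
    """Schematic SCA-style objective.

    X: (T, N) population activity (T time points, N neurons).
    W: (N, K) basis with orthonormal columns ("orthogonal dimensions").
    lam: weight on the temporal-sparsity penalty.
    """
    Z = X @ W                    # (T, K) latent factors
    recon = Z @ W.T              # project factors back to neural space
    recon_err = np.sum((X - recon) ** 2)
    sparsity = np.abs(Z).sum()   # L1 over time favors temporally sparse factors
    return recon_err + lam * sparsity

# A full orthonormal basis reconstructs X exactly, so with lam = 0 the
# objective is (numerically) zero; the sparsity term then selects among
# the many bases that reconstruct equally well.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))
```

Minimizing such an objective over orthonormal W trades faithful reconstruction against factors that switch on only during distinct epochs, which is the intuition behind the parcellations described above.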
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
3
Jaroslow DD, Cunningham JP, Smith DI, Steinbauer MJ. Seasonal Phenology and Climate Associated Feeding Activity of Introduced Marchalina hellenica in Southeast Australia. Insects 2023; 14:305. PMID: 36975990; PMCID: PMC10054368; DOI: 10.3390/insects14030305.
Abstract
Invasive insects pose an increasing risk to global agriculture, environmental stability, and public health. Giant pine scale (GPS), Marchalina hellenica Gennadius (Hemiptera: Marchalinidae), is a phloem-feeding scale insect endemic to the Eastern Mediterranean Basin, where it primarily feeds on Pinus halepensis and other Pinaceae. In 2014, GPS was detected in the southeast of Melbourne, Victoria, Australia, infesting the novel host Pinus radiata. An eradication program was unsuccessful, and with this insect now established within the state, containment and management efforts are underway to stop its spread; however, there remains a need to understand the insect's phenology and behaviour in Australia to better inform control efforts. We documented the annual life cycle and seasonal fluctuations in activity of GPS in Australia over a 32-month period at two contrasting field sites. Onset and duration of life stages were comparable to seasons in Mediterranean conspecifics, although the results imply that the timing of GPS life stage progression is broadening or accelerating. GPS density was higher in Australia compared to Mediterranean reports, possibly due to the absence of key natural predators, such as the silver fly, Neoleucopis kartliana Tanasijtshuk (Diptera: Chamaemyiidae). Insect density and honeydew production in the Australian GPS population studied varied among locations and between generations. Although insect activity was well explained by climate, conditions recorded inside infested bark fissures often provided the weakest explanation of GPS activity. Our findings suggest that GPS activity is strongly influenced by climate, and this may in part be related to changes in host quality. An improved understanding of how our changing climate is influencing the phenology of phloem-feeding insects such as GPS will help predict where these insects are likely to flourish and assist with management programs for pest species.
Affiliation(s)
- Duncan D. Jaroslow
- Department of Ecology, Environment and Evolution, La Trobe University, Melbourne, VIC 3086, Australia
- John P. Cunningham
- School of Applied Systems Biology, La Trobe University, Melbourne, VIC 3086, Australia
- Agriculture Victoria, AgriBio Centre for AgriBioscience, Melbourne, VIC 3086, Australia
- David I. Smith
- Agriculture Victoria, Biosecurity and Agricultural Services, Cranbourne, VIC 3977, Australia
- School of Ecosystem and Forest Sciences, University of Melbourne, Parkville, Burnley, VIC 3121, Australia
- ArborCarbon, Murdoch University, Murdoch, WA 6150, Australia
- Martin J. Steinbauer
- Department of Ecology, Environment and Evolution, La Trobe University, Melbourne, VIC 3086, Australia
4
Miller AC, Anderson L, Leistedt B, Cunningham JP, Hogg DW, Blei DM. Mapping interstellar dust with Gaussian processes. Ann Appl Stat 2022. DOI: 10.1214/22-aoas1608.
Affiliation(s)
- Boris Leistedt
- Center for Computational Astrophysics, Flatiron Institute
5
Marshall NJ, Glaser JI, Trautmann EM, Amematsro EA, Perkins SM, Shadlen MN, Abbott LF, Cunningham JP, Churchland MM. Flexible neural control of motor units. Nat Neurosci 2022; 25:1492-1504. PMID: 36216998; PMCID: PMC9633430; DOI: 10.1038/s41593-022-01165-8.
Abstract
Voluntary movement requires communication from cortex to the spinal cord, where a dedicated pool of motor units (MUs) activates each muscle. The canonical description of MU function rests upon two foundational tenets. First, cortex cannot control MUs independently but supplies each pool with a common drive. Second, MUs are recruited in a rigid fashion that largely accords with Henneman's size principle. Although this paradigm has considerable empirical support, a direct test requires simultaneous observations of many MUs across diverse force profiles. In this study, we developed an isometric task that allowed stable MU recordings, in a rhesus macaque, even during rapidly changing forces. Patterns of MU activity were surprisingly behavior-dependent and could be accurately described only by assuming multiple drives. Consistent with flexible descending control, microstimulation of neighboring cortical sites recruited different MUs. Furthermore, the cortical population response displayed sufficient degrees of freedom to potentially exert fine-grained control. Thus, MU activity is flexibly controlled to meet task demands, and cortex may contribute to this ability.
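The first tenet has a simple linear-algebra signature: if every MU in a pool followed a single common drive, each unit's activity would be a scaled copy of that drive, so the MU-by-time activity matrix would be approximately rank 1. The toy check below illustrates that logic with synthetic drives; it is a schematic illustration of the "multiple drives" idea, not the paper's analysis:

```python
import numpy as np

def variance_explained_by_rank(A, r):
    """Fraction of variance in A captured by its best rank-r approximation,
    computed from the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return float((s[:r] ** 2).sum() / (s ** 2).sum())

t = np.linspace(0.0, 2.0 * np.pi, 200)
drive_a = np.sin(t) + 1.5       # a common drive (kept positive)
drive_b = np.cos(3.0 * t)       # a second, independent drive

# One drive, scaled per MU: the (3 MUs x time) activity matrix is rank 1.
one_drive = np.outer([1.0, 0.5, 2.0], drive_a)
# Two drives mixed across MUs: a rank-1 description no longer suffices.
two_drives = np.outer([1.0, 0.2, 0.8], drive_a) + np.outer([0.1, 1.0, 0.5], drive_b)
```

Behavior-dependent MU activity that needs rank greater than one to describe is, in this schematic sense, evidence for multiple descending drives.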
Affiliation(s)
- Najja J Marshall
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Joshua I Glaser
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Eric M Trautmann
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Elom A Amematsro
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Michael N Shadlen
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Howard Hughes Medical Institute, Columbia University, New York, NY, USA
- L F Abbott
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
6
Abe T, Kinsella I, Saxena S, Buchanan EK, Couto J, Briggs J, Kitt SL, Glassman R, Zhou J, Paninski L, Cunningham JP. Neuroscience Cloud Analysis As a Service: An open-source platform for scalable, reproducible data analysis. Neuron 2022; 110:2771-2789.e7. PMID: 35870448; PMCID: PMC9464703; DOI: 10.1016/j.neuron.2022.06.018.
Abstract
A key aspect of neuroscience research is the development of powerful, general-purpose data analyses that process large datasets. Unfortunately, modern data analyses have a hidden dependence upon complex computing infrastructure (e.g., software and hardware), which acts as an unaddressed deterrent to analysis users. Although existing analyses are increasingly shared as open-source software, the infrastructure and knowledge needed to deploy these analyses efficiently still pose significant barriers to use. In this work, we develop Neuroscience Cloud Analysis As a Service (NeuroCAAS): a fully automated open-source analysis platform offering automatic infrastructure reproducibility for any data analysis. We show how NeuroCAAS supports the design of simpler, more powerful data analyses and that many popular data analysis tools offered through NeuroCAAS outperform counterparts on typical infrastructure. Pairing rigorous infrastructure management with cloud resources, NeuroCAAS dramatically accelerates the dissemination and use of new data analyses for neuroscientific discovery.
Affiliation(s)
- Taiga Abe
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA
- Ian Kinsella
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- Shreya Saxena
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32607, USA
- E Kelly Buchanan
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA
- Joao Couto
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- John Briggs
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Sian Lee Kitt
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Ryan Glassman
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- John Zhou
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Liam Paninski
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- John P Cunningham
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
7
Gordon-Rodriguez E, Quinn TP, Cunningham JP. Learning sparse log-ratios for high-throughput sequencing data. Bioinformatics 2021; 38:157-163. PMID: 34498030; PMCID: PMC8696089; DOI: 10.1093/bioinformatics/btab645.
Abstract
MOTIVATION: The automatic discovery of sparse biomarkers that are associated with an outcome of interest is a central goal of bioinformatics. In the context of high-throughput sequencing (HTS) data, and compositional data (CoDa) more generally, an important class of biomarkers are the log-ratios between the input variables. However, identifying predictive log-ratio biomarkers from HTS data is a combinatorial optimization problem, which is computationally challenging. Existing methods are slow to run and scale poorly with the dimension of the input, which has limited their application to low- and moderate-dimensional metagenomic datasets.
RESULTS: Building on recent advances from the field of deep learning, we present CoDaCoRe, a novel learning algorithm that identifies sparse, interpretable and predictive log-ratio biomarkers. Our algorithm exploits a continuous relaxation to approximate the underlying combinatorial optimization problem. This relaxation can then be optimized efficiently using the modern ML toolbox, in particular gradient descent. As a result, CoDaCoRe runs several orders of magnitude faster than competing methods, while achieving state-of-the-art performance in terms of predictive accuracy and sparsity. We verify that CoDaCoRe outperforms existing methods across a wide range of microbiome, metabolite and microRNA benchmark datasets, as well as on a particularly high-dimensional dataset that is outright computationally intractable for existing sparse log-ratio selection methods.
AVAILABILITY AND IMPLEMENTATION: The CoDaCoRe package is available at https://github.com/egr95/R-codacore. Code and instructions for reproducing our results are available at https://github.com/cunningham-lab/codacore.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
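The continuous relaxation at the heart of this approach can be sketched as follows: the hard choice of which variables enter the numerator or denominator of a log-ratio is replaced by soft weights, making the feature differentiable and therefore trainable by gradient descent. This is an illustrative NumPy sketch (the function name and the tanh parameterization are assumptions; see the linked repositories for the actual algorithm):

```python
import numpy as np

def relaxed_log_ratio(X, theta):
    """Soft log-ratio feature for compositional data.

    Hard version: log of the geometric mean of a "numerator" subset of
    variables minus log of the geometric mean of a "denominator" subset.
    The relaxation replaces the hard subset choice with soft weights
    w = tanh(theta) in [-1, 1], so the feature is differentiable in theta.

    X: (n, p) strictly positive compositional data; theta: (p,) logits.
    """
    w = np.tanh(theta)
    pos, neg = np.maximum(w, 0), np.maximum(-w, 0)  # soft memberships
    logX = np.log(X)
    num = logX @ pos / max(pos.sum(), 1e-12)  # soft numerator geometric mean
    den = logX @ neg / max(neg.sum(), 1e-12)  # soft denominator geometric mean
    return num - den

# As theta saturates, the relaxation approaches a discrete log-ratio.
X = np.array([[1.0, 2.0, 4.0]])           # one 3-part composition
theta = np.array([50.0, 0.0, -50.0])      # effectively selects x1 over x3
```

In training, theta would be updated by gradient descent on a prediction loss, then discretized to recover an interpretable hard log-ratio.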
Affiliation(s)
- Thomas P Quinn
- Applied Artificial Intelligence Institute, Deakin University, Geelong, VIC 3126, Australia
- John P Cunningham
- Department of Statistics, Columbia University, New York, NY 10025, USA
8
Whiteway MR, Biderman D, Friedman Y, Dipoppa M, Buchanan EK, Wu A, Zhou J, Bonacchi N, Miska NJ, Noel JP, Rodriguez E, Schartner M, Socha K, Urai AE, Salzman CD, Cunningham JP, Paninski L. Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders. PLoS Comput Biol 2021; 17:e1009439. PMID: 34550974; PMCID: PMC8489729; DOI: 10.1371/journal.pcbi.1009439.
Abstract
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
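The combination of supervised and unsupervised signals can be sketched as a loss in which a few latent dimensions are tied to tracked pose while the full latent vector must still reconstruct the frame, leaving the remaining dimensions to absorb variability the pose estimates miss. A schematic NumPy version (the names and exact form are illustrative assumptions, not the paper's model):

```python
import numpy as np

def semi_supervised_latent_loss(frame, recon, z, pose, n_sup, beta):
    """Schematic loss for partitioning variability in a behavioral video.

    The first n_sup latent dimensions are pushed toward the tracked pose
    (supervised term); the full latent must still reconstruct the frame
    (unsupervised term), so the remaining dimensions capture what the
    pose estimates alone do not.

    frame, recon: flattened video frame and its reconstruction;
    z: (d,) latent vector; pose: (n_sup,) pose targets; beta: weight.
    """
    recon_err = np.mean((frame - recon) ** 2)   # unsupervised term
    sup_err = np.mean((z[:n_sup] - pose) ** 2)  # supervised term
    return recon_err + beta * sup_err

frame = np.ones(16)               # a (flattened) toy video frame
z = np.array([0.5, -0.2, 3.0])    # 2 supervised dims + 1 unsupervised
pose = np.array([0.5, -0.2])      # tracked pose targets
```

In the actual model this combined objective is optimized jointly with a VAE's KL regularizer; the sketch above shows only the partitioning idea.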
Affiliation(s)
- Matthew R. Whiteway
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Dan Biderman
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Yoni Friedman
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Mario Dipoppa
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- E. Kelly Buchanan
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Anqi Wu
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- John Zhou
- Department of Computer Science, Columbia University, New York, New York, United States of America
- Nathaniel J. Miska
- Sainsbury-Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York, United States of America
- Erica Rodriguez
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Karolina Socha
- Institute of Ophthalmology, University College London, London, United Kingdom
- Anne E. Urai
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
- C. Daniel Salzman
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Department of Psychiatry, Columbia University, New York, New York, United States of America
- New York State Psychiatric Institute, New York, New York, United States of America
- Kavli Institute for Brain Sciences, New York, New York, United States of America
- John P. Cunningham
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Liam Paninski
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
9
Shad R, Quach N, Fong R, Kasinpila P, Bowles C, Castro M, Guha A, Suarez EE, Jovinge S, Lee S, Boeve T, Amsallem M, Tang X, Haddad F, Shudo Y, Woo YJ, Teuteberg J, Cunningham JP, Langlotz CP, Hiesinger W. Predicting post-operative right ventricular failure using video-based deep learning. Nat Commun 2021; 12:5192. PMID: 34465780; PMCID: PMC8408163; DOI: 10.1038/s41467-021-25503-9.
Abstract
Despite progressive improvements over the decades, the rich temporally resolved data in an echocardiogram remain underutilized. Human assessments reduce the complex patterns of cardiac wall motion to a small list of measurements of heart function. All modern echocardiography artificial intelligence (AI) systems are similarly limited by design, automating measurements of the same reductionist metrics rather than utilizing the embedded wealth of data. This underutilization is most evident where clinical decision making is guided by subjective assessments of disease acuity. Predicting the likelihood of developing post-operative right ventricular failure (RV failure) in the setting of mechanical circulatory support is one such example. Here we describe a video AI system trained to predict post-operative RV failure using the full spatiotemporal density of information in pre-operative echocardiography. We achieve an AUC of 0.729 and show that this ML system significantly outperforms a team of human experts at the same task on independent evaluation.
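An AUC of 0.729 means the model ranks a randomly chosen patient who developed RV failure above a randomly chosen patient who did not about 73% of the time. For reference, a minimal implementation of that rank-based (Mann-Whitney) definition of AUC (the function name is ours; a real evaluation would use a standard library routine):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive case outscores a random
    negative case, with ties counted as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # All positive-vs-negative pairwise comparisons.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
```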
Affiliation(s)
- Rohan Shad
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Nicolas Quach
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Robyn Fong
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Patpilai Kasinpila
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Cayley Bowles
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Miguel Castro
- Department of Cardiovascular Medicine, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
- Ashrith Guha
- Department of Cardiovascular Medicine, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
- Erik E Suarez
- Department of Cardiothoracic Surgery, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
- Stefan Jovinge
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
- Sangjin Lee
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
- Theodore Boeve
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
- Myriam Amsallem
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
- Xiu Tang
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
| | - Francois Haddad
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
| | - Yasuhiro Shudo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Y Joseph Woo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Jeffrey Teuteberg
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA
| | | | - Curtis P Langlotz
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA
- Department of Radiology and Biomedical Informatics, Stanford University, Stanford, CA, USA
| | - William Hiesinger
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA.
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA.
| |
Collapse
|
10
|
Couto J, Musall S, Sun XR, Khanal A, Gluf S, Saxena S, Kinsella I, Abe T, Cunningham JP, Paninski L, Churchland AK. Chronic, cortex-wide imaging of specific cell populations during behavior. Nat Protoc 2021; 16:3241-3263. [PMID: 34075229 PMCID: PMC8788140 DOI: 10.1038/s41596-021-00527-z] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 02/26/2021] [Indexed: 02/04/2023]
Abstract
Measurements of neuronal activity across brain areas are important for understanding the neural correlates of cognitive and motor processes such as attention, decision-making and action selection. However, techniques that allow cellular resolution measurements are expensive and require a high degree of technical expertise, which limits their broad use. Wide-field imaging of genetically encoded indicators is a high-throughput, cost-effective and flexible approach to measure activity of specific cell populations with high temporal resolution and a cortex-wide field of view. Here we outline our protocol for assembling a wide-field macroscope setup, performing surgery to prepare the intact skull and imaging neural activity chronically in behaving, transgenic mice. Further, we highlight a processing pipeline that leverages novel, cloud-based methods to analyze large-scale imaging datasets. The protocol targets laboratories that are seeking to build macroscopes, optimize surgical procedures for long-term chronic imaging and/or analyze cortex-wide neuronal recordings. The entire protocol, including steps for assembly and calibration of the macroscope, surgical preparation, imaging and data analysis, requires a total of 8 h. It is designed to be accessible to laboratories with limited expertise in imaging methods or interest in high-throughput imaging during behavior.
Collapse
Affiliation(s)
- Joao Couto
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
| | - Simon Musall
- Institute of Biological Information Processing (IBI-3), Forschungszentrum Jülich, Jülich, Germany
- Department of Neurophysiology, Institute of Biology 2, RWTH Aachen University, Aachen, Germany
| | - Xiaonan R Sun
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Department of Neurosurgery, Zucker School of Medicine, Hofstra University, Hempstead, NY, USA
| | - Anup Khanal
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
| | - Steven Gluf
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
| | - Shreya Saxena
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
| | - Ian Kinsella
- Department of Statistics, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| | - Taiga Abe
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| | - John P Cunningham
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| | - Liam Paninski
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
| | - Anne K Churchland
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA.
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA.
| |
Collapse
|
11
|
Kimmel DL, Elsayed GF, Cunningham JP, Newsome WT. Value and choice as separable and stable representations in orbitofrontal cortex. Nat Commun 2020; 11:3466. [PMID: 32651373 PMCID: PMC7351792 DOI: 10.1038/s41467-020-17058-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 06/10/2020] [Indexed: 12/13/2022] Open
Abstract
Value-based decision-making requires different variables (including offer value, choice, expected outcome, and recent history) at different times in the decision process. Orbitofrontal cortex (OFC) is implicated in value-based decision-making, but it is unclear how downstream circuits read out complex OFC responses into separate representations of the relevant variables to support distinct functions at specific times. We recorded from single OFC neurons while macaque monkeys made cost-benefit decisions. Using a novel analysis, we find separable neural dimensions that selectively represent the value, choice, and expected reward of the present and previous offers. The representations are generally stable during periods of behavioral relevance, then transition abruptly at key task events and between trials. Applying new statistical methods, we show that the sensitivity, specificity, and stability of the representations are greater than expected from the population's low-level features (dimensionality and temporal smoothness) alone. The separability and stability suggest a mechanism (linear summation over static synaptic weights) by which downstream circuits can select for specific variables at specific times.
Collapse
Affiliation(s)
- Daniel L Kimmel
- Mortimer Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA.
- Department of Psychiatry, Columbia University, New York, NY, 10032, USA.
| | | | - John P Cunningham
- Mortimer Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA
- Department of Statistics, Columbia University, New York, NY, 10027, USA
| | - William T Newsome
- Department of Neurobiology and Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, 94305, USA
| |
Collapse
|
12
|
Russo AA, Khajeh R, Bittner SR, Perkins SM, Cunningham JP, Abbott LF, Churchland MM. Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation. Neuron 2020; 107:745-758.e6. [PMID: 32516573 DOI: 10.1016/j.neuron.2020.05.020] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 12/25/2019] [Accepted: 05/11/2020] [Indexed: 12/21/2022]
Abstract
The supplementary motor area (SMA) is believed to contribute to higher order aspects of motor control. We considered a key higher order role: tracking progress throughout an action. We propose that doing so requires population activity to display low "trajectory divergence": situations with different future motor outputs should be distinct, even when present motor output is identical. We examined neural activity in SMA and primary motor cortex (M1) as monkeys cycled various distances through a virtual environment. SMA exhibited multiple response features that were absent in M1. At the single-neuron level, these included ramping firing rates and cycle-specific responses. At the population level, they included a helical population-trajectory geometry with shifts in the occupied subspace as movement unfolded. These diverse features all served to reduce trajectory divergence, which was much lower in SMA versus M1. Analogous population-trajectory geometry, also with low divergence, naturally arose in networks trained to internally guide multi-cycle movement.
Collapse
Affiliation(s)
- Abigail A Russo
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
| | - Ramin Khajeh
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - Sean R Bittner
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - Sean M Perkins
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA
| | - John P Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
| | - L F Abbott
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY 10032, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
| |
Collapse
|
13
|
Najafi F, Elsayed GF, Cao R, Pnevmatikakis E, Latham PE, Cunningham JP, Churchland AK. Excitatory and Inhibitory Subnetworks Are Equally Selective during Decision-Making and Emerge Simultaneously during Learning. Neuron 2019; 105:165-179.e8. [PMID: 31753580 DOI: 10.1016/j.neuron.2019.09.045] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2018] [Revised: 02/28/2019] [Accepted: 09/25/2019] [Indexed: 12/23/2022]
Abstract
Inhibitory neurons, which play a critical role in decision-making models, are often simplified as a single pool of non-selective neurons lacking connection specificity. This assumption is supported by observations in the primary visual cortex: inhibitory neurons are broadly tuned in vivo and show non-specific connectivity in slice. The selectivity of excitatory and inhibitory neurons within decision circuits and, hence, the validity of decision-making models are unknown. We simultaneously measured excitatory and inhibitory neurons in the posterior parietal cortex of mice judging multisensory stimuli. Surprisingly, excitatory and inhibitory neurons were equally selective for the animal's choice, both at the single-cell and population level. Further, both cell types exhibited similar changes in selectivity and temporal dynamics during learning, paralleling behavioral improvements. These observations, combined with modeling, argue against circuit architectures assuming non-selective inhibitory neurons. Instead, they argue for selective subnetworks of inhibitory and excitatory neurons that are shaped by experience to support expert decision-making.
Collapse
Affiliation(s)
- Farzaneh Najafi
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| | | | - Robin Cao
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | | | - Peter E Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | | | | |
Collapse
|
14
|
Masry A, Cunningham JP, Clarke AR. From laboratory to the field: consistent effects of experience on host location by the fruit fly parasitoid Diachasmimorpha kraussii (Hymenoptera: Braconidae). Insect Sci 2019; 26:863-872. [PMID: 29505704 DOI: 10.1111/1744-7917.12587] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 10/09/2017] [Accepted: 10/26/2017] [Indexed: 06/08/2023]
Abstract
Associative learning is well documented in Hymenopteran parasitoids, where it is thought to be an adaptive mechanism for increasing successful host location in complex environments. Based on this learning capacity, it has been suggested that providing prerelease training to parasitoids reared for inundative release may lead to a subsequent increase in their efficacy as biological control agents. Using the fruit fly parasitoid Diachasmimorpha kraussii, we tested this hypothesis in a series of associative learning experiments which involved the parasitoid, two host fruits (tomatoes and nectarines), and one host fly (Bactrocera tryoni). In sequential Y-tube olfactometer studies, large field-cage studies, and then open field studies, naïve wasps showed a consistent preference for nectarines over tomatoes. The preference for nectarines was retained, but not significantly increased, for wasps which had prior training exposure to nectarines. However, and again consistently at all three spatial scales, prior experience on tomatoes led to significantly increased attraction to this fruit by tomato-trained wasps, including those liberated freely in the environment. These results, showing consistency of learning at multiple spatial scales, give confidence to the many laboratory-based learning studies which are extrapolated to the field without testing. The experiment also provides direct experimental support for the proposed practice of enhancing the quality of inundatively released parasitoids through associative learning.
Collapse
Affiliation(s)
- Ayad Masry
- School of Earth, Environmental and Biological Sciences, Queensland University of Technology (QUT), Brisbane, Queensland, Australia
| | - John P Cunningham
- Agriculture Victoria Research, AgriBio, Centre for AgriBioscience, Bundoora, Victoria, Australia
| | - Anthony R Clarke
- School of Earth, Environmental and Biological Sciences, Queensland University of Technology (QUT), Brisbane, Queensland, Australia
| |
Collapse
|
15
|
Saxena S, Cunningham JP. Towards the neural population doctrine. Curr Opin Neurobiol 2019; 55:103-111. [DOI: 10.1016/j.conb.2019.02.002] [Citation(s) in RCA: 110] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Revised: 01/30/2019] [Accepted: 02/07/2019] [Indexed: 01/06/2023]
|
16
|
Paninski L, Cunningham JP. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Curr Opin Neurobiol 2019; 50:232-241. [PMID: 29738986 DOI: 10.1016/j.conb.2018.04.007] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2017] [Revised: 03/12/2018] [Accepted: 04/06/2018] [Indexed: 01/01/2023]
Abstract
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie simply in collecting data from large neural populations, but rather in understanding this data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control (developed in lockstep with advances in experimental neurotechnology) promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision.
Collapse
Affiliation(s)
- L Paninski
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States; Department of Neuroscience, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States.
| | - J P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States
| |
Collapse
|
17
|
Lara AH, Elsayed GF, Zimnik AJ, Cunningham JP, Churchland MM. Conservation of preparatory neural events in monkey motor cortex regardless of how movement is initiated. eLife 2018; 7:31826. [PMID: 30132759 PMCID: PMC6112854 DOI: 10.7554/elife.31826] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2017] [Accepted: 07/26/2018] [Indexed: 11/13/2022] Open
Abstract
A time-consuming preparatory stage is hypothesized to precede voluntary movement. A putative neural substrate of motor preparation occurs when a delay separates instruction and execution cues. When readiness is sustained during the delay, sustained neural activity is observed in motor and premotor areas. Yet whether delay-period activity reflects an essential preparatory stage is controversial. In particular, it has remained ambiguous whether delay-period-like activity appears before non-delayed movements. To overcome that ambiguity, we leveraged a recently developed analysis method that parses population responses into putatively preparatory and movement-related components. We examined cortical responses when reaches were initiated after an imposed delay, at a self-chosen time, or reactively with low latency and no delay. Putatively preparatory events were conserved across all contexts. Our findings support the hypothesis that an appropriate preparatory state is consistently achieved before movement onset. However, our results reveal that this process can consume surprisingly little time.
Collapse
Affiliation(s)
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, United States
| | - Gamaleldin F Elsayed
- Department of Neuroscience, Columbia University Medical Center, New York, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States
| | - Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, United States
| | - John P Cunningham
- Center for Theoretical Neuroscience, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University Medical Center, New York, United States; Department of Statistics, Columbia University, New York, United States
| | - Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, United States; Grossman Center for the Statistics of Mind, Columbia University Medical Center, New York, United States; David Mahoney Center for Brain and Behavior Research, Columbia University Medical Center, New York, United States; Kavli Institute for Brain Science, Columbia University Medical Center, New York, United States
| |
Collapse
|
18
|
Lara AH, Cunningham JP, Churchland MM. Different population dynamics in the supplementary motor area and motor cortex during reaching. Nat Commun 2018; 9:2754. [PMID: 30013188 PMCID: PMC6048147 DOI: 10.1038/s41467-018-05146-z] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Accepted: 06/11/2018] [Indexed: 11/24/2022] Open
Abstract
Neural populations perform computations through their collective activity. Different computations likely require different population-level dynamics. We leverage this assumption to examine neural responses recorded from the supplementary motor area (SMA) and motor cortex. During visually guided reaching, the respective roles of these areas remain unclear; neurons in both areas exhibit preparation-related activity and complex patterns of movement-related activity. To explore population dynamics, we employ a novel "hypothesis-guided" dimensionality reduction approach. This approach reveals commonalities but also stark differences: linear population dynamics, dominated by rotations, are prominent in motor cortex but largely absent in SMA. In motor cortex, the observed dynamics produce patterns resembling muscle activity. Conversely, the non-rotational patterns in SMA co-vary with cues regarding when movement should be initiated. Thus, while SMA and motor cortex display superficially similar single-neuron responses during visually guided reaching, their different population dynamics indicate they are likely performing quite different computations.
Collapse
Affiliation(s)
- A H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
| | - J P Cunningham
- Department of Statistics, Columbia University, New York, NY, 10027, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
| | - M M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA.
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, 10032, USA.
| |
Collapse
|
19
|
Russo AA, Bittner SR, Perkins SM, Seely JS, London BM, Lara AH, Miri A, Marshall NJ, Kohn A, Jessell TM, Abbott LF, Cunningham JP, Churchland MM. Motor Cortex Embeds Muscle-like Commands in an Untangled Population Response. Neuron 2018; 97:953-966.e8. [PMID: 29398358 DOI: 10.1016/j.neuron.2018.01.004] [Citation(s) in RCA: 134] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2017] [Revised: 10/24/2017] [Accepted: 12/31/2017] [Indexed: 01/02/2023]
Abstract
Primate motor cortex projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid "tangling": moments where similar activity patterns led to dissimilar future patterns. Avoidance of tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Finally, we were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling.
Collapse
Affiliation(s)
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
| | - Sean R Bittner
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
| | - Sean M Perkins
- Zuckerman Institute, Columbia University, New York, NY 10027, USA; Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA
| | - Jeffrey S Seely
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
| | | | - Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
| | - Andrew Miri
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Departments of Biochemistry and Molecular Biophysics, Columbia University Medical Center, New York, NY 10032, USA
| | - Najja J Marshall
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA
| | - Adam Kohn
- Department of Ophthalmology and Visual Sciences, Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Yeshiva University, Bronx, NY 10461, USA
| | - Thomas M Jessell
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA; Departments of Biochemistry and Molecular Biophysics, Columbia University Medical Center, New York, NY 10032, USA
| | - Laurence F Abbott
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Department of Physiology and Cellular Biophysics, Columbia University Medical Center, New York, NY 10032, USA; Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY 10032, USA
| | - John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032, USA; Zuckerman Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA.
| |
Collapse
|
20
|
Elsayed GF, Cunningham JP. Structure in neural population recordings: an expected byproduct of simpler phenomena? Nat Neurosci 2017; 20:1310-1318. [PMID: 28783140 PMCID: PMC5577566 DOI: 10.1038/nn.4617] [Citation(s) in RCA: 79] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2017] [Accepted: 06/30/2017] [Indexed: 12/12/2022]
Abstract
Neuroscientists increasingly analyze the joint activity of multi-neuron recordings to identify population-level structure that is believed to be significant and scientifically novel. Claims of significant population structure support hypotheses in many brain areas. However, these claims require first investigating the possibility that the population structure in question is an expected byproduct of simpler features known to exist in data. Classically, this critical examination can be either intuited or addressed with conventional controls. However, these approaches fail when considering population data, raising concerns about the scientific merit of population-level studies. Here we develop a framework to test the novelty of population-level findings against simpler features such as correlations across times, neurons, and conditions. We apply this framework to test two recent population findings in prefrontal and motor cortices, providing essential context to those studies. More broadly, the methodologies we introduce provide a general neural population control for many population-level hypotheses.
Collapse
Affiliation(s)
- Gamaleldin F Elsayed
- Center for Theoretical Neuroscience, Columbia University, New York, New York, USA; Department of Neuroscience, Columbia University Medical Center, New York, New York, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, New York, USA
| | - John P Cunningham
- Center for Theoretical Neuroscience, Columbia University, New York, New York, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, New York, USA; Department of Statistics, Columbia University, New York, New York, USA
|
21
|
Miri A, Warriner CL, Seely JS, Elsayed GF, Cunningham JP, Churchland MM, Jessell TM. Behaviorally Selective Engagement of Short-Latency Effector Pathways by Motor Cortex. Neuron 2017; 95:683-696.e11. [PMID: 28735748 PMCID: PMC5593145 DOI: 10.1016/j.neuron.2017.06.042] [Citation(s) in RCA: 77] [Impact Index Per Article: 11.0] [Received: 01/24/2017] [Revised: 05/27/2017] [Accepted: 06/26/2017] [Indexed: 12/23/2022]
Abstract
Blocking motor cortical output with lesions or pharmacological inactivation has identified movements that require motor cortex. Yet, when and how motor cortex influences muscle activity during movement execution remains unresolved. We addressed this ambiguity using measurement and perturbation of motor cortical activity together with electromyography in mice during two forelimb movements that differ in their requirement for cortical involvement. Rapid optogenetic silencing and electrical stimulation indicated that short-latency pathways linking motor cortex with spinal motor neurons are selectively activated during one behavior. Analysis of motor cortical activity revealed a dramatic change between behaviors in the coordination of firing patterns across neurons that could account for this differential influence. Thus, our results suggest that changes in motor cortical output patterns enable a behaviorally selective engagement of short-latency effector pathways. The model of motor cortical influence implied by our findings helps reconcile previous observations on the function of motor cortex.
Affiliation(s)
- Andrew Miri
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Department of Biochemistry and Molecular Biophysics, Columbia University, New York, NY 10032, USA; Kavli Institute of Brain Science, Columbia University, New York, NY 10032, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA.
| | - Claire L Warriner
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Department of Biochemistry and Molecular Biophysics, Columbia University, New York, NY 10032, USA; Kavli Institute of Brain Science, Columbia University, New York, NY 10032, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA
| | - Jeffrey S Seely
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10032, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, USA; David Mahoney Center for Brain and Behavior Research, Columbia University, New York, NY 10032, USA; Kavli Institute of Brain Science, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA
| | - Gamaleldin F Elsayed
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10032, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, USA
| | - John P Cunningham
- Department of Statistics, Columbia University, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10032, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10032, USA; David Mahoney Center for Brain and Behavior Research, Columbia University, New York, NY 10032, USA; Kavli Institute of Brain Science, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA
| | - Thomas M Jessell
- Department of Neuroscience, Columbia University, New York, NY 10032, USA; Department of Biochemistry and Molecular Biophysics, Columbia University, New York, NY 10032, USA; Kavli Institute of Brain Science, Columbia University, New York, NY 10032, USA; Howard Hughes Medical Institute, Columbia University, New York, NY 10032, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA
|
22
|
Seely JS, Kaufman MT, Ryu SI, Shenoy KV, Cunningham JP, Churchland MM. Tensor Analysis Reveals Distinct Population Structure that Parallels the Different Computational Roles of Areas M1 and V1. PLoS Comput Biol 2016; 12:e1005164. [PMID: 27814353 PMCID: PMC5096707 DOI: 10.1371/journal.pcbi.1005164] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Received: 03/29/2016] [Accepted: 09/21/2016] [Indexed: 01/08/2023] Open
Abstract
Cortical firing rates frequently display elaborate and heterogeneous temporal structure. One often wishes to compute quantitative summaries of such structure—a basic example is the frequency spectrum—and compare with model-based predictions. The advent of large-scale population recordings affords the opportunity to do so in new ways, with the hope of distinguishing between potential explanations for why responses vary with time. We introduce a method that assesses a basic but previously unexplored form of population-level structure: when data contain responses across multiple neurons, conditions, and times, they are naturally expressed as a third-order tensor. We examined tensor structure for multiple datasets from primary visual cortex (V1) and primary motor cortex (M1). All V1 datasets were ‘simplest’ (there were relatively few degrees of freedom) along the neuron mode, while all M1 datasets were simplest along the condition mode. These differences could not be inferred from surface-level response features. Formal considerations suggest why tensor structure might differ across modes. For idealized linear models, structure is simplest across the neuron mode when responses reflect external variables, and simplest across the condition mode when responses reflect population dynamics. This same pattern was present for existing models that seek to explain motor cortex responses. Critically, only dynamical models displayed tensor structure that agreed with the empirical M1 data. These results illustrate that tensor structure is a basic feature of the data. For M1 the tensor structure was compatible with only a subset of existing models. Neuroscientists commonly measure the time-varying activity of neurons in the brain. Early studies explored how such activity directly encodes sensory stimuli. Since then neural responses have also been found to encode abstract parameters such as expected reward. 
Yet not all aspects of neural activity directly encode identifiable parameters: patterns of activity sometimes reflect the evolution of underlying internal computations, and may be only obliquely related to specific parameters. For example, it remains debated whether cortical activity during movement relates to parameters such as reach velocity, to parameters such as muscle activity, or to underlying computations that culminate in the production of muscle activity. To address this question we exploited an unexpected fact. When activity directly encodes a parameter it tends to be mathematically simple in a very particular way. When activity reflects the evolution of a computation being performed by the network, it tends to be mathematically simple in a different way. We found that responses in a visual area were simple in the first way, consistent with encoding of parameters. We found that responses in a motor area were simple in the second way, consistent with participation in the underlying computations that culminate in movement.
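The mode-unfolding idea can be sketched on toy data. The tensor construction and the 95%-variance rank criterion below are illustrative assumptions, not the paper's datasets or exact metric:

```python
import numpy as np

rng = np.random.default_rng(1)

def mode_unfold(T, mode):
    """Unfold a 3rd-order tensor into a matrix whose rows index `mode`."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def rank95(M):
    """Number of singular values needed to capture 95% of the variance."""
    s = np.linalg.svd(M, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, 0.95) + 1)

# Toy tensor (neurons x conditions x times), built so that the neuron mode
# is 'simple': every neuron is a mixture of just 2 shared temporal patterns.
n_neurons, n_conditions, n_times = 40, 10, 50
patterns = rng.standard_normal((2, n_conditions, n_times))
mixing = rng.standard_normal((n_neurons, 2))
T = np.tensordot(mixing, patterns, axes=1)

neuron_rank = rank95(mode_unfold(T, 0))     # few degrees of freedom
condition_rank = rank95(mode_unfold(T, 1))  # many more
```

Because every neuron here is a mixture of two shared patterns, the data are "simplest" along the neuron mode; the condition mode, by construction, needs many more degrees of freedom.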
Affiliation(s)
- Jeffrey S. Seely
- Department of Neuroscience, Columbia University Medical Center, New York, NY, United States of America
| | - Matthew T. Kaufman
- Neurosciences Program, Stanford University, Stanford, CA, United States of America
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, United States of America
| | - Stephen I. Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, United States of America
| | - Krishna V. Shenoy
- Neurosciences Program, Stanford University, Stanford, CA, United States of America
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Bioengineering, Stanford University, Stanford, CA, United States of America
- Department of Neurobiology, Stanford University, Stanford, CA, United States of America
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, United States of America
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, United States of America
| | - John P. Cunningham
- Grossman Center for the Statistics of Mind, Columbia University Medical Center, New York, NY, United States of America
- Department of Statistics, Columbia University, New York, NY, United States of America
| | - Mark M. Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, United States of America
- Grossman Center for the Statistics of Mind, Columbia University Medical Center, New York, NY, United States of America
- David Mahoney Center for Brain and Behavior Research, Columbia University Medical Center, New York, NY, United States of America
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, United States of America
|
23
|
Elsayed GF, Lara AH, Kaufman MT, Churchland MM, Cunningham JP. Reorganization between preparatory and movement population responses in motor cortex. Nat Commun 2016; 7:13239. [PMID: 27807345 PMCID: PMC5095296 DOI: 10.1038/ncomms13239] [Citation(s) in RCA: 188] [Impact Index Per Article: 23.5] [Received: 06/27/2016] [Accepted: 09/14/2016] [Indexed: 12/25/2022] Open
Abstract
Neural populations can change the computation they perform on very short timescales. Although such flexibility is common, the underlying computational strategies at the population level remain unknown. To address this gap, we examined population responses in motor cortex during reach preparation and movement. We found that there exist exclusive and orthogonal population-level subspaces dedicated to preparatory and movement computations. This orthogonality yielded a reorganization in response correlations: the set of neurons with shared response properties changed completely between preparation and movement. Thus, the same neural population acts, at different times, as two separate circuits with very different properties. This finding is not predicted by existing motor cortical models, which predict overlapping preparation-related and movement-related subspaces. Despite orthogonality, responses in the preparatory subspace were lawfully related to subsequent responses in the movement subspace. These results reveal a population-level strategy for performing separate but linked computations. Single neuron responses are highly complex and dynamic yet they are able to flexibly represent behaviour through their collective activity. Here the authors demonstrate that population activity patterns of motor cortex neurons are orthogonal during successive task epochs that are linked through a simple linear function.
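The notion of orthogonal preparatory and movement subspaces can be illustrated with synthetic population data. This is a hypothetical sketch: the PCA-based subspace and alignment computations below are generic stand-ins, not the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def top_subspace(X, d):
    """Orthonormal basis of the top-d principal subspace of X (samples x neurons)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def alignment(X, Q):
    """Fraction of X's variance that falls inside the subspace spanned by Q."""
    Xc = X - X.mean(axis=0)
    return np.sum((Xc @ Q) ** 2) / np.sum(Xc ** 2)

# Two epochs of population activity confined to orthogonal 2D planes
# of a 20-neuron state space, plus a little noise.
n_neurons = 20
basis = np.linalg.qr(rng.standard_normal((n_neurons, 4)))[0]
prep = rng.standard_normal((300, 2)) @ basis[:, :2].T + 0.05 * rng.standard_normal((300, n_neurons))
move = rng.standard_normal((300, 2)) @ basis[:, 2:].T + 0.05 * rng.standard_normal((300, n_neurons))

Q_prep = top_subspace(prep, 2)
align_within = alignment(prep, Q_prep)   # high: prep activity lives in its own plane
align_across = alignment(move, Q_prep)   # low: movement activity is orthogonal to it
```

A near-zero cross-epoch alignment, with high within-epoch alignment, is the signature of the orthogonality described in the abstract.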
Affiliation(s)
- Gamaleldin F Elsayed
- Center for Theoretical Neuroscience, Columbia University, New York, New York 10032, USA.,Department of Neuroscience, Columbia University Medical Center, New York, New York 10032, USA
| | - Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, New York 10032, USA
| | - Matthew T Kaufman
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, New York 10032, USA.,Grossman Center for the Statistics of Mind, Columbia University, 1255 Amsterdam Avenue, New York, New York 10027, USA.,David Mahoney Center for Brain and Behavior Research, Columbia University Medical Center, New York, New York 10032, USA.,Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York 10032, USA
| | - John P Cunningham
- Center for Theoretical Neuroscience, Columbia University, New York, New York 10032, USA.,Grossman Center for the Statistics of Mind, Columbia University, 1255 Amsterdam Avenue, New York, New York 10027, USA.,Department of Statistics, Columbia University, 1255 Amsterdam Avenue, Room 1005 SSW, MC 4690, New York, New York 10027, USA
|
24
|
Abstract
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
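The dataset-aggregation idea can be sketched in a toy 1D cursor task. Everything here is a hypothetical stand-in for the BCI-specific components: a linear policy, a proportional-control oracle, and least-squares refitting.

```python
import numpy as np

rng = np.random.default_rng(3)

def oracle_action(state, target):
    """Expert/oracle controller: move halfway toward the target."""
    return 0.5 * (target - state)

# DAgger loop: roll out the CURRENT policy, label every visited state with
# the oracle's action, aggregate all labeled data, and refit the policy.
states_all, actions_all = [], []
theta = np.zeros(2)                # linear policy: action = theta @ [state, target]

for _ in range(5):                 # DAgger iterations
    state, target = 0.0, 1.0
    for _ in range(20):            # one rollout under the current policy
        feats = np.array([state, target])
        states_all.append(feats)
        actions_all.append(oracle_action(state, target))   # oracle label
        state = state + theta @ feats + 0.01 * rng.standard_normal()
    X, y = np.array(states_all), np.array(actions_all)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)          # refit on aggregate
```

The key DAgger property is visible in the loop structure: training states come from the learner's own rollouts, while labels come from the oracle, so the learned policy converges toward the oracle on the state distribution it actually visits.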
Affiliation(s)
- Josh Merel
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
| | - David Carlson
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
| | - Liam Paninski
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
| | - John P. Cunningham
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
|
25
|
Abstract
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup that provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
Affiliation(s)
- Josh Merel
- Neurobiology and Behavior Program, Columbia University, New York, New York, United States of America
| | - Donald M. Pianto
- Statistics Department, Columbia University, New York, New York, United States of America
- Statistics Department, University of Brasília, Brasília, Distrito Federal, Brazil
| | - John P. Cunningham
- Statistics Department, Columbia University, New York, New York, United States of America
| | - Liam Paninski
- Statistics Department, Columbia University, New York, New York, United States of America
|
26
|
Abstract
The motor cortex was one of the first cortical areas to be explored electrophysiologically, yet little agreement has emerged regarding its basic response properties. Often it is assumed that single-neuron responses reflect a preference for a particular movement or movement variable. It may be further assumed that movement is generated by (or at least accompanied by) a growing population-level preference for the relevant movement. This view has been attractive because it provides a canonical form for the single neuron, a link between preparatory and movement activity, a way of interpreting the population response, and a platform for designing analyses and couching hypotheses. However, this traditional view yields predictions that are at odds with basic features of the data. We discuss an alternative simplified model, in which outgoing commands are produced by dynamics that generate different output patterns as a function of the initial preparatory state. For reaching tasks, we hypothesized simple quasioscillatory dynamics because they provide a natural basis set for the empirical patterns of muscle activity. The predictions of the dynamical model match the data well at both the single-neuron and population levels, and the quasioscillatory patterns explain many of the otherwise odd features of the neural responses.
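The dynamical alternative can be caricatured in two dimensions: a toy linear system with rotational (quasioscillatory) dynamics, where the initial preparatory state sets the amplitude and phase of the output. This is purely illustrative, not the model fit in the paper.

```python
import numpy as np

def rotation_dynamics(x0, n_steps, omega=0.3):
    """Linear dynamical system x_{t+1} = A x_t with a rotational (oscillatory) A."""
    A = np.array([[np.cos(omega), -np.sin(omega)],
                  [np.sin(omega),  np.cos(omega)]])
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps - 1):
        xs.append(A @ xs[-1])
    return np.array(xs)

# Two different preparatory (initial) states launch two different output
# patterns through the SAME dynamics: the initial condition, not a changing
# preference, determines the trajectory.
traj_a = rotation_dynamics([1.0, 0.0], 50)
traj_b = rotation_dynamics([0.0, 2.0], 50)
```

Each coordinate of a trajectory oscillates, and the two trajectories differ only through their initial conditions, mirroring the role the abstract assigns to the preparatory state.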
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Grossman Center for the Statistics of Mind, David Mahoney Center for Brain and Behavior Research, Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York 10032
| | - John P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Center for Theoretical Neuroscience, Institute for Data Science and Engineering, Columbia University, New York, New York 10027
|
27
|
Gilboa E, Saatçi Y, Cunningham JP. Scaling Multidimensional Inference for Structured Gaussian Processes. IEEE Trans Pattern Anal Mach Intell 2015; 37:424-436. [PMID: 26353252 DOI: 10.1109/tpami.2013.192] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Indexed: 06/05/2023]
Abstract
Exact Gaussian process (GP) regression has O(N^3) runtime for data size N, making it intractable for large N. Many algorithms for improving GP scaling approximate the covariance with lower rank matrices. Other work has exploited structure inherent in particular covariance functions, including GPs with implied Markov structure, and inputs on a lattice (both enable O(N) or O(N log N) runtime). However, these GP advances have not been well extended to the multidimensional input setting, despite the preponderance of multidimensional applications. This paper introduces and tests three novel extensions of structured GPs to multidimensional inputs, for models with additive and multiplicative kernels. First we present a new method for inference in additive GPs, showing a novel connection between the classic backfitting method and the Bayesian framework. We extend this model using two advances: a variant of projection pursuit regression, and a Laplace approximation for non-Gaussian observations. Lastly, for multiplicative kernel structure, we present a novel method for GPs with inputs on a multidimensional grid. We illustrate the power of these three advances on several data sets, achieving performance equal to or very close to the naive GP at orders of magnitude less cost.
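For the multiplicative-kernel, grid-input case, the key computational fact is that a product kernel on a grid has Kronecker structure, so its eigendecomposition factors along each input dimension. A minimal sketch of that idea (assuming a 2D grid and RBF kernels; not the paper's full algorithm suite):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(x, lengthscale=1.0):
    """Squared-exponential kernel matrix for 1D inputs x."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Inputs on a 2D grid with a product kernel: K = kron(K1, K2).
n1, n2 = 20, 25
x1 = np.linspace(0, 5, n1)
x2 = np.linspace(0, 5, n2)
K1, K2 = rbf(x1), rbf(x2)
sigma2 = 0.1
y = rng.standard_normal(n1 * n2)

# Structured solve of (K + sigma2*I) alpha = y without ever forming the
# (n1*n2) x (n1*n2) matrix: eigendecompose each factor, scale in the joint
# eigenbasis, transform back. Cost is dominated by the small per-dimension
# eigendecompositions rather than a cubic solve in n1*n2.
e1, Q1 = np.linalg.eigh(K1)
e2, Q2 = np.linalg.eigh(K2)
Y = y.reshape(n1, n2)
Z = (Q1.T @ Y @ Q2) / (np.outer(e1, e2) + sigma2)
alpha = (Q1 @ Z @ Q2.T).ravel()
```

The structured result matches the naive dense solve because the eigenvalues of kron(K1, K2) are exactly the pairwise products of the factor eigenvalues.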
|
28
|
Abstract
A new distributed computing framework for data analysis enables neuroscientists to meet the computational demands of modern experimental technologies.
Affiliation(s)
- John P Cunningham
- Department of Statistics, Columbia University, New York, New York, USA
|
29
|
Abstract
Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.
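The basic payoff of dimensionality reduction, population-level structure that is invisible in single neurons, can be sketched with PCA on synthetic data. The sizes and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# A 2D latent oscillation is shared across 60 neurons but buried in
# per-neuron noise; PCA on the population recovers the latent plane.
t = np.linspace(0, 4 * np.pi, 300)
latents = np.stack([np.sin(t), np.cos(t)], axis=1)       # (300, 2) true latents
loadings = rng.standard_normal((2, 60))
activity = latents @ loadings + 1.5 * rng.standard_normal((300, 60))

Xc = activity - activity.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T    # population trajectory in the top-2 PC plane

def plane_r2(recovered, truth):
    """Variance of `truth` explained by a linear readout of `recovered`."""
    coef, *_ = np.linalg.lstsq(recovered, truth, rcond=None)
    resid = truth - recovered @ coef
    return 1 - np.sum(resid ** 2) / np.sum((truth - truth.mean(axis=0)) ** 2)

r2 = plane_r2(pcs, latents)
```

Although each single neuron is dominated by noise, the top two principal components recover the shared latent trajectory with high fidelity, which is the motivation for population-level analysis that the review describes.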
Affiliation(s)
- John P Cunningham
- Department of Statistics, Columbia University, New York, New York, USA
| | - Byron M Yu
- Department of Electrical and Computer Engineering, Department of Biomedical Engineering, and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
|
30
|
Gilboa E, Cunningham JP, Nehorai A, Gruev V. Image interpolation and denoising for division of focal plane sensors using Gaussian processes. Opt Express 2014; 22:15277-15291. [PMID: 24977618 DOI: 10.1364/oe.22.015277] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Indexed: 06/03/2023]
Abstract
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
|
31
|
June RK, Cunningham JP, Fyhrie DP. A Novel Method for Curvefitting the Stretched Exponential Function to Experimental Data. Biomed Eng Res 2013; 2:153-158. [PMID: 24683538 DOI: 10.5963/ber0204001] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Indexed: 10/31/2022]
Abstract
The stretched exponential function has many applications in modeling numerous types of experimental relaxation data. However, problems arise when using standard algorithms to fit this function: we have observed that different initializations result in distinct fitted parameters. To avoid this problem, we developed a novel algorithm for fitting the stretched exponential model to relaxation data. This method is advantageous both because it requires only a single adjustable parameter and because it does not require initialization in the solution space. We tested this method on simulated data and experimental stress-relaxation data from bone and cartilage and found favorable results compared to a commonly-used Quasi-Newton method. For the simulated data, strong correlations were found between the simulated and fitted parameters suggesting that this method can accurately determine stretched exponential parameters. When this method was tested on experimental data, high quality fits were observed for both bone and cartilage stress-relaxation data that were significantly better than those determined with the Quasi-Newton algorithm.
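One way such a fit can be reduced to a single adjustable parameter: for fixed beta, log y is linear in t**beta, so only beta needs a 1D, initialization-free search. The sketch below illustrates that general idea on a unit-amplitude model and is not necessarily the paper's exact algorithm:

```python
import numpy as np

def fit_stretched_exponential(t, y, betas=np.linspace(0.05, 1.0, 400)):
    """Fit y = exp(-(t/tau)**beta) via a 1D grid search over beta.

    For fixed beta, log(y) = -c * t**beta with c = tau**(-beta): a linear
    least-squares problem for c with a closed-form solution, so no
    initialization in (tau, beta) space is required.
    """
    best = None
    for beta in betas:
        tb = t ** beta
        c = -(tb @ np.log(y)) / (tb @ tb)        # closed-form 1D least squares
        resid = np.sum((y - np.exp(-c * tb)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, beta, c)
    _, beta, c = best
    return c ** (-1.0 / beta), beta              # (tau, beta)

# Recover known parameters from noiseless synthetic relaxation data.
t = np.linspace(0.01, 10, 200)
true_tau, true_beta = 2.0, 0.5
y = np.exp(-(t / true_tau) ** true_beta)
tau_hat, beta_hat = fit_stretched_exponential(t, y)
```

Because the inner problem is solved exactly for each candidate beta, the fit cannot land in different local minima from different starting points, which is the failure mode of generic optimizers that the abstract describes.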
Affiliation(s)
- Ronald K June
- Department of Mechanical and Industrial Engineering, Montana State University, Bozeman, MT
| | - John P Cunningham
- Department of Electrical Engineering, Stanford University, Stanford, CA
| | - David P Fyhrie
- Departments of Biomedical Engineering and Orthopaedics, University of California, Davis, Davis, CA
|
32
|
Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Ryu SI, Shenoy KV. A brain machine interface control algorithm designed from a feedback control perspective. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2012:1318-22. [PMID: 23366141 DOI: 10.1109/embc.2012.6346180] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Indexed: 11/05/2022]
Abstract
We present a novel brain machine interface (BMI) control algorithm, the recalibrated feedback intention-trained Kalman filter (ReFIT-KF). The design of ReFIT-KF is motivated from a feedback control perspective applied to existing BMI control algorithms. The result is two design innovations that alter the modeling assumptions made by these algorithms and the methods by which these algorithms are trained. In online neural control experiments recording from a 96-electrode array implanted in M1 of a macaque monkey, the ReFIT-KF control algorithm demonstrates large performance improvements over the current state of the art velocity Kalman filter, reducing target acquisition time by a factor of two, while maintaining a 500 ms hold period, thereby increasing the clinical viability of BMI systems.
Affiliation(s)
- Vikash Gilja
- Dept. of Computer Science, Stanford University, Stanford, CA, USA
|
33
|
Zalucki MP, Cunningham JP, Downes S, Ward P, Lange C, Meissle M, Schellhorn NA, Zalucki JM. No evidence for change in oviposition behaviour of Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae) after widespread adoption of transgenic insecticidal cotton. Bull Entomol Res 2012; 102:468-76. [PMID: 22314028 DOI: 10.1017/s0007485311000848] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Indexed: 05/31/2023]
Abstract
Cotton growing landscapes in Australia have been dominated by dual-toxin transgenic Bt varieties since 2004. The cotton crop has thus effectively become a sink for the main target pest, Helicoverpa armigera. Theory predicts that there should be strong selection on female moths to avoid laying on such plants. We assessed oviposition by female moths collected from two cotton-growing regions when given a choice of tobacco, cotton and cabbage. Earlier work in the 1980s and 1990s on populations from the same geographic locations indicated these hosts were on average ranked as high, mid and low preference plants, respectively, and that host rankings had a heritable component. In the present study, we found no change in the relative ranking of hosts by females, with most eggs being laid on tobacco, then cotton and least on cabbage. As in earlier work, some females laid most eggs on cotton and aspects of oviposition behaviour had a heritable component. Certainly, cotton is not avoided as a host, and the implications of these findings for managing resistance to Bt cotton are discussed.
Affiliation(s)
- M P Zalucki
- School of Biological Sciences, The University of Queensland, St Lucia, Brisbane, 4072, Australia
- J P Cunningham
- School of Biological Sciences, The University of Queensland, St Lucia, Brisbane, 4072, Australia
- S Downes
- CSIRO Ecosystem Sciences, Australian Cotton Research Institute, Narrabri, 2390, NSW
- P Ward
- School of Biological Sciences, The University of Queensland, St Lucia, Brisbane, 4072, Australia
- C Lange
- School of Biological Sciences, The University of Queensland, St Lucia, Brisbane, 4072, Australia
- M Meissle
- CSIRO Ecosystem Sciences, Brisbane, 4001, Australia
- J M Zalucki
- School of Environment, Griffith University, Nathan, Brisbane, 4111, Australia
34
Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature 2012; 487:51-6. [PMID: 22722855] [PMCID: PMC3393826] [DOI: 10.1038/nature11129]
Abstract
Most theories of motor cortex have assumed that neural activity represents movement parameters. This view derives from what is known about primary visual cortex, where neural activity represents patterns of light. Yet it is unclear how well the analogy between motor and visual cortex holds. Single-neuron responses in motor cortex are complex, and there is marked disagreement regarding which movement parameters are represented. A better analogy might be with other motor systems, where a common principle is rhythmic neural activity. Here we find that motor cortex responses during reaching contain a brief but strong oscillatory component, something quite unexpected for a non-periodic behaviour. Oscillation amplitude and phase followed naturally from the preparatory state, suggesting a mechanistic role for preparatory neural activity. These results demonstrate an unexpected yet surprisingly simple structure in the population response. This underlying structure explains many of the confusing features of individual neural responses.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Kavli Institute for Brain Science, David Mahoney Center, Columbia University Medical Center, New York, New York 10032, USA.
35
Zhao M, Batista A, Cunningham JP, Chestek C, Rivera-Alvidrez Z, Kalmar R, Ryu S, Shenoy K, Iyengar S. An L1-regularized logistic model for detecting short-term neuronal interactions. J Comput Neurosci 2011; 32:479-97. [DOI: 10.1007/s10827-011-0365-5]
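No abstract is indexed for this entry, but the title names a standard technique. As a hedged illustration only (this is not the paper's point-process model; the data and parameter names below are invented), an L1-penalized logistic regression can be fit by proximal gradient descent with soft-thresholding:

```python
import numpy as np

def l1_logistic(X, y, lam=0.05, lr=0.1, n_iter=2000):
    """Fit logistic regression with an L1 penalty via proximal gradient
    descent (ISTA). X: (n, p) design matrix, y: (n,) labels in {0, 1}."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        z = X @ w
        grad = X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / n  # logistic-loss gradient
        w = w - lr * grad
        # soft-thresholding = proximal operator of the L1 norm
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Toy example (invented): only the first two of ten features carry signal,
# so the L1 penalty should zero out most of the remaining coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=500) > 0).astype(float)
w = l1_logistic(X, y)
```

The soft-threshold step is what produces exact zeros, which is the point of using an L1 rather than L2 penalty when screening many candidate interactions.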
36
Chestek CA, Gilja V, Nuyujukian P, Foster JD, Fan JM, Kaufman MT, Churchland MM, Rivera-Alvidrez Z, Cunningham JP, Ryu SI, Shenoy KV. Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex. J Neural Eng 2011; 8:045005. [PMID: 21775782] [DOI: 10.1088/1741-2560/8/4/045005]
Abstract
Cortically-controlled prosthetic systems aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic devices. Recent reports have demonstrated reasonably high levels of performance and control of computer cursors and prosthetic limbs, but to achieve true clinical viability, the long-term operation of these systems must be better understood. In particular, the quality and stability of the electrically-recorded neural signals require further characterization. Here, we quantify action potential changes and offline neural decoder performance over 382 days of recording from four intracortical arrays in three animals. Action potential amplitude decreased by 2.4% per month on average over the course of 9.4, 10.4, and 31.7 months in three animals. During most time periods, decoder performance was not well correlated with action potential amplitude (p > 0.05 for three of four arrays). In two arrays from one animal, action potential amplitude declined by an average of 37% over the first 2 months after implant. However, when using simple threshold-crossing events rather than well-isolated action potentials, no corresponding performance loss was observed during this time using an offline decoder. One of these arrays was effectively used for online prosthetic experiments over the following year. Substantial short-term variations in waveforms were quantified using a wireless system for contiguous recording in one animal, and compared within and between days for all three animals. Overall, this study suggests that action potential amplitude declines more slowly than previously supposed, and performance can be maintained over the course of multiple years when decoding from threshold-crossing events rather than isolated action potentials. This suggests that neural prosthetic systems may provide high performance over multiple years in human clinical trials.
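The threshold-crossing events used above in place of well-isolated action potentials can be sketched roughly as follows. This is a generic illustration with invented parameters (threshold in noise SDs, dead time) and synthetic data, not the paper's recording pipeline:

```python
import numpy as np

def threshold_crossings(v, fs, thresh_sd=-4.5, dead_ms=1.0):
    """Detect spike events as negative threshold crossings of a voltage
    trace, without spike sorting. The threshold is a multiple of the noise
    SD (estimated robustly from the median absolute value); a short dead
    time prevents re-triggering on the same waveform."""
    sigma = np.median(np.abs(v)) / 0.6745        # robust noise-SD estimate
    thresh = thresh_sd * sigma
    # indices where the trace drops below threshold from above
    idx = np.flatnonzero((v[1:] < thresh) & (v[:-1] >= thresh)) + 1
    dead = int(dead_ms * 1e-3 * fs)
    events, last = [], -dead - 1
    for i in idx:
        if i - last > dead:
            events.append(i)
            last = i
    return np.array(events, dtype=int)

# Toy trace: 1 s of unit-variance noise plus three injected negative spikes.
rng = np.random.default_rng(1)
fs = 30_000
v = rng.normal(0, 1.0, fs)
for t in (5_000, 15_000, 25_000):
    v[t:t + 30] -= 8.0                           # crude spike waveforms
events = threshold_crossings(v, fs)
```

Because no unit isolation is required, this event stream degrades gracefully as waveform amplitudes decline, which is the property the abstract exploits.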
Affiliation(s)
- Cynthia A Chestek
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
37
Cunningham JP, Nuyujukian P, Gilja V, Chestek CA, Ryu SI, Shenoy KV. A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces. J Neurophysiol 2011; 105:1932-49. [PMID: 20943945] [PMCID: PMC3075301] [DOI: 10.1152/jn.00503.2010]
Abstract
Neural prosthetic systems seek to improve the lives of severely disabled people by decoding neural activity into useful behavioral commands. These systems and their decoding algorithms are typically developed "offline," using neural activity previously gathered from a healthy animal, and the decoded movement is then compared with the true movement that accompanied the recorded neural activity. However, this offline design and testing may neglect important features of a real prosthesis, most notably the critical role of feedback control, which enables the user to adjust neural activity while using the prosthesis. We hypothesize that understanding and optimally designing high-performance decoders require an experimental platform where humans are in closed loop with the various candidate decode systems and algorithms. The extent to which the subject can, for a particular decode system, algorithm, or parameter, engage feedback and other strategies to improve decode performance remains unexplored. Closed-loop testing may suggest different choices than offline analyses. Here we ask if a healthy human subject, using a closed-loop neural prosthesis driven by synthetic neural activity, can inform system design. We use this online prosthesis simulator (OPS) to optimize "online" decode performance based on a key parameter of a current state-of-the-art decode algorithm: the bin width of a Kalman filter. First, we show that offline and online analyses indeed suggest different parameter choices. Previous literature and our offline analyses agree that neural activity should be analyzed in bins of 100- to 300-ms width. OPS analysis, which incorporates feedback control, suggests that much shorter bin widths (25-50 ms) yield higher decode performance. Second, we confirm this surprising finding using a closed-loop rhesus monkey prosthetic system. These findings illustrate the type of discovery made possible by the OPS, and so we hypothesize that this novel testing approach will help in the design of prosthetic systems that will translate well to human patients.
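A minimal sketch of the kind of Kalman-filter decoder whose bin width is the tuned parameter above. The matrices, the two-"neuron" observation model, and the 1-D toy data are all invented for illustration, not fit from neural recordings:

```python
import numpy as np

def kalman_decode(Y, A, C, Q, R, x0, P0):
    """Standard Kalman filter for x_t = A x_{t-1} + w,  y_t = C x_t + v.
    Y: (T, n_obs) binned neural observations; returns (T, dim) estimates.
    The bin width enters through how Y is binned and how A/Q are scaled."""
    x, P = x0.copy(), P0.copy()
    out = []
    for y in Y:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the new observation
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

# Toy check: track a 1-D random walk from two noisy observation channels.
rng = np.random.default_rng(2)
T = 200
x_true = np.cumsum(rng.normal(0, 0.1, T))
C = np.array([[1.0], [2.0]])                     # (2, 1) observation matrix
Y = x_true[:, None] @ C.T + rng.normal(0, 0.5, (T, 2))
A = np.eye(1); Q = np.eye(1) * 0.01
R = np.eye(2) * 0.25
xhat = kalman_decode(Y, A, C, Q, R, np.zeros(1), np.eye(1))
```

Shorter bins mean noisier observations per update but more frequent updates, which is precisely the trade-off that behaves differently under closed-loop feedback.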
Affiliation(s)
- John P Cunningham
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305-4075, USA
38
Churchland MM, Cunningham JP, Kaufman MT, Ryu SI, Shenoy KV. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron 2010; 68:387-400. [PMID: 21040842] [DOI: 10.1016/j.neuron.2010.09.015]
Abstract
The motor cortices are active during both movement and movement preparation. A common assumption is that preparatory activity constitutes a subthreshold form of movement activity: a neuron active during rightward movements becomes modestly active during preparation of a rightward movement. We asked whether this pattern of activity is, in fact, observed. We found that it was not: at the level of a single neuron, preparatory tuning was weakly correlated with movement-period tuning. Yet, somewhat paradoxically, preparatory tuning could be captured by a preferred direction in an abstract "space" that described the population-level pattern of movement activity. In fact, this relationship accounted for preparatory responses better than did traditional tuning models. These results are expected if preparatory activity provides the initial state of a dynamical system whose evolution produces movement activity. Our results thus suggest that preparatory activity may not represent specific factors, and may instead play a more mechanistic role.
Affiliation(s)
- Mark M Churchland
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA.
39
Chestek CA, Cunningham JP, Gilja V, Nuyujukian P, Ryu SI, Shenoy KV. Neural prosthetic systems: current problems and future directions. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2009:3369-75. [PMID: 19963796] [DOI: 10.1109/iembs.2009.5332822]
Abstract
By decoding neural activity into useful behavioral commands, neural prosthetic systems seek to improve the lives of severely disabled human patients. Motor decoding algorithms, which map neural spiking data to control parameters of a device such as a prosthetic arm, have received particular attention in the literature. Here, we highlight several outstanding problems that exist in most current approaches to decode algorithm design. These include two problems that we argue are unlikely to yield further dramatic increases in performance: spike sorting and spiking models. We also discuss three issues that have been less examined in the literature, and we argue that addressing them may result in dramatic future increases in performance. These are: non-stationarity of recorded waveforms, the limitations of linear mappings between neural activity and movement kinematics, and the low signal-to-noise ratio of the neural data. We demonstrate these problems with data from 39 experimental sessions with a non-human primate performing reaches and with recent literature. In all, this study suggests that research in cortically-controlled prosthetic systems may require reprioritization to achieve performance that is acceptable for a clinically viable human system.
Affiliation(s)
- Cindy A Chestek
- Dept of Electrical Engineering, Stanford University, Stanford, CA, USA.
40
Cunningham JP, Gilja V, Ryu SI, Shenoy KV. Methods for estimating neural firing rates, and their application to brain-machine interfaces. Neural Netw 2009; 22:1235-46. [PMID: 19349143] [PMCID: PMC2783748] [DOI: 10.1016/j.neunet.2009.02.004]
Abstract
Neural spike trains present analytical challenges due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of a spike train's underlying firing rate. Numerous methods for estimating neural firing rates have been developed in recent years, but to date no systematic comparison has been made between them. In this study, we review both classic and current firing rate estimation techniques. We compare the advantages and drawbacks of these methods. Then, in an effort to understand their relevance to the field of neural prostheses, we also apply these estimators to experimentally gathered neural data from a prosthetic arm-reaching paradigm. Using these estimates of firing rate, we apply standard prosthetic decoding algorithms to compare the performance of the different firing rate estimators, and, perhaps surprisingly, we find minimal differences. This study serves as a review of available spike train smoothers and a first quantitative comparison of their performance for brain-machine interfaces.
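One of the classic estimators this review covers, Gaussian kernel smoothing, can be sketched as follows. The bandwidth and the homogeneous-Poisson toy data are arbitrary choices for illustration:

```python
import numpy as np

def smoothed_rate(spike_times, t_grid, sigma=0.05):
    """Classic kernel estimate of a firing rate: place a Gaussian of width
    sigma (s) on each spike and sum. Returns rate in spikes/s on t_grid."""
    d = t_grid[:, None] - np.asarray(spike_times)[None, :]
    k = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

# Toy check: a 20 spikes/s homogeneous Poisson train should smooth back
# to roughly 20 spikes/s (edges excluded to avoid boundary bias).
rng = np.random.default_rng(3)
dur, rate = 10.0, 20.0
spikes = np.sort(rng.uniform(0, dur, rng.poisson(rate * dur)))
t = np.linspace(0.5, dur - 0.5, 200)
est = smoothed_rate(spikes, t, sigma=0.1)
```

The bandwidth sigma plays the same role as the smoothing parameter the review compares across estimators; too small and the estimate is spiky, too large and fast rate changes are blurred out.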
Affiliation(s)
- John P Cunningham
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305-4075, USA
41
Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol 2009; 102:614-35. [PMID: 19357332] [PMCID: PMC2712272] [DOI: 10.1152/jn.90941.2008]
Abstract
We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories-Gaussian-process factor analysis (GPFA)-which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. 
Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
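The two-stage baseline described above (smooth the spike trains, then apply a static dimensionality reduction) can be sketched roughly as follows. Kernel width, latent model, and neuron count are invented for illustration; GPFA itself, which unifies the two stages in one probabilistic model, is not reproduced here:

```python
import numpy as np

def two_stage_trajectory(counts, sigma_bins=3, n_dims=2):
    """Two-stage neural trajectory extraction: Gaussian-smooth each neuron's
    binned counts over time, then project onto the top principal components
    (a static dimensionality reduction; factor analysis is also common)."""
    T, N = counts.shape
    # stage 1: temporal smoothing with a truncated Gaussian kernel
    half = 4 * sigma_bins
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma_bins) ** 2)
    k /= k.sum()
    smoothed = np.column_stack(
        [np.convolve(counts[:, n], k, mode="same") for n in range(N)]
    )
    # stage 2: static dimensionality reduction (PCA via SVD)
    Xc = smoothed - smoothed.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_dims].T

# Toy check: 12 Poisson neurons driven by one slow sinusoidal latent; the
# leading trajectory dimension should track that latent.
rng = np.random.default_rng(4)
T = 300
latent = np.sin(np.linspace(0, 4 * np.pi, T))
loadings = rng.uniform(0.5, 1.5, 12)
rates = np.exp(0.5 + latent[:, None] * loadings[None, :])
counts = rng.poisson(rates)
traj = two_stage_trajectory(counts, sigma_bins=5, n_dims=2)
```

In this two-stage scheme the smoothing width is fixed by hand for all neurons, which is exactly the limitation the abstract's extensions and GPFA address.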
Affiliation(s)
- Byron M Yu
- Department of Electrical Engineering, Neurosciences Program, Stanford University, Stanford, CA, USA
42
Chang C, Cunningham JP, Glover GH. Influence of heart rate on the BOLD signal: the cardiac response function. Neuroimage 2008; 44:857-69. [PMID: 18951982] [DOI: 10.1016/j.neuroimage.2008.09.029]
Abstract
It has previously been shown that low-frequency fluctuations in both respiratory volume and cardiac rate can induce changes in the blood-oxygen level dependent (BOLD) signal. Such physiological noise can obscure the detection of neural activation using fMRI, and it is therefore important to model and remove the effects of this noise. While a hemodynamic response function relating respiratory variation (RV) and the BOLD signal has been described [Birn, R.M., Smith, M.A., Jones, T.B., Bandettini, P.A., 2008b. The respiration response function: The temporal dynamics of fMRI signal fluctuations related to changes in respiration. Neuroimage 40, 644-654.], no such mapping for heart rate (HR) has been proposed. In the current study, the effects of RV and HR are simultaneously deconvolved from resting state fMRI. It is demonstrated that a convolution model including RV and HR can explain significantly more variance in gray matter BOLD signal than a model that includes RV alone, and an average HR response function is proposed that well characterizes our subject population. It is observed that the voxel-wise morphology of the deconvolved RV responses is preserved when HR is included in the model, and that its form is adequately modeled by Birn et al.'s previously-described respiration response function. Furthermore, it is shown that modeling out RV and HR can significantly alter functional connectivity maps of the default-mode network.
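Deconvolving a physiological response function can be sketched as a lagged least-squares (finite impulse response) fit. This generic illustration on synthetic data stands in for the paper's actual estimation on fMRI time series; the variable names and the two-tap "true" response are invented:

```python
import numpy as np

def deconvolve_response(signal, regressor, n_lags):
    """Estimate a finite impulse response h relating a physiological
    regressor (e.g. heart rate) to a measured signal, by least squares on
    a lagged design matrix: signal[t] ~= sum_k h[k] * regressor[t - k]."""
    T = len(signal)
    X = np.zeros((T, n_lags))
    for k in range(n_lags):
        X[k:, k] = regressor[: T - k]             # column k = regressor lagged by k
    h, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return h

# Toy check: recover a known short response from noisy synthetic data.
rng = np.random.default_rng(5)
T = 2000
hr = rng.normal(size=T)                           # white "heart rate" regressor
h_true = np.array([0.0, 1.0, 0.5, 0.0, 0.0])
bold = np.convolve(hr, h_true)[:T] + rng.normal(0, 0.1, T)
h_est = deconvolve_response(bold, hr, 5)
```

Stacking lagged columns for both RV and HR into one design matrix gives the simultaneous deconvolution the abstract describes, with each regressor's response read off from its own block of coefficients.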
Affiliation(s)
- Catie Chang
- Department of Electrical Engineering, Stanford University, Lucas MRI/S Center, Stanford, CA 94305-5488, USA.
43
Abstract
Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses.
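The flavor of the target placement idea can be sketched with a toy Monte Carlo search. The cosine-tuned Poisson population, window length, and naive jitter search below are all assumptions for illustration (the paper instead fits tuning from recorded data and uses its own optimization), so treat this only as a sketch of the objective, decode accuracy as a function of target angles:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical population: cosine-tuned neurons with random preferred
# directions, baseline 10 spikes/s and modulation depth 8 spikes/s.
n_neurons = 30
pref = rng.uniform(0, 2 * np.pi, n_neurons)
base, mod = 10.0, 8.0

def rates(theta):
    return base + mod * np.cos(theta - pref)      # per-neuron mean rates

def decode_accuracy(targets, n_trials=300, dt=0.2):
    """Monte Carlo estimate of maximum-likelihood decode accuracy for a
    layout of target angles, assuming Poisson counts in a dt-s window."""
    lam = np.array([rates(t) * dt for t in targets])        # (K, n_neurons)
    correct = 0
    for _ in range(n_trials):
        k = rng.integers(len(targets))
        y = rng.poisson(lam[k])
        # Poisson log likelihood per target (the log y! term is constant
        # across targets and dropped)
        ll = (y * np.log(lam) - lam).sum(axis=1)
        correct += int(np.argmax(ll) == k)
    return correct / n_trials

canonical = np.linspace(0, 2 * np.pi, 8, endpoint=False)

# Naive layout search: jitter the canonical angles, keep improvements.
best, best_acc = canonical, decode_accuracy(canonical)
for _ in range(20):
    cand = np.sort((best + rng.normal(0, 0.2, 8)) % (2 * np.pi))
    acc = decode_accuracy(cand)
    if acc > best_acc:
        best, best_acc = cand, acc
```

As the abstract notes for four and eight targets, when the population tunes fairly uniformly the optimized layout often ends up close to the canonical one, so the search doubles as a confirmation of the canonical choice.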
Affiliation(s)
- John P Cunningham
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305-4075, USA
44
Shenoy KV, Santhanam G, Ryu SI, Afshar A, Yu BM, Gilja V, Linderman MD, Kalmar RS, Cunningham JP, Kemere CT, Batista AP, Churchland MM, Meng TH. Increasing the performance of cortically-controlled prostheses. Conf Proc IEEE Eng Med Biol Soc 2008; Suppl:6652-6. [PMID: 17959477] [DOI: 10.1109/iembs.2006.260912]
Abstract
Neural prostheses have received considerable attention due to their potential to dramatically improve the quality of life of severely disabled patients. Cortically-controlled prostheses are able to translate neural activity from cerebral cortex into control signals for guiding computer cursors or prosthetic limbs. Non-invasive and invasive electrode techniques can be used to measure neural activity, with the latter promising considerably higher levels of performance and therefore functionality to patients. We review here some of our recent experimental and computational work aimed at establishing a principled design methodology to increase electrode-based cortical prosthesis performance to near theoretical limits. Studies discussed include translating unprecedentedly brief periods of "plan" activity into high-information-rate (6.5 bits/s) control signals, improving decode algorithms and optimizing visual target locations for further performance increases, and recording from chronically implanted arrays in freely behaving monkeys to characterize neuron stability. Taken together, these results should substantially increase the clinical viability of cortical prostheses.
Affiliation(s)
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA.
45
Chestek CA, Batista AP, Santhanam G, Yu BM, Afshar A, Cunningham JP, Gilja V, Ryu SI, Churchland MM, Shenoy KV. Single-neuron stability during repeated reaching in macaque premotor cortex. J Neurosci 2007; 27:10742-50. [PMID: 17913908] [PMCID: PMC6672821] [DOI: 10.1523/jneurosci.0959-07.2007]
Abstract
Some movements that animals and humans make are highly stereotyped, repeated with little variation. The patterns of neural activity associated with repeats of a movement may be highly similar, or the same movement may arise from different patterns of neural activity, if the brain exploits redundancies in the neural projections to muscles. We examined the stability of the relationship between neural activity and behavior. We asked whether the variability in neural activity that we observed during repeated reaching was consistent with a noisy but stable relationship, or with a changing relationship, between neural activity and behavior. Monkeys performed highly similar reaches under tight behavioral control, while many neurons in the dorsal aspect of premotor cortex and the primary motor cortex were simultaneously monitored for several hours. Neural activity was predominantly stable over time in all measured properties: firing rate, directional tuning, and contribution to a decoding model that predicted kinematics from neural activity. The small changes in neural activity that we did observe could be accounted for primarily by subtle changes in behavior. We conclude that the relationship between neural activity and practiced behavior is reasonably stable, at least on timescales of minutes up to 48 h. This finding has significant implications for the design of neural prosthetic systems because it suggests that device recalibration need not be overly frequent. It also has implications for studies of neural plasticity because a stable baseline permits identification of nonstationary shifts.
Affiliation(s)
- Cynthia A Chestek
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA
46
Abstract
Neural prosthetic systems have been designed to estimate continuous reach trajectories as well as discrete reach targets. In the latter case, reach targets are typically decoded from neural activity during an instructed delay period, before the reach begins. We have recently characterized the decoding speed and accuracy achievable by such a system. The results were obtained using canonical target layouts, independent of the tuning properties of the neurons available. Here we seek to increase decode accuracy by judiciously selecting the locations of the reach targets based on the characteristics of the neural population at hand. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. Using maximum likelihood decoding, the optimal target placement algorithm yielded up to 11 and 12% improvement for two and sixteen targets, respectively. For four and eight targets, gains were more modest (5 and 3%, respectively) as the target layouts found by the algorithm closely resembled the canonical layouts. Thus, the algorithm can serve not only to find target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. These results indicate that the optimal target placement algorithm is a valuable tool for designing high-performance prosthetic systems.
47
Hull CD, Cunningham JP, Moore CJ, Zalucki MP, Cribb BW. Discrepancy between antennal and behavioral responses for enantiomers of alpha-pinene: electrophysiology and behavior of Helicoverpa armigera (Lepidoptera). J Chem Ecol 2005; 30:2071-84. [PMID: 15609838] [DOI: 10.1023/b:joec.0000045596.13384.7e]
Abstract
The ability of adult cotton bollworm, Helicoverpa armigera (Hübner), to distinguish and respond to enantiomers of alpha-pinene was investigated with electrophysiological and behavioral methods. Electroantennogram recordings using mixtures of the enantiomers at saturating dose levels, and single-unit electrophysiology, indicated that the two forms were detected by the same receptor neurons. The relative size of the electroantennogram response was higher for the (-) compared to the (+) form, indicating greater affinity for the (-) form at the level of the dendrites. Behavioral assays investigated the ability of moths to discriminate between, and respond to, the (+) and (-) forms of alpha-pinene. Moths with no odor conditioning showed an innate preference for (+)-alpha-pinene. This preference displayed by naive moths was not significantly different from the preferences of moths conditioned on (+)-alpha-pinene. However, we found a significant difference in preference between moths conditioned on the (-) enantiomer compared to naive moths and moths conditioned on (+)-alpha-pinene, showing that learning plays an important role in the behavioral response. Moths are less able to distinguish between enantiomers of alpha-pinene than between different odors (e.g., phenylacetaldehyde versus (-)-alpha-pinene) in learning experiments. The relevance of receptor discrimination of enantiomers and the learning ability of the moths in host plant choice is discussed.
Affiliation(s)
- C D Hull
- Centre for Microscopy and Microanalysis, The University of Queensland, Brisbane, 4072, Australia
48
Nasca TJ, Veloski JJ, Monnier JA, Cunningham JP, Valerio S, Lewis TJ, Gonnella JS. Minimum instructional and program-specific administrative costs of educating residents in internal medicine. Arch Intern Med 2001; 161:760-6. [PMID: 11231711] [DOI: 10.1001/archinte.161.5.760]
Abstract
BACKGROUND: The cost associated with education of residents is of interest from an educational as well as a political perspective. Most studies report a single institution's actual incurred costs, based on traditional cost accounting methods. We quantified the minimum instructional and program-specific administrative costs for residency training in internal medicine.
METHODS: Using the Accreditation Council for Graduate Medical Education program requirements for internal medicine as minimum standards for teaching and administrative effort, we quantified the minimum instructional and administrative costs for sponsorship of an accredited residency program in internal medicine. We also analyzed the impact of resident complement and program curricular emphasis (outpatient, inpatient, or traditional) on the per-resident cost. The main outcome measure was the minimum annual per-resident cost of instruction and program-specific administration.
RESULTS: Using the assumptions in this model, we estimated the annual cost per resident of implementing the program requirements to be $50,648, $35,477, $28,517, and $26,197 for inpatient intensive residency programs with resident complements of 21, 42, 84, and 126, respectively. For outpatient intensive residency programs of identical resident complements, we estimated the annual per-resident cost to be $58,025, $42,853, $35,894, and $33,574. Fixed costs mandated by the program requirements, which did not vary across program size or configuration, were estimated to be $640,737.
CONCLUSIONS: There are fixed and variable costs associated with sponsorship of accredited internal medicine residency programs. The minimum cost per resident of education and departmental administration varies inversely with program size within the sizes examined.
Affiliation(s)
- T J Nasca
- Room 108, College Bldg, 1025 Walnut St, Philadelphia, PA 19107, USA.
49
Abstract
The effect of experience on pre- and post-alighting host selection in adult female Helicoverpa armigera was tested in an indoor flight cage, and in a large greenhouse. The moths had experienced either tobacco or tomato plants (both are hosts of H. armigera) for 3 days, or were given no experience. Individuals were then released and their host selection assessed. All individuals caught in the greenhouse were identified and tested for post-alighting acceptance on each host. Experience significantly influenced both pre- and post-alighting host selection in ovipositing moths. This modification in behaviour is attributed to 'learning', and presents the first detailed evidence for learning in moths. Possible behavioural mechanisms involved are discussed, and a hypothesis is presented regarding learning in post-alighting host acceptance. The existence of learning in H. armigera, a highly polyphagous agricultural pest, is discussed in the light of current theories on environmental predictability and the advantages of learning. Copyright 1998 The Association for the Study of Animal Behaviour.
Affiliation(s)
- JP Cunningham
- Department of Biology, Imperial College of Science, Technology & Medicine
50