1. Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. Neuron 2025; 113:1151-1168.e13. [PMID: 40081364] [PMCID: PMC12006907] [DOI: 10.1016/j.neuron.2025.02.006]
Abstract
The widespread adoption of deep learning to model neural activity often relies on "black-box" approaches that lack an interpretable connection between neural activity and network parameters. Here, we propose using algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We introduce our method, deconvolutional unrolled neural learning (DUNL), and demonstrate its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. We uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and across striatum during unstructured, naturalistic experiments. Our work leverages advances in interpretable deep learning to provide a mechanistic understanding of neural activity.
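As a concrete illustration of the unrolling idea, the sketch below runs plain ISTA iterations for one-dimensional sparse deconvolution in NumPy; each iteration corresponds to one layer of an unrolled network whose weights are tied to the generative kernel. This is not the authors' DUNL code, and the kernel shape, penalty, and layer count are illustrative assumptions.
```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def unrolled_ista_deconv(y, kernel, lam=0.2, n_layers=400):
    """Estimate a sparse event train x such that y ~= conv(x, kernel).

    Each loop iteration plays the role of one layer in an unrolled network;
    every layer's weights are tied to the generative kernel, which is what
    makes the parameters of such architectures directly interpretable.
    """
    step = 1.0 / (np.sum(np.abs(kernel)) ** 2)   # conservative Lipschitz bound
    x = np.zeros_like(y)
    for _ in range(n_layers):
        residual = y - np.convolve(x, kernel, mode="same")
        x = soft_threshold(x + step * np.convolve(residual, kernel[::-1], mode="same"),
                           step * lam)
    return x

# Toy data: a few sparse events convolved with an exponential response kernel.
rng = np.random.default_rng(0)
kernel = np.exp(-np.arange(41) / 5.0)
events = np.zeros(500)
events[rng.choice(500, size=8, replace=False)] = rng.uniform(1.0, 2.0, size=8)
y = np.convolve(events, kernel, mode="same") + 0.05 * rng.standard_normal(500)

x_hat = unrolled_ista_deconv(y, kernel)
print("true events:", int((events > 0).sum()),
      "| recovered above 0.1:", int((x_hat > 0.1).sum()))
```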
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA; Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI 02906, USA
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA
- Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Psychology, McGill University, Montréal, QC H3A 1G1, Canada; Mila - Quebec Artificial Intelligence Institute, Montréal, QC H2S 3H1, Canada.
- Demba Ba
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138, USA.
2. Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. bioRxiv [Preprint] 2024:2024.01.05.574379. [PMID: 38260512] [PMCID: PMC10802267] [DOI: 10.1101/2024.01.05.574379]
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge MA, 02138
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, 91125
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI, 02906
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Venkatesh N. Murthy
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge MA, 02138
- Department of Psychology, McGill University, Montréal QC, H3A 1G1
- Demba Ba
- Center for Brain Science, Harvard University, Cambridge MA, 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge MA, 02138
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge MA, 02138
3. Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. [PMID: 38348287] [PMCID: PMC10859875] [DOI: 10.3389/fncom.2024.1273053]
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
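To make the modeling setup concrete, here is a minimal NumPy sketch of the simplest possible "time-varying" Poisson GLM: the stimulus weight is allowed to differ between two epochs by interacting the stimulus with epoch indicators. The reviewed methods replace these hard indicators with smooth temporal bases or state-space priors; all quantities below are synthetic and illustrative.
```python
import numpy as np

rng = np.random.default_rng(1)
T = 4000                                    # time bins (e.g., 1 ms each)
stim = rng.standard_normal(T)

# Ground truth: the stimulus gain switches halfway through the recording.
gain = np.where(np.arange(T) < T // 2, 0.5, 1.5)
spikes = rng.poisson(np.exp(-2.5 + gain * stim))

# Design matrix: constant term plus the stimulus interacted with epoch indicators,
# so each epoch gets its own stimulus weight (smooth temporal bases or state-space
# priors are the usual refinement of these hard indicators).
early = (np.arange(T) < T // 2).astype(float)
X = np.column_stack([np.ones(T), stim * early, stim * (1.0 - early)])

# Fit by gradient ascent on the Poisson log-likelihood (log link).
w = np.zeros(X.shape[1])
for _ in range(5000):
    mu = np.exp(X @ w)
    w += 0.2 * X.T @ (spikes - mu) / T

print("estimated [baseline, early gain, late gain]:", np.round(w, 2))
# Should land near the ground truth of [-2.5, 0.5, 1.5].
```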
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
4. Garwood IC, Major AJ, Antonini MJ, Correa J, Lee Y, Sahasrabudhe A, Mahnke MK, Miller EK, Brown EN, Anikeeva P. Multifunctional fibers enable modulation of cortical and deep brain activity during cognitive behavior in macaques. Sci Adv 2023; 9:eadh0974. [PMID: 37801492] [PMCID: PMC10558126] [DOI: 10.1126/sciadv.adh0974]
Abstract
Recording and modulating neural activity in vivo enables investigations of the neurophysiology underlying behavior and disease. However, there is a dearth of translational tools for simultaneous recording and localized receptor-specific modulation. We address this limitation by translating multifunctional fiber neurotechnology previously only available for rodent studies to enable cortical and subcortical neural recording and modulation in macaques. We record single-neuron and broader oscillatory activity during intracranial GABA infusions in the premotor cortex and putamen. By applying state-space models to characterize changes in electrophysiology, we uncover that neural activity evoked by a working memory task is reshaped by even a modest local inhibition. The recordings provide detailed insight into the electrophysiological effect of neurotransmitter receptor modulation in both cortical and subcortical structures in an awake macaque. Our results demonstrate a first-time application of multifunctional fibers for causal studies of neuronal activity in behaving nonhuman primates and pave the way for clinical translation of fiber-based neurotechnology.
Affiliation(s)
- Indie C. Garwood
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Alex J. Major
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Marc-Joseph Antonini
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Josefina Correa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Youngbin Lee
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Atharva Sahasrabudhe
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA, USA
- Meredith K. Mahnke
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Earl K. Miller
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Emery N. Brown
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Institute for Medical Engineering and Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Anaesthesia, Harvard Medical School, Boston, MA, USA
- Polina Anikeeva
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
5. Bastos AM, Donoghue JA, Brincat SL, Mahnke M, Yanar J, Correa J, Waite AS, Lundqvist M, Roy J, Brown EN, Miller EK. Neural effects of propofol-induced unconsciousness and its reversal using thalamic stimulation. eLife 2021; 10:e60824. [PMID: 33904411] [PMCID: PMC8079153] [DOI: 10.7554/elife.60824]
Abstract
The specific circuit mechanisms through which anesthetics induce unconsciousness have not been completely characterized. We recorded neural activity from the frontal, parietal, and temporal cortices and thalamus while maintaining unconsciousness in non-human primates (NHPs) with the anesthetic propofol. Unconsciousness was marked by slow frequency (~1 Hz) oscillations in local field potentials, entrainment of local spiking to Up states alternating with Down states of little or no spiking activity, and decreased coherence in frequencies above 4 Hz. Thalamic stimulation ‘awakened’ anesthetized NHPs and reversed the electrophysiologic features of unconsciousness. Unconsciousness is linked to cortical and thalamic slow frequency synchrony coupled with decreased spiking, and loss of higher-frequency dynamics. This may disrupt cortical communication/integration.
Affiliation(s)
- André M Bastos
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Jacob A Donoghue
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Scott L Brincat
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Meredith Mahnke
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Jorge Yanar
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Josefina Correa
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Ayan S Waite
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Mikael Lundqvist
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Jefferson Roy
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Emery N Brown
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; The Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital/Harvard Medical School, Boston, United States; The Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, United States
- Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
6. Casile A, Faghih RT, Brown EN. Robust point-process Granger causality analysis in presence of exogenous temporal modulations and trial-by-trial variability in spike trains. PLoS Comput Biol 2021; 17:e1007675. [PMID: 33493162] [PMCID: PMC7861554] [DOI: 10.1371/journal.pcbi.1007675]
Abstract
Assessing directional influences between neurons is instrumental to understanding how brain circuits process information. To this end, Granger causality, a technique originally developed for time-continuous signals, has been extended to discrete spike trains. A fundamental assumption of this technique is that the temporal evolution of neuronal responses must be due only to endogenous interactions between recorded units, including self-interactions. This assumption is, however, rarely met in neurophysiological studies, where the response of each neuron is also modulated by exogenous causes such as unobserved units or slow adaptation processes. Here, we propose a novel point-process Granger causality technique that is robust to the two most common exogenous modulations observed in real neuronal responses: within-trial temporal variations in spiking rate and between-trial variability in their magnitudes. The method works by explicitly including both types of modulation in the generalized linear model of the neuronal conditional intensity function (CIF). We then assess the causal influence of neuron i on neuron j by measuring the relative reduction in neuron j's point-process likelihood when neuron i is included or removed. The CIF's hyperparameters are set on a per-neuron basis by minimizing Akaike's information criterion. In synthetic datasets generated by means of random processes or networks of integrate-and-fire units, the proposed method recovered the underlying ground-truth connectivity pattern with high accuracy, sensitivity, and robustness, whereas presently available point-process Granger causality techniques produced a significant number of false-positive connections. In real spiking responses recorded from neurons in the monkey premotor cortex (area F5), our method revealed many causal relationships between neurons as well as the temporal structure of their interactions. Given its robustness, our method can be effectively applied to real neuronal data, and its explicit estimate of the effects of unobserved causes on the recorded neuronal firing patterns can help decompose their temporal variations into endogenous and exogenous components.
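The core likelihood-comparison step can be sketched in a few lines. The toy below fits a full and a reduced discrete-time Poisson GLM for one neuron's spikes and scores the putative connection by the gain in log-likelihood; it omits the paper's key additions (covariates for within-trial rate modulation, between-trial gain, and AIC-based hyperparameter selection), and all simulation parameters are invented.
```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
T, lag = 20000, 3                              # 1 ms bins; neuron i drives neuron j at a 3 ms lag
spk_i = rng.binomial(1, 0.02, size=T)
spk_j = rng.binomial(1, np.clip(0.01 * np.exp(1.5 * np.roll(spk_i, lag)), 0, 1))

def history(x, n_lags):
    """Columns of x shifted 1..n_lags bins into the past (circular shift for brevity)."""
    return np.column_stack([np.roll(x, k) for k in range(1, n_lags + 1)])

X_self = history(spk_j, 5)                                 # neuron j's own spiking history
X_full = np.column_stack([X_self, history(spk_i, 5)])      # ... plus neuron i's history

def poisson_loglik(X, y):
    """Fit an (essentially unpenalized) Poisson GLM and return its log-likelihood."""
    mu = PoissonRegressor(alpha=1e-8, max_iter=1000).fit(X, y).predict(X)
    return np.sum(y * np.log(mu) - mu)

# Granger-style score: the likelihood gain from adding neuron i's history
# beyond what neuron j's own history already explains.
gc_ij = poisson_loglik(X_full, spk_j) - poisson_loglik(X_self, spk_j)
print(f"log-likelihood gain for the i -> j connection: {gc_ij:.1f}")
```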
Affiliation(s)
- Antonino Casile
- Istituto Italiano di Tecnologia, Center for Translational Neurophysiology of Speech and Communication (CTNSC), Ferrara, Italy
- Harvard Medical School, Department of Neurobiology, Boston, Massachusetts, United States of America
- Rose T. Faghih
- Department of Electrical and Computer Engineering, University of Houston, Houston, Texas, United States of America
- Emery N. Brown
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
7. Feng J, Wu H, Zeng Y, Wang Y. Weakly supervised learning in neural encoding for the position of the moving finger of a macaque. Cognit Comput 2020. [DOI: 10.1007/s12559-020-09742-4]
8. Rad KR, Maleki A. A scalable estimate of the out-of-sample prediction error via approximate leave-one-out cross-validation. J R Stat Soc Series B Stat Methodol 2020. [DOI: 10.1111/rssb.12374]
9. Sekhar S, Ramesh P, Bassetto G, Zrenner E, Macke JH, Rathbun DL. Characterizing Retinal Ganglion Cell Responses to Electrical Stimulation Using Generalized Linear Models. Front Neurosci 2020; 14:378. [PMID: 32477044] [PMCID: PMC7235533] [DOI: 10.3389/fnins.2020.00378]
Abstract
The ability to preferentially stimulate different retinal pathways is an important area of research for improving visual prosthetics. Recent work has shown that different classes of retinal ganglion cells (RGCs) have distinct linear electrical input filters for low-amplitude white noise stimulation. The aim of this study is to provide a statistical framework for characterizing how RGCs respond to white-noise electrical stimulation. We used a nested family of Generalized Linear Models (GLMs) to partition neural responses into different components—progressively adding covariates to the GLM which captured non-stationarity in neural activity, a linear dependence on the stimulus, and any remaining non-linear interactions. We found that each of these components resulted in increased model performance, but that even the non-linear model left a substantial fraction of neural variability unexplained. The broad goal of this paper is to provide a much-needed theoretical framework to objectively quantify stimulus paradigms in terms of the types of neural responses that they elicit (linear vs. non-linear vs. stimulus-independent variability). In turn, this aids the prosthetic community in the search for optimal stimulus parameters that avoid indiscriminate retinal activation and adaptation caused by excessively large stimulus pulses, and avoid low fidelity responses (low signal-to-noise ratio) caused by excessively weak stimulus pulses.
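A stripped-down version of the nested-model comparison might look like the following, where successive Poisson GLMs add a slow non-stationarity term, a linear stimulus term, and a nonlinear stimulus term, and are compared on held-out log-likelihood. The covariates and effect sizes are synthetic stand-ins, and the drift regressor is treated as observed for brevity.
```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(9)
n = 30000
stim = rng.standard_normal(n)                        # white-noise stimulus amplitude per bin
drift = np.sin(2 * np.pi * np.arange(n) / 10000.0)   # slow non-stationarity in excitability

# Spiking depends on the drift, a linear stimulus term, and a nonlinear (squared) term.
y = rng.poisson(np.exp(-3.0 + 0.6 * drift + 0.5 * stim + 0.2 * stim ** 2))

train = np.arange(n) < 20000
test = ~train

def heldout_loglik(columns):
    """Fit a Poisson GLM on the first two-thirds, score log-likelihood per bin on the rest."""
    X = np.column_stack(columns)
    mu = PoissonRegressor(alpha=1e-8, max_iter=1000).fit(X[train], y[train]).predict(X[test])
    return np.mean(y[test] * np.log(mu) - mu)

nested = {
    "non-stationarity only":      [drift],
    "+ linear stimulus term":     [drift, stim],
    "+ nonlinear stimulus term":  [drift, stim, stim ** 2],
}
for name, cols in nested.items():
    print(f"{name:27s} held-out log-likelihood/bin: {heldout_loglik(cols):+.4f}")
```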
Affiliation(s)
- Sudarshan Sekhar
- Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany; Graduate Training Center of Neuroscience, International Max Planck Research School, Tübingen, Germany; Systems Neuroscience Center, School of Medicine, University of Pittsburgh, Pittsburgh, PA, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, United States
- Poornima Ramesh
- Computational Neuroengineering, Department for Electrical and Computer Engineering, Technische Universität München, Munich, Germany
- Giacomo Bassetto
- Computational Neuroengineering, Department for Electrical and Computer Engineering, Technische Universität München, Munich, Germany; Neural System Analysis, Research Center Caesar, Bonn, Germany
- Eberhart Zrenner
- Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Jakob H Macke
- Computational Neuroengineering, Department for Electrical and Computer Engineering, Technische Universität München, Munich, Germany; Neural System Analysis, Research Center Caesar, Bonn, Germany
- Daniel L Rathbun
- Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany; Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany; Department of Ophthalmology, Henry Ford Health System, Detroit, MI, United States
10. Estimating the Parameters of Fitzhugh-Nagumo Neurons from Neural Spiking Data. Brain Sci 2019; 9:brainsci9120364. [PMID: 31835351] [PMCID: PMC6956007] [DOI: 10.3390/brainsci9120364]
Abstract
A theoretical and computational study on the estimation of the parameters of a single Fitzhugh-Nagumo model is presented. This work differs from conventional system identification in that the measured data consist only of discrete and noisy neural spiking data (spike times), which contain no amplitude information. The goal is achieved by applying a maximum likelihood estimation approach in which the likelihood function is derived from point-process statistics. The firing rate of the neuron was modeled as a nonlinear map (logistic sigmoid) of the membrane potential variable. The stimulus data were generated by a phased cosine Fourier series with fixed amplitude and frequency but a random phase, redrawn on each repeated trial. Various values of amplitude, stimulus component size, and sample size were used to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms, including statistical analysis (mean and standard deviation of the estimates). We also tested our model using realistic data from previous research (H1 neurons of blowflies) and found that the estimates have a tendency to converge.
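The likelihood machinery described above can be illustrated with a much simpler estimation problem: below, the "membrane potential" trajectory is taken as known (in the paper it comes from the FitzHugh-Nagumo dynamics being estimated), and only the parameters of the sigmoid rate map are recovered by maximizing a discrete-time point-process log-likelihood. Parameter values, bin size, and optimizer are illustrative choices.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
dt = 0.001                                       # 1 ms bins
t = np.arange(0.0, 20.0, dt)
v = np.sin(2 * np.pi * t)                        # stand-in "membrane potential", assumed known

def rate(params, v):
    """Firing rate (spikes/s): a logistic-sigmoid map of the membrane variable."""
    gain, thresh, log_rmax = params
    return np.exp(log_rmax) / (1.0 + np.exp(-gain * (v - thresh)))

true_params = np.array([6.0, 0.3, np.log(40.0)])             # gain, threshold, log max rate
spikes = rng.binomial(1, np.clip(rate(true_params, v) * dt, 0.0, 1.0))

def neg_loglik(params):
    """Negative discrete-time point-process log-likelihood of the observed spike train."""
    lam = rate(params, v) * dt
    return -np.sum(spikes * np.log(lam + 1e-12) - lam)

fit = minimize(neg_loglik, x0=np.array([1.0, 0.0, np.log(10.0)]), method="Nelder-Mead")
print("true parameters (gain, threshold, max rate):     ", np.array([6.0, 0.3, 40.0]))
print("estimated parameters (gain, threshold, max rate):", np.round([fit.x[0], fit.x[1], np.exp(fit.x[2])], 2))
```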
11. Characterizing and dissociating multiple time-varying modulatory computations influencing neuronal activity. PLoS Comput Biol 2019; 15:e1007275. [PMID: 31513570] [PMCID: PMC6759185] [DOI: 10.1371/journal.pcbi.1007275]
Abstract
In many brain areas, sensory responses are heavily modulated by factors including attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. Modelling the effect of these modulatory factors on sensory responses has proven challenging, mostly due to the time-varying and nonlinear nature of the underlying computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on neuronal responses on the order of milliseconds. The model’s performance is tested on extrastriate perisaccadic visual responses in nonhuman primates. Visual neurons respond to stimuli presented around the time of saccades differently than during fixation. These perisaccadic changes include sensitivity to the stimuli presented at locations outside the neuron’s receptive field, which suggests a contribution of multiple sources to perisaccadic response generation. Current computational approaches cannot quantitatively characterize the contribution of each modulatory source in response generation, mainly due to the very short timescale on which the saccade takes place. In this study, we use a high spatiotemporal resolution experimental paradigm along with a novel extension of the generalized linear model framework (GLM), termed the sparse-variable GLM, to allow for time-varying model parameters representing the temporal evolution of the system with a resolution on the order of milliseconds. We used this model framework to precisely map the temporal evolution of the spatiotemporal receptive field of visual neurons in the middle temporal area during the execution of a saccade. Moreover, an extended model based on a factorization of the sparse-variable GLM allowed us to disassociate and quantify the contribution of individual sources to the perisaccadic response. Our results show that our novel framework can precisely capture the changes in sensitivity of neurons around the time of saccades, and provide a general framework to quantitatively track the role of multiple modulatory sources over time. The sensory responses of neurons in many brain areas, particularly those in higher prefrontal or parietal areas, are strongly influenced by factors including task rules, attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. These modulations often occur in combination, or on fast timescales which present a challenge for both experimental and modelling approaches aiming to describe the underlying mechanisms or computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on spiking responses on the order of milliseconds. The model’s performance is evaluated by testing its ability to reproduce and dissociate multiple changes in visual sensitivity occurring in extrastriate visual cortex around the time of rapid eye movements. No previous model is capable of capturing these changes with as fine a resolution as that presented here. Our model both provides specific insight into the nature and time course of changes in visual sensitivity around the time of eye movements, and offers a general framework applicable to a wide variety of contexts in which sensory processing is modulated dynamically by multiple time-varying cognitive or behavioral factors, to understand the neuronal computations underpinning these modulations and make predictions about the underlying mechanisms.
12. Liu S, Iriarte-Diaz J, Hatsopoulos NG, Ross CF, Takahashi K, Chen Z. Dynamics of motor cortical activity during naturalistic feeding behavior. J Neural Eng 2019; 16:026038. [PMID: 30721881] [DOI: 10.1088/1741-2552/ab0474]
Abstract
OBJECTIVE: The orofacial primary motor cortex (MIo) plays a critical role in controlling tongue and jaw movements during oral motor functions, such as chewing, swallowing and speech. However, the neural mechanisms of MIo during naturalistic feeding are still poorly understood. There is a strong need for a systematic study of motor cortical dynamics during feeding behavior.
APPROACH: To investigate the neural dynamics and variability of MIo neuronal activity during naturalistic feeding, we used chronically implanted micro-electrode arrays to simultaneously record ensembles of neuronal activity in the MIo of two monkeys (Macaca mulatta) while the animals ate various types of food. We developed a Bayesian nonparametric latent variable model to reveal latent structures of neuronal population activity of the MIo and identify the complex mapping between MIo ensemble spike activity and high-dimensional kinematics.
MAIN RESULTS: Rhythmic neuronal firing patterns and oscillatory dynamics are evident in single-unit activity. At the population level, we uncovered the neural dynamics of rhythmic chewing, and quantified the neural variability at multiple timescales (complete feeding sequences, chewing sequence stages, chewing gape cycle phases) across food types. Our approach accommodates time-warping of chewing sequences and automatic model selection, and maps the latent states to chewing behaviors at fine timescales.
SIGNIFICANCE: Our work shows that neural representations of MIo ensembles display spatiotemporal patterns in chewing gape cycles at different chew sequence stages, and these patterns vary in a stage-dependent manner. Unsupervised learning and decoding analysis may reveal the link between complex MIo spatiotemporal patterns and chewing kinematics.
Affiliation(s)
- Shizhao Liu
- Department of Psychiatry, Department of Neuroscience & Physiology, New York University School of Medicine, New York, NY 10016, United States of America. Department of Biomedical Engineering, Tsinghua University, Beijing, People's Republic of China
13. Johnson TD, Coleman TP, Rangel LM. A flexible likelihood approach for predicting neural spiking activity from oscillatory phase. J Neurosci Methods 2019; 311:307-317. [PMID: 30367887] [PMCID: PMC6387742] [DOI: 10.1016/j.jneumeth.2018.10.028]
Abstract
Background: The synchronous ionic currents that give rise to neural oscillations have complex influences on neuronal spiking activity that are challenging to characterize. New method: Here we present a method to estimate probabilistic relationships between neural spiking activity and the phase of field oscillations using a generalized linear model (GLM) with an overcomplete basis of circular functions. We first use an L1-regularized maximum likelihood procedure to select an active set of regressors from the overcomplete set and perform model fitting using standard maximum likelihood estimation. An information theoretic model selection procedure is then used to identify an optimal subset of regressors and associated coefficients that minimize overfitting. To assess goodness of fit, we apply the time-rescaling theorem and compare model predictions to original data using quantile-quantile plots. Results: Spike-phase relationships in synthetic data were robustly characterized. When applied to in vivo hippocampal data from an awake behaving rat, our method captured a multimodal relationship between the spiking activity of a CA1 interneuron, a theta (5–10 Hz) rhythm, and a nested high gamma (65–135 Hz) rhythm. Comparison with existing methods: Previous methods for characterizing spike-phase relationships are often only suitable for unimodal relationships, impose specific relationship shapes, or have limited ability to assess the accuracy or fit of their characterizations. Conclusions: This method advances the way spike-phase relationships are visualized and quantified, and captures multimodal spike-phase relationships, including relationships with multiple nested rhythms. Overall, our method is a powerful tool for revealing a wide range of neural circuit interactions.
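A compressed sketch of the same ingredients, using scikit-learn's L1-penalized logistic (Bernoulli) GLM on a cosine/sine basis of phase harmonics rather than the paper's two-stage procedure (regularized selection followed by unpenalized refitting and information-criterion pruning). The basis size, penalty, and simulated bimodal phase tuning are assumptions for illustration.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_bins = 50000
phase = rng.uniform(0, 2 * np.pi, size=n_bins)        # oscillation phase in each bin

# Ground truth: a bimodal spike-phase relationship (fundamental + second harmonic).
p_spike = 0.01 * np.exp(0.8 * np.cos(phase) + 0.6 * np.cos(2 * phase - 1.0))
spikes = rng.binomial(1, np.clip(p_spike, 0, 1))

# Overcomplete circular basis: cosine/sine pairs at several harmonics of the phase.
harmonics = range(1, 6)
names = [f"{fn}({k}*phase)" for k in harmonics for fn in ("cos", "sin")]
X = np.column_stack([f(k * phase) for k in harmonics for f in (np.cos, np.sin)])

# An L1-penalized Bernoulli GLM keeps only the harmonics the data support,
# echoing the regularized selection step described above.
glm = LogisticRegression(penalty="l1", solver="liblinear", C=0.02).fit(X, spikes)
selected = [n for n, w in zip(names, glm.coef_[0]) if abs(w) > 1e-3]
print("selected circular regressors:", selected)
```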
Affiliation(s)
- Teryn D Johnson
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, United States.
- Todd P Coleman
- Department of Bioengineering, University of California, San Diego, La Jolla, CA 92093, United States.
- Lara M Rangel
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, United States.
14.
Abstract
Generalized linear models (GLMs) have a wide range of applications in systems neuroscience describing the encoding of stimulus and behavioral variables, as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included. In three case studies, we estimate tuning functions for common experiments in motor cortex, hippocampus, and visual cortex. We find that including traditionally omitted variables changes estimates of the original parameters and that modulation originally attributed to one variable is reduced after new variables are included. In GLMs describing single-neuron dynamics, we then demonstrate how postspike history effects can also be biased by omitted variables. Here we find that omitted variable bias can lead to mistaken conclusions about the stability of single-neuron firing. Omitted variable bias can appear in any model with confounders, where omitted variables modulate neural activity and the effects of the omitted variables covary with the included effects. Understanding how and to what extent omitted variable bias affects parameter estimates is likely to be important for interpreting the parameters and predictions of many neural encoding models.
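The effect is easy to reproduce in a few lines: when spiking depends on two correlated covariates and one is omitted, the included weight absorbs part of the omitted effect. The covariates, weights, and rates below are synthetic stand-ins rather than the paper's motor, hippocampal, or visual case studies.
```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)
n = 20000
shared = rng.standard_normal(n)                 # source of correlation between the two covariates
x1 = shared + 0.5 * rng.standard_normal(n)      # e.g., hand velocity
x2 = shared + 0.5 * rng.standard_normal(n)      # e.g., a coupled, often-unrecorded variable

# Spiking truly depends on BOTH covariates.
y = rng.poisson(np.exp(-2.0 + 0.3 * x1 + 0.6 * x2))

full = PoissonRegressor(alpha=1e-8, max_iter=1000).fit(np.column_stack([x1, x2]), y)
reduced = PoissonRegressor(alpha=1e-8, max_iter=1000).fit(x1[:, None], y)

print("true weight on x1:                 0.30")
print("estimate with x2 included:        ", round(float(full.coef_[0]), 2))
print("estimate with x2 omitted (biased):", round(float(reduced.coef_[0]), 2))
```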
Affiliation(s)
- Ian H Stevenson
- Department of Psychological Sciences, Department of Biomedical Engineering, and CT Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, U.S.A.
15. Ombao H, Fiecas M, Ting CM, Low YF. Statistical models for brain signals with properties that evolve across trials. Neuroimage 2018; 180:609-618. [DOI: 10.1016/j.neuroimage.2017.11.061]
16. A separable two-dimensional random field model of binary response data from multi-day behavioral experiments. J Neurosci Methods 2018; 307:175-187. [PMID: 29679704] [DOI: 10.1016/j.jneumeth.2018.04.006]
Abstract
BACKGROUND: The study of learning in populations of subjects can provide insights into the changes that occur in the brain with aging, drug intervention, and psychiatric disease.
NEW METHOD: We introduce a separable two-dimensional (2D) random field (RF) model for analyzing binary response data acquired during the learning of object-reward associations across multiple days. The method can quantify the variability of performance within a day and across days, and can capture abrupt changes in learning.
RESULTS: We apply the method to data from young and aged macaque monkeys performing a reversal-learning task. The method provides an estimate of performance within a day for each age group, and a learning rate across days for each monkey. We find that, as a group, the older monkeys require more trials to learn the object discriminations than do the young monkeys, and that the cognitive flexibility of the younger group is higher. We also use the model estimates of performance as features for clustering the monkeys into two groups. The clustering results in two groups that, for the most part, coincide with those formed by the age groups. Simulation studies suggest that clustering captures inter-individual differences in performance levels.
COMPARISON WITH EXISTING METHOD(S): In comparison with generalized linear models, this method is better able to capture the inherent two-dimensional nature of the data and to find between-group differences.
CONCLUSIONS: Applied to binary response data from groups of individuals performing multi-day behavioral experiments, the model discriminates between-group differences and identifies subgroups.
17. Allsop SA, Wichmann R, Mills F, Burgos-Robles A, Chang CJ, Felix-Ortiz AC, Vienne A, Beyeler A, Izadmehr EM, Glober G, Cum MI, Stergiadou J, Anandalingam KK, Farris K, Namburi P, Leppla CA, Weddington JC, Nieh EH, Smith AC, Ba D, Brown EN, Tye KM. Corticoamygdala Transfer of Socially Derived Information Gates Observational Learning. Cell 2018; 173:1329-1342.e18. [PMID: 29731170] [DOI: 10.1016/j.cell.2018.04.004]
Abstract
Observational learning is a powerful survival tool allowing individuals to learn about threat-predictive stimuli without directly experiencing the pairing of the predictive cue and punishment. This ability has been linked to the anterior cingulate cortex (ACC) and the basolateral amygdala (BLA). To investigate how information is encoded and transmitted through this circuit, we performed electrophysiological recordings in mice observing a demonstrator mouse undergo associative fear conditioning and found that BLA-projecting ACC (ACC→BLA) neurons preferentially encode socially derived aversive cue information. Inhibition of ACC→BLA alters real-time amygdala representation of the aversive cue during observational conditioning. Selective inhibition of the ACC→BLA projection impaired acquisition, but not expression, of observational fear conditioning. We show that information derived from observation about the aversive value of the cue is transmitted from the ACC to the BLA and that this routing of information is critically instructive for observational fear conditioning.
Affiliation(s)
- Stephen A Allsop
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Romy Wichmann
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Fergil Mills
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Anthony Burgos-Robles
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Chia-Jung Chang
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Ada C Felix-Ortiz
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Alienor Vienne
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Anna Beyeler
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Ehsan M Izadmehr
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Gordon Glober
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Meghan I Cum
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Johanna Stergiadou
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Kavitha K Anandalingam
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Kathryn Farris
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Praneeth Namburi
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Christopher A Leppla
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Javier C Weddington
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Edward H Nieh
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Anne C Smith
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ 85724, USA
- Demba Ba
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Emery N Brown
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA; The Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Kay M Tye
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
18. Zhang Y, Malem-Shinitski N, Allsop SA, Tye KM, Ba D. Estimating a Separably Markov Random Field from Binary Observations. Neural Comput 2018; 30:1046-1079. [PMID: 29381446] [DOI: 10.1162/neco_a_01059]
Abstract
A fundamental problem in neuroscience is to characterize the dynamics of spiking from the neurons in a circuit that is involved in learning about a stimulus or a contingency. A key limitation of current methods to analyze neural spiking data is the need to collapse neural activity over time or trials, which may cause the loss of information pertinent to understanding the function of a neuron or circuit. We introduce a new method that can determine not only the trial-to-trial dynamics that accompany the learning of a contingency by a neuron, but also the latency of this learning with respect to the onset of a conditioned stimulus. The backbone of the method is a separable two-dimensional (2D) random field (RF) model of neural spike rasters, in which the joint conditional intensity function of a neuron over time and trials depends on two latent Markovian state sequences that evolve separately but in parallel. Classical tools to estimate state-space models cannot be applied readily to our 2D separable RF model. We develop efficient statistical and computational tools to estimate the parameters of the separable 2D RF model. We apply these to data collected from neurons in the prefrontal cortex in an experiment designed to characterize the neural underpinnings of the associative learning of fear in mice. Overall, the separable 2D RF model provides a detailed, interpretable characterization of the dynamics of neural spiking that accompany the learning of a contingency.
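For intuition, here is a one-dimensional (trials-only) simplification of this kind of model: a random-walk latent state observed through Bernoulli outcomes, fitted by MAP gradient ascent rather than the paper's efficient estimator for the full separable 2D field. The simulated jump in performance and all hyperparameters are illustrative.
```python
import numpy as np

rng = np.random.default_rng(6)
n_trials = 300
# Latent learning state: a slow random walk plus an abrupt improvement after trial 150.
x_true = (np.cumsum(0.05 * rng.standard_normal(n_trials)) - 1.0
          + np.where(np.arange(n_trials) > 150, 2.0, 0.0))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-x_true)))          # one binary outcome per trial

def map_smooth(y, sigma2=0.1, lr=0.02, n_iter=5000):
    """MAP estimate of a random-walk state observed through Bernoulli outcomes.

    Gradient ascent on
        sum_k [ y_k x_k - log(1 + exp(x_k)) ] - (1 / (2 sigma2)) sum_k (x_k - x_{k-1})^2,
    i.e., a trials-only (1D) simplification of the separable 2D model.
    """
    x = np.zeros(len(y))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-x))
        d = np.diff(x)
        pen_grad = (np.concatenate(([0.0], d)) - np.concatenate((d, [0.0]))) / sigma2
        x += lr * ((y - p) - pen_grad)
    return 1.0 / (1.0 + np.exp(-x))                          # smoothed P(correct) per trial

p_hat = map_smooth(y)
print("estimated P(correct), first vs last 50 trials:",
      round(float(p_hat[:50].mean()), 2), "vs", round(float(p_hat[-50:].mean()), 2))
```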
Affiliation(s)
- Yingzhuo Zhang
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
- Stephen A Allsop
- Department of Brain and Cognitive Sciences and Picower Institute for Learning and Memory, Cambridge, MA 02139, U.S.A.
- Kay M Tye
- Department of Brain and Cognitive Sciences and Picower Institute for Learning and Memory, Cambridge, MA 02139, U.S.A.
- Demba Ba
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
19.
Abstract
Rapid growth in sensor and recording technologies is spurring rapid growth in time series data. Nonstationary and oscillatory structure in time series is commonly analyzed using time-varying spectral methods. These widely used techniques lack a statistical inference framework applicable to the entire time series. We develop a state-space multitaper (SS-MT) framework for time-varying spectral analysis of nonstationary time series. We efficiently implement the SS-MT spectrogram estimation algorithm in the frequency domain as parallel 1D complex Kalman filters. In analyses of human EEGs recorded under general anesthesia, the SS-MT paradigm provides enhanced denoising (>10 dB) and spectral resolution relative to standard multitaper methods, a flexible time-domain decomposition of the time series, and a broadly applicable, empirical Bayes’ framework for statistical inference. Time series are an important data class that includes recordings ranging from radio emissions, seismic activity, global positioning data, and stock prices to EEG measurements, vital signs, and voice recordings. Rapid growth in sensor and recording technologies is increasing the production of time series data and the importance of rapid, accurate analyses. Time series data are commonly analyzed using time-varying spectral methods to characterize their nonstationary and often oscillatory structure. Current methods provide local estimates of data features. However, they do not offer a statistical inference framework that applies to the entire time series. The important advances that we report are state-space multitaper (SS-MT) methods, which provide a statistical inference framework for time-varying spectral analysis of nonstationary time series. We model nonstationary time series as a sequence of second-order stationary Gaussian processes defined on nonoverlapping intervals. We use a frequency-domain random-walk model to relate the spectral representations of the Gaussian processes across intervals. The SS-MT algorithm efficiently computes spectral updates using parallel 1D complex Kalman filters. An expectation–maximization algorithm computes static and dynamic model parameter estimates. We test the framework in time-varying spectral analyses of simulated time series and EEG recordings from patients receiving general anesthesia. Relative to standard multitaper (MT), SS-MT gave enhanced spectral resolution and noise reduction (>10 dB) and allowed statistical comparisons of spectral properties among arbitrary time series segments. SS-MT also extracts time-domain estimates of signal components. The SS-MT paradigm is a broadly applicable, empirical Bayes’ framework for statistical inference that can help ensure accurate, reproducible findings from nonstationary time series analyses.
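A single-frequency toy version of this framework is sketched below: the per-window Fourier coefficient at one frequency is treated as a noisy observation of a complex random-walk state and smoothed by a scalar complex Kalman filter (the paper runs one such filter per frequency bin of multitaper estimates and learns the noise parameters by EM). The simulation and the hand-set variances are illustrative.
```python
import numpy as np

rng = np.random.default_rng(7)
fs, win, n_win, f0 = 250, 250, 200, 10.0       # 250 Hz sampling, 1 s windows, 10 Hz rhythm
t_win = np.arange(win) / fs

# A 10 Hz rhythm whose amplitude follows a slow random walk across windows, in white noise.
amp = 1.0 + np.cumsum(0.03 * rng.standard_normal(n_win))
data = (np.concatenate([a * np.sin(2 * np.pi * f0 * t_win) for a in amp])
        + 2.0 * rng.standard_normal(n_win * win))

# Observation sequence: each window's Fourier coefficient at the 10 Hz bin.
k = int(f0 * win / fs)
obs = np.array([np.fft.fft(seg)[k] for seg in data.reshape(n_win, win)])
truth = -0.5j * win * amp                       # noiseless coefficient of a*sin(2*pi*f0*t)

# One scalar complex Kalman filter for this frequency bin (SS-MT runs one per bin).
q, r = 14.0, win * 2.0 ** 2                     # state / observation noise variances (matched to the simulation)
x, P = 0.0 + 0.0j, 1e4
x_filt = np.empty(n_win, dtype=complex)
for i, z in enumerate(obs):
    P += q                                      # random-walk prediction
    K = P / (P + r)
    x += K * (z - x)                            # update with this window's coefficient
    P *= 1.0 - K
    x_filt[i] = x

def rms(e):
    return np.sqrt(np.mean(np.abs(e) ** 2))

print("RMS error of raw windowed estimates:  ", round(rms(obs - truth), 1))
print("RMS error of state-space filtered fit:", round(rms(x_filt - truth), 1))
```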
20. Arai K, Kass RE. Inferring oscillatory modulation in neural spike trains. PLoS Comput Biol 2017; 13:e1005596. [PMID: 28985231] [PMCID: PMC5646905] [DOI: 10.1371/journal.pcbi.1005596]
Abstract
Oscillations are observed at various frequency bands in continuous-valued neural recordings like the electroencephalogram (EEG) and local field potential (LFP) in bulk brain matter, and analysis of spike-field coherence reveals that spiking of single neurons often occurs at certain phases of the global oscillation. Oscillatory modulation has been examined in relation to continuous-valued oscillatory signals, and independently from the spike train alone, but behavior or stimulus triggered firing-rate modulation, spiking sparseness, presence of slow modulation not locked to stimuli and irregular oscillations with large variability in oscillatory periods, present challenges to searching for temporal structures present in the spike train. In order to study oscillatory modulation in real data collected under a variety of experimental conditions, we describe a flexible point-process framework we call the Latent Oscillatory Spike Train (LOST) model to decompose the instantaneous firing rate in biologically and behaviorally relevant factors: spiking refractoriness, event-locked firing rate non-stationarity, and trial-to-trial variability accounted for by baseline offset and a stochastic oscillatory modulation. We also extend the LOST model to accommodate changes in the modulatory structure over the duration of the experiment, and thereby discover trial-to-trial variability in the spike-field coherence of a rat primary motor cortical neuron to the LFP theta rhythm. Because LOST incorporates a latent stochastic auto-regressive term, LOST is able to detect oscillations when the firing rate is low, the modulation is weak, and when the modulating oscillation has a broad spectral peak. Oscillatory modulation of neural activity in the brain is widely observed under conditions associated with a variety of cognitive tasks and mental states. Within individual neurons, oscillations may be uncovered in the moment-to-moment variation in neural firing rate. This, however, is often challenging because many factors may affect fluctuations in neural firing rate and, in addition, neurons fire irregular sets of action potentials, or spike trains, due to an unknown combination of meaningful signals and extraneous noise. We have devised a statistical Latent Oscillatory Spike Train (LOST) model with accompanying model-fitting technology, that is able to detect subtle oscillations in spike trains by taking into account both spiking noise and temporal variation in the oscillation itself. The method couples two techniques developed for other purposes in the literature on Bayesian analysis. Using data simulated from theoretical neurons and real data recorded from cortical motor neurons, we demonstrate the method’s ability to track changes in the modulatory structure of the oscillation across experimental trials.
Affiliation(s)
- Kensuke Arai
- Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Robert E. Kass
- Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
21. Rahnama Rad K, Machado TA, Paninski L. Robust and scalable Bayesian analysis of spatial neural tuning function data. Ann Appl Stat 2017. [DOI: 10.1214/16-aoas996]
|
22
|
Fiecas M, Ombao H. Modeling the Evolution of Dynamic Brain Processes During an Associative Learning Experiment. J Am Stat Assoc 2017. [DOI: 10.1080/01621459.2016.1165683] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Mark Fiecas
- Department of Statistics, University of Warwick, Coventry, UK
| | - Hernando Ombao
- Department of Statistics, University of California, Irvine, Irvine, CA, USA
| |
Collapse
|
23
|
Vasques X, Vanel L, Villette G, Cif L. Morphological Neuron Classification Using Machine Learning. Front Neuroanat 2016; 10:102. [PMID: 27847467 PMCID: PMC5088188 DOI: 10.3389/fnana.2016.00102] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2016] [Accepted: 10/07/2016] [Indexed: 01/20/2023] Open
Abstract
Classification and quantitative characterization of neuronal morphologies from histological reconstructions is challenging because it remains unclear how to delineate a neuronal cell class and which features best define it. Morphological characterization of neurons is a primary resource for anatomical comparison, morphometric analysis of cells, and brain modeling. The objectives of this paper are (i) to develop and integrate a pipeline that goes from morphological feature extraction to classification and (ii) to assess and compare the accuracy of machine learning algorithms in classifying neuron morphologies. The algorithms were trained on 430 digitally reconstructed neurons, subjectively classified into layers and/or m-types, from young and/or adult populations of the rat somatosensory cortex. Among supervised algorithms, linear discriminant analysis provided the best classification results; among unsupervised algorithms, affinity propagation and Ward clustering performed slightly better than the alternatives.
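As a concrete illustration of the supervised step described above, the sketch below runs linear discriminant analysis on vectors of morphological features. The synthetic three-class data and the feature count are assumptions made for the example; the study itself used features extracted from 430 digitally reconstructed neurons.

```python
# Minimal sketch of the supervised step: linear discriminant analysis on
# morphological feature vectors. Data and feature names are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_features = 100, 12   # e.g. soma size, dendritic length, branch order, ...
X = np.vstack([
    rng.normal(loc=mu, scale=1.0, size=(n_per_class, n_features))
    for mu in (0.0, 0.8, 1.6)       # three hypothetical m-types
])
y = np.repeat([0, 1, 2], n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"5-fold cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```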
Collapse
Affiliation(s)
- Xavier Vasques
- Laboratoire de Recherche en Neurosciences Cliniques, Saint-André-de-Sangonis, France
- International Business Machines Corporation Systems, Paris, France
| | - Laurent Vanel
- International Business Machines Corporation Systems, Paris, France
| | | | - Laura Cif
- Département de Neurochirurgie, Hôpital Gui de Chauliac, Centre Hospitalier Régional Universitaire de Montpellier, Montpellier, France
- Université de Montpellier 1, Montpellier, France
| |
Collapse
|
24
|
Inferring Cortical Variability from Local Field Potentials. J Neurosci 2016; 36:4121-35. [PMID: 27053217 DOI: 10.1523/jneurosci.2502-15.2016] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2015] [Accepted: 02/22/2016] [Indexed: 01/02/2023] Open
Abstract
The responses of sensory neurons can differ markedly across repeated presentations of the same stimulus. Here, we demonstrate a direct link between the trial-to-trial variability of cortical neuron responses and network activity that is reflected in local field potentials (LFPs). Spikes and LFPs were recorded with a multielectrode array from the middle temporal (MT) area of the visual cortex of macaques during the presentation of continuous optic flow stimuli. A maximum likelihood-based modeling framework was used to predict single-neuron spiking responses using the stimulus, the LFPs, and the activity of other recorded neurons. MT neuron responses were strongly linked to gamma oscillations (maximum at 40 Hz) as well as to lower-frequency delta oscillations (1-4 Hz), with consistent phase preferences across neurons. The predicted modulation associated with the LFP was largely complementary to that driven by visual stimulation, as well as the activity of other neurons, and accounted for nearly half of the trial-to-trial variability in the spiking responses. Moreover, the LFP model predictions accurately captured the temporal structure of noise correlations between pairs of simultaneously recorded neurons, and explained the variation in correlation magnitudes observed across the population. These results therefore identify signatures of network activity related to the variability of cortical neuron responses, and suggest their central role in sensory cortical function. SIGNIFICANCE STATEMENT The function of sensory neurons is nearly always cast in terms of representing sensory stimuli. However, recordings from visual cortex in awake animals show that a large fraction of neural activity is not predictable from the stimulus. We show that this variability is predictable given simultaneously recorded measures of network activity, the local field potentials. A model that combines elements of these signals with the stimulus processing of the neuron can predict neural responses dramatically better than current models, and can predict the structure of correlations across the cortical population. In identifying ways to understand stimulus processing in the context of ongoing network activity, this work thus provides a foundation to understand the role of sensory cortex in combining sensory and cognitive variables.
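The modeling approach summarized above can be caricatured as a point-process (Poisson) GLM in which binned spike counts are predicted from a stimulus covariate plus the phase of a band-limited LFP, entered as sine and cosine terms. The sketch below uses entirely synthetic signals and assumed parameters; it is meant only to show the structure of such a model, not to reproduce the published analysis.

```python
# Hedged sketch: binned spike counts regressed on a stimulus covariate and on
# the phase of a band-limited "LFP" via a Poisson GLM. Signals are synthetic.
import numpy as np
import statsmodels.api as sm
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
dt, T = 0.005, 200.0
n = int(T / dt)
nyq = 0.5 / dt

stimulus = rng.standard_normal(n)                       # stand-in for an optic-flow drive
lfp = rng.standard_normal(n)
b, a = butter(2, [30 / nyq, 50 / nyq], btype="band")     # roughly gamma band
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Synthetic spike counts that depend on both the stimulus and the LFP phase.
log_rate = np.log(5.0 * dt) + 0.5 * stimulus + 0.8 * np.cos(phase)
counts = rng.poisson(np.exp(log_rate))

X = sm.add_constant(np.column_stack([stimulus, np.cos(phase), np.sin(phase)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # intercept, stimulus weight, phase-coupling (cos, sin) weights
```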
Collapse
|
25
|
Jones SR. When brain rhythms aren't 'rhythmic': implication for their mechanisms and meaning. Curr Opin Neurobiol 2016; 40:72-80. [PMID: 27400290 DOI: 10.1016/j.conb.2016.06.010] [Citation(s) in RCA: 168] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Revised: 06/09/2016] [Accepted: 06/21/2016] [Indexed: 01/26/2023]
Abstract
Rhythms are a prominent signature of brain activity. Their expression is correlated with numerous examples of healthy information processing and their fluctuations are a marker of disease states. Yet, their causal or epiphenomenal role in brain function is still highly debated. We review recent studies showing that brain rhythms are not always 'rhythmic', by which we mean representative of repeated cycles of activity. Rather, high power and continuous rhythms in averaged signals can represent brief transient events on single trials whose density accumulates in the average. We also review evidence showing that time-domain signals with vastly different waveforms can exhibit identical spectral-domain frequency and power. Further, non-oscillatory waveform features can create spurious high spectral power. Knowledge of these possibilities is essential when interpreting rhythms and is easily missed without examining the data before processing. Lastly, we discuss how these findings suggest new directions to pursue in our quest to discover the mechanism and meaning of brain rhythms.
Collapse
Affiliation(s)
- Stephanie R Jones
- Department of Neuroscience Brown University Providence, RI 02912, United States.
| |
Collapse
|
26
|
Flexible models for spike count data with both over- and under- dispersion. J Comput Neurosci 2016; 41:29-43. [PMID: 27008191 DOI: 10.1007/s10827-016-0603-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2015] [Revised: 03/14/2016] [Accepted: 03/18/2016] [Indexed: 10/22/2022]
Abstract
A key observation in systems neuroscience is that neural responses vary, even in controlled settings where stimuli are held constant. Many statistical models assume that trial-to-trial spike count variability is Poisson, but there is considerable evidence that neurons can be substantially more or less variable than Poisson depending on the stimuli, attentional state, and brain area. Here we examine a set of spike count models based on the Conway-Maxwell-Poisson (COM-Poisson) distribution that can flexibly account for both over- and under-dispersion in spike count data. We illustrate applications of this noise model for Bayesian estimation of tuning curves and peri-stimulus time histograms. We find that COM-Poisson models with group/observation-level dispersion, where spike count variability is a function of time or stimulus, produce more accurate descriptions of spike counts compared to Poisson models as well as negative-binomial models often used as alternatives. Since dispersion is one determinant of parameter standard errors, COM-Poisson models are also likely to yield more accurate model comparison. More generally, these methods provide a useful, model-based framework for inferring both the mean and variability of neural responses.
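For readers unfamiliar with the distribution at the core of these models, the snippet below evaluates a truncated COM-Poisson probability mass function and shows how the dispersion parameter nu moves the Fano factor above or below the Poisson value of 1. The truncation limit and parameter values are assumptions chosen for illustration.

```python
# Minimal sketch of the Conway-Maxwell-Poisson (COM-Poisson) distribution:
# nu > 1 gives under-dispersion, nu < 1 over-dispersion, nu = 1 is Poisson.
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(lam, nu, k_max=200):
    """Return the pmf over counts 0..k_max (truncated normalization)."""
    k = np.arange(k_max + 1)
    log_w = k * np.log(lam) - nu * gammaln(k + 1)
    log_w -= log_w.max()                    # numerical stability
    w = np.exp(log_w)
    return w / w.sum()

for nu in (0.5, 1.0, 2.0):
    p = com_poisson_pmf(lam=5.0, nu=nu)
    k = np.arange(p.size)
    mean = (k * p).sum()
    var = ((k - mean) ** 2 * p).sum()
    print(f"nu={nu:3.1f}: mean={mean:5.2f}, Fano factor={var / mean:4.2f}")
```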
Collapse
|
27
|
Vyas S, Huang H, Gale JT, Sarma SV, Montgomery EB. Neuronal Complexity in Subthalamic Nucleus is Reduced in Parkinson's Disease. IEEE Trans Neural Syst Rehabil Eng 2015; 24:36-45. [PMID: 26168436 DOI: 10.1109/tnsre.2015.2453254] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Several theories posit that increased subthalamic nucleus (STN) activity is causal to Parkinsonism, yet in our previous study we showed that activity from 113 STN neurons from two epilepsy patients and 103 neurons from nine Parkinson's disease (PD) patients demonstrated no significant differences in frequencies or in the coefficients of variation of mean discharge frequencies per 1-s epochs. We continued our analysis using point process modeling to capture higher-order temporal dynamics; in particular, bursting, beta-band oscillations, excitatory and inhibitory ensemble interactions, and neuronal complexity. We used this analysis as input to a logistic regression classifier and were able to differentiate between PD and epilepsy neurons with an accuracy of 92%. We also found neuronal complexity, i.e., the number of states in a neuron's point process model, and inhibitory ensemble dynamics, which can be interpreted as a reduction in complexity, to be the most important features with respect to classification accuracy. Even in a dataset with no significant differences in firing rate, we observed differences between PD and epilepsy for other single-neuron measures. Our results suggest that PD is accompanied by a reduction in neuronal "complexity," a property that bears on a neuron's capacity to encode information: the more complex the dynamics, the more information the neuron can encode. This is also consistent with studies correlating the disease with a loss of variability in neuronal activity, as lower complexity implies less variability.
Collapse
|
28
|
Abstract
The signal-to-noise ratio (SNR), a commonly used measure of fidelity in physical systems, is defined as the ratio of the squared amplitude or variance of a signal relative to the variance of the noise. This definition is not appropriate for neural systems in which spiking activity is more accurately represented as point processes. We show that the SNR estimates a ratio of expected prediction errors and extend the standard definition to one appropriate for single neurons by representing neural spiking activity using point process generalized linear models (PP-GLM). We estimate the prediction errors using the residual deviances from the PP-GLM fits. Because the deviance is an approximate χ² random variable, we compute a bias-corrected SNR estimate appropriate for single-neuron analysis and use the bootstrap to assess its uncertainty. In the analyses of four systems neuroscience experiments, we show that the SNRs are -10 dB to -3 dB for guinea pig auditory cortex neurons, -18 dB to -7 dB for rat thalamic neurons, -28 dB to -14 dB for monkey hippocampal neurons, and -29 dB to -20 dB for human subthalamic neurons. The new SNR definition makes explicit in the measure commonly used for physical systems the often-quoted observation that single neurons have low SNRs. The neuron's spiking history is frequently a more informative covariate for predicting spiking propensity than the applied stimulus. Our new SNR definition extends to any GLM system in which the factors modulating the response can be expressed as separate components of a likelihood function.
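A rough, hedged sketch of the idea: fit a history-only ("noise") PP-GLM and a history-plus-stimulus PP-GLM to binned spikes, and summarize the stimulus contribution as the deviance it explains relative to the residual deviance of the full model, expressed in dB. The paper's estimator additionally includes a chi-square bias correction and bootstrap uncertainty, which are omitted here, so this is an approximation of the approach rather than the published formula.

```python
# Deviance-based, approximate SNR for a single simulated neuron.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
dt, n = 0.001, 60_000
stim = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")
stim = (stim - stim.mean()) / stim.std()

# Simulate spikes with a modest stimulus drive plus history dependence (refractoriness).
spikes = np.zeros(n, dtype=int)
for t in range(n):
    recent = spikes[max(0, t - 5):t].sum()          # spikes in the preceding 5 ms
    p = np.exp(np.log(20 * dt) + 0.5 * stim[t] - 4.0 * recent)
    spikes[t] = rng.random() < p

def history_matrix(s, lags=5):
    cols = []
    for k in range(1, lags + 1):
        shifted = np.roll(s, k).astype(float)
        shifted[:k] = 0.0                           # no wrap-around history
        cols.append(shifted)
    return np.column_stack(cols)

H = history_matrix(spikes)
X_noise = sm.add_constant(H)                                 # history-only model
X_full = sm.add_constant(np.column_stack([H, stim]))         # history + stimulus

dev_noise = sm.GLM(spikes, X_noise, family=sm.families.Poisson()).fit().deviance
dev_full = sm.GLM(spikes, X_full, family=sm.families.Poisson()).fit().deviance

snr_db = 10 * np.log10((dev_noise - dev_full) / dev_full)
print(f"approximate stimulus SNR: {snr_db:.1f} dB")
```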
Collapse
|
29
|
Deng X, Faghih RT, Barbieri R, Paulk AC, Asaad WF, Brown EN, Dougherty DD, Widge AS, Eskandar EN, Eden UT. Estimating a dynamic state to relate neural spiking activity to behavioral signals during cognitive tasks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2015:7808-13. [PMID: 26738103 PMCID: PMC6118213 DOI: 10.1109/embc.2015.7320203] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
An important question in neuroscience is understanding the relationship between high-dimensional electrophysiological data and complex, dynamic behavioral data. One general strategy to address this problem is to define a low-dimensional representation of essential cognitive features describing this relationship. Here we describe a general state-space method to model and fit a low-dimensional cognitive state process that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas. In particular, we apply this model to data recorded in the lateral prefrontal cortex (PFC) and caudate nucleus of non-human primates as they perform learning and adaptation in a rule-switching task. First, we define a model for a cognitive state process related to learning, and estimate the progression of this learning state through the experiments. Next, we formulate a point process generalized linear model to relate the spiking activity of each PFC and caudate neuron to the estimated learning state. Then, we compute the posterior densities of the cognitive state using a recursive Bayesian decoding algorithm. We demonstrate that accurate decoding of a learning state is possible with a simple point process model of population spiking. Our analyses also allow us to compare decoding accuracy across neural populations in the PFC and caudate nucleus.
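The first step described above, estimating a scalar learning state from binary trial outcomes with a random-walk state-space model, can be sketched with a simple grid-based recursive Bayesian filter. The link function, state-noise level, and simulated outcomes below are assumptions for illustration; the paper goes further and couples the state to spiking through point-process GLMs.

```python
# Grid-based recursive Bayesian filter for a random-walk "learning state"
# observed through Bernoulli (correct/incorrect) trial outcomes.
import numpy as np
from scipy.stats import norm
from scipy.special import expit

rng = np.random.default_rng(4)
n_trials = 200
true_x = np.cumsum(rng.normal(0, 0.15, n_trials)) + np.linspace(-2, 2, n_trials)
outcomes = rng.random(n_trials) < expit(true_x)      # correct / incorrect per trial

grid = np.linspace(-6, 6, 601)
sigma_w = 0.2                                        # assumed state-noise s.d.
posterior = norm.pdf(grid, 0, 1)                     # prior on the initial state
posterior /= posterior.sum()
state_mean = np.empty(n_trials)

transition = norm.pdf(grid[:, None] - grid[None, :], 0, sigma_w)
for t in range(n_trials):
    prior = transition @ posterior                   # one-step prediction
    p_correct = expit(grid)
    like = p_correct if outcomes[t] else 1 - p_correct
    posterior = prior * like
    posterior /= posterior.sum()
    state_mean[t] = grid @ posterior                 # posterior mean of the state

print("estimated learning state on the last 5 trials:", np.round(state_mean[-5:], 2))
```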
Collapse
Affiliation(s)
- Xinyi Deng
- Graduate Program in Statistics, Boston University, Boston, MA 02215 USA
| | - Rose T. Faghih
- Department of Brain and Cognitive Sciences, and the Neuroscience Statistics Research Laboratory at Massachusetts Institute of Technology, Cambridge, MA 02139 USA, and also with Massachusetts General Hospital, Boston, MA 02114 USA
| | - Riccardo Barbieri
- Neuroscience Statistics Research Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114 USA, and also with Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Angelique C. Paulk
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114 USA
| | - Wael F. Asaad
- Departments of Neurosurgery and Neuroscience, Alpert Medical School, Brown University and Rhode Island Hospital, Providence, RI 02912 USA
| | - Emery N. Brown
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115 USA, and also with the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Darin D. Dougherty
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114 USA
| | - Alik S. Widge
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114 USA, and also with Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Emad N. Eskandar
- Nayef Al-Rodhan Laboratories, Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114 USA
| | - Uri T. Eden
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215 USA
| |
Collapse
|
30
|
Chu CJ, Tanaka N, Diaz J, Edlow BL, Wu O, Hämäläinen M, Stufflebeam S, Cash SS, Kramer MA. EEG functional connectivity is partially predicted by underlying white matter connectivity. Neuroimage 2014; 108:23-33. [PMID: 25534110 DOI: 10.1016/j.neuroimage.2014.12.033] [Citation(s) in RCA: 79] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2014] [Revised: 12/09/2014] [Accepted: 12/11/2014] [Indexed: 01/15/2023] Open
Abstract
Over the past decade, networks have become a leading model to illustrate both the anatomical relationships (structural networks) and the coupling of dynamic physiology (functional networks) linking separate brain regions. The relationship between these two levels of description remains incompletely understood and an area of intense research interest. In particular, it is unclear how cortical currents relate to underlying brain structural architecture. In addition, although theory suggests that brain communication is highly frequency dependent, how structural connections influence overlying functional connectivity in different frequency bands has not been previously explored. Here we relate functional networks, inferred from statistical associations between source-imaged EEG signals, to underlying cortico-cortical structural brain connectivity determined by probabilistic white matter tractography. We evaluate spontaneous fluctuating cortical brain activity over a long time scale (minutes) and relate inferred functional networks to underlying structural connectivity for broadband signals, as well as in seven distinct frequency bands. We find that cortical networks derived from source EEG estimates partially reflect both direct and indirect underlying white matter connectivity in all frequency bands evaluated. In addition, we find that when structural support is absent, functional connectivity is significantly reduced for high frequency bands compared to low frequency bands. The association between cortical currents and underlying white matter connectivity highlights the obligatory interdependence of functional and structural networks in the human brain. The increased dependence on structural support for the coupling of higher frequency brain rhythms provides new evidence for how underlying anatomy directly shapes emergent brain dynamics at fast time scales.
Collapse
Affiliation(s)
- C J Chu
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
| | - N Tanaka
- Harvard Medical School, Boston, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - J Diaz
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
| | - B L Edlow
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - O Wu
- Harvard Medical School, Boston, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - M Hämäläinen
- Harvard Medical School, Boston, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - S Stufflebeam
- Harvard Medical School, Boston, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - S S Cash
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
| | - M A Kramer
- Department of Mathematics and Statistics, Boston University, Boston, MA, USA
| |
Collapse
|
31
|
Pnevmatikakis EA, Rad KR, Huggins J, Paninski L. Fast Kalman Filtering and Forward–Backward Smoothing via a Low-Rank Perturbative Approach. J Comput Graph Stat 2014. [DOI: 10.1080/10618600.2012.760461] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
32
|
Aarts E, Verhage M, Veenvliet JV, Dolan CV, van der Sluis S. A solution to dependency: using multilevel analysis to accommodate nested data. Nat Neurosci 2014; 17:491-6. [PMID: 24671065 DOI: 10.1038/nn.3648] [Citation(s) in RCA: 400] [Impact Index Per Article: 36.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2013] [Accepted: 01/10/2014] [Indexed: 12/11/2022]
|
33
|
Prerau MJ, Lipton PA, Eichenbaum HB, Eden UT. Characterizing context-dependent differential firing activity in the hippocampus and entorhinal cortex. Hippocampus 2014; 24:476-92. [PMID: 24436108 DOI: 10.1002/hipo.22243] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/09/2014] [Indexed: 11/06/2022]
Abstract
The rat hippocampus and entorhinal cortex have been shown to possess neurons with place fields that modulate their firing properties under different behavioral contexts. Such context-dependent changes in neural activity are commonly studied through electrophysiological experiments in which a rat performs a continuous spatial alternation task on a T-maze. Previous research has analyzed context-based differential firing during this task by describing differences in the mean firing activity between left-turn and right-turn experimental trials. In this article, we develop qualitative and quantitative methods to characterize and compare changes in trial-to-trial firing rate variability for sets of experimental contexts. We apply these methods to cells in the CA1 region of hippocampus and in the dorsocaudal medial entorhinal cortex (dcMEC), characterizing the context-dependent differences in spiking activity during spatial alternation. We identify a subset of cells with context-dependent changes in firing rate variability. Additionally, we show that dcMEC populations encode turn direction uniformly throughout the T-maze stem, whereas CA1 populations encode context at major waypoints in the spatial trajectory. Our results suggest scenarios in which individual cells that sparsely provide information on turn direction might combine in the aggregate to produce a robust population encoding.
Collapse
Affiliation(s)
- Michael J Prerau
- Graduate Program in Neuroscience; Center for Memory and Brain; Massachusetts General Hospital, Department of Anesthesia, Critical Care, and Pain Medicine
| | | | | | | |
Collapse
|
34
|
Citi L, Ba D, Brown EN, Barbieri R. Likelihood Methods for Point Processes with Refractoriness. Neural Comput 2014; 26:237-63. [DOI: 10.1162/neco_a_00548] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Likelihood-based encoding models founded on point processes have received significant attention in the literature because of their ability to reveal the information encoded by spiking neural populations. We propose an approximation to the likelihood of a point-process model of neurons that holds under assumptions about the continuous time process that are physiologically reasonable for neural spike trains: the presence of a refractory period, the predictability of the conditional intensity function, and its integrability. These are properties that apply to a large class of point processes arising in applications other than neuroscience. The proposed approach has several advantages over conventional ones. In particular, one can use standard fitting procedures for generalized linear models based on iteratively reweighted least squares while improving the accuracy of the approximation to the likelihood and reducing bias in the estimation of the parameters of the underlying continuous-time model. As a result, the proposed approach can use a larger bin size to achieve the same accuracy as conventional approaches would with a smaller bin size. This is particularly important when analyzing neural data with high mean and instantaneous firing rates. We demonstrate these claims on simulated and real neural spiking activity. By allowing a substantive increase in the required bin size, our algorithm has the potential to lower the barrier to the use of point-process methods in an increasing number of applications.
Collapse
Affiliation(s)
- Luca Citi
- Department of Anesthesia, Massachusetts General Hospital–Harvard Medical School, Boston, MA 02129, and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02142, U.S.A
| | - Demba Ba
- Department of Anesthesia, Massachusetts General Hospital–Harvard Medical School, Boston, MA 02129, and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02142, U.S.A
| | - Emery N. Brown
- Department of Anesthesia, Massachusetts General Hospital–Harvard Medical School, Boston, MA 02129, and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02142, U.S.A
| | - Riccardo Barbieri
- Department of Anesthesia, Massachusetts General Hospital–Harvard Medical School, Boston, MA 02129, and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02142, U.S.A
| |
Collapse
|
35
|
Chen Z, Gomperts SN, Yamamoto J, Wilson MA. Neural representation of spatial topology in the rodent hippocampus. Neural Comput 2014; 26:1-39. [PMID: 24102128 PMCID: PMC3967246 DOI: 10.1162/neco_a_00538] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Pyramidal cells in the rodent hippocampus often exhibit clear spatial tuning in navigation. Although it has been long suggested that pyramidal cell activity may underlie a topological code rather than a topographic code, it remains unclear whether an abstract spatial topology can be encoded in the ensemble spiking activity of hippocampal place cells. Using a statistical approach developed previously, we investigate this question and related issues in greater detail. We recorded ensembles of hippocampal neurons as rodents freely foraged in one- and two-dimensional spatial environments and used a "decode-to-uncover" strategy to examine the temporally structured patterns embedded in the ensemble spiking activity in the absence of observed spatial correlates during periods of rodent navigation or awake immobility. Specifically, the spatial environment was represented by a finite discrete state space. Trajectories across spatial locations ("states") were associated with consistent hippocampal ensemble spiking patterns, which were characterized by a state transition matrix. From this state transition matrix, we inferred a topology graph that defined the connectivity in the state space. In both one- and two-dimensional environments, the extracted behavior patterns from the rodent hippocampal population codes were compared against randomly shuffled spike data. In contrast to a topographic code, our results support the efficiency of topological coding in the presence of sparse sample size and fuzzy space mapping. This computational approach allows us to quantify the variability of ensemble spiking activity, examine hippocampal population codes during off-line states, and quantify the topological complexity of the environment.
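The final bookkeeping step described above, turning an estimated state-transition matrix into a topology graph, is easy to illustrate. In the sketch below a synthetic decoded sequence on a ring stands in for the decoded hippocampal states, and the transition-probability threshold is an assumption of the example.

```python
# From a decoded state sequence to a state-transition matrix to a topology graph.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_steps = 8, 5000
seq = np.zeros(n_steps, dtype=int)
for t in range(1, n_steps):
    seq[t] = (seq[t - 1] + rng.choice([-1, 0, 1])) % n_states   # ring topology

counts = np.zeros((n_states, n_states))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)                  # transition matrix

adjacency = (P > 0.05) & ~np.eye(n_states, dtype=bool)          # assumed threshold
edges = [(i, j) for i in range(n_states) for j in range(i + 1, n_states)
         if adjacency[i, j] or adjacency[j, i]]
print("inferred edges:", edges)   # should recover the ring's neighbor structure
```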
Collapse
Affiliation(s)
- Zhe Chen
- Department of Brain and Cognitive Sciences and Picower Institute for Learning and Memory, MIT, Cambridge, MA 02139, U.S.A.
| | | | | | | |
Collapse
|
36
|
Kramer MA, Eden UT. Assessment of cross-frequency coupling with confidence using generalized linear models. J Neurosci Methods 2013; 220:64-74. [PMID: 24012829 DOI: 10.1016/j.jneumeth.2013.08.006] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2013] [Revised: 08/05/2013] [Accepted: 08/06/2013] [Indexed: 01/25/2023]
Abstract
BACKGROUND Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact - and the function of these interactions - remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. NEW METHOD Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. RESULTS We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. COMPARISON WITH EXISTING METHODS Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. CONCLUSIONS The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC.
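A simplified stand-in for the GLM-CFC idea: regress the amplitude envelope of a fast rhythm on sine and cosine functions of the phase of a slow rhythm and summarize coupling by the modulation depth of the fitted curve. The published procedure uses a spline phase basis and bootstrap confidence bounds; the two-term Fourier basis, simulated signal, and frequency bands below are assumptions of this sketch.

```python
# GLM-based estimate of phase-amplitude coupling on a synthetic coupled signal.
import numpy as np
import statsmodels.api as sm
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(6)
fs, T = 1000, 60
t = np.arange(0, T, 1 / fs)

slow = np.cos(2 * np.pi * 6 * t)                              # 6 Hz rhythm
fast = (1 + 0.6 * slow) * np.cos(2 * np.pi * 80 * t)          # 80 Hz, amplitude coupled to 6 Hz phase
signal = slow + 0.5 * fast + 0.5 * rng.standard_normal(t.size)

def band(x, lo, hi):
    sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(band(signal, 4, 8)))
amp = np.abs(hilbert(band(signal, 70, 90)))

X = sm.add_constant(np.column_stack([np.cos(phase), np.sin(phase)]))
fit = sm.GLM(amp, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
fitted = fit.predict(X)
modulation = (fitted.max() - fitted.min()) / fitted.mean()
print(f"GLM-based modulation index: {modulation:.2f}")
```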
Collapse
Affiliation(s)
- M A Kramer
- Department of Mathematics and Statistics, Boston University, 111 Cummington Mall, Boston, MA 02215, United States.
| | | |
Collapse
|
37
|
Shi JV, Wielaard J, Smith RT, Sajda P. Perceptual decision making "through the eyes" of a large-scale neural model of v1. Front Psychol 2013; 4:161. [PMID: 23626580 PMCID: PMC3630335 DOI: 10.3389/fpsyg.2013.00161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2012] [Accepted: 03/14/2013] [Indexed: 11/13/2022] Open
Abstract
Sparse coding has been posited as an efficient information processing strategy employed by sensory systems, particularly visual cortex. Substantial theoretical and experimental work has focused on the issue of sparse encoding, namely how the early visual system maps the scene into a sparse representation. In this paper we investigate the complementary issue of sparse decoding: given activity generated by a realistic mapping of the visual scene to neuronal spike trains, how do downstream neurons best utilize this representation to generate a “decision”? Specifically, we consider both sparse (L1-regularized) and non-sparse (L2-regularized) linear decoding for mapping the neural dynamics of a large-scale spiking neuron model of primary visual cortex (V1) to a two-alternative forced-choice (2-AFC) perceptual decision. We show that while both sparse and non-sparse linear decoding yield discrimination results quantitatively consistent with human psychophysics, sparse linear decoding is more efficient in terms of the number of selected informative dimensions.
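The decoding comparison can be mimicked with off-the-shelf tools: an L1- versus L2-regularized logistic readout of a binary choice from a population of spike counts in which only a few neurons are informative. The synthetic population below is an assumption; the paper decodes from a large-scale spiking model of V1.

```python
# Sparse (L1) versus non-sparse (L2) linear decoding of a binary choice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_neurons, n_informative = 400, 200, 10
labels = rng.integers(0, 2, n_trials)
X = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
X[:, :n_informative] += 2.0 * labels[:, None]       # only the first 10 neurons carry the choice

for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, C=0.1, solver=solver, max_iter=2000)
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    clf.fit(X, labels)
    n_used = int(np.sum(np.abs(clf.coef_) > 1e-6))
    print(f"{penalty}: accuracy={acc:.2f}, nonzero weights={n_used}/{n_neurons}")
```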
Collapse
Affiliation(s)
- Jianing V Shi
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
| | | | | | | |
Collapse
|
38
|
Cajigas I, Malik WQ, Brown EN. nSTAT: open-source neural spike train analysis toolbox for Matlab. J Neurosci Methods 2012; 211:245-64. [PMID: 22981419 PMCID: PMC3491120 DOI: 10.1016/j.jneumeth.2012.08.009] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2012] [Revised: 08/06/2012] [Accepted: 08/07/2012] [Indexed: 11/23/2022]
Abstract
Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point process - generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms together with problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT--an open source neural spike train analysis toolbox for Matlab®. By adopting an object-oriented programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems.
Collapse
Affiliation(s)
- I Cajigas
- Department of Anesthesia and Critical Care, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA.
| | | | | |
Collapse
|
39
|
Banerjee A, Dean HL, Pesaran B. Parametric models to relate spike train and LFP dynamics with neural information processing. Front Comput Neurosci 2012; 6:51. [PMID: 22837745 PMCID: PMC3403111 DOI: 10.3389/fncom.2012.00051] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2012] [Accepted: 07/03/2012] [Indexed: 11/28/2022] Open
Abstract
Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task- or stimulus-specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset-time correlations from single trials using a previously published data set in which significant correlation had previously been obtained only through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.
Collapse
Affiliation(s)
- Arpan Banerjee
- *Correspondence: Arpan Banerjee, Center for Neural Science, New York University, 4 Washington Place, Room 809, New York, NY 10003, USA
| | | | | |
Collapse
|
40
|
Santaniello S, Montgomery EB, Gale JT, Sarma SV. Non-stationary discharge patterns in motor cortex under subthalamic nucleus deep brain stimulation. Front Integr Neurosci 2012; 6:35. [PMID: 22754509 PMCID: PMC3385519 DOI: 10.3389/fnint.2012.00035] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2012] [Accepted: 05/31/2012] [Indexed: 11/29/2022] Open
Abstract
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) directly modulates the basal ganglia (BG), but how such stimulation impacts the cortex upstream is largely unknown. There is evidence of cortical activation in 6-hydroxydopamine (OHDA)-lesioned rodents and facilitation of motor evoked potentials in Parkinson's disease (PD) patients, but the impact of the DBS settings on the cortical activity in normal vs. Parkinsonian conditions is still debated. We use point process models to analyze non-stationary activation patterns and inter-neuronal dependencies in the motor and sensory cortices of two non-human primates during STN DBS. These features are enhanced after treatment with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), which causes a consistent PD-like motor impairment, while high-frequency (HF) DBS (i.e., ≥100 Hz) strongly reduces the short-term patterns (period: 3–7 ms) both before and after MPTP treatment, and elicits a short-latency post-stimulus activation. Low-frequency DBS (i.e., ≤50 Hz), instead, has negligible effects on the non-stationary features. Finally, by using tools from the information theory [i.e., receiver operating characteristic (ROC) curve and information rate (IR)], we show that the predictive power of these models is dependent on the DBS settings, i.e., the probability of spiking of the cortical neurons (which is captured by the point process models) is significantly conditioned on the timely delivery of the DBS input. This dependency increases with the DBS frequency and is significantly larger for high- vs. low-frequency DBS. Overall, the selective suppression of non-stationary features and the increased modulation of the spike probability suggest that HF STN DBS enhances the neuronal activation in motor and sensory cortices, presumably because of reinforcement mechanisms, which perhaps involve the overlap between feedback antidromic and feed-forward orthodromic responses along the BG-thalamo-cortical loop.
Collapse
Affiliation(s)
- Sabato Santaniello
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, USA
| | | | | | | |
Collapse
|
41
|
Saxena S, Schieber MH, Thakor NV, Sarma SV. Aggregate input-output models of neuronal populations. IEEE Trans Biomed Eng 2012; 59:2030-9. [PMID: 22552544 DOI: 10.1109/tbme.2012.2196699] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
An extraordinary amount of electrophysiological data has been collected from various brain nuclei to help us understand how neural activity in one region influences another region. In this paper, we exploit the point process modeling (PPM) framework and describe a method for constructing aggregate input-output (IO) stochastic models that predict spiking activity of a population of neurons in the "output" region as a function of the spiking activity of a population of neurons in the "input" region. We first build PPMs of each output neuron as a function of all input neurons, and then cluster the output neurons using the model parameters. Output neurons that lie within the same cluster have the same functional dependence on the input neurons. We first applied our method to simulated data, and successfully uncovered the predetermined relationship between the two regions. We then applied our method to experimental data to understand the input-output relationship between motor cortical neurons and 1) somatosensory and 2) premotor cortical neurons during a behavioral task. Our aggregate IO models highlighted interesting physiological dependences including relative effects of inhibition/excitation from input neurons and extrinsic factors on output neurons.
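A minimal sketch of the aggregate input-output recipe: fit a Poisson GLM for each "output" neuron as a function of binned "input"-population counts, then cluster the output neurons by their fitted coefficient vectors. The two-group synthetic population below is an assumption made so that the clustering has a known answer.

```python
# Per-output-neuron GLMs followed by clustering of the coefficient vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(8)
n_bins, n_in, n_out = 4000, 8, 12
X_in = rng.poisson(1.0, size=(n_bins, n_in)).astype(float)

# Two ground-truth dependence patterns shared by two groups of output neurons.
w_groups = np.zeros((2, n_in))
w_groups[0, :4] = 0.3            # group 0 excited by inputs 0-3
w_groups[1, 4:] = -0.3           # group 1 inhibited by inputs 4-7
membership = np.repeat([0, 1], n_out // 2)

coefs = np.zeros((n_out, n_in))
for j in range(n_out):
    rate = np.exp(-1.0 + X_in @ w_groups[membership[j]])
    y = rng.poisson(rate)
    coefs[j] = PoissonRegressor(alpha=1e-3, max_iter=1000).fit(X_in, y).coef_

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coefs)
print("recovered cluster labels:", labels)   # should split neurons 0-5 from 6-11
```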
Collapse
Affiliation(s)
- Shreya Saxena
- Department of Electrical Engineering and Computer Sciences, Massachusetts Institute of Technology, Cambridge MA 02139, USA.
| | | | | | | |
Collapse
|
42
|
State-space analysis of time-varying higher-order spike correlation for multiple neural spike train data. PLoS Comput Biol 2012; 8:e1002385. [PMID: 22412358 PMCID: PMC3297562 DOI: 10.1371/journal.pcbi.1002385] [Citation(s) in RCA: 62] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2011] [Accepted: 12/28/2011] [Indexed: 11/23/2022] Open
Abstract
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand. Nearly half a century ago, the Canadian psychologist D. O. Hebb postulated the formation of assemblies of tightly connected cells in cortical recurrent networks because of changes in synaptic weight (Hebb's learning rule) by repetitive sensory stimulation of the network. Consequently, the activation of such an assembly for processing sensory or behavioral information is likely to be expressed by precisely coordinated spiking activities of the participating neurons. However, the available analysis techniques for multiple parallel neural spike data do not allow us to reveal the detailed structure of transiently active assemblies as indicated by their dynamical pairwise and higher-order spike correlations. Here, we construct a state-space model of dynamic spike interactions, and present a recursive Bayesian method that makes it possible to trace multiple neurons exhibiting such precisely coordinated spiking activities in a time-varying manner. We also formulate a hypothesis test of the underlying dynamic spike correlation, which enables us to detect the assemblies activated in association with behavioral events. Therefore, the proposed method can serve as a useful tool to test Hebb's cell assembly hypothesis.
Collapse
|
43
|
Eden UT, Amirnovin R, Eskandar EN. Using point process models to describe rhythmic spiking in the subthalamic nucleus of Parkinson's patients. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2012; 2011:757-60. [PMID: 22254421 DOI: 10.1109/iembs.2011.6090173] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Neurological disease is often associated with changes in firing activity in specific brain areas. Accurate statistical models of neural spiking can provide insight into the mechanisms by which the disease develops and clinical symptoms manifest. Point process theory provides a powerful framework for constructing, fitting, and evaluating the quality of neural spiking models. We illustrate an application of point process modeling to the problem of characterizing abnormal oscillatory firing patterns of neurons in the subthalamic nucleus (STN) of patients with Parkinson's disease (PD). We characterize the firing properties of these neurons by constructing conditional intensity models using spline basis functions that relate the spiking of each neuron to movement variables and the neuron's past firing history, both at short and long time scales. By calculating maximum likelihood estimators for all of the parameters and their significance levels, we are able to describe the relative propensity of aberrant STN spiking in terms of factors associated with voluntary movements, with intrinsic properties of the neurons, and factors that may be related to dysregulated network dynamics.
Collapse
Affiliation(s)
- Uri T Eden
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA.
| | | | | |
Collapse
|
44
|
Amarasingham A, Harrison MT, Hatsopoulos NG, Geman S. Conditional modeling and the jitter method of spike resampling. J Neurophysiol 2011; 107:517-31. [PMID: 22031767 DOI: 10.1152/jn.00633.2011] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The existence and role of fine-temporal structure in the spiking activity of central neurons is the subject of an enduring debate among physiologists. To a large extent, the problem is a statistical one: what inferences can be drawn from neurons monitored in the absence of full control over their presynaptic environments? In principle, properly crafted resampling methods can still produce statistically correct hypothesis tests. We focus on the approach to resampling known as jitter. We review a wide range of jitter techniques, illustrated by both simulation experiments and selected analyses of spike data from motor cortical neurons. We rely on an intuitive and rigorous statistical framework known as conditional modeling to reveal otherwise hidden assumptions and to support precise conclusions. Among other applications, we review statistical tests for exploring any proposed limit on the rate of change of spiking probabilities, exact tests for the significance of repeated fine-temporal patterns of spikes, and the construction of acceptance bands for testing any purported relationship between sensory or motor variables and synchrony or other fine-temporal events.
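The basic interval-jitter test reviewed here can be sketched directly: re-draw each spike of one train uniformly within a fixed window, which preserves coarse rate structure while destroying fine timing, and compare the observed number of near-coincidences with a second train to the jittered distribution. The window lengths and synthetic trains below are assumptions; the paper treats a much wider family of jitter procedures and their exact conditional justifications.

```python
# Interval-jitter resampling test for excess fine-temporal coincidences.
import numpy as np

rng = np.random.default_rng(9)
T, rate = 100.0, 10.0                       # seconds, Hz
t1 = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

# Second train: some spikes are near-coincident copies of train-1 spikes,
# the rest are independent background spikes.
copy_mask = rng.random(t1.size) < 0.2
synced = t1[copy_mask] + rng.normal(0, 0.001, copy_mask.sum())
t2 = np.sort(np.concatenate([synced, rng.uniform(0, T, rng.poisson(0.8 * rate * T))]))

def n_coincidences(a, b, window=0.005):
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, b.size - 1)])
    right = np.abs(a - b[np.clip(idx, 0, b.size - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

def interval_jitter(spikes, width=0.02):
    # Re-draw each spike uniformly within its fixed jitter window.
    windows = np.floor(spikes / width)
    return np.sort(windows * width + rng.uniform(0, width, spikes.size))

observed = n_coincidences(t1, t2)
null = np.array([n_coincidences(interval_jitter(t1), t2) for _ in range(500)])
p_value = (1 + np.sum(null >= observed)) / (null.size + 1)
print(f"observed={observed}, jitter null mean={null.mean():.1f}, p~{p_value:.3f}")
```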
Collapse
Affiliation(s)
- Asohan Amarasingham
- Department of Mathematics, The City College of New York, and Program in Cognitive Neuroscience, The Graduate Center, City University of New York, New York, New York, USA
| | | | | | | |
Collapse
|
45
|
Neuroplasticity of the sensorimotor cortex during learning. Neural Plast 2011; 2011:310737. [PMID: 21949908 PMCID: PMC3178113 DOI: 10.1155/2011/310737] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2011] [Accepted: 07/12/2011] [Indexed: 11/17/2022] Open
Abstract
We will discuss some of the current issues in understanding plasticity in the sensorimotor (SM) cortices on the behavioral, neurophysiological, and synaptic levels. We will focus our paper on reaching and grasping movements in the rat. In addition, we will discuss our preliminary work utilizing inhibition of protein kinase Mζ (PKMζ), which has recently been shown necessary and sufficient for the maintenance of long-term potentiation (LTP) (Ling et al., 2002). With this new knowledge and inhibitors to this system, as well as the ability to overexpress this system, we can start to directly modulate LTP and determine its influence on behavior as well as network level processing dependent at least in part due to this form of LTP. We will also briefly introduce the use of brain machine interface (BMI) paradigms to ask questions about sensorimotor plasticity and discuss current analysis techniques that may help in our understanding of neuroplasticity.
Collapse
|
46
|
Sarma SV, Nguyen DP, Czanner G, Wirth S, Wilson MA, Suzuki W, Brown EN. Computing confidence intervals for point process models. Neural Comput 2011; 23:2731-45. [PMID: 21851280 DOI: 10.1162/neco_a_00198] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
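The parametric-bootstrap recipe is easy to sketch for the simple GLM case: fit the model, simulate surrogate spike trains from the fitted intensity, refit each surrogate, and take percentile bounds on the parameters (or on any derived quantity such as a tuning curve). The covariate, model, and number of bootstrap samples below are assumptions of the example.

```python
# Parametric bootstrap confidence bounds for a point-process (Poisson) GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
dt, n = 0.001, 30_000
x = np.sin(2 * np.pi * np.arange(n) * dt / 2.0)          # slow covariate
X = sm.add_constant(x)

true_beta = np.array([np.log(15 * dt), 0.8])
spikes = rng.poisson(np.exp(X @ true_beta))

fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()

boot = np.empty((200, 2))
for b in range(boot.shape[0]):
    y_sim = rng.poisson(np.exp(X @ fit.params))          # simulate from the fitted model
    boot[b] = sm.GLM(y_sim, X, family=sm.families.Poisson()).fit().params

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, est, l, h in zip(["intercept", "slope"], fit.params, lo, hi):
    print(f"{name}: {est:.3f}  (95% bootstrap CI {l:.3f} to {h:.3f})")
```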
Collapse
Affiliation(s)
- Sridevi V Sarma
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
| | | | | | | | | | | | | |
Collapse
|
47
|
Prerau MJ, Eden UT. A general likelihood framework for characterizing the time course of neural activity. Neural Comput 2011; 23:2537-66. [PMID: 21732865 DOI: 10.1162/neco_a_00185] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We develop a general likelihood-based framework for use in the estimation of neural firing rates, which is designed to choose the temporal smoothing parameters that maximize the likelihood of missing data. This general framework is algorithm-independent and thus can be applied to a multitude of established methods for firing rate or conditional intensity estimation. As a simple example of the use of the general framework, we apply it to the peristimulus time histogram and kernel smoother, the methods most widely used for firing rate estimation in the electrophysiological literature and practice. In doing so, we illustrate how the use of the framework can employ the general point process likelihood as a principled cost function and can provide substantial improvements in estimation accuracy for even the most basic of rate estimation algorithms. In particular, the resultant kernel smoother is simple to implement, efficient to compute, and can accurately determine the bandwidth of a given rate process from individual spike trains. We perform a simulation study to illustrate how the likelihood framework enables the kernel smoother to pick the bandwidth parameter that best predicts missing data, and we show applications to real experimental spike train data. Additionally, we discuss how the general likelihood framework may be used in conjunction with more sophisticated methods for firing rate and conditional intensity estimation and suggest possible applications.
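As a concrete instance of the bandwidth-selection idea applied to the kernel smoother, the sketch below smooths spikes from half of the trials with a Gaussian kernel and picks the bandwidth that maximizes the Poisson point-process log-likelihood of the held-out trials. The trial structure, rate profile, and candidate bandwidths are assumptions; the paper develops the framework more generally and algorithm-independently.

```python
# Choose a kernel-smoother bandwidth by held-out point-process likelihood.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(11)
dt, n_bins, n_trials = 0.001, 2000, 40
t = np.arange(n_bins) * dt
rate = 10 + 40 * np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2)       # Hz, transient at 1 s
spikes = rng.poisson(rate * dt, size=(n_trials, n_bins))

train, test = spikes[::2], spikes[1::2]                      # split trials in half

def heldout_loglik(bandwidth_s):
    sigma_bins = bandwidth_s / dt
    rate_hat = gaussian_filter1d(train.mean(axis=0), sigma_bins) / dt   # Hz
    lam = np.clip(rate_hat * dt, 1e-12, None)
    return float(np.sum(test * np.log(lam) - lam))           # Poisson log-likelihood

bandwidths = np.array([0.005, 0.01, 0.025, 0.05, 0.1, 0.25])
lls = [heldout_loglik(bw) for bw in bandwidths]
best = bandwidths[int(np.argmax(lls))]
print(f"best bandwidth on held-out trials: {best * 1000:.0f} ms")
```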
Collapse
Affiliation(s)
- Michael J Prerau
- Graduate Program in Neuroscience, Boston University, Boston, MA 02215, USA.
| | | |
Collapse
|
48
|
Salimpour Y, Soltanian-Zadeh H, Salehi S, Emadi N, Abouzari M. Neuronal spike train analysis in likelihood space. PLoS One 2011; 6:e21256. [PMID: 21738626 PMCID: PMC3124490 DOI: 10.1371/journal.pone.0021256] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2010] [Accepted: 05/26/2011] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Conventional methods for spike train analysis are predominantly based on the rate function. Additionally, many experiments have utilized a temporal coding mechanism. Several techniques have been used for analyzing these two sources of information separately, but using both sources in a single framework remains a challenging problem. Here, an innovative technique is proposed for spike train analysis that considers both rate and temporal information. METHODOLOGY/PRINCIPAL FINDINGS A point process modeling approach is used to estimate the stimulus-conditional distribution based on observations of repeated trials. The extended Kalman filter is applied for estimation of the parameters in a parametric model. The marked point process strategy is used in order to extend this model from a single neuron to an entire neuronal population. Each spike train is transformed into a binary vector and then projected from the observation space onto the likelihood space. This projection generates a newly structured space that integrates temporal and rate information, thus improving performance of distribution-based classifiers. In this space, the stimulus-specific information is used as a distance metric between two stimuli. To illustrate the advantages of the proposed technique, the spiking activity of inferior temporal cortex neurons in the macaque monkey is analyzed in both the observation and likelihood spaces. Based on goodness-of-fit, performance of the estimation method is demonstrated and the results are subsequently compared with the firing rate-based framework. CONCLUSIONS/SIGNIFICANCE Given the integration of rate and temporal information and the improvement in neural discrimination of stimuli, it may be concluded that the likelihood space generates a more accurate representation of the stimulus space. Further, the neuronal mechanisms underlying visual object categorization may also be addressed in this framework.
Affiliation(s)
- Yousef Salimpour
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland, United States of America
- Hamid Soltanian-Zadeh
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Center of Excellence for Control and Intelligent Processing, Department of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Image Analysis Laboratory, Department of Radiology, Henry Ford Health System, Detroit, Michigan, United States of America
- Sina Salehi
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Research Group for Brain and Cognitive Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Nazli Emadi
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Research Group for Brain and Cognitive Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehdi Abouzari
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Research Group for Brain and Cognitive Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
49
Haslinger R, Pipa G, Brown E. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking. Neural Comput 2011; 22:2477-506. [PMID: 20608868 DOI: 10.1162/neco_a_00015] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
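The analytically corrected rescaling lends itself to a short sketch. The Python below is my reading of the construction and should be checked against the paper: with per-bin spike probabilities p_k from the fitted discrete-time model, each bin contributes an integrated intensity q_k = -log(1 - p_k); a rescaled ISI sums the q_k of the bins strictly between two spikes plus a random draw from an exponential truncated to [0, q] for the bin containing the spike, and the resulting values are compared against the unit-rate exponential with a KS test. Function names and the exact form of the within-bin draw are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def discrete_time_rescale(spikes, p, seed=0):
    """Rescale the ISIs of a binary spike train given a discrete-time model's per-bin
    spike probabilities p, with a randomized correction for finite bin width."""
    rng = np.random.default_rng(seed)
    q = -np.log1p(-np.clip(p, 1e-12, 1 - 1e-12))      # integrated intensity per bin
    spike_bins = np.flatnonzero(spikes)
    xis = []
    for prev, cur in zip(spike_bins[:-1], spike_bins[1:]):
        bulk = q[prev + 1:cur].sum()                   # bins strictly between the two spikes
        # Spike location within its bin: exponential truncated to [0, q[cur]].
        tail = -np.log1p(-rng.random() * (1.0 - np.exp(-q[cur])))
        xis.append(bulk + tail)
    return np.asarray(xis)

def ks_goodness_of_fit(xis):
    """KS test of the rescaled intervals against the unit-rate exponential."""
    return stats.kstest(xis, "expon")
```

The paper's other adaptation, estimating the reference distribution by simulating spike trains from the fitted model, could reuse the same rescaling and simply replace the analytic exponential reference in the KS comparison with the simulated one.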
Affiliation(s)
- Robert Haslinger
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA.
50
Adaptation to a cortex-controlled robot attached at the pelvis and engaged during locomotion in rats. J Neurosci 2011; 31:3110-28. [PMID: 21414932 DOI: 10.1523/jneurosci.2335-10.2011] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023] Open
Abstract
Brain-machine interfaces (BMIs) should ideally adapt robustly across different tasks and daily activities. Most BMIs have used overpracticed tasks. Little is known about BMIs in dynamic environments. How are mechanically body-coupled BMIs integrated into ongoing rhythmic dynamics, for example, in locomotion? To examine this, we designed a novel BMI using neural discharge in the hindlimb/trunk motor cortex in rats during locomotion to control a robot attached at the pelvis. We tested neural adaptation when rats experienced (1) control locomotion, (2) "simple elastic load" (a robot load on locomotion without any BMI neural control), and (3) "BMI with elastic load" (in which the robot loaded locomotion and a BMI neural control could counter this load). Rats significantly offset applied loads with the BMI while preserving more normal pelvic height compared with load alone. Adaptation occurred over ∼100-200 step cycles in a trial. Firing rates increased in both loaded conditions compared with baseline. Mean phases of the discharge of cells in the step cycle shifted significantly between the BMI and the simple load condition. Over time, more BMI cells became positively correlated with the external force and modulated more deeply, and the network correlations of neurons on a 100 ms timescale increased. Loading alone showed none of these effects. The BMI neural changes of rate and force correlations persisted or increased over repeated trials. Our results show that rats have the capacity to use motor adaptation and motor learning to fairly rapidly engage hindlimb/trunk-coupled BMIs in their locomotion.