1
Meyer LM, Zamani M, Rokai J, Demosthenous A. Deep learning-based spike sorting: a survey. J Neural Eng 2024; 21:061003. PMID: 39454590. DOI: 10.1088/1741-2552/ad8b6c.
Abstract
Objective. Deep learning is increasingly permeating neuroscience, leading to a rise in signal-processing applications for extracellular recordings. These signals capture the activity of small neuronal populations, necessitating 'spike sorting' to assign action potentials (spikes) to their underlying neurons. With the rise in publications delving into new methodologies and techniques for deep learning-based spike sorting, it is crucial to synthesise these findings critically. This survey provides an in-depth evaluation of the approaches, methodologies and outcomes presented in recent articles, shedding light on the current state of the art. Approach. Twenty-four articles on deep learning-based spike sorting, published up to December 2023, have been examined. The proposed methods are divided into three sub-problems of spike sorting: spike detection, feature extraction and classification. Moreover, integrated systems, i.e. models that detect spikes and extract features or perform classification within a single network, are included. Main results. Although most algorithms have been developed for single-channel recordings, models utilising multi-channel data have already shown promising results, with efficient hardware implementations running quantised models on application-specific integrated circuits and field-programmable gate arrays. Convolutional neural networks have been used extensively for spike detection and classification because the data can be processed spatiotemporally while maintaining low-parameter models and increasing generalisation and efficiency. Autoencoders have been mainly utilised for dimensionality reduction, enabling subsequent clustering with standard methods. Integrated systems have also shown great potential in solving the spike sorting problem end to end. Significance. This survey explores recent articles on deep learning-based spike sorting, highlighting the capabilities of deep neural networks in overcoming associated challenges as well as the potential biases of certain models. Serving as a resource for both newcomers and seasoned researchers in the field, this work provides insights into the latest advancements and may inspire future model development.
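The autoencoder-plus-clustering pipeline summarised in this abstract can be sketched in a few lines. The toy example below is not any of the surveyed models: a linear autoencoder with tied weights (which recovers the principal subspace) compresses synthetic spike waveforms to two features, and a minimal k-means then clusters them. The waveform templates, noise level and learning rate are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic spike "templates" (48-sample waveforms) plus noise
t = np.linspace(0, 1, 48)
templates = np.stack([
    np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2),
    0.7 * np.exp(-((t - 0.4) / 0.08) ** 2) - 0.8 * np.exp(-((t - 0.6) / 0.07) ** 2),
])
labels = rng.integers(0, 2, size=400)
X = templates[labels] + 0.05 * rng.standard_normal((400, 48))
X = X - X.mean(axis=0)

# Linear autoencoder (encoder W, decoder W.T) trained by gradient descent;
# with tied weights this learns the principal subspace of the waveforms
W = 0.01 * rng.standard_normal((48, 2))
for _ in range(500):
    Z = X @ W                       # encode to 2-D features
    R = Z @ W.T                     # decode / reconstruct
    G = 2 * (R - X)                 # d(loss)/d(R) for squared error
    W -= 1e-3 * (G.T @ Z + X.T @ (G @ W)) / len(X)
Z = X @ W

# Minimal k-means on the 2-D features (a "standard method" for clustering)
C = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(20):
    assign = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    C = np.stack([Z[assign == k].mean(0) if np.any(assign == k) else C[k]
                  for k in range(2)])

# Cluster purity against the ground-truth templates (label order is arbitrary)
purity = max((assign == labels).mean(), (assign != labels).mean())
```

Because the two templates are well separated relative to the noise, the two learned features suffice for clustering, which is the division of labour the survey describes: the autoencoder handles dimensionality reduction, a standard clusterer does the rest.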
Affiliation(s)
- Luca M Meyer: not affiliated with any institution, Wiesbaden, Germany
- Majid Zamani: School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
- János Rokai: Institute of Cognitive Neurosciences and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Andreas Demosthenous: Department of Electronic and Electrical Engineering, University College London, London, United Kingdom
2
Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024; 27:2033-2045. PMID: 39242944. PMCID: PMC11452342. DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
Affiliation(s)
- Omid G Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran: Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering; Thomas Lord Department of Computer Science; Neuroscience Graduate Program; Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
3
Colins Rodriguez A, Perich MG, Miller LE, Humphries MD. Motor Cortex Latent Dynamics Encode Spatial and Temporal Arm Movement Parameters Independently. J Neurosci 2024; 44:e1777232024. PMID: 39060178. PMCID: PMC11358606. DOI: 10.1523/jneurosci.1777-23.2024.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where three male monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show that this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, and also argue that not all parameters of movement are defined by different trajectories of population activity.
Affiliation(s)
- Matt G Perich: Département de neurosciences, Faculté de médecine, Université de Montréal, Montreal, Quebec H3T 1J4, Canada; Québec Artificial Intelligence Institute (Mila), Montreal, Quebec H2S 3H1, Canada
- Lee E Miller: Department of Biomedical Engineering, Northwestern University, Chicago, Illinois 60208
- Mark D Humphries: School of Psychology, University of Nottingham, Nottingham NG7 2RD, United Kingdom
4
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques. Nat Commun 2024; 15:5738. PMID: 38982106. PMCID: PMC11233555. DOI: 10.1038/s41467-024-50203-5.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less were population codes and behavior impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between neurons in prefrontal cortex maintains a stable population code and context-invariant beliefs during naturalistic behavior.
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York City, NY, USA; Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
- Edoardo Balzani: Center for Neural Science, New York University, New York City, NY, USA; Flatiron Institute, Simons Foundation, New York, NY, USA
- Cristina Savin: Center for Neural Science, New York University, New York City, NY, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York City, NY, USA
5
Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. bioRxiv 2024:2023.05.26.542452. PMID: 37292834. PMCID: PMC10246015. DOI: 10.1101/2023.05.26.542452.
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
Affiliation(s)
- Matthew G. Perich: Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada; Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller: Department of Biomedical Engineering, Northwestern University, Chicago, USA
- Mark D. Humphries: School of Psychology, University of Nottingham, Nottingham, United Kingdom
6
Chen W, Wang Y, Yang Y. Efficient Estimation of Directed Connectivity in Nonlinear and Nonstationary Spiking Neuron Networks. IEEE Trans Biomed Eng 2024; 71:841-854. PMID: 37756180. DOI: 10.1109/tbme.2023.3319956.
Abstract
Objective. Studying directed connectivity within spiking neuron networks can help understand neural mechanisms. Existing methods assume linear time-invariant neural dynamics with a fixed time lag in information transmission, while spiking networks usually involve complex dynamics that are nonlinear and nonstationary, and have varying time lags. Methods. We develop a Gated Recurrent Unit (GRU)-Point Process (PP) method to estimate directed connectivity within spiking networks. We use a GRU to describe the dependency of the target neuron's current firing rate on the source neurons' past spiking events and a PP to relate the target neuron's firing rate to its current 0-1 spiking event. The GRU model uses recurrent states and gate/activation functions to deal with varying time lags, nonlinearity, and nonstationarity in a parameter-efficient manner. We estimate the model using maximum likelihood and compute directed information as our measure of directed connectivity. Results. We conduct simulations using artificial spiking networks and a biophysical model of Parkinson's disease to show that GRU-PP systematically addresses varying time lags, nonlinearity, and nonstationarity, and estimates directed connectivity with high accuracy and data efficiency. We also use a non-human-primate dataset to show that GRU-PP correctly identifies the biophysically plausible stronger PMd-to-M1 connectivity than M1-to-PMd connectivity during reaching. In all experiments, the GRU-PP consistently outperforms state-of-the-art methods. Conclusion. The GRU-PP method efficiently estimates directed connectivity in varying time lag, nonlinear, and nonstationary spiking neuron networks. Significance. The proposed method can serve as a directed connectivity analysis tool for investigating complex spiking neuron network dynamics.
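The model class the abstract describes can be sketched structurally. This is a hedged illustration, not the authors' implementation: a GRU state summarises the source neurons' spiking history (so no fixed transmission lag is assumed), and a per-bin Bernoulli point-process likelihood links the target neuron's firing probability to its 0-1 spike train. The weights below are random; in the actual method they would be fit by maximum likelihood. All dimensions and rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# GRU cell: the recurrent state summarises the source neurons' spiking
# history, accommodating varying time lags without a fixed-lag assumption
n_src, n_hid, T = 3, 8, 200
Wz, Wr, Wh = (0.3 * rng.standard_normal((n_hid, n_src + n_hid)) for _ in range(3))
w_out, b_out = 0.3 * rng.standard_normal(n_hid), -2.0

src_spikes = (rng.random((T, n_src)) < 0.1).astype(float)  # 0-1 source spike trains

h = np.zeros(n_hid)
rate = np.empty(T)
for t in range(T):
    x = np.concatenate([src_spikes[t], h])
    z = sigmoid(Wz @ x)                           # update gate
    r = sigmoid(Wr @ x)                           # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([src_spikes[t], r * h]))
    h = (1 - z) * h + z * h_tilde
    rate[t] = sigmoid(w_out @ h + b_out)          # target firing probability per bin

# Point-process (Bernoulli per bin) log-likelihood of the target spike train;
# maximising this over the GRU weights fits the model
tgt_spikes = (rng.random(T) < 0.12).astype(float)
ll = np.sum(tgt_spikes * np.log(rate) + (1 - tgt_spikes) * np.log(1 - rate))
```

The gating lets the network retain or discard history adaptively, which is how this model family handles the varying lags and nonstationarity the abstract emphasises.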
7
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258. PMCID: PMC10873612. DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
Affiliation(s)
- Parsa Vahidi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering; Neuroscience Graduate Program; Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
8
Taeckens EA, Shah S. A spiking neural network with continuous local learning for robust online brain machine interface. J Neural Eng 2024; 20:066042. PMID: 38173230. DOI: 10.1088/1741-2552/ad1787.
Abstract
Objective. Spiking neural networks (SNNs) are powerful tools that are well suited for brain machine interfaces (BMI) due to their similarity to biological neural systems and computational efficiency. They have shown comparable accuracy to state-of-the-art methods, but current training methods require large amounts of memory, and they cannot be trained on a continuous input stream without pausing periodically to perform backpropagation. An ideal BMI should be capable of training continuously without interruption, to minimize disruption to the user and adapt to changing neural environments. Approach. We propose a continuous SNN weight update algorithm that can be trained to perform regression learning with no need for storing past spiking events in memory. As a result, the amount of memory needed for training is constant regardless of the input duration. We evaluate the accuracy of the network on recordings of neural data taken from the premotor cortex of a primate performing reaching tasks. Additionally, we evaluate the SNN in a simulated closed loop environment and observe its ability to adapt to sudden changes in the input neural structure. Main results. The continuous learning SNN achieves the same peak correlation (ρ=0.7) as existing SNN training methods when trained offline on real neural data while reducing the total memory usage by 92%. Additionally, it matches state-of-the-art accuracy in a closed loop environment, demonstrates adaptability when subjected to multiple types of neural input disruptions, and is capable of being trained online without any prior offline training. Significance. This work presents a neural decoding algorithm that can be trained rapidly in a closed loop setting. The algorithm increases the speed of acclimating a new user to the system and can also adapt to sudden changes in neural behavior with minimal disruption to the user.
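The constant-memory property of continuous online training can be illustrated without a full SNN. The sketch below is not the paper's algorithm; it shows the underlying idea with a leaky spike trace and a least-mean-squares readout: every update uses only the current time bin, so the memory footprint stays constant no matter how long the input stream runs. The unit count, rates, and synthetic target are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, T, tau = 20, 1000, 0.9

w_true = 0.5 * rng.standard_normal(n_units)  # hidden "ground truth" mapping
w = np.zeros(n_units)        # decoder weights, updated every bin
trace = np.zeros(n_units)    # leaky trace of each unit's spikes (constant memory)
err2 = []

for t in range(T):
    spikes = (rng.random(n_units) < 0.2).astype(float)
    trace = tau * trace + spikes          # no stored spike history needed
    target = w_true @ trace + 0.05 * rng.standard_normal()
    pred = w @ trace
    e = target - pred
    w += 0.002 * e * trace                # LMS update from the current bin only
    err2.append(e * e)

early, late = np.mean(err2[:100]), np.mean(err2[-100:])
```

Decoding error falls as the stream progresses even though nothing beyond the current trace and weights is ever stored, which is the regime a continuously trained BMI decoder targets.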
Affiliation(s)
- Elijah A Taeckens: Department of Electrical and Computer Engineering, University of Maryland, College Park, United States of America
- Sahil Shah: Department of Electrical and Computer Engineering, University of Maryland, College Park, United States of America
9
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181. PMCID: PMC11735406. DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors, such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran: Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi: Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering; Thomas Lord Department of Computer Science; Alfred E. Mann Department of Biomedical Engineering; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
10
Safaie M, Chang JC, Park J, Miller LE, Dudman JT, Perich MG, Gallego JA. Preserved neural dynamics across animals performing similar behaviour. Nature 2023; 623:765-771. PMID: 37938772. PMCID: PMC10665198. DOI: 10.1038/s41586-023-06714-0.
Abstract
Animals of the same species exhibit similar behaviours that are advantageously adapted to their body and environment. These behaviours are shaped at the species level by selection pressures over evolutionary timescales. Yet, it remains unclear how these common behavioural adaptations emerge from the idiosyncratic neural circuitry of each individual. The overall organization of neural circuits is preserved across individuals [1] because of their common evolutionarily specified developmental programme [2-4]. Such organization at the circuit level may constrain neural activity [5-8], leading to low-dimensional latent dynamics across the neural population [9-11]. Accordingly, here we suggested that the shared circuit-level constraints within a species would lead to suitably preserved latent dynamics across individuals. We analysed recordings of neural populations from monkey and mouse motor cortex to demonstrate that neural dynamics in individuals from the same species are surprisingly preserved when they perform similar behaviour. Neural population dynamics were also preserved when animals consciously planned future movements without overt behaviour [12] and enabled the decoding of planned and ongoing movement across different individuals. Furthermore, we found that preserved neural dynamics extend beyond cortical regions to the dorsal striatum, an evolutionarily older structure [13,14]. Finally, we used neural network models to demonstrate that behavioural similarity is necessary but not sufficient for this preservation. We posit that these emergent dynamics result from evolutionary constraints on brain development and thus reflect fundamental properties of the neural basis of behaviour.
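Cross-individual comparisons of latent dynamics of this kind are commonly made with canonical correlation analysis (CCA), which finds the linear alignment between two sets of latent trajectories. The sketch below uses synthetic data with invented dimensions to show the logic: two "animals" that embed the same latent dynamics in different coordinates yield high canonical correlations, while unrelated dynamics do not. It illustrates the comparison technique, not the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def canon_corrs(A, B):
    """Canonical correlations between two (time x dim) latent trajectories."""
    A = A - A.mean(0)
    B = B - B.mean(0)
    Qa, _ = np.linalg.qr(A)   # orthonormal basis for each trajectory's columns
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

# Shared smooth latent dynamics (AR(1) process), invented for the demo
T, d, phi = 500, 4, 0.95
shared = np.zeros((T, d))
for t in range(1, T):
    shared[t] = phi * shared[t - 1] + rng.standard_normal(d)

# Each "animal" embeds the shared dynamics in its own coordinates, plus noise
Ra, _ = np.linalg.qr(rng.standard_normal((d, d)))
Rb, _ = np.linalg.qr(rng.standard_normal((d, d)))
animal_a = shared @ Ra + 0.1 * rng.standard_normal((T, d))
animal_b = shared @ Rb + 0.1 * rng.standard_normal((T, d))

# Control: an unrelated trajectory with the same statistics
control = np.zeros((T, d))
for t in range(1, T):
    control[t] = phi * control[t - 1] + rng.standard_normal(d)

cc_pair = canon_corrs(animal_a, animal_b)   # high: preserved dynamics
cc_ctrl = canon_corrs(animal_a, control)    # lower: no shared structure
```

High canonical correlations despite the different embeddings are the signature of "preserved latent dynamics" that motivates analyses like the one in this paper.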
Affiliation(s)
- Mostafa Safaie: Department of Bioengineering, Imperial College London, London, UK
- Joanna C Chang: Department of Bioengineering, Imperial College London, London, UK
- Junchol Park: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Lee E Miller: Departments of Physiology, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University and Shirley Ryan AbilityLab, Chicago, IL, USA
- Joshua T Dudman: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Matthew G Perich: Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montreal, Quebec, Canada; Mila, Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada
- Juan A Gallego: Department of Bioengineering, Imperial College London, London, UK
11
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex. bioRxiv 2023:2023.07.30.551169. PMID: 37577498. PMCID: PMC10418097. DOI: 10.1101/2023.07.30.551169.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grain statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less were population codes and behavior impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York City, NY, USA
- Edoardo Balzani: Center for Neural Science, New York University, New York City, NY, USA
- Cristina Savin: Center for Neural Science, New York University, New York City, NY, USA
- Dora E. Angelaki: Center for Neural Science, New York University, New York City, NY, USA
12
Wang C, Fang C, Zou Y, Yang J, Sawan M. SpikeSEE: An energy-efficient dynamic scenes processing framework for retinal prostheses. Neural Netw 2023; 164:357-368. PMID: 37167749. DOI: 10.1016/j.neunet.2023.05.002.
Abstract
Intelligent, low-power retinal prostheses are in high demand in an era where wearable and implantable devices are used for numerous healthcare applications. In this paper, we propose an energy-efficient dynamic scenes processing framework (SpikeSEE) that combines a spike representation encoding technique with a bio-inspired spiking recurrent neural network (SRNN) model to achieve intelligent processing and extremely low-power computation for retinal prostheses. The spike representation encoding technique interprets dynamic scenes as sparse spike trains, decreasing the data volume. The SRNN model, inspired by the human retina's special structure and spike processing method, is adopted to predict the responses of ganglion cells to dynamic scenes. Experimental results show that the Pearson correlation coefficient of the proposed SRNN model reaches 0.93, outperforming the state-of-the-art processing framework for retinal prostheses. Thanks to the spike representation and SRNN processing, the model can extract visual features in a multiplication-free fashion. The framework achieves an 8-fold power reduction compared with a convolutional recurrent neural network (CRNN)-based framework. Our proposed SpikeSEE predicts the responses of ganglion cells more accurately with lower energy consumption, alleviating the precision and power issues of retinal prostheses and providing a potential solution for wearable or implantable prostheses.
Affiliation(s)
- Chuanqing Wang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Chaoming Fang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Yong Zou: Beijing Institute of Radiation Medicine, Beijing, 100850, China
- Jie Yang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Mohamad Sawan: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
13
Deeti S, Cheng K, Graham P, Wystrach A. Scanning behaviour in ants: an interplay between random-rate processes and oscillators. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2023. PMID: 37093284. DOI: 10.1007/s00359-023-01628-8.
Abstract
At the start of a journey home or to a foraging site, ants often stop, interrupting their forward movement, turn on the spot a number of times, and fixate in different directions. These scanning bouts are thought to provide visual information for choosing a path to travel. The temporal organisation of such scanning bouts has implications for the neural organisation of navigational behaviour. We examined (1) the temporal distribution of the start of such scanning bouts and (2) the dynamics of the saccadic body turns and fixations that compose a scanning bout in Australian desert ants, Melophorus bagoti, as they came out of a walled channel onto an open field at the start of their homeward journey. Ants were caught when they neared their nest and displaced to different locations to start their journey home again. The observed parameters were mostly similar across familiar and unfamiliar locations. The turning angles of saccadic body turns to the right or left showed some stereotypy, with a peak just under 45°. The direction of such saccades appears to be determined by a slow oscillatory process, as described in other insect species. In timing, however, both the distribution of inter-scanning-bout intervals and individual fixation durations showed exponential characteristics, the signature of a random-rate or Poisson process. Neurobiologically, therefore, there must be some process that switches behaviour (starting a scanning bout or ending a fixation) with equal probability at every moment in time. We discuss how chance events in the ant brain that occasionally reach a threshold for triggering such behaviours can generate these results.
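The exponential signature reported above can be checked with a simple statistic: for a random-rate (Poisson) process the coefficient of variation (CV) of inter-event intervals is near 1, while a clock-like oscillator gives a CV near 0. A toy illustration (the rates and noise levels are arbitrary choices, not the paper's data):

```python
import random

def interval_cv(intervals):
    # Coefficient of variation (std/mean) of inter-event intervals:
    # ~1 for exponential (random-rate/Poisson) timing, ~0 for a clock.
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / n
    return (var ** 0.5) / mean

random.seed(0)
poisson_like = [random.expovariate(1.0) for _ in range(20000)]       # memoryless
clock_like = [1.0 + random.gauss(0.0, 0.01) for _ in range(20000)]   # oscillator

cv_poisson = interval_cv(poisson_like)
cv_clock = interval_cv(clock_like)
```

The same CV comparison applied to inter-scanning-bout intervals or fixation durations distinguishes the two hypotheses without fitting a full model.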
Affiliation(s)
- Sudhakar Deeti, School of Natural Sciences, Macquarie University, Sydney, NSW 2019, Australia
- Ken Cheng, School of Natural Sciences, Macquarie University, Sydney, NSW 2019, Australia
- Paul Graham, School of Life Sciences, University of Sussex, Brighton, UK
- Antoine Wystrach, Centre de Recherches sur la Cognition Animale, CBI, CNRS, Université Paul Sabatier, Toulouse, France

14
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. bioRxiv 2023:2023.03.13.532479. [PMID: 36993605] [PMCID: PMC10054986] [DOI: 10.1101/2023.03.13.532479]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
15
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. bioRxiv 2023:2023.03.14.532554. [PMID: 36993213] [PMCID: PMC10055042] [DOI: 10.1101/2023.03.14.532554]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
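The pitfall described above, in which fitted "intrinsic" dynamics absorb a temporally structured input when that input is ignored, can be seen even in a one-dimensional linear toy model (the dynamics, input, and noise level below are invented for illustration; the paper's method is far more general):

```python
import numpy as np

rng = np.random.default_rng(0)

# x[t+1] = a*x[t] + b*u[t] + noise, with a structured (sinusoidal) input u.
# Hypothetical 1-D system; parameters are ours, not the paper's.
a_true, b_true, T = 0.8, 1.0, 5000
u = np.sin(0.1 * np.arange(T))
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + b_true * u[t] + 0.05 * rng.standard_normal()

# Fit WITH the measured input: regress x[t+1] on (x[t], u[t]).
A = np.column_stack([x[:-1], u[:-1]])
a_with, b_with = np.linalg.lstsq(A, x[1:], rcond=None)[0]

# Fit WITHOUT the input: the estimated "intrinsic" dynamics absorb the
# input's temporal structure and are biased toward the input's timescale.
a_without = np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]
```

Accounting for the input recovers the true decay (0.8), whereas the input-free fit is pulled toward the slow sinusoid's lag-1 autocorrelation (near 1), misreporting the intrinsic timescale.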
16
Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. [PMID: 36634357] [DOI: 10.1088/1741-2552/acb295]
Abstract
Objective. Retinal prostheses are promising devices for restoring vision to patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in a retinal prosthesis plays an important role in the restoration effect, and its performance depends on our understanding of the retina's working mechanisms and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, in which new discoveries about the retina's working principles are combined with state-of-the-art computer vision models.Approach. We investigated the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models.Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then describe the vision rehabilitation mechanisms of three representative retinal prostheses. We also summarise the computational frameworks abstracted from the normal retina. In addition, the development and features of the three types of processing algorithms are summarised. Finally, we analyse the bottlenecks in existing algorithms and propose future directions for improving the restoration effect.Significance. This review systematically summarises existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggested future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang, Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang, Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou, Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang, Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan, Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China

17
Xu Z, Zhou X, Xu Y, Wu W. Removing nonlinear misalignment in neuronal spike trains using the Fisher-Rao registration framework. J Neurosci Methods 2022; 367:109436. [PMID: 34890697] [DOI: 10.1016/j.jneumeth.2021.109436]
Abstract
BACKGROUND Temporal precision in neural spike train data is critically important for understanding functional mechanisms in the nervous system. However, the timing variability of spiking activity can be highly nonlinear in practical observations due to behavioral variability or unobserved/unobservable cognitive states. NEW METHOD In this study, we propose to adopt a powerful nonlinear method, referred to as Fisher-Rao Registration (FRR), to remove such nonlinear phase variability from discrete neuronal spike trains. We also develop a smoothing procedure for the discrete spike train data in order to use the FRR framework. COMPARISON WITH EXISTING METHODS We systematically compare the FRR with state-of-the-art linear and nonlinear methods in terms of model efficiency and effectiveness. RESULTS We show that the FRR has superior performance, and the advantages are well illustrated with simulated and real experimental data. CONCLUSIONS The FRR framework provides more appropriate alignment for understanding the temporal variability in neuronal spike trains.
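A common way to make discrete spike trains amenable to continuous registration methods is Gaussian kernel smoothing of the spike times into a rate function; the sketch below illustrates that generic step (the kernel width and time grid are arbitrary choices, not the smoothing procedure actually proposed in the paper):

```python
import numpy as np

def smooth_spike_train(spike_times, t_max, dt=0.001, sigma=0.02):
    # Convert discrete spike times into a smooth rate function by summing
    # one unit-area Gaussian kernel per spike, so that continuous
    # registration methods can then be applied. Parameter defaults are
    # illustrative, not taken from the paper.
    t = np.arange(0.0, t_max, dt)
    rate = np.zeros_like(t)
    for s in spike_times:
        rate += np.exp(-0.5 * ((t - s) / sigma) ** 2)
    rate /= sigma * np.sqrt(2.0 * np.pi)   # each kernel integrates to ~1
    return t, rate

t, rate = smooth_spike_train([0.1, 0.5, 0.52], t_max=1.0)
```

The smoothed function integrates to the spike count, and nearby spikes (0.5 and 0.52 s) merge into a single peak at their midpoint.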
Affiliation(s)
- Zishen Xu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Xinyu Zhou, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Yiqi Xu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Wei Wu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA

18
Kim MK, Sohn JW, Kim SP. Finding Kinematics-Driven Latent Neural States From Neuronal Population Activity for Motor Decoding. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2027-2036. [PMID: 34550888] [DOI: 10.1109/tnsre.2021.3114367]
Abstract
While intracortical brain-machine interfaces (BMIs) have demonstrated the feasibility of restoring mobility to people with paralysis, it remains challenging to maintain high-performance decoding in clinical BMIs. One of the main obstacles is the noise-prone nature of traditional decoding methods that connect neural responses explicitly with physical quantities, such as velocity. In contrast, recently developed latent neural state models enable a robust readout of the contents of large-scale neuronal population activity. However, these latent neural states do not necessarily contain kinematic information useful for decoding. This study therefore proposes a new approach to finding kinematics-dependent latent factors by extracting the latent factors' kinematics-dependent components using linear regression. We estimated these components from the population activity through nonlinear mapping. The proposed kinematics-dependent latent factors generate neural trajectories that discriminate latent neural states before and after motion onset. We compared the decoding performance of the proposed analysis model with that of other popular models: factor analysis (FA), Gaussian process factor analysis (GPFA), latent factor analysis via dynamical systems (LFADS), preferential subspace identification (PSID), and neuronal population firing rates. The proposed analysis model yields higher decoding accuracy than the others ( % improvement on average). Our approach may pave a new way to extract latent neural states specific to kinematic information from motor cortices, potentially improving decoding performance for online intracortical BMIs.
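The core idea of extracting the kinematics-dependent components of latent factors by linear regression can be shown in a toy example (the dimensions and data are invented, and the nonlinear mapping from population activity to latents is omitted; this is not the authors' full pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 5-D latent factors Z, of which a 2-D subspace drives the
# 2-D kinematics K. Dimensions are illustrative assumptions.
T = 2000
Z = rng.standard_normal((T, 5))
W_true = np.zeros((5, 2))
W_true[0, 0] = 1.0          # latent dim 0 drives velocity x
W_true[1, 1] = 1.0          # latent dim 1 drives velocity y
K = Z @ W_true + 0.01 * rng.standard_normal((T, 2))

# Linear regression of kinematics on latent factors extracts the
# kinematics-dependent components of the latents.
W_hat, *_ = np.linalg.lstsq(Z, K, rcond=None)
Z_kin = Z @ W_hat           # kinematics-dependent latent components
```

The regression recovers the driving subspace, so `Z_kin` carries the kinematic information while the three task-irrelevant latent dimensions are projected out.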
19
Isbister JB, Reyes-Puerta V, Sun JJ, Horenko I, Luhmann HJ. Clustering and control for adaptation uncovers time-warped spike time patterns in cortical networks in vivo. Sci Rep 2021; 11:15066. [PMID: 34326363] [PMCID: PMC8322153] [DOI: 10.1038/s41598-021-94002-0]
Abstract
How information in the nervous system is encoded by patterns of action potentials (i.e. spikes) remains an open question. Multi-neuron patterns of single spikes are a prime candidate for spike time encoding but their temporal variability requires further characterisation. Here we show how known sources of spike count variability affect stimulus-evoked spike time patterns between neurons separated over multiple layers and columns of adult rat somatosensory cortex in vivo. On subsets of trials (clusters) and after controlling for stimulus-response adaptation, spike time differences between pairs of neurons are “time-warped” (compressed/stretched) by trial-to-trial changes in shared excitability, explaining why fixed spike time patterns and noise correlations are seldom reported. We show that predicted cortical state is correlated between groups of 4 neurons, introducing the possibility of spike time pattern modulation by population-wide trial-to-trial changes in excitability (i.e. cortical state). Under the assumption of state-dependent coding, we propose an improved potential encoding capacity.
Affiliation(s)
- James B Isbister, Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK; The Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202, Geneva, Switzerland
- Vicente Reyes-Puerta, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
- Jyh-Jang Sun, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany; NERF, Kapeldreef 75, 3001, Leuven, Belgium; imec, Remisebosweg 1, 3001, Leuven, Belgium
- Illia Horenko, Faculty of Informatics, Università della Svizzera italiana, Via G. Buffi 13, 6900, Lugano, Switzerland
- Heiko J Luhmann, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany

20
Recanatesi S, Farrell M, Lajoie G, Deneve S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520] [PMCID: PMC7930246] [DOI: 10.1038/s41467-021-21696-1]
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data. Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
Affiliation(s)
- Stefano Recanatesi, University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA
- Matthew Farrell, Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Guillaume Lajoie, Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Sophie Deneve, Group for Neural Theory, École Normale Supérieure, Paris, France
- Eric Shea-Brown, University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA; Department of Applied Mathematics, University of Washington, Seattle, WA, USA; Allen Institute for Brain Science, Seattle, WA, USA

21
Zhao W, Xu Z, Li W, Wu W. Modeling and analyzing neural signals with phase variability using Fisher-Rao registration. J Neurosci Methods 2020; 346:108954. [PMID: 32950555] [DOI: 10.1016/j.jneumeth.2020.108954]
Abstract
BACKGROUND Dynamic time warping (DTW) has recently been introduced to analyze neural signals such as EEG and fMRI, where phase variability plays an important role in the data. NEW METHOD In this study, we propose to adopt a more powerful method, referred to as Fisher-Rao Registration (FRR), to study this phase variability. COMPARISON WITH EXISTING METHODS We systematically compare FRR with DTW in three aspects: (1) basic framework, (2) mathematical properties, and (3) computational efficiency. RESULTS We show that FRR has superior performance in all these aspects, and the advantages are well illustrated with simulation examples. CONCLUSIONS We then apply the FRR method to two real experimental recordings: one fMRI and one EEG data set. The FRR method properly removes the phase variability in each set. Finally, we use the FRR framework to examine brain networks in these two data sets, and the results demonstrate the effectiveness of the new method.
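FRR builds on the square-root slope function (SRSF), under which the Fisher-Rao metric becomes the ordinary L2 metric and is invariant to a common time warp. The sketch below numerically verifies that isometry (the test functions and warp are arbitrary; the full FRR alignment, which also optimizes over warping functions, is not shown):

```python
import numpy as np

def srsf(f, t):
    # Square-root slope function q = sign(f') * sqrt(|f'|); under this
    # transform the Fisher-Rao metric becomes the ordinary L2 metric.
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def apply_warp(q, t, gamma, dgamma):
    # Action of a warp gamma on an SRSF: (q o gamma) * sqrt(gamma').
    return np.interp(gamma, t, q) * np.sqrt(dgamma)

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
q1 = srsf(np.exp(t), t)
q2 = srsf(np.exp(1.1 * t), t)

gamma = t ** 1.5                   # an arbitrary common re-timing
dgamma = 1.5 * np.sqrt(t)

d_before = np.sqrt(np.sum((q1 - q2) ** 2) * dt)
d_after = np.sqrt(np.sum((apply_warp(q1, t, gamma, dgamma)
                          - apply_warp(q2, t, gamma, dgamma)) ** 2) * dt)
```

Warping both signals by the same gamma leaves the SRSF distance unchanged, which is the property (unavailable to DTW's asymmetric cost) that makes the Fisher-Rao framework a proper metric for alignment.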
Affiliation(s)
- Weilong Zhao, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Zishen Xu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Wen Li, Department of Psychology, Florida State University, 1107 W. Call St., Tallahassee, FL 32306-4301, USA
- Wei Wu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA

22
Perich MG, Rajan K. Rethinking brain-wide interactions through multi-region 'network of networks' models. Curr Opin Neurobiol 2020; 65:146-151. [PMID: 33254073] [DOI: 10.1016/j.conb.2020.11.003]
Abstract
The neural control of behavior is distributed across many functionally and anatomically distinct brain regions even in small nervous systems. While classical neuroscience models treated these regions as a set of hierarchically isolated nodes, the brain comprises a recurrently interconnected network in which each region is intimately modulated by many others. Uncovering these interactions is now possible through experimental techniques that access large neural populations from many brain regions simultaneously. Harnessing these large-scale datasets, however, requires new theoretical approaches. Here, we review recent work to understand brain-wide interactions using multi-region 'network of networks' models and discuss how they can guide future experiments. We also emphasize the importance of multi-region recordings, and posit that studying individual components in isolation will be insufficient to understand the neural basis of behavior.
Affiliation(s)
- Matthew G Perich, Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kanaka Rajan, Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA

23
Kawabata M, Soma S, Saiki-Ishikawa A, Nonomura S, Yoshida J, Ríos A, Sakai Y, Isomura Y. A spike analysis method for characterizing neurons based on phase locking and scaling to the interval between two behavioral events. J Neurophysiol 2020; 124:1923-1941. [PMID: 33085554] [DOI: 10.1152/jn.00200.2020]
Abstract
Standard analysis of neuronal functions assesses the temporal correlation between animal behaviors and neuronal activity by aligning spike trains with the timing of a specific behavioral event, e.g., visual cue. However, spike activity is often involved in information processing dependent on a relative phase between two consecutive events rather than a single event. Nevertheless, less attention has so far been paid to such temporal features of spike activity in relation to two behavioral events. Here, we propose "Phase-Scaling analysis" to simultaneously evaluate the phase locking and scaling to the interval between two events in task-related spike activity of individual neurons. This analysis method can discriminate conceptual "scaled"-type neurons from "nonscaled"-type neurons using an activity variation map that combines phase locking with scaling to the interval. Its robustness was validated by spike simulation using different spike properties. Furthermore, we applied it to analyzing actual spike data from task-related neurons in the primary visual cortex (V1), posterior parietal cortex (PPC), primary motor cortex (M1), and secondary motor cortex (M2) of behaving rats. After hierarchical clustering of all neurons using their activity variation maps, we divided them objectively into four clusters corresponding to nonscaled-type sensory and motor neurons and scaled-type neurons including sustained and ramping activities, etc. Cluster/subcluster compositions for V1 differed from those of PPC, M1, and M2. The V1 neurons showed the fastest functional activities among those areas. Our method was also applicable to determine temporal "forms" and the latency of spike activity changes. 
These findings demonstrate its utility for characterizing neurons. NEW & NOTEWORTHY Phase-Scaling analysis is a novel technique to unbiasedly characterize the temporal dependency of functional neuron activity on two behavioral events and objectively determine the latency and form of the activity change. This powerful analysis can uncover several classes of latently functioning neurons that have thus far been overlooked, which may participate differently in intermediate processes of a brain function. The Phase-Scaling analysis will yield profound insights into neural mechanisms for processing internal information.
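The phase-locking vs. interval-scaling distinction can be illustrated with a toy statistic that compares spike-latency variability in absolute time against variability in interval phase (the simulated neurons, noise levels, and the index itself are illustrative inventions, not the Phase-Scaling analysis of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each trial has event 1 at time 0 and event 2 at variable time L.
# A "nonscaled" neuron fires at a fixed latency after event 1; a "scaled"
# neuron fires at a fixed fraction (phase) of the interval.
L = rng.uniform(0.5, 1.5, size=500)
nonscaled = 0.2 + 0.01 * rng.standard_normal(500)    # fixed 200 ms latency
scaled = 0.4 * L + 0.01 * rng.standard_normal(500)   # fixed phase 0.4

def scaling_index(spike_t, interval):
    # Compare variability in absolute time vs interval phase: values
    # near 0 indicate scaling to the interval, values near 1 indicate
    # locking to event 1. An illustrative index, not the paper's.
    sd_time = np.std(spike_t)
    sd_phase = np.std(spike_t / interval)
    return sd_phase / (sd_phase + sd_time)

i_non = scaling_index(nonscaled, L)
i_sc = scaling_index(scaled, L)
```

The two simulated neuron types land on opposite sides of 0.5, which is the discrimination the full analysis performs as a two-dimensional activity variation map rather than a single scalar.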
Affiliation(s)
- Masanori Kawabata, Department of Physiology and Cell Biology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan; Graduate School of Brain Sciences, Tamagawa University, Tokyo, Japan
- Shogo Soma, Brain Science Institute, Tamagawa University, Tokyo, Japan; Department of Molecular Cell Physiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Akiko Saiki-Ishikawa, Brain Science Institute, Tamagawa University, Tokyo, Japan; Department of Neurobiology, Northwestern University, Evanston, Illinois
- Satoshi Nonomura, Department of Physiology and Cell Biology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan; Brain Science Institute, Tamagawa University, Tokyo, Japan; Systems Neuroscience Section, Primate Research Institute, Kyoto University, Aichi, Japan
- Junichi Yoshida, Brain Science Institute, Tamagawa University, Tokyo, Japan; Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York
- Alain Ríos, Department of Physiology and Cell Biology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan; Graduate School of Brain Sciences, Tamagawa University, Tokyo, Japan
- Yutaka Sakai, Graduate School of Brain Sciences, Tamagawa University, Tokyo, Japan; Brain Science Institute, Tamagawa University, Tokyo, Japan
- Yoshikazu Isomura, Department of Physiology and Cell Biology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan; Graduate School of Brain Sciences, Tamagawa University, Tokyo, Japan; Brain Science Institute, Tamagawa University, Tokyo, Japan

24
Kim MK, Sohn JW, Kim SP. Decoding Kinematic Information From Primary Motor Cortex Ensemble Activities Using a Deep Canonical Correlation Analysis. Front Neurosci 2020; 14:509364. [PMID: 33177971] [PMCID: PMC7596741] [DOI: 10.3389/fnins.2020.509364]
Abstract
The control of arm movements through intracortical brain-machine interfaces (BMIs) relies mainly on the activities of primary motor cortex (M1) neurons and the mathematical models that decode them. Recent research on decoding attempts not only to improve performance but also to understand neural-behavioral relationships. In this study, we propose an efficient decoding algorithm using deep canonical correlation analysis (DCCA), which maximizes the correlations between canonical variables while nonlinearly approximating the mappings from neuronal to canonical variables via deep learning. We investigate the effectiveness of using DCCA to find a relationship between M1 activities and kinematic information when non-human primates performed a reaching task with one arm. We then examine whether neural activity representations from DCCA improve decoding performance through linear and nonlinear decoders: a linear Kalman filter (LKF) and a long short-term memory recurrent neural network (LSTM-RNN). We found that neural representations of M1 activities estimated by DCCA resulted in more accurate decoding of velocity than those estimated by linear canonical correlation analysis, principal component analysis, factor analysis, and a linear dynamical system. Decoding with DCCA also yielded better performance than decoding the original firing rates using an LSTM-RNN (6.6% and 16.0% improvement on average for velocity and position, respectively; Wilcoxon rank sum test, p < 0.05). Thus, DCCA can identify the kinematics-related canonical variables of M1 activities, improving decoding performance. Our results may help advance the design of decoding models for intracortical BMIs.
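DCCA replaces the linear projections of classical CCA with deep networks before maximizing canonical correlations; the linear core that it generalizes can be sketched in a few lines (the toy data and regularization are our assumptions, not the study's setup):

```python
import numpy as np

def linear_cca(X, Y, eps=1e-8):
    # Top canonical correlation between X (T x p) and Y (T x q):
    # whiten each block, then take the largest singular value of the
    # whitened cross-covariance. DCCA inserts deep networks before this
    # step; only the linear core is sketched here.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    T = X.shape[0]
    Sxx = Xc.T @ Xc / T + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / T + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / T

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return float(np.linalg.svd(M, compute_uv=False)[0])

rng = np.random.default_rng(0)
z = rng.standard_normal(5000)                  # shared latent signal
X = np.column_stack([z, rng.standard_normal(5000)])
Y = np.column_stack([z + 0.1 * rng.standard_normal(5000),
                     rng.standard_normal(5000)])
rho = linear_cca(X, Y)
```

CCA isolates the shared latent from the unrelated dimensions, returning a top canonical correlation near 1; DCCA performs the same maximization after nonlinear feature mappings.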
Affiliation(s)
- Min-Ki Kim, Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Jeong-Woo Sohn, Department of Medical Science, College of Medicine, Catholic Kwandong University, Gangneung, South Korea
- Sung-Phil Kim, Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea

25
Barra B, Badi M, Perich MG, Conti S, Mirrazavi Salehian SS, Moreillon F, Bogaard A, Wurth S, Kaeser M, Passeraub P, Milekovic T, Billard A, Micera S, Capogrosso M. A versatile robotic platform for the design of natural, three-dimensional reaching and grasping tasks in monkeys. J Neural Eng 2019; 17:016004. [PMID: 31597123] [DOI: 10.1088/1741-2552/ab4c77]
Abstract
OBJECTIVE Translational studies of motor control and neurological disorders require detailed monitoring of the sensorimotor components of natural limb movements in relevant animal models. However, available experimental tools do not provide a sufficiently rich repertoire of behavioral signals. Here, we developed a robotic platform that enables the monitoring of kinematics, interaction forces, and neurophysiological signals during user-defined upper limb tasks in monkeys. APPROACH We configured the platform to position instrumented objects in a three-dimensional workspace and to provide an interactive dynamic force field. MAIN RESULTS We show the relevance of our platform for fundamental and translational studies with three example applications. First, we study the kinematics of natural grasp in response to variable interaction forces. We then show simultaneous and independent encoding of kinematics and forces in single-unit intracortical recordings from sensorimotor cortical areas. Lastly, we demonstrate the relevance of our platform for developing clinically relevant brain-computer interfaces in a kinematically unconstrained motor task. SIGNIFICANCE Our versatile control structure does not depend on the specific robotic arm used and allows for the design and implementation of a variety of tasks that can support both fundamental and translational studies of motor control.
Collapse
Affiliation(s)
- B Barra
- Department of Neuroscience and Movement Science, Platform of Translational Neurosciences, University of Fribourg, Fribourg, Switzerland. Co-first authors
26
Williams AH, Poole B, Maheswaranathan N, Dhawale AK, Fisher T, Wilson CD, Brann DH, Trautmann EM, Ryu S, Shusterman R, Rinberg D, Ölveczky BP, Shenoy KV, Ganguli S. Discovering Precise Temporal Patterns in Large-Scale Neural Recordings through Robust and Interpretable Time Warping. Neuron 2019; 105:246-259.e8. [PMID: 31786013 DOI: 10.1016/j.neuron.2019.10.020] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2019] [Revised: 09/17/2019] [Accepted: 10/10/2019] [Indexed: 12/22/2022]
Abstract
Though the temporal precision of neural computation has been studied intensively, a data-driven determination of this precision remains a fundamental challenge. Reproducible spike patterns may be obscured on single trials by uncontrolled temporal variability in behavior and cognition and may not be time locked to measurable signatures in behavior or local field potentials (LFP). To overcome these challenges, we describe a general-purpose time warping framework that reveals precise spike-time patterns in an unsupervised manner, even when these patterns are decoupled from behavior or are temporally stretched across single trials. We demonstrate this method across diverse systems: cued reaching in nonhuman primates, motor sequence production in rats, and olfaction in mice. This approach flexibly uncovers diverse dynamical firing patterns, including pulsatile responses to behavioral events, LFP-aligned oscillatory spiking, and even unanticipated patterns, such as 7 Hz oscillations in rat motor cortex that are not time locked to measured behaviors or LFP.
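The published framework fits shift-only, linear, or piecewise-linear time warps per trial. As a rough illustration of the alignment idea only, here is a shift-only sketch on synthetic data; the function name and the synthetic bump are invented for the example and do not reproduce the authors' models.

```python
import numpy as np

def align_trials(trials, max_shift=10):
    """Shift-only alignment of single-trial traces to their trial average.

    A deliberately simplified stand-in for per-trial time warping: each
    trial is circularly shifted to the integer lag that best correlates
    with the across-trial template.
    """
    trials = np.asarray(trials, dtype=float)
    template = trials.mean(axis=0)
    aligned = np.empty_like(trials)
    for i, tr in enumerate(trials):
        lags = np.arange(-max_shift, max_shift + 1)
        corrs = [np.dot(np.roll(tr, -lag), template) for lag in lags]
        aligned[i] = np.roll(tr, -lags[int(np.argmax(corrs))])
    return aligned

# synthetic demo: identical firing-rate bump, jittered in time across trials
t = np.arange(100)
bump = np.exp(-(t - 50) ** 2 / 20.0)
trials = [np.roll(bump, s) for s in (0, 4, -5, 2)]
aligned = align_trials(trials, max_shift=8)
```

After alignment the across-trial variance at each time point shrinks, which is the signature of a temporally precise pattern hidden by trial-to-trial jitter.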
Affiliation(s)
- Alex H Williams
- Neuroscience Program, Stanford University, Stanford, CA 94305, USA
- Ben Poole
- Google Brain, Google Inc., Mountain View, CA 94043, USA
- Ashesh K Dhawale
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Tucker Fisher
- Neuroscience Program, Stanford University, Stanford, CA 94305, USA
- Christopher D Wilson
- Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA
- David H Brann
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Eric M Trautmann
- Neuroscience Program, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Stephen Ryu
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Roman Shusterman
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
- Dmitry Rinberg
- Neuroscience Institute, New York University School of Medicine, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10016, USA
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Krishna V Shenoy
- Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bioengineering Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Wu Tsai Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Surya Ganguli
- Applied Physics Department, Stanford University, Stanford, CA 94305, USA; Neurobiology Department, Stanford University, Stanford, CA 94305, USA; Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Wu Tsai Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Google Brain, Google Inc., Mountain View, CA 94043, USA
27
Tam WK, Wu T, Zhao Q, Keefer E, Yang Z. Human motor decoding from neural signals: a review. BMC Biomed Eng 2019; 1:22. [PMID: 32903354 PMCID: PMC7422484 DOI: 10.1186/s42490-019-0022-z] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 07/21/2019] [Indexed: 01/24/2023] Open
Abstract
Many people suffer from movement disability due to amputation or neurological disease. Fortunately, with modern neurotechnology it is now possible to intercept motor control signals at various points along the neural transduction pathway and use them to drive external devices for communication or control. Here we review the latest developments in human motor decoding. We examine the various strategies for decoding motor intention from humans and their respective advantages and challenges. Neural control signals can be intercepted at various points along the neural signal transduction pathway, including the brain (electroencephalography, electrocorticography, intracortical recordings), the nerves (peripheral nerve recordings) and the muscles (electromyography). We systematically discuss the sites of signal acquisition, the available neural features, signal processing techniques and decoding algorithms at each of these potential interception points. Examples of applications and the current state-of-the-art performance are also reviewed. Although great strides have been made in human motor decoding, we are still far from achieving the naturalistic and dexterous control of our native limbs. Concerted efforts from materials scientists, electrical engineers and healthcare professionals are needed to advance the field further and make the technology widely available for clinical use.
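As a concrete illustration of one decoding-algorithm family that such reviews cover, intracortical decoders commonly map binned spike counts to kinematics with regularized linear regression. The sketch below uses synthetic data and an illustrative ridge penalty; it is not any pipeline from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_units = 500, 30
velocity = rng.standard_normal((n_bins, 2))            # target: 2-D hand velocity
tuning = rng.standard_normal((2, n_units))             # linear tuning of each unit
# synthetic firing rates: tuned response plus private noise
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_bins, n_units))

# ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_units), rates.T @ velocity)
pred = rates @ W
r2 = 1 - ((velocity - pred) ** 2).sum() / ((velocity - velocity.mean(0)) ** 2).sum()
```

Real systems add spike detection/sorting, lag alignment between neural activity and movement, and cross-validation; the point here is only the shape of the decoding step.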
Affiliation(s)
- Wing-kin Tam
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455, USA
- Tong Wu
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455, USA
- Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota Twin Cities, 4-192 Keller Hall, 200 Union Street SE, Minnesota, 55455, USA
- Edward Keefer
- Nerves Incorporated, P.O. Box 141295, Dallas, TX, USA
- Zhi Yang
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455, USA
28
Unakafova VA, Gail A. Comparing Open-Source Toolboxes for Processing and Analysis of Spike and Local Field Potentials Data. Front Neuroinform 2019; 13:57. [PMID: 31417389 PMCID: PMC6682703 DOI: 10.3389/fninf.2019.00057] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Accepted: 07/11/2019] [Indexed: 11/13/2022] Open
Abstract
Analysis of spike and local field potential (LFP) data is an essential part of neuroscientific research. Many open-source toolboxes for spike and LFP data analysis exist today, implementing a wide range of functionality. Here we aim to provide practical guidance for neuroscientists in choosing the open-source toolbox that best satisfies their needs. We give an overview of the major open-source toolboxes for spike and LFP data analysis, as well as toolboxes with tools for connectivity analysis, dimensionality reduction and generalized linear modeling. We focus on comparing the toolboxes' functionality, statistical and visualization tools, documentation and support quality. To give better insight, we compare and illustrate the functionality of the toolboxes on an open-access dataset or simulated data and make the corresponding MATLAB scripts publicly available.
Affiliation(s)
- Alexander Gail
- Cognitive Neurosciences Laboratory, German Primate Center, Göttingen, Germany
- Primate Cognition, Göttingen, Germany
- Georg-Elias-Mueller-Institute of Psychology, University of Goettingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
29
Saif-Ur-Rehman M, Lienkämper R, Parpaley Y, Wellmer J, Liu C, Lee B, Kellis S, Andersen R, Iossifidis I, Glasmachers T, Klaes C. SpikeDeeptector: a deep-learning based method for detection of neural spiking activity. J Neural Eng 2019; 16:056003. [PMID: 31042684 DOI: 10.1088/1741-2552/ab1e63] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE In electrophysiology, microelectrodes are the primary source for recording neural data (single unit activity). These microelectrodes can be implanted individually or in arrays containing dozens to hundreds of channels. Some channels record neural activity, which is often contaminated with noise; another fraction of channels records no neural data at all, only noise. By noise, we mean physiological activity unrelated to spiking, technical artifacts, and the activity of neurons too far from the electrode to be usefully processed. For further analysis, automatic identification and continuous tracking of the channels containing neural data is of great significance for many applications, e.g. automated selection of neural channels during online and offline spike sorting. Automated spike detection and sorting is also critical for online decoding in brain-computer interface (BCI) applications, in which often only simple threshold-crossing events are considered for feature extraction. To our knowledge, there is no method that can universally and automatically identify channels containing neural data. In this study, we aim to identify and track channels containing neural data from implanted electrodes automatically and, more importantly, universally, i.e. across different recording technologies, different subjects and different brain areas. APPROACH We propose a novel algorithm, SpikeDeeptector, which combines a new way of extracting feature vectors with a deep learning method. SpikeDeeptector considers a batch of waveforms to construct a single feature vector, enabling contextual learning. The feature vectors are then fed to a deep learning model, which learns contextualized, temporal and spatial patterns and classifies them as channels containing neural spike data or only noise.
MAIN RESULTS We trained the SpikeDeeptector model on data recorded from a single tetraplegic patient with two Utah arrays implanted in different areas of the brain. The trained model was then evaluated on data collected from six epileptic patients implanted with depth electrodes, on unseen data from the same tetraplegic patient, and on data from another tetraplegic patient implanted with two Utah arrays. The cumulative evaluation accuracy was 97.20% on 1.56 million hand-labeled test inputs. SIGNIFICANCE The results demonstrate that SpikeDeeptector generalizes not only to new data, but also to brain areas, subjects and electrode types not used for training. CLINICAL TRIAL REGISTRATION NUMBER The clinical trial registration number for the patients implanted with Utah arrays is NCT01849822. For the epilepsy patients, approval from the local ethics committee at the Ruhr-University Bochum, Germany, was obtained prior to implantation.
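SpikeDeeptector's key design choice is that the classifier sees a whole batch of waveforms from a channel as one input, rather than individual snippets. The sketch below mimics that input construction on synthetic data but swaps the deep network for a plain logistic regression, so it illustrates only the batching idea, not the published architecture; every name and parameter here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def batch_feature(waveforms):
    """One flat feature vector per *batch* of waveforms, giving the
    classifier context across snippets (the batching idea of the paper)."""
    return np.asarray(waveforms).ravel()

def make_channel(neural, batch=20, length=48):
    """Synthetic channel: pure noise, or noise plus a spike-like transient."""
    snippets = 0.3 * rng.standard_normal((batch, length))
    if neural:
        t = np.arange(length)
        snippets += np.exp(-((t - 12) ** 2) / 8.0) - 0.4 * np.exp(-((t - 20) ** 2) / 30.0)
    return snippets

X = np.array([batch_feature(make_channel(neural=(k % 2 == 1))) for k in range(200)])
y = np.array([k % 2 for k in range(200)])

# stand-in classifier: logistic regression trained by gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30))) > 0.5) == y).mean()
```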
Affiliation(s)
- Muhammad Saif-Ur-Rehman
- Faculty of Medicine, Ruhr-University Bochum, Bochum, Germany; Faculty of Electrical Engineering and Information Technology, Ruhr-University Bochum, Bochum, Germany
30
Perich MG, Gallego JA, Miller LE. A Neural Population Mechanism for Rapid Learning. Neuron 2018; 100:964-976.e7. [PMID: 30344047 DOI: 10.1016/j.neuron.2018.09.030] [Citation(s) in RCA: 105] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Revised: 08/16/2018] [Accepted: 09/21/2018] [Indexed: 12/18/2022]
Abstract
Long-term learning of language, mathematics, and motor skills likely requires cortical plasticity, but behavior often requires much faster changes, sometimes even after single errors. Here, we propose one neural mechanism for rapidly developing new motor output without altering the functional connectivity within or between cortical areas. We tested cortico-cortical models relating the activity of hundreds of neurons in the premotor (PMd) and primary motor (M1) cortices throughout adaptation to reaching movement perturbations. We found a signature of learning in the "output-null" subspace of PMd with respect to M1, reflecting the ability of premotor cortex to alter preparatory activity without directly influencing M1. The planning activity in the output-null subspace evolved with adaptation, yet the "output-potent" mapping that captures the information sent to M1 was preserved. Our results illustrate a population-level cortical mechanism for progressively adjusting the output from one brain area to its downstream structures that could be exploited for rapid behavioral adaptation.
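The "output-potent"/"output-null" decomposition used in this line of work splits population activity with respect to a fitted linear map from PMd to M1: the component in the row space of the map drives M1, the component in its null space does not. A minimal sketch, with a random matrix standing in for the regression-fitted map `W`:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pmd, n_m1 = 40, 15
W = rng.standard_normal((n_m1, n_pmd))   # stand-in for a fitted PMd -> M1 map

# SVD: the first n_m1 right singular vectors span the potent (row) space,
# the remaining ones span the null space of W
U, s, Vt = np.linalg.svd(W)
potent = Vt[:n_m1]                        # (n_m1, n_pmd)
null = Vt[n_m1:]                          # (n_pmd - n_m1, n_pmd)

x = rng.standard_normal(n_pmd)            # one PMd population state
x_potent = potent.T @ (potent @ x)        # projection onto the potent space
x_null = null.T @ (null @ x)              # projection onto the null space

# the null component produces (numerically) zero M1 output,
# and the two components reconstruct the original state
assert np.allclose(W @ x_null, 0)
assert np.allclose(x_potent + x_null, x)
```

Preparatory activity can therefore change freely in the null space, as the paper reports, without changing what M1 receives.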
Affiliation(s)
- Matthew G Perich
- Department of Biomedical Engineering, Northwestern University, Chicago, IL 60611, USA
- Juan A Gallego
- Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Neural and Cognitive Engineering Group, Centre for Automation and Robotics, CSIC-UPM, 28500 Arganda del Rey, Madrid, Spain
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Chicago, IL 60611, USA; Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611, USA