1. Yu Z, Bu T, Zhang Y, Jia S, Huang T, Liu JK. Robust Decoding of Rich Dynamical Visual Scenes With Retinal Spikes. IEEE Trans Neural Netw Learn Syst 2025; 36:3396-3409. [PMID: 38265909] [DOI: 10.1109/tnnls.2024.3351120]
Abstract
Sensory information transmitted to the brain activates neurons, giving rise to a series of coping behaviors. Understanding the mechanisms of neural computation and reverse engineering the brain to build intelligent machines requires establishing a robust relationship between stimuli and neural responses. Neural decoding aims to reconstruct the original stimuli that trigger neural responses. With the recent upsurge of artificial intelligence, neural decoding provides an insightful perspective for designing novel brain-machine interface algorithms. For humans, vision is the dominant channel of interaction between the external environment and the brain. In this study, using retinal spike data collected over multiple trials with visual stimuli of two movies with different levels of scene complexity, we used a neural network decoder to quantify the decoded visual stimuli with six different image quality assessment metrics, establishing a comprehensive inspection of decoding. Through a detailed and systematic study of the effects of single versus multiple trials of data, different levels of noise in spikes, and blurred images, our results provide an in-depth investigation of decoding dynamical visual scenes using retinal spikes. These results offer insights into the neural coding of visual scenes and serve as a guideline for designing next-generation decoding algorithms for neuroprostheses and other brain-machine interface devices.
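Editor's note: the abstract does not name the six image-quality metrics used. As a hedged illustration of how decoded frames are typically scored against the stimulus, here is a minimal Python sketch of three common full-reference metrics (MSE, PSNR, pixelwise correlation); the data and function names are placeholders, not the authors' code.

```python
import numpy as np

def mse(original: np.ndarray, decoded: np.ndarray) -> float:
    """Mean squared error between a ground-truth frame and its reconstruction."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, decoded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(original, decoded)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

def pixel_corr(original: np.ndarray, decoded: np.ndarray) -> float:
    """Pearson correlation between flattened frames, a scale-invariant score."""
    return float(np.corrcoef(original.ravel(), decoded.ravel())[0, 1])

# Toy example: score every frame of a "decoded" movie against the stimulus movie.
stimulus = np.random.randint(0, 256, (100, 64, 64))          # placeholder movie
decoded = stimulus + np.random.normal(0, 10, stimulus.shape)  # noisy stand-in decode
print(f"mean PSNR: {np.mean([psnr(s, d) for s, d in zip(stimulus, decoded)]):.2f} dB")
print(f"mean corr: {np.mean([pixel_corr(s, d) for s, d in zip(stimulus, decoded)]):.3f}")
```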

2. Yin X, Wu Z, Wang H. A novel DRL-guided sparse voxel decoding model for reconstructing perceived images from brain activity. J Neurosci Methods 2024; 412:110292. [PMID: 39299579] [DOI: 10.1016/j.jneumeth.2024.110292]
Abstract
BACKGROUND: Owing to the sparse encoding character of the human visual cortex and the scarcity of paired {image, fMRI} training samples, voxel selection is an effective means of reconstructing perceived images from fMRI. However, existing data-driven voxel selection methods have not achieved satisfactory results. NEW METHOD: Here, a novel deep reinforcement learning-guided sparse voxel (DRL-SV) decoding model is proposed to reconstruct perceived images from fMRI. We innovatively cast voxel selection as a Markov decision process (MDP), training agents to select voxels that are highly involved in specific visual encoding. RESULTS: Experimental results on two public datasets verify the effectiveness of the proposed DRL-SV, which can accurately select voxels highly involved in neural encoding, thereby improving the quality of visual image reconstruction. COMPARISON WITH EXISTING METHODS: We qualitatively and quantitatively compared our results with state-of-the-art (SOTA) methods, obtaining better reconstruction results. Compared with traditional data-driven baselines, DRL-SV yields sparser voxel selections yet better reconstruction performance. CONCLUSIONS: DRL-SV can accurately select voxels involved in visual encoding in the few-shot setting, in contrast to data-driven voxel selection methods. The proposed decoding model provides a new avenue for improving the image reconstruction quality of the primary visual cortex.
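Editor's note: the MDP framing can be made concrete with a toy sketch in which the state is the set of selected voxels, an action adds one voxel, and the reward is the gain in decoding quality from a ridge decoder. This is a one-step-greedy stand-in for the paper's DRL agent, with all names and data hypothetical.

```python
import numpy as np

def decode_quality(X, Y, voxels, lam=1.0):
    """Reward proxy: fit a ridge decoder on the chosen voxels and score it.
    X: (n_samples, n_voxels) fMRI responses, Y: (n_samples, n_pixels) stimuli."""
    if not voxels:
        return 0.0
    Xs = X[:, sorted(voxels)]
    W = np.linalg.solve(Xs.T @ Xs + lam * np.eye(Xs.shape[1]), Xs.T @ Y)
    return float(np.corrcoef((Xs @ W).ravel(), Y.ravel())[0, 1])

def greedy_voxel_selection(X, Y, budget=20):
    """One-step-greedy stand-in for the MDP policy: repeatedly take the
    action (voxel) with the largest immediate reward (quality gain)."""
    state, score = set(), 0.0
    for _ in range(budget):
        gains = {v: decode_quality(X, Y, state | {v}) - score
                 for v in range(X.shape[1]) if v not in state}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no voxel improves decoding; stop adding
        state.add(best)
        score += gains[best]
    return sorted(state)

X = np.random.randn(200, 50)             # toy fMRI responses
Y = X[:, :5] @ np.random.randn(5, 16)    # toy stimuli driven by 5 voxels
print(greedy_voxel_selection(X, Y, budget=10))
```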
Affiliations
- Xu Yin: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Zhengping Wu: School of Innovations, Sanjiang University, China; School of Electronic Science and Engineering, Nanjing University, China
- Haixian Wang: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China

3. Stringer C, Pachitariu M. Analysis methods for large-scale neuronal recordings. Science 2024; 386:eadp7429. [PMID: 39509504] [DOI: 10.1126/science.adp7429]
Abstract
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
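Editor's note: as one concrete instance at the "mathematically simple" end of the methods this review surveys, the sketch below applies PCA (via SVD) to a toy neurons-by-time rate matrix to recover a shared low-dimensional dynamic; the data are synthetic and illustrative only.

```python
import numpy as np

# Toy population recording: n_neurons x n_timepoints firing rates with one
# shared latent dynamic plus private noise.
rng = np.random.default_rng(0)
latent = np.sin(np.linspace(0, 8 * np.pi, 500))            # shared 1-D dynamic
loadings = rng.normal(size=300)                            # per-neuron weights
rates = np.outer(loadings, latent) + 0.5 * rng.normal(size=(300, 500))

# PCA via SVD of the mean-centered matrix: rows of Vt are temporal components.
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
print(f"variance explained by PC1: {explained[0]:.2f}")    # dominated by the shared dynamic
```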
Affiliations
- Carsen Stringer: Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA
- Marius Pachitariu: Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA

4. Shah NP, Phillips AJ, Madugula S, Lotlikar A, Gogliettino AR, Hays MR, Grosberg L, Brown J, Dusi A, Tandon P, Hottowy P, Dabrowski W, Sher A, Litke AM, Mitra S, Chichilnisky EJ. Precise control of neural activity using dynamically optimized electrical stimulation. eLife 2024; 13:e83424. [PMID: 39508555] [PMCID: PMC11542921] [DOI: 10.7554/elife.83424]
Abstract
Neural implants have the potential to restore lost sensory function by electrically evoking the complex naturalistic activity patterns of neural populations. However, it can be difficult to predict and control evoked neural responses to simultaneous multi-electrode stimulation due to the nonlinearity of the responses. We present a solution to this problem and demonstrate its utility in the context of a bidirectional retinal implant for restoring vision. A dynamically optimized stimulation approach encodes incoming visual stimuli into a rapid, greedily chosen, temporally dithered and spatially multiplexed sequence of simple stimulation patterns. Stimuli are selected to optimize the reconstruction of the visual stimulus from the evoked responses. Temporal dithering exploits the slow time scales of downstream neural processing, and spatial multiplexing exploits the independence of responses generated by distant electrodes. The approach was evaluated using an experimental laboratory prototype of a retinal implant: large-scale, high-resolution multi-electrode stimulation and recording of macaque and rat retinal ganglion cells ex vivo. The dynamically optimized stimulation approach substantially enhanced performance compared to existing approaches based on a static mapping between visual stimulus intensity and current amplitude. The modular framework enabled parallel extensions to naturalistic viewing conditions, incorporation of perceptual similarity measures, and efficient implementation for an implantable device. A direct closed-loop test of the approach supported its potential use in vision restoration.
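Editor's note: a minimal sketch of the greedy, temporally dithered selection idea, under strong simplifying assumptions (linear reconstruction, deterministic expected responses, additivity across time steps); this is not the authors' implementation. At each step, the pattern whose expected evoked response most reduces the remaining reconstruction error is chosen.

```python
import numpy as np

def greedy_stimulation(target, response_dict, recon_filters, n_steps=100):
    """target: (n_pixels,) visual stimulus to encode.
    response_dict: (n_patterns, n_cells) expected spikes per simple pattern.
    recon_filters: (n_cells, n_pixels) linear reconstruction filters.
    Returns the dithered sequence of chosen pattern indices."""
    deltas = response_dict @ recon_filters   # (n_patterns, n_pixels) recon increments
    recon = np.zeros_like(target)
    sequence = []
    for _ in range(n_steps):
        # Remaining error if each candidate pattern were delivered next.
        errors = np.sum((target - (recon + deltas)) ** 2, axis=1)
        best = int(np.argmin(errors))
        if errors[best] >= np.sum((target - recon) ** 2):
            break  # no remaining pattern improves the reconstruction
        recon += deltas[best]
        sequence.append(best)
    return sequence, recon

rng = np.random.default_rng(1)
filters = rng.normal(size=(30, 256))          # toy cell-to-pixel filters
patterns = np.abs(rng.normal(size=(50, 30)))  # toy expected responses per pattern
target = rng.normal(size=256)
seq, recon = greedy_stimulation(target, patterns, filters)
print(len(seq), "patterns chosen")
```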
Affiliations
- Nishal Pradeepbhai Shah: Department of Electrical Engineering, Stanford University, Stanford, United States; Department of Neurosurgery, Stanford University, Stanford, United States; Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- AJ Phillips: Department of Electrical Engineering, Stanford University, Stanford, United States; Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- Sasidhar Madugula: Department of Neurosurgery, Stanford University, Stanford, United States; Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- Alex R Gogliettino: Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States; Neurosciences PhD Program, Stanford University, Stanford, United States
- Madeline Rose Hays: Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States; Department of Bioengineering, Stanford University, Stanford, United States
- Lauren Grosberg: Department of Neurosurgery, Stanford University, Stanford, United States; Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- Jeff Brown: Department of Electrical Engineering, Stanford University, Stanford, United States
- Aditya Dusi: Department of Electrical Engineering, Stanford University, Stanford, United States
- Pulkit Tandon: Department of Electrical Engineering, Stanford University, Stanford, United States
- Pawel Hottowy: Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland
- Wladyslaw Dabrowski: Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland
- Alexander Sher: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, United States
- Alan M Litke: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, United States
- EJ Chichilnisky: Department of Neurosurgery, Stanford University, Stanford, United States; Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States; Department of Ophthalmology, Stanford University, Stanford, United States

5. Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky EJ. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. Nat Commun 2024; 15:7964. [PMID: 39261491] [PMCID: PMC11390888] [DOI: 10.1038/s41467-024-52304-7]
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
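Editor's note: the Bayesian method pairs a response likelihood with an image prior carried implicitly by a denoiser (a "plug-and-play" construction). Below is a hedged toy sketch of that alternation with a linear-Gaussian encoder and a smoothing filter standing in for the trained denoising network; none of the models or parameters are from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_pix = 40, 256
A = rng.normal(size=(n_cells, n_pix)) / np.sqrt(n_pix)  # toy linear encoding model

def denoiser(img, strength=0.2):
    """Stand-in prior step: shrink toward a local average. A real system would
    apply a trained denoising network here."""
    smooth = img.reshape(16, 16)
    smooth = (smooth + np.roll(smooth, 1, 0) + np.roll(smooth, -1, 0)
              + np.roll(smooth, 1, 1) + np.roll(smooth, -1, 1)) / 5.0
    return (1 - strength) * img + strength * smooth.ravel()

def map_reconstruct(spikes, n_iters=200, step=0.05):
    """Alternate a likelihood gradient step (Gaussian surrogate for an RGC
    response likelihood) with a denoiser step standing in for the image prior."""
    img = np.zeros(n_pix)
    for _ in range(n_iters):
        img -= step * A.T @ (A @ img - spikes)  # data-fidelity gradient step
        img = denoiser(img)                     # plug-and-play prior step
    return img

true_img = rng.normal(size=n_pix)
spikes = A @ true_img + 0.1 * rng.normal(size=n_cells)
rec = map_reconstruct(spikes)
print(f"corr(true, recon) = {np.corrcoef(true_img, rec)[0, 1]:.2f}")
```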
Affiliations
- Eric G Wu: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nora Brackbill: Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades: Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling: Department of Neurosurgery, Stanford University, Stanford, CA, USA; Department of Ophthalmology, Stanford University, Stanford, CA, USA; Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA
- Alex R Gogliettino: Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA; Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P Shah: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M Litke: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P Simoncelli: Flatiron Institute, Simons Foundation, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA; Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E J Chichilnisky: Department of Neurosurgery, Stanford University, Stanford, CA, USA; Department of Ophthalmology, Stanford University, Stanford, CA, USA; Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, CA 94305, USA

6. Chen Y, Beech P, Yin Z, Jia S, Zhang J, Yu Z, Liu JK. Decoding dynamic visual scenes across the brain hierarchy. PLoS Comput Biol 2024; 20:e1012297. [PMID: 39093861] [PMCID: PMC11324145] [DOI: 10.1371/journal.pcbi.1012297]
Abstract
Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a crucial endeavor in neuroscience. Central to this pursuit is the exploration of how the brain represents visual information across its hierarchical architecture. A prominent challenge resides in discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research efforts have been made to characterize individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli, as they traverse this hierarchical landscape, remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and utilize the capabilities of deep learning neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our decoding model adeptly deciphers visual scenes from the neural spiking patterns exhibited within each distinct brain area. Comparing decoding performance across regions reveals notably strong encoding in the visual cortex and subcortical nuclei, in contrast to relatively weak encoding in hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge of visual coding obtained with artificial visual stimuli and illuminate the functional role of these deeper brain regions under dynamic stimuli. Consequently, our results suggest decoding neural network models as a metric for quantifying how well neural responses in a region encode dynamic natural visual scenes, thereby advancing our comprehension of visual coding within the complex hierarchy of the brain.
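Editor's note: the reported relationship between decoding metrics and hierarchy indexes is a rank correlation over per-area scores; the sketch below shows that computation with made-up numbers (the area list and values are illustrative, not the paper's results).

```python
import numpy as np
from scipy.stats import spearmanr

# Toy per-area decoding scores and an anatomical hierarchy index.
areas = ["V1", "LM", "AL", "PM", "AM", "CA1"]
decoding_score = np.array([0.82, 0.74, 0.71, 0.66, 0.63, 0.41])
hierarchy_index = np.array([0.00, 0.13, 0.25, 0.35, 0.47, 0.90])

# Rank correlation: decoding quality falls as hierarchy level rises.
rho, p = spearmanr(decoding_score, hierarchy_index)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```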
Affiliations
- Ye Chen: School of Computer Science, Peking University, Beijing, China; Institute for Artificial Intelligence, Peking University, Beijing, China
- Peter Beech: School of Computing, University of Leeds, Leeds, United Kingdom
- Ziwei Yin: School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Shanshan Jia: School of Computer Science, Peking University, Beijing, China; Institute for Artificial Intelligence, Peking University, Beijing, China
- Jiayi Zhang: Institutes of Brain Science, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai, China
- Zhaofei Yu: School of Computer Science, Peking University, Beijing, China; Institute for Artificial Intelligence, Peking University, Beijing, China
- Jian K. Liu: School of Computing, University of Leeds, Leeds, United Kingdom; School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom

7. Gogliettino AR, Cooler S, Vilkhu RS, Brackbill NJ, Rhoades C, Wu EG, Kling A, Sher A, Litke AM, Chichilnisky EJ. Modeling responses of macaque and human retinal ganglion cells to natural images using a convolutional neural network. bioRxiv 2024:2024.03.22.586353 [Preprint]. [PMID: 38585930] [PMCID: PMC10996505] [DOI: 10.1101/2024.03.22.586353]
Abstract
Linear-nonlinear (LN) cascade models provide a simple way to capture retinal ganglion cell (RGC) responses to artificial stimuli such as white noise, but their ability to model responses to natural images is limited. Recently, convolutional neural network (CNN) models have been shown to produce light response predictions that are substantially more accurate than those of an LN model. However, this modeling approach has not yet been applied to responses of macaque or human RGCs to natural images. Here, we train and test a CNN model on responses to natural images of the four numerically dominant RGC types in the macaque and human retina - ON parasol, OFF parasol, ON midget and OFF midget cells. Compared with the LN model, the CNN model provided substantially more accurate response predictions. Linear reconstructions of the visual stimulus from CNN model-generated responses were more accurate than those from LN model-generated responses, relative to reconstructions obtained from the recorded data. These findings demonstrate the effectiveness of a CNN model in capturing light responses of major RGC types in the macaque and human retinas in natural conditions.
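Editor's note: the LN cascade baseline mentioned above can be written in a few lines: a single linear filter applied to each stimulus frame, followed by a static nonlinearity. The sketch below uses a Gaussian filter and a softplus nonlinearity as placeholder choices; the paper's CNN replaces the single filter with learned convolutional layers.

```python
import numpy as np

def ln_response(stimulus, spatial_filter,
                nonlinearity=lambda g: np.log1p(np.exp(g))):
    """Linear-nonlinear cascade: project each frame onto one linear filter,
    then pass the generator signal through a static nonlinearity (softplus
    here, a common choice) to get a non-negative firing rate."""
    generator = stimulus.reshape(stimulus.shape[0], -1) @ spatial_filter.ravel()
    return nonlinearity(generator)

rng = np.random.default_rng(3)
frames = rng.normal(size=(1000, 20, 20))        # toy stimulus frames
rf = np.exp(-((np.arange(20) - 10) ** 2) / 20)  # 1-D Gaussian profile
spatial_filter = np.outer(rf, rf)               # separable Gaussian receptive field
rates = ln_response(frames, spatial_filter)
print(rates.shape, bool(rates.min() >= 0))
```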

8. Gogliettino AR, Madugula SS, Grosberg LE, Vilkhu RS, Brown J, Nguyen H, Kling A, Hottowy P, Dąbrowski W, Sher A, Litke AM, Chichilnisky EJ. High-Fidelity Reproduction of Visual Signals by Electrical Stimulation in the Central Primate Retina. J Neurosci 2023; 43:4625-4641. [PMID: 37188516] [PMCID: PMC10286946] [DOI: 10.1523/jneurosci.1091-22.2023]
Abstract
Electrical stimulation of retinal ganglion cells (RGCs) with electronic implants provides rudimentary artificial vision to people blinded by retinal degeneration. However, current devices stimulate indiscriminately and therefore cannot reproduce the intricate neural code of the retina. Recent work has demonstrated more precise activation of RGCs using focal electrical stimulation with multielectrode arrays in the peripheral macaque retina, but it is unclear how effective this can be in the central retina, which is required for high-resolution vision. This work probes the neural code and effectiveness of focal epiretinal stimulation in the central macaque retina, using large-scale electrical recording and stimulation ex vivo. The functional organization, light response properties, and electrical properties of the major RGC types in the central retina were mostly similar to those in the peripheral retina, with some notable differences in density, kinetics, linearity, spiking statistics, and correlations. The major RGC types could be distinguished by their intrinsic electrical properties. Electrical stimulation targeting parasol cells revealed similar activation thresholds and reduced axon bundle activation in the central retina, but lower stimulation selectivity. Quantitative evaluation of the potential for image reconstruction from electrically evoked parasol cell signals revealed higher overall expected image quality in the central retina. An exploration of inadvertent midget cell activation suggested that it could contribute high spatial frequency noise to the visual signal carried by parasol cells. These results support the possibility of reproducing high-acuity visual signals in the central retina with an epiretinal implant.
SIGNIFICANCE STATEMENT: Artificial restoration of vision with retinal implants is a major treatment for blindness. However, present-day implants do not provide high-resolution visual perception, in part because they do not reproduce the natural neural code of the retina. Here, we demonstrate the level of visual signal reproduction that is possible with a future implant by examining how accurately responses to electrical stimulation of parasol retinal ganglion cells can convey visual signals. Although the precision of electrical stimulation in the central retina was diminished relative to the peripheral retina, the quality of expected visual signal reconstruction in parasol cells was greater. These findings suggest that visual signals could be restored with high fidelity in the central retina using a future retinal implant.
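Editor's note: the "expected image quality" evaluation rests on linear reconstruction of images from evoked responses. Below is a minimal ridge-regression sketch of fitting such reconstruction filters on toy data; it illustrates the general technique, not the authors' pipeline.

```python
import numpy as np

def fit_reconstruction_filters(responses, images, lam=1.0):
    """Ridge-regularized reconstruction filters W mapping evoked responses to
    pixels: minimize ||images - responses @ W||^2 + lam * ||W||^2."""
    R = responses
    return np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ images)

rng = np.random.default_rng(4)
images = rng.normal(size=(500, 100))                     # toy training images
responses = images @ rng.normal(size=(100, 25)) \
    + 0.3 * rng.normal(size=(500, 25))                   # toy evoked responses
W = fit_reconstruction_filters(responses, images)
recon = responses @ W                                    # reconstruct from responses
print(f"reconstruction corr: "
      f"{np.corrcoef(recon.ravel(), images.ravel())[0, 1]:.2f}")
```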
Affiliations
- Alex R Gogliettino: Neurosciences PhD Program, Stanford University, Stanford, California 94305; Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Sasidhar S Madugula: Neurosciences PhD Program, Stanford University, Stanford, California 94305; Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Stanford School of Medicine, Stanford University, Stanford, California 94305
- Lauren E Grosberg: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Department of Neurosurgery, Stanford University, Stanford, California 94305
- Ramandeep S Vilkhu: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Jeff Brown: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Department of Neurosurgery, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Huy Nguyen: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Alexandra Kling: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Department of Neurosurgery, Stanford University, Stanford, California 94305
- Paweł Hottowy: Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
- Władysław Dąbrowski: Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
- Alexander Sher: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- Alan M Litke: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- E J Chichilnisky: Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305; Department of Neurosurgery, Stanford University, Stanford, California 94305; Department of Electrical Engineering, Stanford University, Stanford, California 94305; Department of Ophthalmology, Stanford University, Stanford, California 94305

9. Caravaca-Rodriguez D, Gaytan SP, Suaning GJ, Barriga-Rivera A. Implications of Neural Plasticity in Retinal Prosthesis. Invest Ophthalmol Vis Sci 2022; 63:11. [PMID: 36251317] [DOI: 10.1167/iovs.63.11.11]
Abstract
Retinal degenerative diseases such as retinitis pigmentosa cause a progressive loss of photoreceptors that eventually prevents the affected person from perceiving visual sensations. The absence of visual input produces a neural rewiring cascade that propagates along the visual system. This remodeling occurs first within the retina. Subsequent neuroplastic changes then take place at higher visual centers in the brain, produced either by the abnormal neural encoding of visual inputs delivered by the diseased retina or by adaptation to visual deprivation. While retinal implants can activate the surviving retinal neurons by delivering electric current, the unselective activation patterns of the different neural populations in the retinal layers differ substantially from those of physiologic vision. Therefore, artificially induced neural patterns are being delivered to a brain that has already undergone important neural reconnections. Whether or not the modulation of this neural rewiring can improve the performance of retinal prostheses remains a critical question whose answer may enable improved functional artificial vision and more personalized neurorehabilitation strategies.
Affiliations
- Daniel Caravaca-Rodriguez: Department of Applied Physics III, Technical School of Engineering, Universidad de Sevilla, Sevilla, Spain
- Susana P Gaytan: Department of Physiology, Universidad de Sevilla, Sevilla, Spain
- Gregg J Suaning: School of Biomedical Engineering, University of Sydney, Sydney, Australia
- Alejandro Barriga-Rivera: Department of Applied Physics III, Technical School of Engineering, Universidad de Sevilla, Sevilla, Spain; School of Biomedical Engineering, University of Sydney, Sydney, Australia

10. Zhang YJ, Yu ZF, Liu JK, Huang TJ. Neural Decoding of Visual Information Across Different Neural Recording Modalities and Approaches. Machine Intelligence Research 2022. [PMCID: PMC9283560] [DOI: 10.1007/s11633-022-1335-2]
Abstract
Vision plays a peculiar role in intelligence. Visual information, forming a large part of sensory information, is fed into the human brain to formulate the various types of cognition and behaviour that make humans intelligent agents. Recent advances have led to the development of brain-inspired algorithms and models for machine vision. One of the key components of these methods is the utilization of the computational principles underlying biological neurons. Additionally, advanced experimental neuroscience techniques have generated different types of neural signals that carry essential visual information. Thus, there is a high demand for mapping out functional models for reading out visual information from neural signals. Here, we briefly review recent progress on this issue with a focus on how machine learning techniques can help in the development of models for contending with various types of neural signals, from fine-scale neural spikes and single-cell calcium imaging to coarse-scale electroencephalography (EEG) and functional magnetic resonance imaging recordings of brain signals.

11. Karamanlis D, Schreyer HM, Gollisch T. Retinal Encoding of Natural Scenes. Annu Rev Vis Sci 2022; 8:171-193.
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes.
Affiliations
- Dimokratis Karamanlis: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch: Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany

12. Zhang Y, Bu T, Zhang J, Tang S, Yu Z, Liu JK, Huang T. Decoding Pixel-Level Image Features from Two-Photon Calcium Signals of Macaque Visual Cortex. Neural Comput 2022; 34:1369-1397. [PMID: 35534008] [DOI: 10.1162/neco_a_01498]
Abstract
Images of visual scenes comprise essential features important for visual cognition in the brain. The complexity of visual features lies at different levels, from simple artificial patterns to natural images with different scenes. Much work has focused on using stimulus images to predict neural responses; however, it remains unclear how to extract visual features from neuronal responses. Here we address this question by leveraging two-photon calcium data recorded from the visual cortex of awake macaque monkeys. With stimuli including various categories of artificial patterns and diverse scenes of natural images, we employed a deep neural network decoder inspired by image segmentation techniques. Consistent with the notion of sparse coding for natural images, a few neurons with stronger responses dominated the decoding performance, whereas decoding artificial patterns required a large number of neurons. When natural images were decoded with the model pretrained on artificial patterns, salient features of natural scenes could be extracted, along with the conventional category information. Altogether, our results give a new perspective on studying neural encoding principles using reverse-engineering decoding strategies.
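Editor's note: the sparse-coding observation (a few strongly responding neurons dominate natural-image decoding) can be probed with a toy top-k mask like the sketch below; the linear readout is hypothetical and stands in for the paper's segmentation-style network.

```python
import numpy as np

def decode_with_top_k(responses, decoder_weights, k):
    """Keep only the k most strongly responding neurons per stimulus and
    decode from them, zeroing the rest -- a toy probe of whether a few
    strong cells carry most of the decodable information."""
    idx = np.argsort(np.abs(responses), axis=1)[:, -k:]  # top-k per stimulus
    mask = np.zeros_like(responses)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return (responses * mask) @ decoder_weights

rng = np.random.default_rng(5)
resp = rng.normal(size=(100, 400)) ** 3   # heavy-tailed toy "calcium" responses
W = rng.normal(size=(400, 64))            # hypothetical linear readout to pixels
for k in (10, 50, 400):                   # compare sparse vs full decoding
    rec = decode_with_top_k(resp, W, k)
    print(k, rec.shape)
```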
Affiliations
- Yijun Zhang: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, P.R.C.; Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
- Tong Bu: Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
- Jiyuan Zhang: Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
- Shiming Tang: School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, P.R.C.
- Zhaofei Yu: Department of Computer Science and Technology and Institute for Artificial Intelligence, Peking University, Beijing 100871, P.R.C.
- Jian K Liu: School of Computing, University of Leeds, Leeds LS2 9JT, U.K.
- Tiejun Huang: Department of Computer Science and Technology and Institute for Artificial Intelligence, Peking University, Beijing 100871, P.R.C.; Beijing Academy of Artificial Intelligence, Beijing 100190, P.R.C.

13. Li W, Joseph Raj AN, Tjahjadi T, Zhuang Z. Fusion of ANNs as decoder of retinal spike trains for scene reconstruction. Appl Intell 2022. [DOI: 10.1007/s10489-022-03402-w]