1. Shahidi N, Rozenblit F, Khani MH, Schreyer HM, Mietsch M, Protti DA, Gollisch T. Filter-based models of suppression in retinal ganglion cells: Comparison and generalization across species and stimuli. PLoS Comput Biol 2025; 21:e1013031. PMID: 40315420. DOI: 10.1371/journal.pcbi.1013031.
Abstract
The dichotomy of excitation and suppression is one of the canonical mechanisms explaining the complexity of neural activity. Computational models of the interplay of excitation and suppression in single neurons aim at investigating how this interaction affects a neuron's spiking responses and shapes the encoding of sensory stimuli. Here, we compare the performance of three filter-based stimulus-encoding models for predicting retinal ganglion cell responses recorded from axolotl, mouse, and marmoset retina to different types of temporally varying visual stimuli. Suppression in these models is implemented via subtractive or divisive interactions of stimulus filters or by a response-driven feedback module. For the majority of ganglion cells, the subtractive and divisive models perform similarly and outperform the feedback model as well as a linear-nonlinear (LN) model with no suppression. Comparison between the subtractive and the divisive model depends on cell type, species, and stimulus components, with the divisive model generalizing best across temporal stimulus frequencies and visual contrast and the subtractive model capturing in particular responses for slow temporal stimulus dynamics and for slow axolotl cells. Overall, we conclude that the divisive and subtractive models are well suited for capturing interactions of excitation and suppression in ganglion cells and perform best for different temporal regimes of these interactions.
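For orientation, here is a minimal sketch of the three model families compared in this study (LN, subtractive suppression, divisive suppression). The filter shapes, the softplus nonlinearity, and the weighting of the suppressive signal are illustrative placeholders, not the fitted models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(5000)                 # temporally varying contrast stimulus

t = np.arange(40)
k_exc = np.exp(-t / 5.0) - 0.6 * np.exp(-t / 10.0)   # biphasic "excitatory" temporal filter
k_sup = np.exp(-t / 15.0)                            # slower "suppressive" temporal filter

def softplus(x):
    return np.logaddexp(0.0, x)                      # smooth rectifying nonlinearity

g_exc = np.convolve(stimulus, k_exc)[:stimulus.size]  # filtered stimulus (excitatory drive)
g_sup = np.convolve(stimulus, k_sup)[:stimulus.size]  # filtered stimulus (suppressive drive)

rate_ln  = softplus(g_exc)                                   # LN model: no suppression
rate_sub = softplus(g_exc - 0.5 * softplus(g_sup))           # subtractive interaction
rate_div = softplus(g_exc) / (1.0 + 0.5 * softplus(g_sup))   # divisive interaction
```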
Affiliation(s)
- Neda Shahidi
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Georg-Elias-Müller-Institute for Psychology, Georg-August-Universität Göttingen, Göttingen, Germany
- Cognitive Neuroscience Lab, German Primate Center, Göttingen, Germany
- Fernando Rozenblit
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Mohammad H Khani
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Helene M Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Matthias Mietsch
- Laboratory Animal Science Unit, German Primate Center, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Dario A Protti
- School of Medical Sciences (Neuroscience), The University of Sydney, Sydney, New South Wales, Australia
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany
2. Gao Z, Ling Z, Liu W, Han K, Zhang H, Hua X, Botchwey EA, Jia S. Fluorescence microscopy through scattering media with robust matrix factorization. Cell Rep Methods 2025:101031. PMID: 40300606. DOI: 10.1016/j.crmeth.2025.101031.
Abstract
Biological tissues, as natural scattering media, inherently disrupt structural information, presenting significant challenges for optical imaging. Complex light propagation through tissue severely degrades image quality, limiting conventional fluorescence imaging techniques to superficial depths. Extracting meaningful information from random speckle patterns is, therefore, critical for deeper tissue imaging. In this study, we present RNP (robust non-negative principal matrix factorization), an approach that enables fluorescence microscopy under diverse scattering conditions. By integrating robust feature extraction with non-negativity constraints, RNP effectively addresses challenges posed by non-sparse signals and background interference in scattering tissue environments. The framework operates on a standard epi-fluorescence platform, eliminating the need for complex instrumentation or precise alignment. The results from imaging scattered cells and tissues demonstrate substantial improvements in robustness, field of view, depth of field, and image clarity. We anticipate that RNP will become a valuable tool for overcoming scattering challenges in fluorescence microscopy and driving advancements in biomedical research.
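To make the factorization idea concrete, the toy script below runs plain multiplicative-update NMF on a pixels-by-frames matrix. The RNP method described above additionally builds in robustness to non-sparse signals and background interference, which this sketch does not attempt; the data, mixing matrix, and rank are made up.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor a non-negative (pixels x frames) matrix V as W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):                     # Lee-Seung multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy data: two fluctuating sources seen through a random, scattering-like mixing matrix
rng = np.random.default_rng(1)
sources = rng.random((2, 300))                  # source time courses
footprints = rng.random((64 * 64, 2))           # speckle-like spatial footprints
frames = footprints @ sources + 0.01 * rng.random((64 * 64, 300))
W, H = nmf(frames, rank=2)                      # W: spatial components, H: temporal components
```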
Affiliation(s)
- Zijun Gao
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA; Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA; School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Zhi Ling
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA; Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA; George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wenhao Liu
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Keyi Han
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Hongmanlin Zhang
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA; Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA; School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Xuanwen Hua
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
- Edward A Botchwey
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA; Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Shu Jia
- Laboratory for Systems Biophotonics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA; Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA.
3. Tanaka H, Isoda M, Noritake A. Subspace analysis identifies low-dimensional interactions between cortical and subcortical brain regions in social reward computation. Neurosci Res 2025; 213:146-155. PMID: 39947349. DOI: 10.1016/j.neures.2025.02.005.
Abstract
Animals living in social environments evaluate rewards given to themselves and others. We previously showed that single-unit activities in cortical (the medial prefrontal cortex, MPFC) and subcortical (the dopaminergic midbrain nuclei, DA, and the lateral hypothalamus, LH) regions reflect social reward computation. Extending the single-neuron analyses, this study employs matrix and tensor decomposition methods to characterize population activity within single regions and interactions between pairs of regions. First, we determined the dimensionality of population activity and corresponding components in a single brain region. The dimensions of MPFC and LH were comparable, indicating similarities in the population activities of the two regions. In contrast, the dimensions of DA were considerably smaller, indicating that the activities were idiosyncratic. Further, "subspaces" shared between MPFC and DA, between MPFC and LH, and between LH and DA were identified. We found that a few components in MPFC and LH explained a large portion of population activities in DA, indicating that the neural computation of social rewards resides in a small subspace. Our findings demonstrate that a limited number of neural components within cortico-subcortical circuits regulate the social monitoring of rewards to oneself and others.
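The idea of a low-dimensional subspace shared between two recorded populations can be illustrated with reduced-rank regression on synthetic data, as sketched below. The study itself uses matrix and tensor decomposition methods, so this is only a conceptual stand-in with invented dimensions and noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_mpfc, n_da, true_rank = 1000, 40, 15, 3
latent = rng.standard_normal((T, true_rank))          # shared low-dimensional signal
X = latent @ rng.standard_normal((true_rank, n_mpfc)) + 0.5 * rng.standard_normal((T, n_mpfc))
Y = latent @ rng.standard_normal((true_rank, n_da)) + 0.5 * rng.standard_normal((T, n_da))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]          # full-rank regression "MPFC" -> "DA"
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)        # dimensionality of the predictive subspace
rank = int(np.searchsorted(var_explained, 0.9) + 1)   # components needed for 90% of the fit
B_rrr = B_ols @ Vt[:rank].T @ Vt[:rank]               # mapping constrained to that subspace
```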
Affiliation(s)
- Hirokazu Tanaka
- Faculty of Information Technology, Tokyo City University, Tokyo, Japan.
- Masaki Isoda
- Division of Behavioral Development, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Japan
- Atsushi Noritake
- Division of Behavioral Development, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Japan.
4. Yu Z, Bu T, Zhang Y, Jia S, Huang T, Liu JK. Robust Decoding of Rich Dynamical Visual Scenes With Retinal Spikes. IEEE Trans Neural Netw Learn Syst 2025; 36:3396-3409. PMID: 38265909. DOI: 10.1109/tnnls.2024.3351120.
Abstract
Sensory information transmitted to the brain activates neurons to create a series of coping behaviors. Understanding the mechanisms of neural computation and reverse engineering the brain to build intelligent machines requires establishing a robust relationship between stimuli and neural responses. Neural decoding aims to reconstruct the original stimuli that trigger neural responses. With the recent upsurge of artificial intelligence, neural decoding provides an insightful perspective for designing novel brain-machine interface algorithms. For humans, vision is the dominant contributor to the interaction between the external environment and the brain. In this study, using retinal spike data collected over multiple trials in response to two movies with different levels of scene complexity, we applied a neural-network decoder and quantified the decoded visual stimuli with six image-quality-assessment metrics, establishing a comprehensive inspection of decoding. By systematically examining the effects of single versus multiple trials of data, different levels of noise in the spikes, and blurred images, our results provide an in-depth investigation of decoding dynamical visual scenes from retinal spikes. These results offer insights into the neural coding of visual scenes and serve as a guideline for designing next-generation decoding algorithms for neuroprostheses and other brain-machine interface devices.
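Two of the standard full-reference image-quality metrics used to compare decoded frames with the original stimuli can be computed as below; the study evaluates six such metrics, of which only MSE and PSNR are shown here, on made-up images.

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)                          # mean squared error

def psnr(a, b, data_range=1.0):
    return 10.0 * np.log10(data_range ** 2 / mse(a, b))   # peak signal-to-noise ratio (dB)

rng = np.random.default_rng(0)
original = rng.random((64, 64))                           # stand-in stimulus frame
decoded = np.clip(original + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(f"MSE={mse(original, decoded):.4f}, PSNR={psnr(original, decoded):.1f} dB")
```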
5. Karamanlis D, Khani MH, Schreyer HM, Zapp SJ, Mietsch M, Gollisch T. Nonlinear receptive fields evoke redundant retinal coding of natural scenes. Nature 2025; 637:394-401. PMID: 39567692. PMCID: PMC11711096. DOI: 10.1038/s41586-024-08212-3.
Abstract
The role of the vertebrate retina in early vision is generally described by the efficient coding hypothesis [1,2], which predicts that the retina reduces the redundancy inherent in natural scenes [3] by discarding spatiotemporal correlations while preserving stimulus information [4]. It is unclear, however, whether the predicted decorrelation and redundancy reduction in the activity of ganglion cells, the retina's output neurons, hold under gaze shifts, which dominate the dynamics of the natural visual input [5]. We show here that species-specific gaze patterns in natural stimuli can drive correlated spiking responses both in and across distinct types of ganglion cells in marmoset as well as mouse retina. These concerted responses disrupt redundancy reduction to signal fixation periods with locally high spatial contrast. Model-based analyses of ganglion cell responses to natural stimuli show that the observed response correlations follow from nonlinear pooling of ganglion cell inputs. Our results indicate cell-type-specific deviations from efficient coding in retinal processing of natural gaze shifts.
Affiliation(s)
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
- University of Geneva, Department of Basic Neurosciences, Geneva, Switzerland.
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany.
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany.
6. Höfling L, Szatko KP, Behrens C, Deng Y, Qiu Y, Klindt DA, Jessen Z, Schwartz GW, Bethge M, Berens P, Franke K, Ecker AS, Euler T. A chromatic feature detector in the retina signals visual context changes. eLife 2024; 13:e86860. PMID: 39365730. PMCID: PMC11452179. DOI: 10.7554/elife.86860.
Abstract
The retina transforms patterns of light into visual feature representations supporting behaviour. These representations are distributed across various types of retinal ganglion cells (RGCs), whose spatial and temporal tuning properties have been studied extensively in many model organisms, including the mouse. However, it has been difficult to link the potentially nonlinear retinal transformations of natural visual inputs to specific ethological purposes. Here, we discover a nonlinear selectivity to chromatic contrast in an RGC type that allows the detection of changes in visual context. We trained a convolutional neural network (CNN) model on large-scale functional recordings of RGC responses to natural mouse movies, and then used this model to search in silico for stimuli that maximally excite distinct types of RGCs. This procedure predicted centre colour opponency in transient suppressed-by-contrast (tSbC) RGCs, a cell type whose function is being debated. We confirmed experimentally that these cells indeed responded very selectively to Green-OFF, UV-ON contrasts. This type of chromatic contrast was characteristic of transitions from ground to sky in the visual scene, as might be elicited by head or eye movements across the horizon. Because tSbC cells performed best among all RGC types at reliably detecting these transitions, we suggest a role for this RGC type in providing contextual information (i.e. sky or ground) necessary for the selection of appropriate behavioural responses to other stimuli, such as looming objects. Our work showcases how a combination of experiments with natural stimuli and computational modelling allows discovering novel types of stimulus selectivity and identifying their potential ethological relevance.
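The in silico search for maximally exciting inputs amounts to gradient ascent on the stimulus fed into the fitted model. The toy example below does this for a hand-made colour-opponent LN unit with an analytic gradient, whereas the study optimizes the inputs of a CNN trained on recorded responses; all filters and step sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
green_rf = rng.standard_normal((18, 16))        # toy green-channel receptive field
uv_rf = -green_rf                               # colour-opponent UV channel by construction

def response(stim_green, stim_uv):
    drive = np.sum(green_rf * stim_green) + np.sum(uv_rf * stim_uv)
    return np.logaddexp(0.0, drive)             # softplus output nonlinearity

stim_g = 0.01 * rng.standard_normal((18, 16))   # start from a small random stimulus
stim_u = 0.01 * rng.standard_normal((18, 16))
for _ in range(200):                            # gradient ascent on the stimulus itself
    drive = np.sum(green_rf * stim_g) + np.sum(uv_rf * stim_u)
    slope = 1.0 / (1.0 + np.exp(-drive))        # derivative of softplus w.r.t. the drive
    stim_g += 0.1 * slope * green_rf
    stim_u += 0.1 * slope * uv_rf
    norm = np.sqrt(np.sum(stim_g**2) + np.sum(stim_u**2))
    if norm > 1.0:                              # norm constraint keeps the optimum bounded
        stim_g, stim_u = stim_g / norm, stim_u / norm

mei_response = response(stim_g, stim_u)         # response to the most exciting input found
```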
Affiliation(s)
- Larissa Höfling
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Christian Behrens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Yuyao Deng
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Zachary Jessen
- Feinberg School of Medicine, Department of Ophthalmology, Northwestern University, Chicago, United States
- Gregory W Schwartz
- Feinberg School of Medicine, Department of Ophthalmology, Northwestern University, Chicago, United States
- Matthias Bethge
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Hertie Institute for AI in Brain Health, Tübingen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Alexander S Ecker
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
7. Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky EJ. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. Nat Commun 2024; 15:7964. PMID: 39261491. PMCID: PMC11390888. DOI: 10.1038/s41467-024-52304-7.
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
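A much simplified sketch of the "encoding likelihood plus image prior" recipe: alternate a data-consistency gradient step under a toy linear-Gaussian encoding model with a denoising step, here a Gaussian blur standing in for the trained denoiser network used in the paper. The encoding model, step size, and blur width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n_pix, n_cells = 32 * 32, 200
A = rng.standard_normal((n_cells, n_pix)) / np.sqrt(n_pix)   # toy linear "RGC" encoding filters
x_true = gaussian_filter(rng.standard_normal((32, 32)), 2.0).ravel()
y = A @ x_true + 0.1 * rng.standard_normal(n_cells)          # noisy simulated responses

x = np.zeros(n_pix)
step = 0.3
for _ in range(100):
    x = x + step * A.T @ (y - A @ x)                         # data-consistency (likelihood) step
    x = gaussian_filter(x.reshape(32, 32), 0.8).ravel()      # "prior" step: denoise the estimate
reconstruction = x.reshape(32, 32)
```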
Affiliation(s)
- Eric G Wu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Alex R Gogliettino
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P Simoncelli
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E J Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, CA, USA.
- Department of Ophthalmology, Stanford University, Stanford, CA, USA.
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA.
8. Krüppel S, Khani MH, Schreyer HM, Sridhar S, Ramakrishna V, Zapp SJ, Mietsch M, Karamanlis D, Gollisch T. Applying Super-Resolution and Tomography Concepts to Identify Receptive Field Subunits in the Retina. PLoS Comput Biol 2024; 20:e1012370. PMID: 39226328. PMCID: PMC11398665. DOI: 10.1371/journal.pcbi.1012370.
Abstract
Spatially nonlinear stimulus integration by retinal ganglion cells lies at the heart of various computations performed by the retina. It arises from the nonlinear transmission of signals that ganglion cells receive from bipolar cells, which thereby constitute functional subunits within a ganglion cell's receptive field. Inferring these subunits from recorded ganglion cell activity promises a new avenue for studying the functional architecture of the retina. This calls for efficient methods, which leave sufficient experimental time to leverage the acquired knowledge for further investigating identified subunits. Here, we combine concepts from super-resolution microscopy and computed tomography and introduce super-resolved tomographic reconstruction (STR) as a technique to efficiently stimulate and locate receptive field subunits. Simulations demonstrate that this approach can reliably identify subunits across a wide range of model variations, and application in recordings of primate parasol ganglion cells validates the experimental feasibility. STR can potentially reveal comprehensive subunit layouts within only a few tens of minutes of recording time, making it ideal for online analysis and closed-loop investigations of receptive field substructure in retina recordings.
Affiliation(s)
- Steffen Krüppel
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Shashwat Sridhar
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Varsha Ramakrishna
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany
9. Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky E. Fixational Eye Movements Enhance the Precision of Visual Information Transmitted by the Primate Retina. bioRxiv [Preprint] 2024:2023.08.12.552902. PMID: 37645934. PMCID: PMC10462030. DOI: 10.1101/2023.08.12.552902.
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
Affiliation(s)
- Eric G. Wu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Alex R. Gogliettino
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P. Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M. Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P. Simoncelli
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E.J. Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
10. Kobayashi R, Shinomoto S. Inference of monosynaptic connections from parallel spike trains: A review. Neurosci Res 2024:S0168-0102(24)00097-X. PMID: 39098768. DOI: 10.1016/j.neures.2024.07.006.
Abstract
This article presents a mini-review about the progress in inferring monosynaptic connections from spike trains of multiple neurons over the past twenty years. First, we explain a variety of meanings of "neuronal connectivity" in different research areas of neuroscience, such as structural connectivity, monosynaptic connectivity, and functional connectivity. Among these, we focus on the methods used to infer the monosynaptic connectivity from spike data. We then summarize the inference methods based on two main approaches, i.e., correlation-based and model-based approaches. Finally, we describe available source codes for connectivity inference and future challenges. Although inference will never be perfect, the accuracy of identifying the monosynaptic connections has improved dramatically in recent years due to continuous efforts.
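At the core of the correlation-based approach is the cross-correlogram between a putative presynaptic and postsynaptic unit; the minimal version below omits the jitter correction, significance testing, and model-based fitting that practical pipelines add, and the spike trains are simulated.

```python
import numpy as np

def cross_correlogram(spikes_pre, spikes_post, window=0.05, bin_size=0.001):
    """Histogram of postsynaptic spike times relative to each presynaptic spike (seconds)."""
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(edges.size - 1)
    for t in spikes_pre:
        lags = spikes_post - t
        counts += np.histogram(lags[(lags >= -window) & (lags <= window)], bins=edges)[0]
    return edges[:-1] + bin_size / 2, counts

rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0, 600, 3000))                   # ~5 Hz presynaptic unit, 10 min
post = np.sort(np.concatenate([
    rng.uniform(0, 600, 3000),                             # independent background firing
    pre[rng.random(pre.size) < 0.2] + 0.002,               # extra spikes ~2 ms after pre spikes
]))
lags, ccg = cross_correlogram(pre, post)                   # expect a sharp peak just after 0 lag
```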
Affiliation(s)
- Ryota Kobayashi
- Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8561, Japan; Mathematics and Informatics Center, The University of Tokyo, Tokyo 113-8656, Japan.
- Shigeru Shinomoto
- Graduate School of Biostudies, Kyoto University, Kyoto 606-8501, Japan; Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Osaka 567-8570, Japan
11. Turishcheva P, Fahey PG, Vystrčilová M, Hansel L, Froebe R, Ponder K, Qiu Y, Willeke KF, Bashiri M, Baikulov R, Zhu Y, Ma L, Yu S, Huang T, Li BM, Wulf WD, Kudryashova N, Hennig MH, Rochefort NL, Onken A, Wang E, Ding Z, Tolias AS, Sinz FH, Ecker AS. Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos. arXiv [Preprint] 2024:arXiv:2407.09100v1. PMID: 39040641. PMCID: PMC11261979.
Abstract
Understanding how biological visual systems process information is challenging because of the nonlinear relationship between visual input and neuronal responses. Artificial neural networks allow computational neuroscientists to create predictive models that connect biological and machine vision. Machine learning has benefited tremendously from benchmarks that compare different models on the same task under standardized conditions. However, there was no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we established the SENSORIUM 2023 Benchmark Competition with dynamic input, featuring a new large-scale dataset from the primary visual cortex of ten mice. This dataset includes responses from 78,853 neurons to 2 hours of dynamic stimuli per neuron, together with behavioral measurements such as running speed, pupil dilation, and eye movements. The competition ranked models in two tracks based on predictive performance for neuronal responses on a held-out test set: one focusing on predicting in-domain natural stimuli and another on out-of-distribution (OOD) stimuli to assess model generalization. As part of the NeurIPS 2023 competition track, we received more than 160 model submissions from 22 teams. Several new architectures for predictive models were proposed, and the winning teams improved the previous state-of-the-art model by 50%. Access to the dataset as well as the benchmarking infrastructure will remain online at www.sensorium-competition.net.
Affiliation(s)
- Polina Turishcheva
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Paul G. Fahey
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Michaela Vystrčilová
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Laura Hansel
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Rachel Froebe
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Kayla Ponder
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Yongrong Qiu
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Konstantin F. Willeke
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, Tübingen University, Germany
- Mohammad Bashiri
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, Tübingen University, Germany
- Yu Zhu
- Institute of Automation, Chinese Academy of Sciences, China
- Beijing Academy of Artificial Intelligence, China
- Lei Ma
- Beijing Academy of Artificial Intelligence, China
- Shan Yu
- Institute of Automation, Chinese Academy of Sciences, China
- Tiejun Huang
- Beijing Academy of Artificial Intelligence, China
- Bryan M. Li
- The Alan Turing Institute, UK
- School of Informatics, University of Edinburgh, UK
- Wolf De Wulf
- School of Informatics, University of Edinburgh, UK
- Nathalie L. Rochefort
- Centre for Discovery Brain Sciences, University of Edinburgh, UK
- Simons Initiative for the Developing Brain, University of Edinburgh, UK
- Arno Onken
- School of Informatics, University of Edinburgh, UK
- Eric Wang
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Zhiwei Ding
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Andreas S. Tolias
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Department of Electrical Engineering, Stanford University, Stanford, CA, US
- Fabian H. Sinz
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas, USA
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, Tübingen University, Germany
- Alexander S Ecker
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
12. Turishcheva P, Fahey PG, Vystrčilová M, Hansel L, Froebe R, Ponder K, Qiu Y, Willeke KF, Bashiri M, Wang E, Ding Z, Tolias AS, Sinz FH, Ecker AS. The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos. arXiv [Preprint] 2024:arXiv:2305.19654v2. PMID: 37396602. PMCID: PMC10312815.
Abstract
Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of ten mice, containing responses from over 78,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
Affiliation(s)
- Polina Turishcheva
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Paul G Fahey
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Michaela Vystrčilová
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Laura Hansel
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Rachel Froebe
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Kayla Ponder
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Yongrong Qiu
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Konstantin F Willeke
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- International Max Planck Research School for Intelligent Systems, University of Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
- Mohammad Bashiri
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- International Max Planck Research School for Intelligent Systems, University of Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
- Eric Wang
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Zhiwei Ding
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, US
- Stanford Bio-X, Stanford University, Stanford, CA, US
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, US
- Department of Electrical Engineering, Stanford University, Stanford, CA, US
- Fabian H Sinz
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- International Max Planck Research School for Intelligent Systems, University of Tübingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
- Alexander S Ecker
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
13. Almasi A, Sun SH, Jung YJ, Ibbotson M, Meffin H. Data-driven modelling of visual receptive fields: comparison between the generalized quadratic model and the nonlinear input model. J Neural Eng 2024; 21:046014. PMID: 38941988. DOI: 10.1088/1741-2552/ad5d15.
Abstract
Objective: Neurons in primary visual cortex (V1) display a range of sensitivity in their response to translations of their preferred visual features within their receptive field: from high specificity to a precise position through to complete invariance. This visual feature selectivity and invariance is frequently modeled by applying a selection of linear spatial filters to the input image, which define the feature selectivity, followed by a nonlinear function that combines the filter outputs, which defines the invariance, to predict the neural response. We compare two such classes of model, both popular and parsimonious: the generalized quadratic model (GQM) and the nonlinear input model (NIM). These two classes of model differ primarily in that the NIM can accommodate a greater diversity in the form of nonlinearity that is applied to the outputs of the filters. Approach: We compare the two model types by applying them to data from multielectrode recordings from cat primary visual cortex in response to spatially white Gaussian noise. After fitting both classes of model to a database of 342 single units (SUs), we analyze the qualitative and quantitative differences in the visual feature processing performed by the two models and their ability to predict the neural response. Main results: We find that the NIM predicts response rates on held-out data at least as well as the GQM for 95% of SUs. Superior performance occurs predominantly for units with above-average spike rates and is largely due to the NIM's ability to capture aspects of the response nonlinearity that cannot be captured with the GQM, rather than to differences in the visual features being processed by the two models. Significance: These results can help guide model choice for data-driven receptive field modelling.
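In the usual notation (stimulus vector s_t, filters k_i, spiking nonlinearity F), the two model classes can be written as follows:

```latex
\begin{align*}
\text{GQM:}\quad r(t) &= F\!\Big(b + \mathbf{k}_0^\top \mathbf{s}_t
      + \sum_i w_i\,(\mathbf{k}_i^\top \mathbf{s}_t)^2\Big),\\
\text{NIM:}\quad r(t) &= F\!\Big(b + \sum_i w_i\, f_i(\mathbf{k}_i^\top \mathbf{s}_t)\Big),
      \qquad w_i \in \{+1,-1\}.
\end{align*}
```

Each upstream term is fixed to a quadratic in the GQM but is itself fitted (for example as a piecewise-linear function f_i) in the NIM, which is the extra flexibility referred to above.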
Affiliation(s)
- Ali Almasi
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Shi H Sun
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Young Jun Jung
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Michael Ibbotson
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, VIC 3010, Australia
- Hamish Meffin
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC 3010, Australia
14. Jia S, Liu JK, Yu Z. Protocol for dissecting cascade computational components in neural networks of a visual system. STAR Protoc 2023; 4:102722. PMID: 37976152. PMCID: PMC10692719. DOI: 10.1016/j.xpro.2023.102722.
Abstract
Finding the complete functional circuits of neurons is a challenging problem in brain research. Here, we present a protocol, based on visual stimuli and spikes, for obtaining the complete circuit of recorded neurons using spike-triggered nonnegative matrix factorization. We describe steps for data preprocessing, inferring the spatial receptive field of the subunits, and analyzing the module matrix. This approach identifies computational components of the feedforward network of retinal ganglion cells and dissects the network structure based on natural image stimuli. For complete details on the use and execution of this protocol, please refer to Jia et al. (2021).
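A compact sketch of the spike-triggered non-negative matrix factorization step that this protocol is built around, run on synthetic data: collect the spike-triggered stimulus ensemble from a binary checkerboard stimulus (which keeps the matrix non-negative) and factorize it into spatial modules with scikit-learn's NMF. The toy subunit model and all parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_frames, n_pix = 20000, 8 * 8
stimulus = rng.integers(0, 2, size=(n_frames, n_pix)).astype(float)   # binary checkerboard frames

subunits = np.zeros((2, 8, 8))                        # two toy subunits feeding one model cell
subunits[0, 2:4, 2:4] = 1.0
subunits[1, 4:6, 4:6] = 1.0
drive = np.maximum(stimulus @ subunits.reshape(2, -1).T - 1.5, 0).sum(axis=1)
spikes = rng.poisson(drive)                           # spike count per stimulus frame

ste = stimulus[spikes > 0] * spikes[spikes > 0, None]  # spike-triggered (count-weighted) ensemble
modules = NMF(n_components=3, init="nndsvda", max_iter=500).fit(ste).components_
# rows of `modules`, reshaped to 8x8, approximate the spatial layout of the subunits
```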
Affiliation(s)
- Shanshan Jia
- School of Computer Science, Peking University, Beijing 100871, China; Institute for Artificial Intelligence, Peking University, Beijing 100871, China; Department of Computer Science and Technology, Peking University, Beijing 100871, China
- Jian K Liu
- School of Computing, University of Leeds, Leeds, UK
- Zhaofei Yu
- School of Computer Science, Peking University, Beijing 100871, China; Institute for Artificial Intelligence, Peking University, Beijing 100871, China; Department of Computer Science and Technology, Peking University, Beijing 100871, China.
15. Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357. DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices to restore vision for patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in retinal prostheses plays an important role in the restoration effect. Its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, where new discoveries about the retina's working principles and state-of-the-art computer vision models are combined. Approach. We investigated the related research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies can be attributed to three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then demonstrate the vision rehabilitation mechanisms of three representative retinal prostheses. We summarize the computational frameworks abstracted from the normal retina. In addition, the development and features of the three types of processing algorithms are summarized. Finally, we analyze the bottlenecks in existing algorithms and offer our perspective on future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggestions for future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
16. Freedland J, Rieke F. Systematic reduction of the dimensionality of natural scenes allows accurate predictions of retinal ganglion cell spike outputs. Proc Natl Acad Sci U S A 2022; 119:e2121744119. PMID: 36343230. PMCID: PMC9674269. DOI: 10.1073/pnas.2121744119.
Abstract
The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
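The core manipulation, replacing the pixels inside each region by a single linearly integrated value, can be sketched as below. Here the regions form a fixed 4x4 grid and the receptive-field weighting is a made-up Gaussian, whereas the paper derives the region geometry from each cell's receptive field.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))                 # stand-in natural image patch
yy, xx = np.mgrid[0:64, 0:64]
rf = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 12.0 ** 2))  # toy Gaussian RF weighting

region_id = (yy // 16) * 4 + (xx // 16)               # 16 square regions on a 4x4 grid
reduced = np.zeros_like(image)
for r in range(16):
    mask = region_id == r
    weight = rf[mask]
    reduced[mask] = np.sum(image[mask] * weight) / np.sum(weight)  # one linearly integrated value
```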
Affiliation(s)
- Julian Freedland
- Molecular Engineering & Sciences Institute, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
17. Zhang YJ, Yu ZF, Liu JK, Huang TJ. Neural Decoding of Visual Information Across Different Neural Recording Modalities and Approaches. Machine Intelligence Research 2022. PMCID: PMC9283560. DOI: 10.1007/s11633-022-1335-2.
Abstract
Vision plays a special role in intelligence. Visual information, which forms a large part of all sensory information, is fed into the human brain to shape various types of cognition and behaviour that make humans intelligent agents. Recent advances have led to the development of brain-inspired algorithms and models for machine vision. One of the key components of these methods is the utilization of the computational principles underlying biological neurons. Additionally, advanced experimental neuroscience techniques have generated different types of neural signals that carry essential visual information. Thus, there is a high demand for mapping out functional models for reading out visual information from neural signals. Here, we briefly review recent progress on this issue, with a focus on how machine learning techniques can help in the development of models for handling various types of neural signals, from fine-scale neural spikes and single-cell calcium imaging to coarse-scale electroencephalography (EEG) and functional magnetic resonance imaging recordings of brain signals.
18
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes.
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany.,International Max Planck Research School for Neurosciences, Göttingen, Germany
| | - Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.,Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany.,Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
| |
19
Jia S, Yu Z, Onken A, Tian Y, Huang T, Liu JK. Neural System Identification With Spike-Triggered Non-Negative Matrix Factorization. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:4772-4783. [PMID: 33400673 DOI: 10.1109/tcyb.2020.3042513] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Neuronal circuits formed in the brain are complex with intricate connection patterns. Such complexity is also observed in the retina with a relatively simple neuronal circuit. A retinal ganglion cell (GC) receives excitatory inputs from neurons in previous layers as driving forces to fire spikes. Analytical methods are required to decipher these components in a systematic manner. Recently a method called spike-triggered non-negative matrix factorization (STNMF) has been proposed for this purpose. In this study, we extend the scope of the STNMF method. By using retinal GCs as a model system, we show that STNMF can detect various computational properties of upstream bipolar cells (BCs), including spatial receptive field, temporal filter, and transfer nonlinearity. In addition, we recover synaptic connection strengths from the weight matrix of STNMF. Furthermore, we show that STNMF can separate spikes of a GC into a few subsets of spikes, where each subset is contributed by one presynaptic BC. Taken together, these results corroborate that STNMF is a useful method for deciphering the structure of neuronal circuits.
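The following is a minimal Python sketch of the general spike-triggered non-negative matrix factorization idea: collect the stimulus frames at spike times, shift them to non-negative values, and factorize the ensemble into a few spatial modules with scikit-learn's NMF. The synthetic white-noise stimulus, Poisson spike train, and module count are assumptions made for illustration; this is not the published STNMF implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical data: white-noise stimulus frames (time x pixels) and a spike count per frame.
n_frames, n_pixels = 5000, 15 * 15
stimulus = rng.choice([-1.0, 1.0], size=(n_frames, n_pixels))
spikes = rng.poisson(0.2, size=n_frames)

# Spike-triggered ensemble: each spike contributes the frame shown at spike time.
ste = np.repeat(stimulus, spikes, axis=0)

# Shift to non-negative values, as NMF requires, then factorize into modules.
ste_nonneg = ste - ste.min()
model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(ste_nonneg)        # per-spike module weights
modules = model.components_.reshape(8, 15, 15)   # candidate spatial subunits
print(modules.shape)
```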
20
Xu Q, Shen J, Ran X, Tang H, Pan G, Liu JK. Robust Transcoding Sensory Information With Neural Spikes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1935-1946. [PMID: 34665741 DOI: 10.1109/tnnls.2021.3107449] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behaviors with neural systems. However, most existing studies deal only with the continuous signals of neural systems, while neglecting a unique feature of biological neurons, the spike, which is the fundamental information unit for neural computation as well as a building block for brain-machine interfaces. To address these limitations, we propose a transcoding framework to encode multimodal sensory information into neural spikes and then reconstruct stimuli from the spikes. Sensory information can be compressed to 10% in terms of neural spikes, yet 100% of the information can be re-extracted by reconstruction. Our framework can not only feasibly and accurately reconstruct dynamical visual and auditory scenes, but also rebuild the stimulus patterns from functional magnetic resonance imaging (fMRI) brain activities. More importantly, it is highly robust to various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential usage for efficient neuromorphic computing in noisy environments.
21
Electrophysiological dataset from macaque visual cortical area MST in response to a novel motion stimulus. Sci Data 2022; 9:182. [PMID: 35440786 PMCID: PMC9019011 DOI: 10.1038/s41597-022-01239-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 03/04/2022] [Indexed: 12/03/2022] Open
Abstract
Establishing the cortical neural representation of visual stimuli is a central challenge of systems neuroscience. Publicly available data would allow a broad range of scientific analyses and hypothesis testing, but are rare and largely focused on the early visual system. To address the shortage of open data from higher visual areas, we provide a comprehensive dataset from a neurophysiology study in macaque monkey visual cortex that includes a complete record of extracellular action potential recordings from the extrastriate medial superior temporal (MST) area, behavioral data, and detailed stimulus records. It includes spiking activity of 172 single neurons recorded in 139 sessions from 4 hemispheres of 3 rhesus macaque monkeys. The data was collected across 3 experiments, designed to characterize the response properties of MST neurons to complex motion stimuli. This data can be used to elucidate visual information processing at the level of single neurons in a high-level area of primate visual cortex. Providing open access to this dataset also promotes the 3R-principle of responsible animal research.
Measurement(s): spike train; eye movement measurement
Technology Type(s): single-unit recording; eye tracking device
Factor Type(s): direction, location, and speed of moving random dot patterns
Sample Characteristic - Organism: Macaca mulatta
Sample Characteristic - Environment: laboratory environment
Sample Characteristic - Location: Germany
22
Zapp SJ, Nitsche S, Gollisch T. Retinal receptive-field substructure: scaffolding for coding and computation. Trends Neurosci 2022; 45:430-445. [DOI: 10.1016/j.tins.2022.03.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/29/2022]
23
Liu JK, Karamanlis D, Gollisch T. Simple model for encoding natural images by retinal ganglion cells with nonlinear spatial integration. PLoS Comput Biol 2022; 18:e1009925. [PMID: 35259159 PMCID: PMC8932571 DOI: 10.1371/journal.pcbi.1009925] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 03/18/2022] [Accepted: 02/14/2022] [Indexed: 01/05/2023] Open
Abstract
A central goal in sensory neuroscience is to understand the neuronal signal processing involved in the encoding of natural stimuli. A critical step towards this goal is the development of successful computational encoding models. For ganglion cells in the vertebrate retina, the development of satisfactory models for responses to natural visual scenes is an ongoing challenge. Standard models typically apply linear integration of visual stimuli over space, yet many ganglion cells are known to show nonlinear spatial integration, in particular when stimulated with contrast-reversing gratings. We here study the influence of spatial nonlinearities in the encoding of natural images by ganglion cells, using multielectrode-array recordings from isolated salamander and mouse retinas. We assess how responses to natural images depend on first- and second-order statistics of spatial patterns inside the receptive field. This leads us to a simple extension of current standard ganglion cell models. We show that taking not only the weighted average of light intensity inside the receptive field into account but also its variance over space can partly account for nonlinear integration and substantially improve predictions of responses to novel images. For salamander ganglion cells, we find that response predictions for cell classes with large receptive fields profit most from including spatial contrast information. Finally, we demonstrate how this model framework can be used to assess the spatial scale of nonlinear integration. Our results underscore that nonlinear spatial stimulus integration translates to stimulation with natural images. Furthermore, the introduced model framework provides a simple, yet powerful extension of standard models and may serve as a benchmark for the development of more detailed models of the nonlinear structure of receptive fields. For understanding how sensory systems operate in the natural environment, an important goal is to develop models that capture neuronal responses to natural stimuli. For retinal ganglion cells, which connect the eye to the brain, current standard models often fail to capture responses to natural visual scenes. This shortcoming is at least partly rooted in the fact that ganglion cells may combine visual signals over space in a nonlinear fashion. We here show that a simple model, which not only considers the average light intensity inside a cell’s receptive field but also the variance of light intensity over space, can partly account for these nonlinearities and thereby improve current standard models. This provides an easy-to-obtain benchmark for modeling ganglion cell responses to natural images.
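The model idea described above lends itself to a very compact illustration: a response prediction built from the receptive-field-weighted mean of light intensity plus its weighted variance over space, passed through an output nonlinearity. The sketch below is such a toy in Python; the Gaussian receptive field, the two weights, and the softplus nonlinearity are assumptions for illustration, not the fitted model from the paper.

```python
import numpy as np

def gaussian_rf(size=32, sigma=5.0):
    """Circular Gaussian receptive-field weighting (an assumption for illustration)."""
    y, x = np.mgrid[:size, :size] - size // 2
    rf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return rf / rf.sum()

def predict_response(image, rf, w_mean=1.0, w_var=0.5):
    """Toy response: nonlinearity applied to the weighted mean plus the RF-weighted spatial variance."""
    mean_drive = np.sum(rf * image)
    var_drive = np.sum(rf * (image - mean_drive) ** 2)  # spatial contrast term
    drive = w_mean * mean_drive + w_var * var_drive
    return np.log1p(np.exp(drive))                      # softplus output nonlinearity

rng = np.random.default_rng(2)
img = rng.normal(size=(32, 32))
print(predict_response(img, gaussian_rf()))
```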
Affiliation(s)
- Jian K. Liu
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
| | - Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
| | - Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany
| |
24
Jia S, Li X, Huang T, Liu JK, Yu Z. Representing the dynamics of high-dimensional data with non-redundant wavelets. PATTERNS 2022; 3:100424. [PMID: 35510192 PMCID: PMC9058841 DOI: 10.1016/j.patter.2021.100424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 09/22/2021] [Accepted: 12/09/2021] [Indexed: 11/19/2022]
Abstract
A crucial question in data science is how to extract meaningful information embedded in high-dimensional data into a low-dimensional set of features that can represent the original data at different levels. Wavelet analysis is a pervasive method for decomposing time-series signals into a few levels with detailed temporal resolution. However, the obtained wavelets are intertwined and over-represented across levels for each sample and across different samples within one population. Here, using neuroscience data of simulated spikes, experimental spikes, calcium imaging signals, and human electrocorticography signals, we leveraged conditional mutual information between wavelets for feature selection. The meaningfulness of the selected features was verified by decoding stimulus or condition with high accuracy using only a small set of features. These results provide a new way of using wavelet analysis to extract essential features of the dynamics of spatiotemporal neural data, which can then support novel machine learning model designs based on representative features. Highlights: WCMI can extract meaningful information from high-dimensional data; the extracted features from neural signals are non-redundant; simple decoders can read out these features with high accuracy.
One of the essential questions in data science is how to extract meaningful information from high-dimensional data. A useful approach is to represent the data using a few features that maintain the crucial information. The leading property of spatiotemporal data is its ever-changing dynamics in time. Wavelet analysis, as a classical method for disentangling time series, can capture temporal dynamics in detail. Here, we leveraged conditional mutual information between wavelets to select a small subset of non-redundant features. We demonstrated the efficiency and effectiveness of these features using various types of neuroscience data with different sampling frequencies, at the level of single cells, cell populations, and coarse-scale brain activity. Our results shed new light on representing the dynamics of spatiotemporal data with a few fundamental features extracted by wavelet analysis, which may have wide implications for other types of data with rich temporal dynamics.
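To make the workflow concrete, the Python sketch below decomposes synthetic trials with PyWavelets and then keeps a small subset of coefficients ranked by mutual information with the trial labels. This is only an approximation of the approach summarized above: the paper's conditional-mutual-information criterion is not reproduced, and the wavelet family, decomposition level, and synthetic data are assumptions made here.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)

# Hypothetical data: 200 trials of a 1-D signal with a binary condition label.
n_trials, n_samples = 200, 256
labels = rng.integers(0, 2, size=n_trials)
signals = rng.normal(size=(n_trials, n_samples))
signals[labels == 1, 60:80] += 1.0  # condition-dependent bump

def wavelet_features(x, wavelet="db4", level=3):
    """Concatenate wavelet coefficients across decomposition levels."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.concatenate(coeffs)

features = np.array([wavelet_features(s) for s in signals])

# Rank coefficients by mutual information with the labels and keep a small subset.
mi = mutual_info_classif(features, labels, random_state=0)
top = np.argsort(mi)[::-1][:10]
print("selected coefficient indices:", top)
```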
25
Yan Q, Zheng Y, Jia S, Zhang Y, Yu Z, Chen F, Tian Y, Huang T, Liu JK. Revealing Fine Structures of the Retinal Receptive Field by Deep-Learning Networks. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:39-50. [PMID: 32167923 DOI: 10.1109/tcyb.2020.2972983] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Deep convolutional neural networks (CNNs) have demonstrated impressive performance on many visual tasks. Recently, they have become useful models for the visual system in neuroscience. However, it is still not clear what CNNs learn in terms of neuronal circuits. When a deep CNN with many layers is used as a model of the visual system, it is not easy to compare its structural components with possible neuroscience underpinnings, owing to the highly complex circuits from the retina to the higher visual cortex. Here, we address this issue by focusing on single retinal ganglion cells with biophysical models and recording data from animals. By training CNNs with white-noise images to predict neuronal responses, we found that fine structures of the retinal receptive field can be revealed. Specifically, the learned convolutional filters resemble biological components of the retinal circuit. This suggests that a CNN trained on a single retinal cell reveals a minimal neural network implemented by this cell. Furthermore, when CNNs learned from different cells are transferred between cells, there is a diversity of transfer-learning performance, which indicates that the CNNs are cell specific. Moreover, when CNNs are transferred between different types of input images, here white noise versus natural images, transfer learning shows good performance, which implies that CNNs indeed capture the full computational ability of a single retinal cell for different inputs. Taken together, these results suggest that CNNs can be used to reveal structural components of neuronal circuits, and provide a powerful model for neural system identification.
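As a rough illustration of the single-cell fitting setup, the PyTorch sketch below trains a very small CNN on synthetic white-noise frames to predict Poisson spike counts of one simulated cell. The architecture, data generation, and training schedule are assumptions for illustration only and do not reproduce the network or recordings used in the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical data: white-noise frames (N, 1, 20, 20) and spike counts of one cell
# generated by a hidden linear filter followed by rectification.
X = torch.randn(2000, 1, 20, 20)
true_filter = torch.randn(1, 1, 20, 20)
rate = torch.relu((X * true_filter).sum(dim=(1, 2, 3)))
y = torch.poisson(rate)

# Small CNN: two convolutional layers followed by a linear readout to a non-negative rate.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=5), nn.ReLU(),
    nn.Conv2d(4, 4, kernel_size=5), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 12 * 12, 1), nn.Softplus(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)

for epoch in range(5):  # a few full-batch steps, for illustration only
    optimizer.zero_grad()
    pred = model(X).squeeze(1)
    loss = loss_fn(pred, y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```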
26
Jia S, Xing D, Yu Z, Liu JK. Dissecting cascade computational components in spiking neural networks. PLoS Comput Biol 2021; 17:e1009640. [PMID: 34843460 PMCID: PMC8659421 DOI: 10.1371/journal.pcbi.1009640] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 12/09/2021] [Accepted: 11/14/2021] [Indexed: 01/15/2023] Open
Abstract
Finding out the physical structure of neuronal circuits that governs neuronal responses is an important goal for brain research. With fast advances in large-scale recording techniques, identification of neuronal circuits with multiple neurons and stages or layers becomes possible and in high demand. Although methods for mapping the connection structure of circuits have been greatly developed in recent years, they are mostly limited to simple scenarios of a few neurons in a pairwise fashion, and dissecting dynamical circuits, particularly mapping out a complete functional circuit that converges onto a single neuron, is still a challenging question. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is a compelling method for dissecting functional connections within a circuit. Using spiking activities recorded at neurons of the output layer, STNMF can recover a complete circuit consisting of all cascade computational components of presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF could provide a useful approach for investigating neuronal systems by leveraging recorded functional neuronal activity. It is well known that the computation of neuronal circuits is carried out through the staged and cascaded structure of different types of neurons. Sensory information, in particular, is processed in a network primarily with feedforward connections through different pathways. A prominent example is the early visual system, where light is transcoded by retinal cells, routed by the lateral geniculate nucleus, and finally reaches the primary visual cortex. A particular interest in recent years has been to map out the physical structure of these neuronal pathways. However, most methods so far are limited to taking snapshots of a static view of connections between neurons, and it remains unclear how to obtain a functional and dynamical neuronal circuit beyond simple scenarios of a few randomly sampled neurons. Using simulated spiking neural networks of visual pathways with different scenarios of multiple stages, mixed cell types, and natural image stimuli, we demonstrate that a recent computational tool, named spike-triggered non-negative matrix factorization, can resolve these issues. It enables us to recover the entire set of structural components of neural networks underlying the computation, together with the functional components of each individual neuron. Applying it to complex cells of the primary visual cortex allows us to reveal the underpinnings of their nonlinear computation. Our results, together with other recent experimental and computational efforts, show that it is possible to systematically dissect neural circuitry with detailed structural and functional components.
Affiliation(s)
- Shanshan Jia
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Zhaofei Yu
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Jian K. Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
| |
27
Rapid Analysis of Visual Receptive Fields by Iterative Tomography. eNeuro 2021; 8:ENEURO.0046-21.2021. [PMID: 34799410 PMCID: PMC8658541 DOI: 10.1523/eneuro.0046-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 11/02/2021] [Accepted: 11/12/2021] [Indexed: 11/21/2022] Open
Abstract
Many receptive fields in the early visual system show standard (center-surround) structure and can be analyzed using simple drifting patterns and a difference-of-Gaussians (DoG) model, which treats the receptive field as a linear filter of the visual image. But many other receptive fields show nonlinear properties such as selectivity for direction of movement. Such receptive fields are typically studied using discrete stimuli (moving or flashed bars and edges) and are modelled according to the features of the visual image to which they are most sensitive. Here, we harness recent advances in tomographic image analysis to characterize rapidly and simultaneously both the linear and nonlinear components of visual receptive fields. Spiking and intracellular voltage potential responses to briefly flashed bars are analyzed using non-negative matrix factorization (NNMF) and iterative reconstruction tomography (IRT). The method yields high-resolution receptive field maps of individual neurons and neuron ensembles in primate (marmoset, both sexes) lateral geniculate and rodent (mouse, male) retina. We show that the first two IRT components correspond to DoG-equivalent center and surround of standard [magnocellular (M) and parvocellular (P)] receptive fields in primate geniculate. The first two IRT components also reveal the spatiotemporal receptive field structure of nonstandard (on/off-rectifying) receptive fields. In rodent retina we combine NNMF-IRT with patch-clamp recording and dye injection to directly map spatial receptive fields to the underlying anatomy of retinal output neurons. We conclude that NNMF-IRT provides a rapid and flexible framework for study of receptive fields in the early visual system.
28
Zheng Y, Jia S, Yu Z, Liu JK, Huang T. Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks. PATTERNS (NEW YORK, N.Y.) 2021; 2:100350. [PMID: 34693375 PMCID: PMC8515013 DOI: 10.1016/j.patter.2021.100350] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/22/2021] [Accepted: 08/23/2021] [Indexed: 11/18/2022]
Abstract
Traditional models of retinal system identification analyze the neural response to artificial stimuli using models consisting of predefined components. The model design is limited by prior knowledge, and the artificial stimuli are too simple to be compared with the stimuli that the retina processes in natural settings. To fill this gap with an explainable model that reveals how a population of neurons works together to encode a larger field of natural scenes, we used a deep-learning model to identify the computational elements of the retinal circuit that contribute to learning the dynamics of natural scenes. Experimental results verify that recurrent connections play a key role in encoding complex dynamic visual scenes while learning the biological computational underpinnings of the retinal circuit. In addition, the proposed models reveal both the shapes and the locations of the spatiotemporal receptive fields of ganglion cells.
Affiliation(s)
- Yajing Zheng
- Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China
| | - Shanshan Jia
- Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China
| | - Zhaofei Yu
- Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China
- Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| | - Jian K. Liu
- School of Computing, University of Leeds, Leeds LS2 9JT, UK
| | - Tiejun Huang
- Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China
- Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| |
29
Williams AH, Linderman SW. Statistical neuroscience in the single trial limit. Curr Opin Neurobiol 2021; 70:193-205. [PMID: 34861596 DOI: 10.1016/j.conb.2021.10.008] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 09/29/2021] [Accepted: 10/27/2021] [Indexed: 11/24/2022]
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
Affiliation(s)
- Alex H Williams
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
| | - Scott W Linderman
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA.
| |
30
Domdei N, Reiniger JL, Holz FG, Harmening WM. The Relationship Between Visual Sensitivity and Eccentricity, Cone Density and Outer Segment Length in the Human Foveola. Invest Ophthalmol Vis Sci 2021; 62:31. [PMID: 34289495 PMCID: PMC8300048 DOI: 10.1167/iovs.62.9.31] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
Purpose The cellular topography of the human foveola, the central 1° diameter of the fovea, is strikingly non-uniform, with a steep increase of cone photoreceptor density and outer segment (OS) length toward its center. Here, we assessed to what extent the specific cellular organization of the foveola of an individual is reflected in visual sensitivity and if sensitivity peaks at the preferred retinal locus of fixation (PRL). Methods Increment sensitivity to small-spot, cone-targeted visual stimuli (1 × 1 arcmin, 543-nm light) was recorded psychophysically in four human participants at 17 locations concentric within a 0.2° diameter on and around the PRL with adaptive optics scanning laser ophthalmoscopy-based microstimulation. Sensitivity test spots were aligned with cell-resolved maps of cone density and cone OS length. Results Peak sensitivity was at neither the PRL nor the topographical center of the cone mosaic. Within the central 0.1° diameter, a plateau-like sensitivity profile was observed. Cone density and maximal OS length differed significantly across participants, correlating with their peak sensitivity. Based on these results, biophysical simulation allowed us to develop a model of visual sensitivity in the foveola, with distance from the PRL (eccentricity), cone density, and OS length as parameters. Conclusions Small-spot sensitivity thresholds in healthy retinas will help to establish the range of normal foveolar function in cell-targeted vision testing. Because of the high reproducibility in replicate testing, threshold variability not explained by our model is assumed to be caused by individual cone and bipolar cell weighting at the specific target locations.
Affiliation(s)
- Niklas Domdei
- Department of Ophthalmology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| | - Jenny L Reiniger
- Department of Ophthalmology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| | - Frank G Holz
- Department of Ophthalmology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| | - Wolf M Harmening
- Department of Ophthalmology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| |
31
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli. Neuron 2021; 109:1692-1706.e8. [PMID: 33798407 PMCID: PMC8153253 DOI: 10.1016/j.neuron.2021.03.015] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 01/22/2021] [Accepted: 03/10/2021] [Indexed: 11/21/2022]
Abstract
The retina dissects the visual scene into parallel information channels, which extract specific visual features through nonlinear processing. The first nonlinear stage is typically considered to occur at the output of bipolar cells, resulting from nonlinear transmitter release from synaptic terminals. In contrast, we show here that bipolar cells themselves can act as nonlinear processing elements at the level of their somatic membrane potential. Intracellular recordings from bipolar cells in the salamander retina revealed frequent nonlinear integration of visual signals within bipolar cell receptive field centers, affecting the encoding of artificial and natural stimuli. These nonlinearities provide sensitivity to spatial structure below the scale of bipolar cell receptive fields in both bipolar and downstream ganglion cells and appear to arise at the excitatory input into bipolar cells. Thus, our data suggest that nonlinear signal pooling starts earlier than previously thought: that is, at the input stage of bipolar cells.
Highlights: Some retinal bipolar cells represent visual contrast in a nonlinear fashion; these bipolar cells also nonlinearly integrate visual signals over space; the spatial nonlinearity affects the encoding of natural stimuli by bipolar cells; the nonlinearity results from feedforward input, not from feedback inhibition.
32
Khani MH, Gollisch T. Linear and nonlinear chromatic integration in the mouse retina. Nat Commun 2021; 12:1900. [PMID: 33772000 PMCID: PMC7997992 DOI: 10.1038/s41467-021-22042-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 02/23/2021] [Indexed: 11/09/2022] Open
Abstract
The computations performed by a neural circuit depend on how it integrates its input signals into an output of its own. In the retina, ganglion cells integrate visual information over time, space, and chromatic channels. Unlike the former two, chromatic integration is largely unexplored. Analogous to classical studies of spatial integration, we here study chromatic integration in mouse retina by identifying chromatic stimuli for which activation from the green or UV color channel is maximally balanced by deactivation through the other color channel. This reveals nonlinear chromatic integration in subsets of On, Off, and On-Off ganglion cells. Unlike the latter two, nonlinear On cells display response suppression rather than activation under balanced chromatic stimulation. Furthermore, nonlinear chromatic integration occurs independently of nonlinear spatial integration, depends on contributions from the rod pathway and on surround inhibition, and may provide information about chromatic boundaries, such as the skyline in natural scenes.
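The stimulus logic behind the balanced chromatic test can be illustrated with a few lines of Python: green and UV contrasts of opposite sign are swept, and a linearly integrating cell cancels at some balance point while a rectifying (nonlinear) cell does not. The chromatic weights, contrast range, and rectifying nonlinearity below are assumptions chosen for illustration, not measured values from the paper.

```python
import numpy as np

# Sweep of opposite-sign green/UV contrast pairs (values are illustrative assumptions).
green = np.linspace(0.0, 0.2, 11)
uv = green - 0.2                      # runs from -0.2 to 0, opposite in sign to green

w_green, w_uv = 1.0, 0.8              # hypothetical chromatic weights of one cell

# Linear integration: some green/UV combination cancels exactly (a response null).
linear_response = w_green * green + w_uv * uv

# Nonlinear integration: rectify each chromatic input before summing, so the
# response does not vanish at the linear balance point.
nonlinear_response = np.maximum(w_green * green, 0) + np.maximum(w_uv * uv, 0)

balance = np.argmin(np.abs(linear_response))
print("linear response at balance point   :", float(linear_response[balance]))
print("nonlinear response at balance point:", float(nonlinear_response[balance]))
```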
Affiliation(s)
- Mohammad Hossein Khani
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
- International Max Planck Research School for Neuroscience, Göttingen, Germany.
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
| |
33
Nonlinear Spatial Integration Underlies the Diversity of Retinal Ganglion Cell Responses to Natural Images. J Neurosci 2021; 41:3479-3498. [PMID: 33664129 PMCID: PMC8051676 DOI: 10.1523/jneurosci.3075-20.2021] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 02/05/2021] [Accepted: 02/09/2021] [Indexed: 02/06/2023] Open
Abstract
How neurons encode natural stimuli is a fundamental question for sensory neuroscience. In the early visual system, standard encoding models assume that neurons linearly filter incoming stimuli through their receptive fields, but artificial stimuli, such as contrast-reversing gratings, often reveal nonlinear spatial processing. We investigated to what extent such nonlinear processing is relevant for the encoding of natural images in retinal ganglion cells in mice of either sex. We found that standard linear receptive field models yielded good predictions of responses to flashed natural images for a subset of cells but failed to capture the spiking activity for many others. Cells with poor model performance displayed pronounced sensitivity to fine spatial contrast and local signal rectification as the dominant nonlinearity. By contrast, sensitivity to high-frequency contrast-reversing gratings, a classical test for nonlinear spatial integration, was not a good predictor of model performance and thus did not capture the variability of nonlinear spatial integration under natural images. In addition, we also observed a class of nonlinear ganglion cells with inverse tuning for spatial contrast, responding more strongly to spatially homogeneous than to spatially structured stimuli. These findings highlight the diversity of receptive field nonlinearities as a crucial component for understanding early sensory encoding in the context of natural stimuli. SIGNIFICANCE STATEMENT Experiments with artificial visual stimuli have revealed that many types of retinal ganglion cells pool spatial input signals nonlinearly. However, it is still unclear how relevant this nonlinear spatial integration is when the input signals are natural images. Here we analyze retinal responses to natural scenes in large populations of mouse ganglion cells. We show that nonlinear spatial integration strongly influences responses to natural images for some ganglion cells, but not for others. Cells with nonlinear spatial integration were sensitive to spatial structure inside their receptive fields, and a small group of cells displayed a surprising sensitivity to spatially homogeneous stimuli. Traditional analyses with contrast-reversing gratings did not predict this variability of nonlinear spatial integration under natural images.
34
Ahn J, Yoo Y, Goo YS. Spike-triggered Clustering for Retinal Ganglion Cell Classification. Exp Neurobiol 2020; 29:433-452. [PMID: 33321473 PMCID: PMC7788309 DOI: 10.5607/en20029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 11/24/2020] [Accepted: 11/25/2020] [Indexed: 11/19/2022] Open
Abstract
Retinal ganglion cells (RGCs), the retina's output neurons, encode visual information through spiking. The RGC receptive field (RF) represents the basic unit of visual information processing in the retina. RFs are commonly estimated using the spike-triggered average (STA), which is the average of the stimulus patterns to which a given RGC is sensitive. Whereas STA, based on the concept of the average, is simple and intuitive, it leaves more complex structures in the RFs undetected. Alternatively, spike-triggered covariance (STC) analysis provides information on second-order RF statistics. However, STC is computationally cumbersome and difficult to interpret. Thus, the objective of this study was to propose and validate a new computational method, called spike-triggered clustering (STCL), specific for multimodal RFs. Specifically, RFs were fit with a Gaussian mixture model, which provides the means and covariances of multiple RF clusters. The proposed method recovered bipolar stimulus patterns in the RFs of ON-OFF cells, while the STA identified only ON and OFF RGCs, and the remaining RGCs were labeled as unknown types. In contrast, our new STCL analysis distinguished ON-OFF RGCs from the ON, OFF, and unknown RGC types classified by STA. Thus, the proposed method enables us to include ON-OFF RGCs prior to retinal information analysis.
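The core idea of clustering spike-triggered stimuli can be sketched in a few lines of Python: fit a Gaussian mixture model to a synthetic bimodal spike-triggered ensemble whose plain average (the STA) nearly cancels. The synthetic ON-OFF-like data, pixel count, and two-component mixture are assumptions for illustration and do not reproduce the exact STCL pipeline of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Hypothetical spike-triggered stimuli of an ON-OFF-like cell: spikes are triggered
# by either bright or dark patterns, so the spike-triggered ensemble is bimodal and
# its plain average (the STA) is washed out.
n_pixels = 10 * 10
on_template = np.zeros(n_pixels)
on_template[44:46] = 1.0
ste_on = rng.normal(size=(300, n_pixels)) + on_template
ste_off = rng.normal(size=(300, n_pixels)) - on_template
ste = np.vstack([ste_on, ste_off])

sta = ste.mean(axis=0)  # weak, washed-out structure
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(ste)
cluster_means = gmm.means_  # recovers the ON-like and OFF-like modes separately

print("|STA| peak:", np.abs(sta).max())
print("|cluster mean| peaks:", np.abs(cluster_means).max(axis=1))
```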
Affiliation(s)
- Jungryul Ahn
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
| | - Yongseok Yoo
- Department of Electronics Engineering, Incheon National University, Incheon 22012, Korea
| | - Yong Sook Goo
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
| |
35
Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. [PMID: 33089827 DOI: 10.1088/1741-2552/aba8b1] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, United States of America. Department of Neurosurgery, Stanford University, Stanford, CA, United States of America.
| | | |
36
Rozenblit F, Gollisch T. What the salamander eye has been telling the vision scientist's brain. Semin Cell Dev Biol 2020; 106:61-71. [PMID: 32359891 PMCID: PMC7493835 DOI: 10.1016/j.semcdb.2020.04.010] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 04/16/2020] [Accepted: 04/16/2020] [Indexed: 12/30/2022]
Abstract
Salamanders have been habitual residents of research laboratories for more than a century, and their history in science is tightly interwoven with vision research. Nevertheless, many vision scientists - even those working with salamanders - may be unaware of how much our knowledge about vision, and particularly the retina, has been shaped by studying salamanders. In this review, we take a tour through the salamander history in vision science, highlighting the main contributions of salamanders to our understanding of the vertebrate retina. We further point out specificities of the salamander visual system and discuss the perspectives of this animal system for future vision research.
Affiliation(s)
- Fernando Rozenblit
- Department of Ophthalmology, University Medical Center Göttingen, 37073, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, 37077, Göttingen, Germany
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, 37073, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, 37077, Göttingen, Germany.
| |
37
Reconstruction of natural visual scenes from neural spikes with deep neural networks. Neural Netw 2020; 125:19-30. [DOI: 10.1016/j.neunet.2020.01.033] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 01/16/2020] [Accepted: 01/28/2020] [Indexed: 01/01/2023]
38
Shah NP, Brackbill N, Rhoades C, Kling A, Goetz G, Litke AM, Sher A, Simoncelli EP, Chichilnisky EJ. Inference of nonlinear receptive field subunits with spike-triggered clustering. eLife 2020; 9:e45743. [PMID: 32149600 PMCID: PMC7062463 DOI: 10.7554/elife.45743] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2019] [Accepted: 10/29/2019] [Indexed: 11/25/2022] Open
Abstract
Responses of sensory neurons are often modeled using a weighted combination of rectified linear subunits. Since these subunits often cannot be measured directly, a flexible method is needed to infer their properties from the responses of downstream neurons. We present a method for maximum likelihood estimation of subunits by soft-clustering spike-triggered stimuli, and demonstrate its effectiveness in visual neurons. For parasol retinal ganglion cells in macaque retina, estimated subunits partitioned the receptive field into compact regions, likely representing aggregated bipolar cell inputs. Joint clustering revealed shared subunits between neighboring cells, producing a parsimonious population model. Closed-loop validation, using stimuli lying in the null space of the linear receptive field, revealed stronger nonlinearities in OFF cells than ON cells. Responses to natural images, jittered to emulate fixational eye movements, were accurately predicted by the subunit model. Finally, the generality of the approach was demonstrated in macaque V1 neurons.
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, United States
| | - Nora Brackbill
- Department of Physics, Stanford University, Stanford, United States
| | - Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, United States
| | - Alexandra Kling
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| | - Georges Goetz
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| | - Alan M Litke
- Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
| | - Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
| | - Eero P Simoncelli
- Center for Neural Science, New York University, New York, United States
- Howard Hughes Medical Institute, Chevy Chase, United States
| | - EJ Chichilnisky
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| |
39
Ahn J, Rueckauer B, Yoo Y, Goo YS. New Features of Receptive Fields in Mouse Retina through Spike-triggered Covariance. Exp Neurobiol 2020; 29:38-49. [PMID: 32122107 PMCID: PMC7075653 DOI: 10.5607/en.2020.29.1.38] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2019] [Revised: 02/19/2020] [Accepted: 02/19/2020] [Indexed: 12/31/2022] Open
Abstract
Retinal ganglion cells (RGCs) encode various spatiotemporal features of visual information into spiking patterns. The receptive field (RF) of each RGC is usually calculated by spike-triggered average (STA), which is fast and easy to understand, but limited to simple and unimodal RFs. As an alternative, spike-triggered covariance (STC) has been proposed to characterize more complex patterns in RFs. This study compares STA and STC for the characterization of RFs and demonstrates that STC has an advantage over STA for identifying novel spatiotemporal features of RFs in mouse RGCs. We first classified mouse RGCs into ON, OFF, and ON/OFF cells according to their response to full-field light stimulus, and then investigated the spatiotemporal patterns of RFs with random checkerboard stimulation, using both STA and STC analysis. We propose five sub-types (T1–T5) in the STC of mouse RGCs together with their physiological implications. In particular, the relatively slow biphasic pattern (T1) could be related to excitatory inputs from bipolar cells. The transient biphasic pattern (T2) allows one to characterize complex patterns in RFs of ON/OFF cells. The other patterns (T3–T5), which are contrasting, alternating, and monophasic patterns, could be related to inhibitory inputs from amacrine cells. Thus, combining STA and STC and considering the proposed sub-types unveil novel characteristics of RFs in the mouse retina and offer a more holistic understanding of the neural coding mechanisms of mouse RGCs.
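The STA and STC computations themselves are short enough to show directly. The Python sketch below builds both from a synthetic white-noise stimulus and a spike train driven by a squared (symmetric) filter, which an STA alone would miss, and extracts informative axes from the eigendecomposition of the STC matrix. The synthetic stimulus, hidden filter, and spike-generation rule are assumptions for illustration, not the experimental recordings analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical white-noise stimulus (time x pixels) and spikes driven by a squared filter.
n_frames, n_pixels = 20000, 16
stimulus = rng.normal(size=(n_frames, n_pixels))
hidden_filter = np.sin(np.linspace(0, np.pi, n_pixels))
drive = (stimulus @ hidden_filter) ** 2
spikes = rng.poisson(0.05 * drive)

# Spike-triggered average and covariance (covariance difference relative to the prior).
ste = np.repeat(stimulus, spikes, axis=0)
sta = ste.mean(axis=0)
stc = np.cov(ste, rowvar=False) - np.cov(stimulus, rowvar=False)

# Eigenvectors of the STC matrix with large-magnitude eigenvalues indicate
# excitatory or suppressive stimulus dimensions beyond the STA.
eigvals, eigvecs = np.linalg.eigh(stc)
print("largest STC eigenvalue:", eigvals[-1])
print("correlation of top eigenvector with hidden filter:",
      np.corrcoef(eigvecs[:, -1], hidden_filter)[0, 1])
```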
Affiliation(s)
- Jungryul Ahn
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
| | - Bodo Rueckauer
- Institute of Neuroinformatics, ETH Zurich and University of Zurich, Zurich 8057, Switzerland
| | - Yongseok Yoo
- Department of Electronics Engineering, Incheon National University, Incheon 22012, Korea
| | - Yong Sook Goo
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju 28644, Korea
| |
40
Zhang Y, Yang C, Huang K, Jusup M, Wang Z, Li X. Reconstructing Heterogeneous Networks via Compressive Sensing and Clustering. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2020. [DOI: 10.1109/tetci.2020.2997011] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
41
Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:47012. [PMID: 31850846 PMCID: PMC6989090 DOI: 10.7554/elife.47012] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Accepted: 12/17/2019] [Indexed: 01/15/2023] Open
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
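The conductance-based encoding model itself is not reproduced here, but the Poisson generalized linear model it generalizes is easy to sketch: a linear stimulus filter with a log link, fit by maximum likelihood. The example below uses scikit-learn's PoissonRegressor on synthetic data; the stimulus dimensions, hidden filter, and regularization are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(6)

# Hypothetical data: stimulus history (time bins x lags) and Poisson spike counts
# generated by an exponential nonlinearity applied to a linear filter output.
n_bins, n_lags = 10000, 20
X = rng.normal(size=(n_bins, n_lags))
true_filter = np.exp(-np.arange(n_lags) / 5.0) * np.sin(np.arange(n_lags) / 2.0)
y = rng.poisson(np.exp(X @ true_filter - 1.0))

# Poisson GLM with log link: the statistical baseline that the conductance-based
# encoding model extends with separate excitatory and inhibitory conductances.
glm = PoissonRegressor(alpha=1e-4, max_iter=500).fit(X, y)
print("correlation of fitted filter with true filter:",
      np.corrcoef(glm.coef_, true_filter)[0, 1])
```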
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
| | - Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
| |
42
Abstract
With modern neurophysiological methods able to record neural activity throughout the visual pathway in the context of arbitrarily complex visual stimulation, our understanding of visual system function is becoming limited by the available models of visual neurons that can be directly related to such data. Different forms of statistical models are now being used to probe the cellular and circuit mechanisms shaping neural activity, understand how neural selectivity to complex visual features is computed, and derive the ways in which neurons contribute to systems-level visual processing. However, models that are able to more accurately reproduce observed neural activity often defy simple interpretations. As a result, rather than being used solely to connect with existing theories of visual processing, statistical modeling will increasingly drive the evolution of more sophisticated theories.
Affiliation(s)
- Daniel A. Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland 20742, USA
| |
43
Rhoades CE, Shah NP, Manookin MB, Brackbill N, Kling A, Goetz G, Sher A, Litke AM, Chichilnisky EJ. Unusual Physiological Properties of Smooth Monostratified Ganglion Cell Types in Primate Retina. Neuron 2019; 103:658-672.e6. [PMID: 31227309 PMCID: PMC6817368 DOI: 10.1016/j.neuron.2019.05.036] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 04/26/2019] [Accepted: 05/22/2019] [Indexed: 02/06/2023]
Abstract
The functions of the diverse retinal ganglion cell types in primates and the parallel visual pathways they initiate remain poorly understood. Here, unusual physiological and computational properties of the ON and OFF smooth monostratified ganglion cells are explored. Large-scale multi-electrode recordings from 48 macaque retinas revealed that these cells exhibit irregular receptive field structure composed of spatially segregated hotspots, quite different from the classic center-surround model of retinal receptive fields. Surprisingly, visual stimulation of different hotspots in the same cell produced spikes with subtly different spatiotemporal voltage signatures, consistent with a dendritic contribution to hotspot structure. Targeted visual stimulation and computational inference demonstrated strong nonlinear subunit properties associated with each hotspot, supporting a model in which the hotspots apply nonlinearities at a larger spatial scale than bipolar cells. These findings reveal a previously unreported nonlinear mechanism in the output of the primate retina that contributes to signaling spatial information.
Affiliation(s)
- Colleen E Rhoades: Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Nishal P Shah: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Michael B Manookin: Department of Ophthalmology, University of Washington, Seattle, WA 98195, USA
- Nora Brackbill: Department of Physics, Stanford University, Stanford, CA 94305, USA
- Alexandra Kling: Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305, USA
- Georges Goetz: Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305, USA
- Alexander Sher: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA 95064, USA
- Alan M Litke: Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA 95064, USA
- E J Chichilnisky: Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA; Department of Ophthalmology, Stanford University, Stanford, CA 94305, USA; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305, USA
44
Shi Q, Gupta P, Boukhvalova AK, Singer JH, Butts DA. Functional characterization of retinal ganglion cells using tailored nonlinear modeling. Sci Rep 2019; 9:8713. [PMID: 31213620 PMCID: PMC6581951 DOI: 10.1038/s41598-019-45048-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Accepted: 05/31/2019] [Indexed: 01/30/2023] Open
Abstract
The mammalian retina encodes the visual world in action potentials generated by 20-50 functionally and anatomically distinct types of retinal ganglion cell (RGC). Individual RGC types receive synaptic input from distinct presynaptic circuits; therefore, their responsiveness to specific features in the visual scene arises from the information encoded in synaptic input and shaped by postsynaptic signal integration and spike generation. Unfortunately, there is a dearth of tools for characterizing the computations reflected in RGC spike output. To address this, we developed a statistical model, the separable Nonlinear Input Model, to characterize the excitatory and suppressive components of RGC receptive fields. We recorded RGC responses to a correlated noise ("cloud") stimulus in an in vitro preparation of mouse retina and found that our model accurately predicted RGC responses at high spatiotemporal resolution. It identified multiple receptive fields reflecting the main excitatory and suppressive components of the response of each neuron. Significantly, our model accurately identified ON-OFF cells and distinguished their distinct ON and OFF receptive fields, and it demonstrated a diversity of suppressive receptive fields in the RGC population. In total, our method offers a rich description of RGC computation and sets a foundation for relating it to retinal circuitry.
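A minimal sketch of the kind of excitatory/suppressive filter decomposition described above, reduced to one excitatory and one delayed suppressive temporal filter with rectifying upstream nonlinearities. This is not the authors' separable Nonlinear Input Model or its fitting procedure; the kernels, weights, and thresholds are placeholders chosen only to show the forward computation.

```python
import numpy as np

def temporal_filter(taus, tau, delay=0.0):
    """Placeholder temporal kernel (not a fitted filter)."""
    t = np.clip(taus - delay, 0, None)
    k = (t / tau) * np.exp(1 - t / tau)
    return k / np.linalg.norm(k)

def exc_sup_rate(stimulus, k_exc, k_sup, w_sup=0.8, theta=0.1):
    """Excitatory-minus-suppressive LN cascade: each filter output is
    rectified upstream, then combined and passed through a spiking threshold."""
    g_exc = np.convolve(stimulus, k_exc, mode="full")[:len(stimulus)]
    g_sup = np.convolve(stimulus, k_sup, mode="full")[:len(stimulus)]
    exc = np.maximum(g_exc, 0.0)                        # excitatory subunit
    sup = np.maximum(g_sup, 0.0)                        # suppressive subunit
    return np.maximum(exc - w_sup * sup - theta, 0.0)   # output rate (arbitrary units)

dt = 0.01
taus = np.arange(0, 0.3, dt)
k_exc = temporal_filter(taus, tau=0.05)
k_sup = temporal_filter(taus, tau=0.05, delay=0.04)     # delayed suppression

rng = np.random.default_rng(1)
stim = rng.standard_normal(1000)                        # temporal white-noise contrast
print(exc_sup_rate(stim, k_exc, k_sup).mean(),          # with suppression
      exc_sup_rate(stim, k_exc, k_sup, w_sup=0.0).mean())  # suppression removed
```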
Affiliation(s)
- Qing Shi: Department of Biology, University of Maryland, College Park, MD, United States
- Pranjal Gupta: Department of Biology, University of Maryland, College Park, MD, United States
- Joshua H Singer: Department of Biology, University of Maryland, College Park, MD, United States; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
- Daniel A Butts: Department of Biology, University of Maryland, College Park, MD, United States; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
45
Beyeler M, Rounds EL, Carlson KD, Dutt N, Krichmar JL. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput Biol 2019; 15:e1006908. [PMID: 31246948 PMCID: PMC6597036 DOI: 10.1371/journal.pcbi.1006908] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas), the success of NSC-based models, especially in sensory areas, warrants further investigation for neural correlates in other regions.
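Nonnegative sparse coding can be sketched as nonnegative matrix factorization with an L1 penalty on the coefficients. The toy implementation below uses standard multiplicative updates written directly in NumPy, so no particular library API is assumed; the data matrix, number of basis functions, and sparsity weight are arbitrary.

```python
import numpy as np

def nsc(V, n_basis=8, sparsity=0.1, n_iter=500, eps=1e-9, seed=0):
    """Nonnegative sparse coding of a data matrix V (features x samples):
    V is approximated by W @ H with W, H >= 0 and an L1 penalty on H.
    Multiplicative updates in the Lee-Seung style, with a sparsity term on H."""
    rng = np.random.default_rng(seed)
    n_feat, n_samp = V.shape
    W = rng.random((n_feat, n_basis))
    H = rng.random((n_basis, n_samp))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)   # sparse coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)              # nonnegative basis vectors
        W /= W.sum(axis=0, keepdims=True) + eps           # keep basis columns normalized
    return W, H

# Toy "population response" matrix: 50 neurons x 200 stimuli, nonnegative rates.
rng = np.random.default_rng(2)
V = np.maximum(rng.standard_normal((50, 200)), 0.0)
W, H = nsc(V)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
print("fraction of near-zero coefficients:", np.mean(H < 1e-3))
```

Columns of W play the role of the learned basis functions ("parts"), and the sparse columns of H are the per-stimulus activations of those parts.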
Affiliation(s)
- Michael Beyeler: Department of Psychology, University of Washington, Seattle, Washington, United States of America; Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America; eScience Institute, University of Washington, Seattle, Washington, United States of America; Department of Computer Science, University of California, Irvine, California, United States of America
- Emily L. Rounds: Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Kristofor D. Carlson: Department of Cognitive Sciences, University of California, Irvine, California, United States of America; Sandia National Laboratories, Albuquerque, New Mexico, United States of America
- Nikil Dutt: Department of Computer Science, University of California, Irvine, California, United States of America; Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Jeffrey L. Krichmar: Department of Computer Science, University of California, Irvine, California, United States of America; Department of Cognitive Sciences, University of California, Irvine, California, United States of America
46
Kling A, Field GD, Brainard DH, Chichilnisky EJ. Probing Computation in the Primate Visual System at Single-Cone Resolution. Annu Rev Neurosci 2019; 42:169-186. [PMID: 30857477 DOI: 10.1146/annurev-neuro-070918-050233] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Daylight vision begins when light activates cone photoreceptors in the retina, creating spatial patterns of neural activity. These cone signals are then combined and processed in downstream neural circuits, ultimately producing visual perception. Recent technical advances have made it possible to deliver visual stimuli to the retina that probe this processing by the visual system at its elementary resolution of individual cones. Physiological recordings from nonhuman primate retinas reveal the spatial organization of cone signals in retinal ganglion cells, including how signals from cones of different types are combined to support both spatial and color vision. Psychophysical experiments with human subjects characterize the visual sensations evoked by stimulating a single cone, including the perception of color. Future combined physiological and psychophysical experiments focusing on probing the elementary visual inputs are likely to clarify how neural processing generates our perception of the visual world.
Affiliation(s)
- A Kling: Departments of Neurosurgery and Ophthalmology, Stanford University School of Medicine, Stanford, California 94305, USA
- G D Field: Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina 27710, USA
- D H Brainard: Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- E J Chichilnisky: Departments of Neurosurgery and Ophthalmology, Stanford University School of Medicine, Stanford, California 94305, USA
47
Activity Correlations between Direction-Selective Retinal Ganglion Cells Synergistically Enhance Motion Decoding from Complex Visual Scenes. Neuron 2019; 101:963-976.e7. [PMID: 30709656 PMCID: PMC6424814 DOI: 10.1016/j.neuron.2019.01.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Revised: 11/15/2018] [Accepted: 12/31/2018] [Indexed: 11/26/2022]
Abstract
Neurons in sensory systems are often tuned to particular stimulus features. During complex naturalistic stimulation, however, multiple features may simultaneously affect neuronal responses, which complicates the readout of individual features. To investigate feature representation under complex stimulation, we studied how direction-selective ganglion cells in salamander retina respond to texture motion where direction, velocity, and spatial pattern inside the receptive field continuously change. We found that the cells preserve their direction preference under this stimulation, yet their direction encoding becomes ambiguous due to simultaneous activation by luminance changes. The ambiguities can be resolved by considering populations of direction-selective cells with different preferred directions. This gives rise to synergistic motion decoding, yielding more information from the population than the summed information from single-cell responses. Strong positive response correlations between cells with different preferred directions amplify this synergy. Our results show how correlated population activity can enhance feature extraction in complex visual scenes.
- Direction-selective ganglion cells respond to motion as well as luminance changes.
- This obscures the readout of direction from single cells under complex texture motion.
- Population decoding improves direction readout supralinearly over individual cells.
- Strong spike correlations further enhance readout through increased synergy.
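The core decoding argument, that a single direction-selective cell confounds direction with luminance-driven activation while a readout across preferred directions does not, can be illustrated with a toy population-vector decoder. The cosine tuning curves and shared gain term below are invented for the example and are unrelated to the paper's information-theoretic analysis.

```python
import numpy as np

preferred = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # four DS cell types

def ds_responses(direction, gain):
    """Cosine direction tuning, multiplicatively scaled by a shared
    luminance/contrast-driven gain (the confounding signal)."""
    return gain * (1.0 + np.cos(direction - preferred))

def population_vector(r):
    """Decode direction from the whole population; the shared gain cancels."""
    return np.arctan2((r * np.sin(preferred)).sum(), (r * np.cos(preferred)).sum())

true_dir = np.deg2rad(30.0)
for gain in (0.5, 1.0, 2.0):
    r = ds_responses(true_dir, gain)
    decoded = np.rad2deg(population_vector(r))
    # The single cell's rate changes with gain at fixed direction,
    # but the population estimate of direction stays at 30 degrees.
    print(f"gain={gain:.1f}  cell-0 rate={r[0]:.2f}  decoded={decoded:.1f} deg")
```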
48
Wienbar S, Schwartz GW. The dynamic receptive fields of retinal ganglion cells. Prog Retin Eye Res 2018; 67:102-117. [PMID: 29944919 PMCID: PMC6235744 DOI: 10.1016/j.preteyeres.2018.06.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2018] [Revised: 06/15/2018] [Accepted: 06/20/2018] [Indexed: 11/30/2022]
Abstract
Retinal ganglion cells (RGCs) were one of the first classes of sensory neurons to be described in terms of a receptive field (RF). Over the last six decades, our understanding of the diversity of RGC types and the nuances of their response properties has grown exponentially. We will review the current understanding of RGC RFs mostly from studies in mammals, but including work from other vertebrates as well. We will argue for a new paradigm that embraces the fluidity of RGC RFs with an eye toward the neuroethology of vision. Specifically, we will focus on (1) different methods for measuring RGC RFs, (2) RF models, (3) feature selectivity and the distinction between fluid and stable RF properties, and (4) ideas about the future of understanding RGC RFs.
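One widely used RF-measurement approach in this literature, white-noise stimulation followed by a spike-triggered average (STA), reduces to a few lines of code. The simulated stimulus, model cell, and kernel below are placeholders used only to show the computation; they do not correspond to any dataset discussed in the review.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated experiment: temporal white noise drives an LN model cell whose
# kernel (unknown to the analysis) we then try to recover with the STA.
n_t, n_lags, dt = 50_000, 25, 0.01
stim = rng.standard_normal(n_t)
true_kernel = np.sin(np.linspace(0, np.pi, n_lags)) * np.exp(-np.linspace(0, 3, n_lags))
drive = np.convolve(stim, true_kernel, mode="full")[:n_t]
rate = 20.0 * np.maximum(drive, 0.0)               # rectified firing rate (spikes/s)
spikes = rng.poisson(rate * dt)                    # spike count in each 10 ms bin

# Spike-triggered average: stimulus history, weighted by the spike count per bin.
sta = np.array([
    np.dot(spikes[n_lags:], stim[n_lags - lag : n_t - lag])
    for lag in range(n_lags)
]) / spikes[n_lags:].sum()
# sta[k] is the average stimulus k bins before a spike; for a Gaussian white-noise
# stimulus it is proportional to the cell's linear kernel.
```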
Affiliation(s)
- Sophia Wienbar: Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, United States
- Gregory W Schwartz: Departments of Ophthalmology and Physiology, Feinberg School of Medicine, Northwestern University, United States
49
Tuten WS, Cooper RF, Tiruveedhula P, Dubra A, Roorda A, Cottaris NP, Brainard DH, Morgan JIW. Spatial summation in the human fovea: Do normal optical aberrations and fixational eye movements have an effect? J Vis 2018; 18:6. [PMID: 30105385 PMCID: PMC6091889 DOI: 10.1167/18.8.6] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Psychophysical inferences about the neural mechanisms supporting spatial vision can be undermined by uncertainties introduced by optical aberrations and fixational eye movements, particularly in the fovea, where the neuronal grain of the visual system is fine. We examined the effect of these preneural factors on photopic spatial summation in the human fovea using a custom adaptive optics scanning light ophthalmoscope that provided control over optical aberrations and retinal stimulus motion. Consistent with previous results, Ricco's area of complete summation encompassed multiple photoreceptors when measured with ordinary amounts of ocular aberrations and retinal stimulus motion. When both factors were minimized experimentally, summation areas were essentially unchanged, suggesting that foveal spatial summation is limited by postreceptoral neural pooling. We compared our behavioral data to predictions generated with a physiologically inspired front-end model of the visual system and were able to capture the shape of the summation curves obtained with and without pre-retinal factors using a single postreceptoral summing filter of fixed spatial extent. Given our data and modeling, neurons in the magnocellular visual pathway, such as parasol ganglion cells, provide a candidate neural correlate of Ricco's area in the central fovea.
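A single fixed-extent postreceptoral summing filter of the kind invoked above predicts the classic two-branch summation curve: thresholds fall in inverse proportion to stimulus area for spots smaller than the filter (Ricco's law) and flatten once the spot outgrows it. The sketch below uses an arbitrary Gaussian filter size and detection criterion, not the paper's fitted front-end model.

```python
import numpy as np

def threshold_vs_diameter(diameters_arcmin, filter_sigma_arcmin=2.0, criterion=1.0):
    """Detection threshold predicted by a single Gaussian summing filter:
    the filter integrates stimulus contrast over space, and threshold is the
    contrast at which that integral reaches a fixed criterion."""
    x = np.linspace(-30, 30, 1201)                       # arcmin, 0.05-arcmin steps
    xx, yy = np.meshgrid(x, x)
    weight = np.exp(-(xx**2 + yy**2) / (2 * filter_sigma_arcmin**2))
    thresholds = []
    for d in diameters_arcmin:
        spot = (xx**2 + yy**2) <= (d / 2.0)**2           # uniform circular increment
        summed = (weight * spot).sum()                   # filter output per unit contrast
        thresholds.append(criterion / summed)
    return np.array(thresholds)

diams = np.geomspace(0.5, 30.0, 12)
th = threshold_vs_diameter(diams)
# Log-log slope of threshold vs. area: near -1 for small spots (complete summation
# within Ricco's area), approaching 0 once the spot exceeds the summing filter.
slope_vs_area = np.diff(np.log10(th)) / np.diff(np.log10(np.pi * (diams / 2.0)**2))
print(np.round(slope_vs_area, 2))
```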
Affiliation(s)
- William S Tuten: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA; Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Robert F Cooper: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA; Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA
- Pavan Tiruveedhula: School of Optometry and Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Austin Roorda: School of Optometry and Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Nicolas P Cottaris: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- David H Brainard: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Jessica I W Morgan: Scheie Eye Institute, Department of Ophthalmology, University of Pennsylvania, Philadelphia, PA, USA; Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
50
Maheswaranathan N, Kastner DB, Baccus SA, Ganguli S. Inferring hidden structure in multilayered neural circuits. PLoS Comput Biol 2018; 14:e1006291. [PMID: 30138312 PMCID: PMC6124781 DOI: 10.1371/journal.pcbi.1006291] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Revised: 09/05/2018] [Accepted: 06/09/2018] [Indexed: 01/26/2023] Open
Abstract
A central challenge in sensory neuroscience involves understanding how neural circuits shape computations across cascaded cell layers. Here we attempt to reconstruct the response properties of experimentally unobserved neurons in the interior of a multilayered neural circuit, using cascaded linear-nonlinear (LN-LN) models. We combine non-smooth regularization with proximal consensus algorithms to overcome difficulties in fitting such models that arise from the high dimensionality of their parameter space. We apply this framework to retinal ganglion cell processing, learning LN-LN models of retinal circuitry consisting of thousands of parameters, using 40 minutes of responses to white noise. Our models demonstrate a 53% improvement in predicting ganglion cell spikes over classical linear-nonlinear (LN) models. Internal nonlinear subunits of the model match properties of retinal bipolar cells in both receptive field structure and number. Subunits have consistently high thresholds, suppressing all but a small fraction of inputs, leading to sparse activity patterns in which only one subunit drives ganglion cell spiking at any time. From the model's parameters, we predict that the removal of visual redundancies through stimulus decorrelation across space, a central tenet of efficient coding theory, originates primarily from bipolar cell synapses. Furthermore, the composite nonlinear computation performed by retinal circuitry corresponds to a Boolean OR function applied to bipolar cell feature detectors. Our methods are statistically and computationally efficient, enabling us to rapidly learn hierarchical nonlinear models as well as efficiently compute widely used descriptive statistics such as the spike-triggered average (STA) and covariance (STC) for high-dimensional stimuli. This general computational framework may aid in extracting principles of nonlinear hierarchical sensory processing across diverse modalities from limited data.
Computation in neural circuits arises from the cascaded processing of inputs through multiple cell layers. Each of these cell layers performs operations such as filtering and thresholding in order to shape a circuit's output. It remains a challenge to describe both the computations and the mechanisms that mediate them given limited data recorded from a neural circuit. A standard approach to describing circuit computation involves building quantitative encoding models that predict the circuit response given its input, but these often fail to map in an interpretable way onto mechanisms within the circuit. In this work, we build two-layer linear-nonlinear cascade (LN-LN) models in order to describe how the retinal output is shaped by nonlinear mechanisms in the inner retina. We find that these LN-LN models, fit to ganglion cell recordings alone, identify filters and nonlinearities that are readily mapped onto individual circuit components inside the retina, namely bipolar cells and the bipolar-to-ganglion cell synaptic threshold. This work demonstrates how combining simple prior knowledge of circuit properties with partial experimental recordings of a neural circuit's output can yield interpretable models of the entire circuit computation, including parts of the circuit that are hidden or not directly observed in neural recordings.
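The forward pass of a two-layer LN-LN cascade of the general form discussed above is compact: subunit filters, a high-threshold rectification, weighted pooling by the ganglion cell, and an output nonlinearity. The sketch below is only the forward model with made-up filters and parameters; it omits the regularized fitting with proximal consensus algorithms that is the paper's main technical contribution.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def ln_ln_rate(stimulus, subunit_filters, subunit_threshold, pool_weights, bias):
    """Two-layer LN-LN cascade:
    layer 1 ("bipolar" subunits): linear spatial filtering + threshold rectification;
    layer 2 ("ganglion cell"):    weighted pooling + output nonlinearity."""
    drive = subunit_filters @ stimulus                      # (n_subunits,) filter outputs
    subunit_out = np.maximum(drive - subunit_threshold, 0.0)
    return softplus(pool_weights @ subunit_out + bias)

# Toy configuration: 8 subunits, each sensitive to a different 1-D stimulus patch.
n_pixels, n_subunits = 40, 8
filters = np.zeros((n_subunits, n_pixels))
for i in range(n_subunits):
    filters[i, i * 5 : i * 5 + 5] = 1.0 / 5.0               # non-overlapping local filters

rng = np.random.default_rng(4)
stim = rng.standard_normal(n_pixels)                        # one frame of spatial white noise
rate = ln_ln_rate(stim, filters, subunit_threshold=0.8,
                  pool_weights=np.ones(n_subunits), bias=-0.5)
```

With the subunit threshold set well above the typical filter output, most stimuli activate at most one subunit in this toy version, mirroring the sparse subunit activity the abstract describes.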
Affiliation(s)
- Niru Maheswaranathan: Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- David B. Kastner: Neurosciences Graduate Program, Stanford University, Stanford, California, United States of America
- Stephen A. Baccus: Department of Neurobiology, Stanford University, Stanford, California, United States of America
- Surya Ganguli: Department of Applied Physics, Stanford University, Stanford, California, United States of America