1
Karamanlis D, Khani MH, Schreyer HM, Zapp SJ, Mietsch M, Gollisch T. Nonlinear receptive fields evoke redundant retinal coding of natural scenes. Nature 2025; 637:394-401. PMID: 39567692; PMCID: PMC11711096; DOI: 10.1038/s41586-024-08212-3.
Abstract
The role of the vertebrate retina in early vision is generally described by the efficient coding hypothesis [1,2], which predicts that the retina reduces the redundancy inherent in natural scenes [3] by discarding spatiotemporal correlations while preserving stimulus information [4]. It is unclear, however, whether the predicted decorrelation and redundancy reduction in the activity of ganglion cells, the retina's output neurons, hold under gaze shifts, which dominate the dynamics of the natural visual input [5]. We show here that species-specific gaze patterns in natural stimuli can drive correlated spiking responses both in and across distinct types of ganglion cells in marmoset as well as mouse retina. These concerted responses disrupt redundancy reduction to signal fixation periods with locally high spatial contrast. Model-based analyses of ganglion cell responses to natural stimuli show that the observed response correlations follow from nonlinear pooling of ganglion cell inputs. Our results indicate cell-type-specific deviations from efficient coding in retinal processing of natural gaze shifts.
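The nonlinear pooling invoked in this abstract is commonly illustrated with a rectified-subunit receptive field. The sketch below is an illustration only, not the authors' fitted model; the weights, subunit layout, and rectifier are assumptions. It contrasts linear pooling with rectified-subunit pooling for a zero-mean patch with high spatial contrast.

```python
import numpy as np

def linear_pooling(patch, weights):
    """Classic linear receptive field: a single weighted sum of light intensity."""
    return np.sum(weights * patch)

def subunit_pooling(patch, subunit_weights, rectify=lambda x: np.maximum(x, 0.0)):
    """Nonlinear pooling: each subunit filters its own region and is rectified
    before summation, so the cell responds to fine spatial structure (e.g.,
    locally high contrast during fixations) even when mean intensity is zero."""
    return sum(rectify(np.sum(w * patch)) for w in subunit_weights)

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8))
patch -= patch.mean()                     # zero-mean, high-contrast patch
linear_rf = np.ones((8, 8)) / 64          # uniform linear weighting
subunits = []
for i in range(4):                        # four quadrant subunits (toy layout)
    w = np.zeros((8, 8))
    w[(i // 2) * 4:(i // 2) * 4 + 4, (i % 2) * 4:(i % 2) * 4 + 4] = 1 / 16
    subunits.append(w)

print(linear_pooling(patch, linear_rf))   # ~0: the linear model sees nothing
print(subunit_pooling(patch, subunits))   # >0: rectified subunits signal contrast
```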
Affiliation(s)
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- University of Geneva, Department of Basic Neurosciences, Geneva, Switzerland
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany
2
Hoshal BD, Holmes CM, Bojanek K, Salisbury JM, Berry MJ, Marre O, Palmer SE. Stimulus-invariant aspects of the retinal code drive discriminability of natural scenes. Proc Natl Acad Sci U S A 2024; 121:e2313676121. PMID: 39700141; DOI: 10.1073/pnas.2313676121.
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has dug into the static and dynamic features of the population code in retinal ganglion cells (RGCs), less is known about how populations form both flexible and reliable encoding in natural moving scenes. We record from the larval salamander retina responding to five different natural movies, over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the retinal output. While the single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between RGCs and amacrine cells.
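The decomposition into single-cell rates and pairwise couplings described above is typically formalized as a pairwise maximum-entropy (Ising-like) model; the generic form below is for illustration only (the paper's exact parameterization, e.g., stimulus- or time-dependent fields, may differ):

```latex
P(\sigma_1,\dots,\sigma_N) \;=\; \frac{1}{Z}\,
\exp\!\Big(\sum_i h_i\,\sigma_i \;+\; \tfrac{1}{2}\sum_{i \neq j} J_{ij}\,\sigma_i \sigma_j\Big),
\qquad \sigma_i \in \{0,1\},
```

where h_i sets the firing bias of cell i, J_ij is the coupling between cells i and j, and Z normalizes the distribution.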
Affiliation(s)
- Benjamin D Hoshal
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Kyle Bojanek
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Jared M Salisbury
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Physics, University of Chicago, Chicago, IL 60637
- Michael J Berry
- Princeton Neuroscience Institute, Department of Molecular Biology, Princeton University, Princeton, NJ 08540
- Olivier Marre
- Institut de la Vision, Sorbonne Université, INSERM, Paris 75012, France
- Stephanie E Palmer
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Physics, University of Chicago, Chicago, IL 60637
- Center for the Physics of Biological Function, Department of Physics, Princeton University, Princeton, NJ 08540
3
Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky EJ. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. Nat Commun 2024; 15:7964. PMID: 39261491; PMCID: PMC11390888; DOI: 10.1038/s41467-024-52304-7.
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
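The reconstruction approach described above pairs a response likelihood with a learned denoiser that acts as an image prior. The plug-and-play-style sketch below conveys the idea only; the likelihood model, the Gaussian-blur "denoiser," and the step sizes are stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def poisson_loglik_grad(image, spikes, filters):
    """Gradient of a Poisson (LNP-style) log-likelihood with exponential
    nonlinearity, w.r.t. the image. `filters` is (n_cells, n_pixels) and is a
    stand-in for the paper's fitted response model."""
    rate = np.exp(filters @ image.ravel())
    return ((spikes - rate) @ filters).reshape(image.shape)

def denoise(image, sigma=1.0):
    """Stand-in for the trained denoising network that supplies the image prior."""
    return gaussian_filter(image, sigma)

def reconstruct(spikes, filters, shape, n_iter=200, step=1e-3, mix=0.1):
    """Plug-and-play-style MAP sketch: alternate likelihood ascent with a
    denoising step that pulls the estimate toward the natural-image prior."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        x = x + step * poisson_loglik_grad(x, spikes, filters)
        x = (1.0 - mix) * x + mix * denoise(x)
    return x
```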
Affiliation(s)
- Eric G Wu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Alex R Gogliettino
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P Simoncelli
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E J Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
4
Krüppel S, Khani MH, Schreyer HM, Sridhar S, Ramakrishna V, Zapp SJ, Mietsch M, Karamanlis D, Gollisch T. Applying super-resolution and tomography concepts to identify receptive field subunits in the retina. PLoS Comput Biol 2024; 20:e1012370. PMID: 39226328; PMCID: PMC11398665; DOI: 10.1371/journal.pcbi.1012370.
Abstract
Spatially nonlinear stimulus integration by retinal ganglion cells lies at the heart of various computations performed by the retina. It arises from the nonlinear transmission of signals that ganglion cells receive from bipolar cells, which thereby constitute functional subunits within a ganglion cell's receptive field. Inferring these subunits from recorded ganglion cell activity promises a new avenue for studying the functional architecture of the retina. This calls for efficient methods, which leave sufficient experimental time to leverage the acquired knowledge for further investigating identified subunits. Here, we combine concepts from super-resolution microscopy and computed tomography and introduce super-resolved tomographic reconstruction (STR) as a technique to efficiently stimulate and locate receptive field subunits. Simulations demonstrate that this approach can reliably identify subunits across a wide range of model variations, and application in recordings of primate parasol ganglion cells validates the experimental feasibility. STR can potentially reveal comprehensive subunit layouts within only a few tens of minutes of recording time, making it ideal for online analysis and closed-loop investigations of receptive field substructure in retina recordings.
Affiliation(s)
- Steffen Krüppel
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Shashwat Sridhar
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Varsha Ramakrishna
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany
5
Hoshal BD, Holmes CM, Bojanek K, Salisbury J, Berry MJ, Marre O, Palmer SE. Stimulus invariant aspects of the retinal code drive discriminability of natural scenes. bioRxiv [Preprint] 2024:2023.08.08.552526. PMID: 37609259; PMCID: PMC10441377; DOI: 10.1101/2023.08.08.552526.
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has dug into the static and dynamic features of the population code in retinal ganglion cells, less is known about how populations form both flexible and reliable encoding in natural moving scenes. We record from the larval salamander retina responding to five different natural movies, over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the retinal output. While the single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between retinal ganglion cells and amacrine cells.
6
Almasi A, Sun SH, Jung YJ, Ibbotson M, Meffin H. Data-driven modelling of visual receptive fields: comparison between the generalized quadratic model and the nonlinear input model. J Neural Eng 2024; 21:046014. PMID: 38941988; DOI: 10.1088/1741-2552/ad5d15.
Abstract
Objective: Neurons in primary visual cortex (V1) display a range of sensitivity in their response to translations of their preferred visual features within their receptive field: from high specificity to a precise position through to complete invariance. This visual feature selectivity and invariance is frequently modeled by applying a selection of linear spatial filters to the input image, which define the feature selectivity, followed by a nonlinear function that combines the filter outputs and defines the invariance, to predict the neural response. We compare two such classes of model that are both popular and parsimonious: the generalized quadratic model (GQM) and the nonlinear input model (NIM). These two classes of model differ primarily in that the NIM can accommodate a greater diversity in the form of the nonlinearity that is applied to the outputs of the filters. Approach: We compare the two model types by applying them to data from multielectrode recordings from cat primary visual cortex in response to spatially white Gaussian noise. After fitting both classes of model to a database of 342 single units (SUs), we analyze the qualitative and quantitative differences in the visual feature processing performed by the two models and their ability to predict the neural response. Main results: We find that the NIM predicts response rates on held-out data at least as well as the GQM for 95% of SUs. Superior performance occurs predominantly for those units with above-average spike rates and is largely due to the NIM's ability to capture aspects of the nonlinear function that cannot be captured with the GQM, rather than differences in the visual features being processed by the two models. Significance: These results can help guide model choice for data-driven receptive field modelling.
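In a common formulation (symbols here are generic, not necessarily the paper's notation), the two model classes compared above differ only in how the filter outputs are combined before the spiking nonlinearity F:

```latex
\text{GQM:}\quad r(\mathbf{x}) = F\!\big(b + \mathbf{w}^{\top}\mathbf{x} + \mathbf{x}^{\top}\mathbf{Q}\,\mathbf{x}\big),
\qquad
\text{NIM:}\quad r(\mathbf{x}) = F\!\Big(b + \sum_i f_i\big(\mathbf{k}_i^{\top}\mathbf{x}\big)\Big).
```

The low-rank quadratic form Q restricts the GQM's upstream nonlinearities to be quadratic, whereas the NIM estimates each upstream nonlinearity f_i from the data (for example, as a rectifier), which is the extra flexibility referred to in the abstract.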
Affiliation(s)
- Ali Almasi
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Shi H Sun
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Young Jun Jung
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Michael Ibbotson
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, VIC 3010, Australia
- Hamish Meffin
- National Vision Research Institute, Carlton, VIC 3053, Australia
- Department of Biomedical Engineering, The University of Melbourne, Parkville, VIC 3010, Australia
7
Hsiang JC, Shen N, Soto F, Kerschensteiner D. Distributed feature representations of natural stimuli across parallel retinal pathways. Nat Commun 2024; 15:1920. PMID: 38429280; PMCID: PMC10907388; DOI: 10.1038/s41467-024-46348-y.
Abstract
How sensory systems extract salient features from natural environments and organize them across neural pathways is unclear. Combining single-cell and population two-photon calcium imaging in mice, we discover that retinal ON bipolar cells (second-order neurons of the visual system) are divided into two blocks of four types. The two blocks distribute temporal and spatial information encoding, respectively. ON bipolar cell axons co-stratify within each block, but separate laminarly between them (upper block: diverse temporal, uniform spatial tuning; lower block: diverse spatial, uniform temporal tuning). ON bipolar cells extract temporal and spatial features similarly from artificial and naturalistic stimuli. In addition, they differ in sensitivity to coherent motion in naturalistic movies. Motion information is distributed across ON bipolar cells in the upper and the lower blocks, multiplexed with temporal and spatial contrast, independent features of natural scenes. Comparing the responses of different boutons within the same arbor, we find that axons of all ON bipolar cell types function as computational units. Thus, our results provide insights into the visual feature extraction from naturalistic stimuli and reveal how structural and functional organization cooperate to generate parallel ON pathways for temporal and spatial information in the mammalian retina.
Affiliation(s)
- Jen-Chun Hsiang
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Ning Shen
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Florentina Soto
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Daniel Kerschensteiner
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Department of Biomedical Engineering, Washington University School of Medicine, St. Louis, MO, 63110, USA
8
Zaidi M, Aggarwal G, Shah NP, Karniol-Tambour O, Goetz G, Madugula SS, Gogliettino AR, Wu EG, Kling A, Brackbill N, Sher A, Litke AM, Chichilnisky EJ. Inferring light responses of primate retinal ganglion cells using intrinsic electrical signatures. J Neural Eng 2023; 20. PMID: 37433293; PMCID: PMC11067857; DOI: 10.1088/1741-2552/ace657.
Abstract
Objective. Retinal implants are designed to stimulate retinal ganglion cells (RGCs) in a way that restores sight to individuals blinded by photoreceptor degeneration. Reproducing high-acuity vision with these devices will likely require inferring the natural light responses of diverse RGC types in the implanted retina, without being able to measure them directly. Here we demonstrate an inference approach that exploits intrinsic electrophysiological features of primate RGCs. Approach. First, ON-parasol and OFF-parasol RGC types were identified using their intrinsic electrical features in large-scale multi-electrode recordings from macaque retina. Then, the electrically inferred somatic location, inferred cell type, and average linear-nonlinear-Poisson model parameters of each cell type were used to infer a light response model for each cell. The accuracy of the cell type classification and of reproducing measured light responses with the model were evaluated. Main results. A cell-type classifier trained on 246 large-scale multi-electrode recordings from 148 retinas achieved 95% mean accuracy on 29 test retinas. In five retinas tested, the inferred models achieved an average correlation with measured firing rates of 0.49 for white noise visual stimuli and 0.50 for natural scenes stimuli, compared to 0.65 and 0.58 respectively for models fitted to recorded light responses (an upper bound). Linear decoding of natural images from predicted RGC activity in one retina showed a mean correlation of 0.55 between decoded and true images, compared to an upper bound of 0.81 using models fitted to light response data. Significance. These results suggest that inference of RGC light response properties from intrinsic features of their electrical activity may be a useful approach for high-fidelity sight restoration. The overall strategy of first inferring cell type from electrical features and then exploiting cell type to help infer natural cell function may also prove broadly useful to neural interfaces.
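For reference, the linear-nonlinear-Poisson (LNP) model mentioned above predicts a firing rate by space-time filtering followed by a static nonlinearity and Poisson spiking. The sketch below uses arbitrary placeholder filters and bin sizes, not the cell-type averages fitted in the paper.

```python
import numpy as np

def lnp_rate(stimulus, spatial_filter, temporal_filter, nonlinearity):
    """Linear-nonlinear-Poisson (LNP) rate prediction: spatial filtering,
    temporal filtering, then a static output nonlinearity.
    stimulus: (n_frames, n_pixels)."""
    spatial_drive = stimulus @ spatial_filter
    drive = np.convolve(spatial_drive, temporal_filter)[: len(spatial_drive)]
    return nonlinearity(drive)

rng = np.random.default_rng(1)
stim = rng.standard_normal((1000, 100))                  # white-noise movie
rate = lnp_rate(stim,
                rng.standard_normal(100) / 10,           # toy spatial filter
                np.exp(-np.arange(10) / 3.0),            # toy temporal filter
                lambda d: np.exp(d - 2.0))               # exponential output nonlinearity
spikes = rng.poisson(rate * 0.01)                        # Poisson spiking, 10 ms bins (assumed)
```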
Affiliation(s)
- Moosa Zaidi
- Stanford University School of Medicine, Stanford University, Stanford, CA, United States of America
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Gorish Aggarwal
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Nishal P Shah
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Orren Karniol-Tambour
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States of America
- Georges Goetz
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Sasidhar S Madugula
- Stanford University School of Medicine, Stanford University, Stanford, CA, United States of America
- Neurosciences, Stanford University, Stanford, CA, United States of America
- Alex R Gogliettino
- Neurosciences, Stanford University, Stanford, CA, United States of America
- Eric G Wu
- Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Alexandra Kling
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Nora Brackbill
- Physics, Stanford University, Stanford, CA, United States of America
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, United States of America
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, United States of America
- E J Chichilnisky
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Ophthalmology, Stanford University, Stanford, CA, United States of America
9
Gogliettino AR, Madugula SS, Grosberg LE, Vilkhu RS, Brown J, Nguyen H, Kling A, Hottowy P, Dąbrowski W, Sher A, Litke AM, Chichilnisky EJ. High-fidelity reproduction of visual signals by electrical stimulation in the central primate retina. J Neurosci 2023; 43:4625-4641. PMID: 37188516; PMCID: PMC10286946; DOI: 10.1523/jneurosci.1091-22.2023.
Abstract
Electrical stimulation of retinal ganglion cells (RGCs) with electronic implants provides rudimentary artificial vision to people blinded by retinal degeneration. However, current devices stimulate indiscriminately and therefore cannot reproduce the intricate neural code of the retina. Recent work has demonstrated more precise activation of RGCs using focal electrical stimulation with multielectrode arrays in the peripheral macaque retina, but it is unclear how effective this can be in the central retina, which is required for high-resolution vision. This work probes the neural code and effectiveness of focal epiretinal stimulation in the central macaque retina, using large-scale electrical recording and stimulation ex vivo. The functional organization, light response properties, and electrical properties of the major RGC types in the central retina were mostly similar to the peripheral retina, with some notable differences in density, kinetics, linearity, spiking statistics, and correlations. The major RGC types could be distinguished by their intrinsic electrical properties. Electrical stimulation targeting parasol cells revealed similar activation thresholds and reduced axon bundle activation in the central retina, but lower stimulation selectivity. Quantitative evaluation of the potential for image reconstruction from electrically evoked parasol cell signals revealed higher overall expected image quality in the central retina. An exploration of inadvertent midget cell activation suggested that it could contribute high spatial frequency noise to the visual signal carried by parasol cells. These results support the possibility of reproducing high-acuity visual signals in the central retina with an epiretinal implant. SIGNIFICANCE STATEMENT Artificial restoration of vision with retinal implants is a major treatment for blindness. However, present-day implants do not provide high-resolution visual perception, in part because they do not reproduce the natural neural code of the retina. Here, we demonstrate the level of visual signal reproduction that is possible with a future implant by examining how accurately responses to electrical stimulation of parasol retinal ganglion cells can convey visual signals. Although the precision of electrical stimulation in the central retina was diminished relative to the peripheral retina, the quality of expected visual signal reconstruction in parasol cells was greater. These findings suggest that visual signals could be restored with high fidelity in the central retina using a future retinal implant.
Affiliation(s)
- Alex R Gogliettino
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Sasidhar S Madugula
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Stanford School of Medicine, Stanford University, Stanford, California 94305
- Lauren E Grosberg
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Ramandeep S Vilkhu
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Jeff Brown
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Huy Nguyen
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Alexandra Kling
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Paweł Hottowy
- Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059, Kraków, Poland
- Władysław Dąbrowski
- Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059, Kraków, Poland
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- E J Chichilnisky
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Department of Ophthalmology, Stanford University, Stanford, California 94305
10
Freedland J, Rieke F. Systematic reduction of the dimensionality of natural scenes allows accurate predictions of retinal ganglion cell spike outputs. Proc Natl Acad Sci U S A 2022; 119:e2121744119. PMID: 36343230; PMCID: PMC9674269; DOI: 10.1073/pnas.2121744119.
Abstract
The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
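The central simplification described above, replacing a natural image with a small number of linearly integrated regions, can be sketched as follows; the region shapes and count here are an arbitrary grid, not the regions derived in the paper.

```python
import numpy as np

def reduce_to_regions(frame, labels, n_regions):
    """Replace the pixels of each region with that region's mean intensity,
    i.e., integrate the image linearly within each region."""
    reduced = np.empty_like(frame, dtype=float)
    for r in range(n_regions):
        mask = labels == r
        reduced[mask] = frame[mask].mean()
    return reduced

# Example: a 40x40 patch reduced to a 4x4 grid of 16 square regions
frame = np.random.default_rng(2).random((40, 40))
labels = (np.arange(40)[:, None] // 10) * 4 + (np.arange(40)[None, :] // 10)
simplified = reduce_to_regions(frame, labels, 16)
```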
Affiliation(s)
- Julian Freedland
- Molecular Engineering & Sciences Institute, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
11
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
12
Zapp SJ, Nitsche S, Gollisch T. Retinal receptive-field substructure: scaffolding for coding and computation. Trends Neurosci 2022; 45:430-445. DOI: 10.1016/j.tins.2022.03.005.
13
Liu JK, Karamanlis D, Gollisch T. Simple model for encoding natural images by retinal ganglion cells with nonlinear spatial integration. PLoS Comput Biol 2022; 18:e1009925. PMID: 35259159; PMCID: PMC8932571; DOI: 10.1371/journal.pcbi.1009925.
Abstract
A central goal in sensory neuroscience is to understand the neuronal signal processing involved in the encoding of natural stimuli. A critical step towards this goal is the development of successful computational encoding models. For ganglion cells in the vertebrate retina, the development of satisfactory models for responses to natural visual scenes is an ongoing challenge. Standard models typically apply linear integration of visual stimuli over space, yet many ganglion cells are known to show nonlinear spatial integration, in particular when stimulated with contrast-reversing gratings. We here study the influence of spatial nonlinearities in the encoding of natural images by ganglion cells, using multielectrode-array recordings from isolated salamander and mouse retinas. We assess how responses to natural images depend on first- and second-order statistics of spatial patterns inside the receptive field. This leads us to a simple extension of current standard ganglion cell models. We show that taking not only the weighted average of light intensity inside the receptive field into account but also its variance over space can partly account for nonlinear integration and substantially improve predictions of responses to novel images. For salamander ganglion cells, we find that response predictions for cell classes with large receptive fields profit most from including spatial contrast information. Finally, we demonstrate how this model framework can be used to assess the spatial scale of nonlinear integration. Our results underscore that nonlinear spatial stimulus integration translates to stimulation with natural images. Furthermore, the introduced model framework provides a simple, yet powerful extension of standard models and may serve as a benchmark for the development of more detailed models of the nonlinear structure of receptive fields. For understanding how sensory systems operate in the natural environment, an important goal is to develop models that capture neuronal responses to natural stimuli. For retinal ganglion cells, which connect the eye to the brain, current standard models often fail to capture responses to natural visual scenes. This shortcoming is at least partly rooted in the fact that ganglion cells may combine visual signals over space in a nonlinear fashion. We here show that a simple model, which considers not only the average light intensity inside a cell's receptive field but also the variance of light intensity over space, can partly account for these nonlinearities and thereby improve current standard models. This provides an easy-to-obtain benchmark for modeling ganglion cell responses to natural images.
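A minimal sketch of the model extension described here, in which the response depends on both the weighted mean and the spatial variance of light intensity within the receptive field; parameter names and the output nonlinearity are illustrative, not the paper's notation.

```python
import numpy as np

def mean_variance_response(patch, rf_weights, w_mean, w_var, offset,
                           output_nonlinearity=lambda x: np.maximum(x, 0.0)):
    """Response from the weighted mean light intensity in the receptive field
    plus a term proportional to the variance of intensity over space."""
    w = rf_weights / rf_weights.sum()
    mean_intensity = np.sum(w * patch)
    spatial_variance = np.sum(w * (patch - mean_intensity) ** 2)
    return output_nonlinearity(offset + w_mean * mean_intensity + w_var * spatial_variance)
```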
Affiliation(s)
- Jian K. Liu
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- International Max Planck Research School for Neurosciences, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
14
Shah NP, Brackbill N, Samarakoon R, Rhoades C, Kling A, Sher A, Litke A, Singer Y, Shlens J, Chichilnisky EJ. Individual variability of neural computations in the primate retina. Neuron 2022; 110:698-708.e5. PMID: 34932942; PMCID: PMC8857061; DOI: 10.1016/j.neuron.2021.11.026.
Abstract
Variation in the neural code contributes to making each individual unique. We probed neural code variation using ∼100 population recordings from major ganglion cell types in the macaque retina, combined with an interpretable computational representation of individual variability. This representation captured variation and covariation in properties such as nonlinearity, temporal dynamics, and spatial receptive field size and preserved invariances such as asymmetries between On and Off cells. The covariation of response properties in different cell types was associated with the proximity of lamination of their synaptic input. Surprisingly, male retinas exhibited higher firing rates and faster temporal integration than female retinas. Exploiting data from previously recorded retinas enabled efficient characterization of a new macaque retina, and of a human retina. Simulations indicated that combining a large dataset of retinal recordings with behavioral feedback could reveal the neural code in a living human and thus improve vision restoration with retinal implants.
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Nora Brackbill
- Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Physics, Stanford University, Stanford, CA 94305, USA
- Ryan Samarakoon
- Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Colleen Rhoades
- Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Alexandra Kling
- Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Alexander Sher
- University of California, Santa Cruz, Santa Cruz, CA 95064, USA
- Alan Litke
- University of California, Santa Cruz, Santa Cruz, CA 95064, USA
- Yoram Singer
- WorldQuant, LLC, 1700 E Putnam Ave., Old Greenwich, CT 06870, USA
- Jonathon Shlens
- Google Brain, 1600 Amphitheatre Pkwy., Mountain View, CA 94043, USA
- E J Chichilnisky
- Hansen Experimental Physics Lab, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
15
Kupers ER, Benson NC, Carrasco M, Winawer J. Asymmetries around the visual field: From retina to cortex to behavior. PLoS Comput Biol 2022; 18:e1009771. PMID: 35007281; PMCID: PMC8782511; DOI: 10.1371/journal.pcbi.1009771.
Abstract
Visual performance varies around the visual field. It is best near the fovea compared to the periphery, and at iso-eccentric locations it is best on the horizontal, intermediate on the lower, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to the decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can only account for a small percentage of behavior. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effect of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision maker's performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations and are greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed from human performance. We conclude that cone isomerizations, phototransduction, and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail additional contribution from cortical representations.
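The final model-observer stage described above, a linear support vector machine performing forced-choice orientation discrimination on mRGC responses, can be sketched with simulated data; the responses below are random stand-ins, and only the classification step mirrors the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Toy stand-in: classify simulated mRGC population responses to two orientations.
rng = np.random.default_rng(3)
n_trials, n_cells = 400, 200
orientation_signal = rng.standard_normal(n_cells)   # assumed class-dependent offset
X = rng.standard_normal((n_trials, n_cells))         # trial-to-trial response noise
y = rng.integers(0, 2, n_trials)                     # orientation label (e.g., CW vs CCW)
X[y == 1] += 0.15 * orientation_signal               # small, noisy class separation
acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
print(f"cross-validated 2AFC accuracy: {acc:.2f}")
```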
Affiliation(s)
- Eline R. Kupers
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Noah C. Benson
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
- Jonathan Winawer
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Sciences, New York University, New York, New York, United States of America
16
Williams AH, Linderman SW. Statistical neuroscience in the single trial limit. Curr Opin Neurobiol 2021; 70:193-205. PMID: 34861596; DOI: 10.1016/j.conb.2021.10.008.
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
Affiliation(s)
- Alex H Williams
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
- Scott W Linderman
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
17
Khani MH, Gollisch T. Linear and nonlinear chromatic integration in the mouse retina. Nat Commun 2021; 12:1900. PMID: 33772000; PMCID: PMC7997992; DOI: 10.1038/s41467-021-22042-1.
Abstract
The computations performed by a neural circuit depend on how it integrates its input signals into an output of its own. In the retina, ganglion cells integrate visual information over time, space, and chromatic channels. Unlike the former two, chromatic integration is largely unexplored. Analogous to classical studies of spatial integration, we here study chromatic integration in mouse retina by identifying chromatic stimuli for which activation from the green or UV color channel is maximally balanced by deactivation through the other color channel. This reveals nonlinear chromatic integration in subsets of On, Off, and On-Off ganglion cells. Unlike the latter two, nonlinear On cells display response suppression rather than activation under balanced chromatic stimulation. Furthermore, nonlinear chromatic integration occurs independently of nonlinear spatial integration, depends on contributions from the rod pathway and on surround inhibition, and may provide information about chromatic boundaries, such as the skyline in natural scenes.
Affiliation(s)
- Mohammad Hossein Khani
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- International Max Planck Research School for Neuroscience, Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
18
Nonlinear spatial integration underlies the diversity of retinal ganglion cell responses to natural images. J Neurosci 2021; 41:3479-3498. PMID: 33664129; PMCID: PMC8051676; DOI: 10.1523/jneurosci.3075-20.2021.
Abstract
How neurons encode natural stimuli is a fundamental question for sensory neuroscience. In the early visual system, standard encoding models assume that neurons linearly filter incoming stimuli through their receptive fields, but artificial stimuli, such as contrast-reversing gratings, often reveal nonlinear spatial processing. We investigated to what extent such nonlinear processing is relevant for the encoding of natural images in retinal ganglion cells in mice of either sex. We found that standard linear receptive field models yielded good predictions of responses to flashed natural images for a subset of cells but failed to capture the spiking activity for many others. Cells with poor model performance displayed pronounced sensitivity to fine spatial contrast and local signal rectification as the dominant nonlinearity. By contrast, sensitivity to high-frequency contrast-reversing gratings, a classical test for nonlinear spatial integration, was not a good predictor of model performance and thus did not capture the variability of nonlinear spatial integration under natural images. In addition, we also observed a class of nonlinear ganglion cells with inverse tuning for spatial contrast, responding more strongly to spatially homogeneous than to spatially structured stimuli. These findings highlight the diversity of receptive field nonlinearities as a crucial component for understanding early sensory encoding in the context of natural stimuli. SIGNIFICANCE STATEMENT Experiments with artificial visual stimuli have revealed that many types of retinal ganglion cells pool spatial input signals nonlinearly. However, it is still unclear how relevant this nonlinear spatial integration is when the input signals are natural images. Here we analyze retinal responses to natural scenes in large populations of mouse ganglion cells. We show that nonlinear spatial integration strongly influences responses to natural images for some ganglion cells, but not for others. Cells with nonlinear spatial integration were sensitive to spatial structure inside their receptive fields, and a small group of cells displayed a surprising sensitivity to spatially homogeneous stimuli. Traditional analyses with contrast-reversing gratings did not predict this variability of nonlinear spatial integration under natural images.
19
Solomon SG. Retinal ganglion cells and the magnocellular, parvocellular, and koniocellular subcortical visual pathways from the eye to the brain. Handb Clin Neurol 2021; 178:31-50. PMID: 33832683; DOI: 10.1016/b978-0-12-821377-3.00018-0.
Abstract
In primates including humans, most retinal ganglion cells send signals to the lateral geniculate nucleus (LGN) of the thalamus. The anatomical and functional properties of the two major pathways through the LGN, the parvocellular (P) and magnocellular (M) pathways, are now well understood. Neurones in these pathways appear to convey a filtered version of the retinal image to primary visual cortex for further analysis. The properties of the P-pathway suggest it is important for high spatial acuity and red-green color vision, while those of the M-pathway suggest it is important for achromatic visual sensitivity and motion vision. Recent work has sharpened our understanding of how these properties are built in the retina, and described subtle but important nonlinearities that shape the signals that cortex receives. In addition to the P- and M-pathways, other retinal ganglion cells also project to the LGN. These ganglion cells are larger than those in the P- and M-pathways, have different retinal connectivity, and project to distinct regions of the LGN, together forming heterogeneous koniocellular (K) pathways. Recent work has started to reveal the properties of these K-pathways, in the retina and in the LGN. The functional properties of K-pathways are more complex than those in the P- and M-pathways, and the K-pathways are likely to have a distinct contribution to vision. They provide a complementary pathway to the primary visual cortex, but can also send signals directly to extrastriate visual cortex. At the level of the LGN, many neurones in the K-pathways seem to integrate retinal with non-retinal inputs, and some may provide an early site of binocular convergence.
Affiliation(s)
- Samuel G Solomon
- Department of Experimental Psychology, University College London, London, United Kingdom
20
Ho E, Shmakov A, Palanker D. Decoding network-mediated retinal response to electrical stimulation: implications for fidelity of prosthetic vision. J Neural Eng 2020; 17. PMID: 33108781; PMCID: PMC8284336; DOI: 10.1088/1741-2552/abc535.
Abstract
Objective. Patients with the photovoltaic subretinal implant PRIMA demonstrated letter acuity ∼0.1 logMAR worse than the sampling limit for 100 μm pixels (1.3 logMAR) and performed more slowly than healthy subjects tested with equivalently pixelated images. To explore the underlying differences between natural and prosthetic vision, we compare the fidelity of retinal response to visual and subretinal electrical stimulation through single-cell modeling and ensemble decoding. Approach. Responses of retinal ganglion cells (RGCs) to optical or electrical white noise stimulation in healthy and degenerate rat retinas were recorded via multi-electrode array. Each RGC was fit with linear-nonlinear and convolutional neural network models. To characterize RGC noise, we compared statistics of spike-triggered averages (STAs) in RGCs responding to electrical or visual stimulation of healthy and degenerate retinas. At the population level, we constructed a linear decoder to determine the accuracy of the ensemble of RGCs on N-way discrimination tasks. Main results. Although computational models can match natural visual responses well (correlation ∼0.6), they fit significantly worse to spike timings elicited by electrical stimulation of the healthy retina (correlation ∼0.15). In the degenerate retina, responses to electrical stimulation were fit equally poorly. The signal-to-noise ratio of electrical STAs in degenerate retinas matched that of the natural responses when 78 ± 6.5% of the spikes were replaced with random timing. However, the noise in RGC responses contributed minimally to errors in ensemble decoding. The determining factor in decoding accuracy was the number of responding cells. To compensate for fewer responding cells under electrical stimulation than in natural vision, more presentations of the same stimulus are required to deliver sufficient information for image decoding. Significance. Slower-than-natural pattern identification by patients with the PRIMA implant may be explained by the lower number of electrically activated cells than in natural vision, which can be compensated for by a larger number of stimulus presentations.
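The STA comparison at the heart of this analysis is easy to make concrete. The sketch below assumes binned spike counts aligned to stimulus frames and is not the authors' pipeline; the SNR measure and the spike-dilution step are simplified stand-ins for the analyses described in the abstract.

```python
# Minimal sketch (assumed data layout, not the published analysis) of a
# spike-triggered average, a crude STA signal-to-noise measure, and the
# "replace a fraction of spikes with random timing" manipulation.
import numpy as np

def compute_sta(stimulus, spike_counts, n_lags=10):
    """STA over the n_lags frames preceding each spike.

    stimulus     : (T, H, W) white-noise frames (zero-mean contrast values)
    spike_counts : (T,) spikes per frame for one ganglion cell
    returns      : (n_lags, H, W) STA, lag 0 = frame immediately before the spike
    """
    T = stimulus.shape[0]
    sta = np.zeros((n_lags,) + stimulus.shape[1:])
    n_spikes = 0
    for t in range(n_lags, T):
        if spike_counts[t] > 0:
            sta += spike_counts[t] * stimulus[t - n_lags:t][::-1]
            n_spikes += spike_counts[t]
    return sta / max(n_spikes, 1)

def sta_snr(sta):
    """Crude SNR: peak STA amplitude over a noise floor taken from the oldest lag."""
    return np.max(np.abs(sta)) / (np.std(sta[-1]) + 1e-12)

def dilute_spikes(spike_counts, fraction, rng=np.random.default_rng(0)):
    """Replace a given fraction of spikes with randomly timed spikes."""
    out = spike_counts.astype(int).copy()
    spike_bins = np.repeat(np.arange(len(out)), out)
    n_move = int(round(fraction * len(spike_bins)))
    removed = rng.choice(spike_bins, size=n_move, replace=False)
    np.subtract.at(out, removed, 1)
    np.add.at(out, rng.integers(0, len(out), size=n_move), 1)
    return out
```

With such helpers, one can dilute a natural-vision spike train until its STA SNR matches the electrically evoked one, which is the logic behind the 78 ± 6.5% figure quoted above.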
Affiliation(s)
- Elton Ho
- Department of Physics, Stanford University, Stanford, CA 94305, United States of America
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305, United States of America
| | - Alex Shmakov
- Department of Computer Science, University of California, Irvine, CA 92697, United States of America
| | - Daniel Palanker
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305, United States of America
- Department of Ophthalmology, Stanford University, Stanford, CA 94305, United States of America
| |
|
21
|
Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. [PMID: 33089827 DOI: 10.1088/1741-2552/aba8b1] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
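The third step, reproducing a target response by temporally dithering electrical stimuli, lends itself to a small illustration. The sketch below is a hedged simplification, not the published procedure: it assumes a precomputed dictionary of expected responses per electrical stimulus and uses a plain greedy error-reduction rule; all variable names are hypothetical.

```python
# Minimal sketch (assumed dictionary of expected responses; greedy rule chosen
# for illustration) of temporal dithering within one visual integration window.
import numpy as np

def greedy_dither(target, dictionary, n_steps):
    """At each stimulation slot, apply the electrical stimulus whose expected
    evoked response brings the accumulated population response closest to target.

    target     : (n_cells,) desired spike counts for the target percept
    dictionary : (n_stimuli, n_cells) expected spikes evoked by each stimulus
                 (include an all-zero row so the algorithm can choose to idle)
    n_steps    : number of stimulation slots in the integration window
    """
    accumulated = np.zeros_like(target, dtype=float)
    chosen = []
    for _ in range(n_steps):
        errors = np.linalg.norm(target - (accumulated + dictionary), axis=1)
        best = int(np.argmin(errors))
        chosen.append(best)
        accumulated += dictionary[best]
    return chosen, accumulated

# Hypothetical usage with random placeholder data.
rng = np.random.default_rng(1)
dictionary = np.vstack([np.zeros(20), rng.poisson(0.5, size=(40, 20))]).astype(float)
target = rng.poisson(3.0, size=20).astype(float)
stims, achieved = greedy_dither(target, dictionary, n_steps=30)
print("mean absolute error per cell:", np.mean(np.abs(target - achieved)))
```

Including the all-zero "no stimulation" row lets the greedy loop stop adding spikes once the target is matched, which keeps the accumulated response from overshooting.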
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, United States of America
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Author to whom any correspondence should be addressed
| | | |
|
22
|
Shah NP, Brackbill N, Rhoades C, Kling A, Goetz G, Litke AM, Sher A, Simoncelli EP, Chichilnisky EJ. Inference of nonlinear receptive field subunits with spike-triggered clustering. eLife 2020; 9:e45743. [PMID: 32149600 PMCID: PMC7062463 DOI: 10.7554/elife.45743] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2019] [Accepted: 10/29/2019] [Indexed: 11/25/2022] Open
Abstract
Responses of sensory neurons are often modeled using a weighted combination of rectified linear subunits. Since these subunits often cannot be measured directly, a flexible method is needed to infer their properties from the responses of downstream neurons. We present a method for maximum likelihood estimation of subunits by soft-clustering spike-triggered stimuli, and demonstrate its effectiveness in visual neurons. For parasol retinal ganglion cells in macaque retina, estimated subunits partitioned the receptive field into compact regions, likely representing aggregated bipolar cell inputs. Joint clustering revealed shared subunits between neighboring cells, producing a parsimonious population model. Closed-loop validation, using stimuli lying in the null space of the linear receptive field, revealed stronger nonlinearities in OFF cells than ON cells. Responses to natural images, jittered to emulate fixational eye movements, were accurately predicted by the subunit model. Finally, the generality of the approach was demonstrated in macaque V1 neurons.
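The clustering idea can be made concrete with a short sketch. This is a deliberate simplification of the published estimator (a soft-k-means-style alternation rather than the full maximum-likelihood model), included only to illustrate softly assigning spike-triggered stimuli to subunits and re-estimating each subunit filter from its assigned stimuli.

```python
# Minimal sketch (simplified stand-in for the published method) of subunit
# inference by soft-clustering spike-triggered stimuli.
import numpy as np

def soft_cluster_subunits(spike_triggered_stimuli, n_subunits,
                          n_iters=50, temperature=1.0,
                          rng=np.random.default_rng(0)):
    """
    spike_triggered_stimuli : (n_spikes, n_pixels) stimuli preceding each spike
    returns                 : (n_subunits, n_pixels) estimated subunit filters
    """
    X = spike_triggered_stimuli
    n_spikes, n_pixels = X.shape
    # Initialize subunit filters with small random weights.
    filters = 0.01 * rng.standard_normal((n_subunits, n_pixels))
    for _ in range(n_iters):
        # E-step: soft assignment of each spike to subunits (softmax over drives).
        drives = (X @ filters.T) / temperature        # (n_spikes, n_subunits)
        drives -= drives.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(drives)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: each filter becomes the responsibility-weighted mean stimulus.
        filters = (resp.T @ X) / (resp.sum(axis=0)[:, None] + 1e-12)
    return filters
```

Filters estimated this way tend to carve the linear receptive field into compact regions when the underlying subunits are localized, which mirrors the partitioning into aggregated bipolar-cell inputs reported in the abstract.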
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, United States
| | - Nora Brackbill
- Department of Physics, Stanford University, Stanford, United States
| | - Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, United States
| | - Alexandra Kling
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| | - Georges Goetz
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| | - Alan M Litke
- Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
| | - Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
| | - Eero P Simoncelli
- Center for Neural Science, New York University, New York, United States
- Howard Hughes Medical Institute, Chevy Chase, United States
| | - EJ Chichilnisky
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
| |
|