1
Karamanlis D, Khani MH, Schreyer HM, Zapp SJ, Mietsch M, Gollisch T. Nonlinear receptive fields evoke redundant retinal coding of natural scenes. Nature 2025; 637:394-401. PMID: 39567692; PMCID: PMC11711096; DOI: 10.1038/s41586-024-08212-3.
Abstract
The role of the vertebrate retina in early vision is generally described by the efficient coding hypothesis1,2, which predicts that the retina reduces the redundancy inherent in natural scenes3 by discarding spatiotemporal correlations while preserving stimulus information4. It is unclear, however, whether the predicted decorrelation and redundancy reduction in the activity of ganglion cells, the retina's output neurons, hold under gaze shifts, which dominate the dynamics of the natural visual input5. We show here that species-specific gaze patterns in natural stimuli can drive correlated spiking responses both within and across distinct types of ganglion cells in marmoset as well as mouse retina. These concerted responses disrupt redundancy reduction to signal fixation periods with locally high spatial contrast. Model-based analyses of ganglion cell responses to natural stimuli show that the observed response correlations follow from nonlinear pooling of ganglion cell inputs. Our results indicate cell-type-specific deviations from efficient coding in retinal processing of natural gaze shifts.
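To illustrate the pooling argument in this abstract, here is a minimal sketch (not the authors' model; the stimulus and all parameters are synthetic and illustrative) of how rectified, subunit-style pooling can make an ON-like and an OFF-like model ganglion cell respond together to fixations with high local spatial contrast, whereas purely linear pooling anti-correlates them.

```python
# A minimal sketch (not the authors' model) of how rectified subunit pooling can
# correlate the responses of an ON-like and an OFF-like model ganglion cell to
# spatially structured "fixations", whereas linear pooling anti-correlates them.
# Patch size, subunit count, and contrast range are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_fixations, n_subunits = 2000, 16

# Each fixation: a zero-mean spatial pattern seen by 16 shared subunits,
# with a fixation-specific local contrast (standard deviation).
contrast = rng.uniform(0.1, 1.0, size=n_fixations)
patterns = rng.standard_normal((n_fixations, n_subunits)) * contrast[:, None]

relu = lambda x: np.maximum(x, 0.0)

# Nonlinear pooling: each cell sums rectified subunit signals of its own polarity.
on_nl = relu(patterns).sum(axis=1)
off_nl = relu(-patterns).sum(axis=1)

# Linear pooling: each cell sums raw subunit signals of its own polarity.
on_lin = patterns.sum(axis=1)
off_lin = -patterns.sum(axis=1)

print("ON/OFF correlation, nonlinear pooling:", np.corrcoef(on_nl, off_nl)[0, 1])
print("ON/OFF correlation, linear pooling:   ", np.corrcoef(on_lin, off_lin)[0, 1])
# Nonlinear pooling yields a strong positive correlation (both cells signal high
# local contrast); linear pooling gives correlation -1 by construction.
```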
Affiliation(s)
- Dimokratis Karamanlis
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
- University of Geneva, Department of Basic Neurosciences, Geneva, Switzerland.
- Mohammad H Khani
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Helene M Schreyer
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Sören J Zapp
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Matthias Mietsch
- German Primate Center, Laboratory Animal Science Unit, Göttingen, Germany
- German Center for Cardiovascular Research, Partner Site Göttingen, Göttingen, Germany
- Tim Gollisch
- University Medical Center Göttingen, Department of Ophthalmology, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany.
- Else Kröner Fresenius Center for Optogenetic Therapies, University Medical Center Göttingen, Göttingen, Germany.
2
Hoshal BD, Holmes CM, Bojanek K, Salisbury JM, Berry MJ, Marre O, Palmer SE. Stimulus-invariant aspects of the retinal code drive discriminability of natural scenes. Proc Natl Acad Sci U S A 2024; 121:e2313676121. PMID: 39700141; DOI: 10.1073/pnas.2313676121.
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has dug into the static and dynamic features of the population code in retinal ganglion cells (RGCs), less is known about how populations form both flexible and reliable encoding in natural moving scenes. We record from the larval salamander retina responding to five different natural movies, over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the retinal output: while the single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between RGCs and amacrine cells.
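As a rough illustration of the decomposition described above, the sketch below estimates pairwise couplings from binarized population activity using the mean-field (inverse-covariance) approximation to a pairwise maximum-entropy model. This is not the paper's fitting procedure, and the spike data are synthetic stand-ins for binned recordings.

```python
# A minimal sketch, not the paper's fitting procedure: estimate pairwise
# couplings of a binarized population response with the mean-field (naive
# inverse-covariance) approximation to a pairwise maximum-entropy model.
# Spike data here are synthetic; in practice `spikes` would be binned recordings.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_bins = 20, 50000

# Synthetic binary population activity with a shared slow input to induce correlations.
shared = rng.standard_normal(n_bins)
drive = 0.8 * shared[None, :] + rng.standard_normal((n_cells, n_bins))
spikes = (drive > 1.0).astype(float)           # 0/1 responses per time bin

rates = spikes.mean(axis=1)                    # single-cell firing probabilities
C = np.cov(spikes)                             # pairwise covariance matrix

# Mean-field approximation: couplings J_ij ~ -(C^-1)_ij for i != j.
J = -np.linalg.inv(C + 1e-6 * np.eye(n_cells))
np.fill_diagonal(J, 0.0)

print("mean firing probability:", rates.mean())
print("largest |couplings|:", np.sort(np.abs(J).ravel())[-5:])
```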
Affiliation(s)
- Benjamin D Hoshal
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Kyle Bojanek
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Jared M Salisbury
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Physics, University of Chicago, Chicago, IL 60637
- Michael J Berry
- Princeton Neuroscience Institute, Department of Molecular Biology, Princeton University, Princeton, NJ 08540
- Olivier Marre
- Institut de la Vision, Sorbonne Université, INSERM, Paris 75012, France
- Stephanie E Palmer
- Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637
- Department of Physics, University of Chicago, Chicago, IL 60637
- Center for the Physics of Biological Function, Department of Physics, Princeton University, Princeton, NJ 08540
3
Hoshal BD, Holmes CM, Bojanek K, Salisbury J, Berry MJ, Marre O, Palmer SE. Stimulus invariant aspects of the retinal code drive discriminability of natural scenes. bioRxiv 2024:2023.08.08.552526. PMID: 37609259; PMCID: PMC10441377; DOI: 10.1101/2023.08.08.552526.
Abstract
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world in many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features about scene identity in complex naturalistic environments. While a wealth of previous work has dug into the static and dynamic features of the population code in retinal ganglion cells, less is known about how populations form both flexible and reliable encoding in natural moving scenes. We record from the larval salamander retina responding to five different natural movies, over many repeats, and use these data to characterize the population code in terms of single-cell fluctuations in rate and pairwise couplings between cells. Decomposing the population code into independent and cell-cell interactions reveals how broad scene structure is encoded in the retinal output: while the single-cell activity adapts to different stimuli, the population structure captured in the sparse, strong couplings is consistent across natural movies as well as synthetic stimuli. We show that these interactions contribute to encoding scene identity. We also demonstrate that this structure likely arises in part from shared bipolar cell input as well as from gap junctions between retinal ganglion cells and amacrine cells.
4
Di Tullio RW, Wei L, Balasubramanian V. Slow and steady: auditory features for discriminating animal vocalizations. bioRxiv 2024:2024.06.20.599962. PMID: 39005308; PMCID: PMC11244870; DOI: 10.1101/2024.06.20.599962.
Abstract
We propose that listeners can use temporal regularities - spectro-temporal correlations that change smoothly over time - to discriminate animal vocalizations within and between species. To test this idea, we used Slow Feature Analysis (SFA) to find the most temporally regular components of vocalizations from birds (blue jay, house finch, American yellow warbler, and great blue heron), humans (English speakers), and rhesus macaques. We projected vocalizations into the learned feature space and tested intra-class (same speaker/species) and inter-class (different speakers/species) auditory discrimination by a trained classifier. We found that: 1) Vocalization discrimination was excellent (> 95%) in all cases; 2) Performance depended primarily on the ~10 most temporally regular features; 3) Most vocalizations are dominated by ~10 features with high temporal regularity; and 4) These regular features are highly correlated with the most predictable components of animal sounds.
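A minimal linear Slow Feature Analysis sketch is given below, assuming a generic multichannel time series in place of the vocalization features used in the paper; it only illustrates the core SFA step (whiten, then take the directions along which the temporal derivative varies least).

```python
# A minimal linear Slow Feature Analysis sketch (not the authors' pipeline):
# whiten the input, then keep the directions of smallest variance of the
# temporal derivative. The demo input is synthetic; in practice rows of X
# would be time frames of an auditory feature representation.
import numpy as np

def sfa(X, n_features=10):
    """X: (n_samples, n_dims) time series. Returns a (n_dims, n_features) projection."""
    X = X - X.mean(axis=0)
    # Whitening transform from the covariance eigendecomposition.
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    keep = evals > 1e-10
    W = evecs[:, keep] / np.sqrt(evals[keep])
    Z = X @ W
    # Slow directions: smallest-variance directions of the whitened derivative.
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return W @ d_evecs[:, :n_features]          # slowest features first

# Demo: a slowly drifting component mixed with fast noise is recovered as slow.
rng = np.random.default_rng(2)
t = np.linspace(0, 20, 5000)
slow = np.sin(0.5 * t)
X = np.outer(slow, rng.standard_normal(40)) + 0.5 * rng.standard_normal((5000, 40))
P = sfa(X, n_features=3)
y = (X - X.mean(axis=0)) @ P[:, 0]
print("corr(slowest feature, hidden slow signal):", abs(np.corrcoef(y, slow)[0, 1]))
```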
Affiliation(s)
- Ronald W Di Tullio
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Computational Neuroscience Initiative, University of Pennsylvania, USA
- Linran Wei
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Vijay Balasubramanian
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, USA
- Computational Neuroscience Initiative, University of Pennsylvania, USA
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
5
Tesileanu T, Piasini E, Balasubramanian V. Efficient processing of natural scenes in visual cortex. Front Cell Neurosci 2022; 16:1006703. PMID: 36545653; PMCID: PMC9760692; DOI: 10.3389/fncel.2022.1006703.
Abstract
Neural circuits in the periphery of the visual, auditory, and olfactory systems are believed to use limited resources efficiently to represent sensory information by adapting to the statistical structure of the natural environment. This "efficient coding" principle has been used to explain many aspects of early visual circuits including the distribution of photoreceptors, the mosaic geometry and center-surround structure of retinal receptive fields, the excess OFF pathways relative to ON pathways, saccade statistics, and the structure of simple cell receptive fields in V1. We know less about the extent to which such adaptations may occur in deeper areas of cortex beyond V1. We thus review recent developments showing that the perception of visual textures, which depends on processing in V2 and beyond in mammals, is adapted in rats and humans to the multi-point statistics of luminance in natural scenes. These results suggest that central circuits in the visual brain are adapted for seeing key aspects of natural scenes. We conclude by discussing how adaptation to natural temporal statistics may aid in learning and representing visual objects, and propose two challenges for the future: (1) explaining the distribution of shape sensitivity in the ventral visual stream from the statistics of object shape in natural images, and (2) explaining cell types of the vertebrate retina in terms of feature detectors that are adapted to the spatio-temporal structures of natural stimuli. We also discuss how new methods based on machine learning may complement the normative, principles-based approach to theoretical neuroscience.
Affiliation(s)
- Tiberiu Tesileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, United States
- Eugenio Piasini
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- Vijay Balasubramanian
- Department of Physics and Astronomy, David Rittenhouse Laboratory, University of Pennsylvania, Philadelphia, PA, United States
- Santa Fe Institute, Santa Fe, NM, United States
6
Hsu WMM, Kastner DB, Baccus SA, Sharpee TO. How inhibitory neurons increase information transmission under threshold modulation. Cell Rep 2021; 35:109158. PMID: 34038717; PMCID: PMC8846953; DOI: 10.1016/j.celrep.2021.109158.
Abstract
Modulation of neuronal thresholds is ubiquitous in the brain. Phenomena such as figure-ground segmentation, motion detection, stimulus anticipation, and shifts in attention all involve changes in a neuron’s threshold based on signals from larger scales than its primary inputs. However, this modulation reduces the accuracy with which neurons can represent their primary inputs, creating a mystery as to why threshold modulation is so widespread in the brain. We find that modulation is less detrimental than other forms of neuronal variability and that its negative effects can be nearly completely eliminated if modulation is applied selectively to sparsely responding neurons in a circuit by inhibitory neurons. We verify these predictions in the retina where we find that inhibitory amacrine cells selectively deliver modulation signals to sparsely responding ganglion cell types. Our findings elucidate the central role that inhibitory neurons play in maximizing information transmission under modulation.

Modulation of neuronal thresholds is ubiquitous in the brain but reduces the accuracy of neural signaling. Hsu et al. show that the negative impact of threshold modulation can be almost completely eliminated when modulation is not delivered uniformly to all neurons but only to a subset and via inhibitory neurons.
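The following sketch illustrates only the basic trade-off stated in the abstract, that threshold modulation invisible to a downstream decoder reduces stimulus information, using a single binary threshold neuron with Gaussian threshold jitter; the model and all parameter values are illustrative assumptions, not the circuit model analysed in the paper.

```python
# A minimal sketch (not the paper's model): a binary threshold neuron encoding a
# Gaussian stimulus, with and without trial-to-trial threshold modulation that
# the decoder cannot see. Unseen modulation acts like noise and lowers the
# mutual information between stimulus and spike. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
s = rng.standard_normal(200_000)              # Gaussian stimulus samples
theta0, sigma_mod = 1.0, 0.7                  # mean threshold, modulation strength

def xlog2x_ratio(p, q):
    """p * log2(p / q), with the convention 0 * log(0) = 0."""
    out = np.zeros_like(p)
    nz = p > 0
    out[nz] = p[nz] * np.log2(p[nz] / q)
    return out

def mutual_info_bits(p_spike_given_s):
    """I(stimulus; spike) for a binary response, averaged over stimulus samples."""
    p1 = p_spike_given_s.mean()
    terms = xlog2x_ratio(p_spike_given_s, p1) + xlog2x_ratio(1.0 - p_spike_given_s, 1.0 - p1)
    return terms.mean()

# Fixed threshold: the spike is a deterministic function of the stimulus.
p_fixed = (s > theta0).astype(float)
# Modulated threshold: theta = theta0 + Gaussian jitter, marginalized out per stimulus.
p_mod = norm.cdf((s - theta0) / sigma_mod)

print("I(s;y), fixed threshold    :", round(mutual_info_bits(p_fixed), 3), "bits")
print("I(s;y), modulated threshold:", round(mutual_info_bits(p_mod), 3), "bits")
```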
Affiliation(s)
- Wei-Mien M Hsu
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA
- David B Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, School of Medicine, San Francisco, CA, USA; Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Stephen A Baccus
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA.
7
Tesileanu T, Conte MM, Briguglio JJ, Hermundstad AM, Victor JD, Balasubramanian V. Efficient coding of natural scene statistics predicts discrimination thresholds for grayscale textures. eLife 2020; 9:e54347. PMID: 32744505; PMCID: PMC7494356; DOI: 10.7554/eLife.54347.
Abstract
Previously, in Hermundstad et al., 2014, we showed that when sampling is limiting, the efficient coding principle leads to a 'variance is salience' hypothesis, and that this hypothesis accounts for visual sensitivity to binary image statistics. Here, using extensive new psychophysical data and image analysis, we show that this hypothesis accounts for visual sensitivity to a large set of grayscale image statistics at a striking level of detail, and also identify the limits of the prediction. We define a 66-dimensional space of local grayscale light-intensity correlations, and measure the relevance of each direction to natural scenes. The 'variance is salience' hypothesis predicts that two-point correlations are most salient, and predicts their relative salience. We tested these predictions in a texture-segregation task using un-natural, synthetic textures. As predicted, correlations beyond second order are not salient, and predicted thresholds for over 300 second-order correlations match psychophysical thresholds closely (median fractional error <0.13).
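As a toy version of the 'variance is salience' logic, the sketch below computes a few local multipoint correlation statistics on binarized patches and ranks predicted thresholds by the inverse of each statistic's spread across patches. The patches are synthetic smoothed noise standing in for natural images, and the handful of statistics shown is only a small subset of the 66-dimensional space analysed in the paper.

```python
# A minimal sketch of the 'variance is salience' logic, not the paper's analysis:
# compute a few local two-point (and one four-point) binary texture statistics
# over patches and rank them by their standard deviation across patches; the
# hypothesis predicts discrimination thresholds scale as 1/std. Patches here are
# synthetic, spatially correlated noise standing in for natural-image patches.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)

def binary_patch(size=32, sigma=2.0):
    """Stand-in for a binarized natural patch: smoothed noise thresholded at its median."""
    x = gaussian_filter(rng.standard_normal((size, size)), sigma)
    return np.where(x > np.median(x), 1.0, -1.0)

def stats(p):
    """A few local multipoint correlations on a +/-1 patch (illustrative subset)."""
    return {
        "beta_horizontal": np.mean(p[:, :-1] * p[:, 1:]),
        "beta_vertical":   np.mean(p[:-1, :] * p[1:, :]),
        "beta_diagonal":   np.mean(p[:-1, :-1] * p[1:, 1:]),
        "alpha_4point":    np.mean(p[:-1, :-1] * p[:-1, 1:] * p[1:, :-1] * p[1:, 1:]),
    }

samples = [stats(binary_patch()) for _ in range(500)]
for name in samples[0]:
    values = np.array([s[name] for s in samples])
    # 'Variance is salience': predicted sensitivity scales with the spread across
    # patches, so the predicted discrimination threshold scales as 1 / std.
    print(f"{name:16s} std={values.std():.3f}  predicted threshold ~ {1 / values.std():.1f}")
```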
Affiliation(s)
- Mary M Conte
- Feil Family Brain and Mind Institute, Weill Cornell Medical College, New York, United States
- Jonathan D Victor
- Feil Family Brain and Mind Institute, Weill Cornell Medical College, New York, United States
8
Melanitis N, Nikita KS. Biologically-inspired image processing in computational retina models. Comput Biol Med 2019; 113:103399. PMID: 31472425; DOI: 10.1016/j.compbiomed.2019.103399.
Abstract
Retinal Prosthesis (RP) is an approach to restore vision, using an implanted device to electrically stimulate the retina. A fundamental problem in RP is to translate the visual scene to retina neural spike patterns, mimicking the computations normally done by retina neural circuits. Towards the perspective of improved RP interventions, we propose a Computer Vision (CV) image preprocessing method based on Retinal Ganglion Cells functions and then use the method to reproduce retina output with a standard Generalized Integrate & Fire (GIF) neuron model. "Virtual Retina" simulation software is used to provide the stimulus-retina response data to train and test our model. We use a sequence of natural images as model input and show that models using the proposed CV image preprocessing outperform models using raw image intensity (interspike-interval distance 0.17 vs 0.27). This result is aligned with our hypothesis that raw image intensity is an improper image representation for Retinal Ganglion Cells response prediction.
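A crude stand-in for the pipeline described above is sketched below: difference-of-Gaussians filtering of image frames as the retina-inspired preprocessing step, feeding a plain leaky integrate-and-fire neuron. The paper uses trained CV features and a Generalized Integrate-and-Fire model; the filter sizes, gain, and the random frames here are illustrative assumptions only.

```python
# A minimal stand-in for the general idea, not the paper's pipeline: filter image
# frames with a difference-of-Gaussians (a crude ganglion-cell-like preprocessing
# step) and drive a plain leaky integrate-and-fire neuron with the filtered value
# at one location. All parameters and the input frames are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
frames = rng.random((200, 64, 64))            # stand-in for a natural image sequence

def dog(frame, sigma_c=1.0, sigma_s=3.0):
    """Center-surround (difference-of-Gaussians) filtering of one frame."""
    return gaussian_filter(frame, sigma_c) - gaussian_filter(frame, sigma_s)

drive = np.array([dog(f)[32, 32] for f in frames])   # filter output at one pixel

# Leaky integrate-and-fire neuron driven by the rectified, scaled filter output.
dt, tau, v_th, gain = 0.01, 0.05, 1.0, 400.0
v, spikes = 0.0, []
for t, x in enumerate(drive):
    v += dt / tau * (-v + gain * max(x, 0.0))
    if v >= v_th:
        spikes.append(t)
        v = 0.0
print(f"{len(spikes)} spikes in {len(frames)} frames")
```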
Affiliation(s)
- Nikos Melanitis
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece.
- Konstantina S Nikita
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece.
9
Abstract
Adaptation is a common principle that recurs throughout the nervous system at all stages of processing. This principle manifests in a variety of phenomena, from spike frequency adaptation, to apparent changes in receptive fields with changes in stimulus statistics, to enhanced responses to unexpected stimuli. The ubiquity of adaptation leads naturally to the question: What purpose do these different types of adaptation serve? A diverse set of theories, often highly overlapping, has been proposed to explain the functional role of adaptive phenomena. In this review, we discuss several of these theoretical frameworks, highlighting relationships among them and clarifying distinctions. We summarize observations of the varied manifestations of adaptation, particularly as they relate to these theoretical frameworks, focusing throughout on the visual system and making connections to other sensory systems.
Affiliation(s)
- Alison I Weber
- Department of Physiology and Biophysics and Computational Neuroscience Center, University of Washington, Seattle, Washington 98195, USA
- Kamesh Krishnamurthy
- Neuroscience Institute and Center for Physics of Biological Function, Department of Physics, Princeton University, Princeton, New Jersey 08544, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics and Computational Neuroscience Center, University of Washington, Seattle, Washington 98195, USA
- UW Institute for Neuroengineering, University of Washington, Seattle, Washington 98195, USA
10
Salazar-Gatzimas E, Agrochao M, Fitzgerald JE, Clark DA. The Neuronal Basis of an Illusory Motion Percept Is Explained by Decorrelation of Parallel Motion Pathways. Curr Biol 2018; 28:3748-3762.e8. PMID: 30471993; DOI: 10.1016/j.cub.2018.10.007.
Abstract
Both vertebrates and invertebrates perceive illusory motion, known as "reverse-phi," in visual stimuli that contain sequential luminance increments and decrements. However, increment (ON) and decrement (OFF) signals are initially processed by separate visual neurons, and parallel elementary motion detectors downstream respond selectively to the motion of light or dark edges, often termed ON- and OFF-edges. It remains unknown how and where ON and OFF signals combine to generate reverse-phi motion signals. Here, we show that each of Drosophila's elementary motion detectors encodes motion by combining both ON and OFF signals. Their pattern of responses reflects combinations of increments and decrements that co-occur in natural motion, serving to decorrelate their outputs. These results suggest that the general principle of signal decorrelation drives the functional specialization of parallel motion detection channels, including their selectivity for moving light or dark edges.
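A discrete-time correlator sketch, not the circuit model used in the paper, shows the sign inversion that defines reverse-phi: an opponent Hassenstein-Reichardt-style detector with a one-frame delay gives a mean output of opposite sign when the drifting pattern's contrast flips on every frame.

```python
# A minimal discrete-time motion-correlator sketch (not the fly circuit model in
# the paper): an opponent Hassenstein-Reichardt-style correlator with a one-frame
# delay, probed with a drifting pattern and with its contrast-reversing
# ("reverse-phi") version. The mean output flips sign for reverse-phi.
import numpy as np

n_frames = 400
t = np.arange(n_frames)
# Two neighbouring spatial samples of a pattern drifting in one direction:
# the signal at point B lags the signal at point A by a quarter cycle.
A = np.cos(2 * np.pi * 0.05 * t)
B = np.cos(2 * np.pi * 0.05 * t - np.pi / 2)

def correlator(a, b):
    """Opponent correlator with a one-frame delay on each arm."""
    return np.mean(a[:-1] * b[1:] - b[:-1] * a[1:])

# Reverse-phi: the pattern's contrast flips sign on every frame as it moves.
flip = (-1.0) ** t
print("standard motion :", correlator(A, B))
print("reverse-phi     :", correlator(flip * A, flip * B))
```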
Affiliation(s)
- Emilio Salazar-Gatzimas
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA
- Margarida Agrochao
- Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
11
Abstract
An animal’s ability to survive depends on its sensory systems being able to adapt to a wide range of environmental conditions, by maximizing the information extracted and reducing the noise transmitted. The visual system does this by adapting to luminance and contrast. While luminance adaptation can begin at the retinal photoreceptors, contrast adaptation has been shown to start at later stages in the retina. Photoreceptors adapt to changes in luminance over multiple time scales ranging from tens of milliseconds to minutes, with the adaptive changes arising from processes within the phototransduction cascade. Here we show a new form of adaptation in cones that is independent of the phototransduction process. Rather, it is mediated by voltage-gated ion channels in the cone membrane and acts by changing the frequency response of cones such that their responses speed up as the membrane potential modulation depth increases and slow down as the membrane potential modulation depth decreases. This mechanism is effectively activated by high-contrast stimuli dominated by low frequencies such as natural stimuli. However, the more generally used Gaussian white noise stimuli were not effective since they did not modulate the cone membrane potential to the same extent. This new adaptive process had a time constant of less than a second. A critical component of the underlying mechanism is the hyperpolarization-activated current, Ih, as pharmacologically blocking it prevented the long- and mid-wavelength sensitive cone photoreceptors (L- and M-cones) from adapting. Consistent with this, short-wavelength sensitive cone photoreceptors (S-cones) did not show the adaptive response, and we found they also lacked a prominent Ih. The adaptive filtering mechanism identified here improves the information flow by removing higher-frequency noise during lower signal-to-noise ratio conditions, as occurs when contrast levels are low. Although this new adaptive mechanism can be driven by contrast, it is not a contrast adaptation mechanism in its strictest sense, as will be argued in the Discussion.

An animal’s ability to survive depends on its ability to adapt to a wide range of light conditions, by maximizing the information flow through the retina. Here, we show a new form of adaptation in cone photoreceptors that helps them optimize the information they transmit by adjusting their response kinetics to better match the visual conditions. The adaptive mechanism we describe is independent of the cone phototransduction process and is instead mediated by membrane processes in which the hyperpolarization-activated current, Ih, plays a critical role. Consistent with the critical role of this current, we also found that cones sensitive to short wavelengths lacked a prominent Ih current and did not show this new form of adaptation. As voltage-dependent processes underlie the adaptational mechanism, it is only apparent when the stimuli are able to sufficiently modulate the membrane potential of cones. This happens with natural stimuli, which are able to deliver high levels of “effective” contrast. However, even though this new adaptive mechanism can be driven by contrast, we argue in the Discussion that in its strictest sense it is not a contrast adaptation mechanism per se.
12
Simmons KD, Prentice JS, Tkačik G, Homann J, Yee HK, Palmer SE, Nelson PC, Balasubramanian V. Transformation of stimulus correlations by the retina. PLoS Comput Biol 2013; 9:e1003344. PMID: 24339756; PMCID: PMC3854086; DOI: 10.1371/journal.pcbi.1003344.
Abstract
Redundancies and correlations in the responses of sensory neurons may seem to waste neural resources, but they can also carry cues about structured stimuli and may help the brain to correct for response errors. To investigate the effect of stimulus structure on redundancy in retina, we measured simultaneous responses from populations of retinal ganglion cells presented with natural and artificial stimuli that varied greatly in correlation structure; these stimuli and recordings are publicly available online. Responding to spatio-temporally structured stimuli such as natural movies, pairs of ganglion cells were modestly more correlated than in response to white noise checkerboards, but they were much less correlated than predicted by a non-adapting functional model of retinal response. Meanwhile, responding to stimuli with purely spatial correlations, pairs of ganglion cells showed increased correlations consistent with a static, non-adapting receptive field and nonlinearity. We found that in response to spatio-temporally correlated stimuli, ganglion cells had faster temporal kernels and tended to have stronger surrounds. These properties of individual cells, along with gain changes that opposed changes in effective contrast at the ganglion cell input, largely explained the pattern of pairwise correlations across stimuli where receptive field measurements were possible.

An influential theory of early sensory processing argues that sensory circuits should conserve scarce resources in their outputs by reducing correlations present in their inputs. Measuring simultaneous responses from large numbers of retinal ganglion cells responding to widely different classes of visual stimuli, we find that output correlations increase when we present stimuli with spatial, but not temporal, correlations. On the other hand, we find evidence that retina adjusts to spatio-temporal structure so that retinal output correlations change less than input correlations would predict. Changes in the receptive field properties of individual cells, along with gain changes, largely explain this relative constancy of correlations over the population.
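The basic quantity analysed here, the mean pairwise correlation of binned ganglion-cell responses under different stimulus ensembles, can be computed as in the sketch below; the spike counts are synthetic Poisson stand-ins for recordings, with a shared input used to mimic a spatially correlated stimulus.

```python
# A minimal sketch of the basic quantity analysed in the paper: the mean pairwise
# correlation of binned spike counts across a population, compared between two
# stimulus conditions. Counts here are synthetic; in practice `counts_*` would be
# (n_cells, n_bins) arrays of binned ganglion-cell responses.
import numpy as np

rng = np.random.default_rng(6)
n_cells, n_bins = 30, 20000

def poisson_counts(shared_strength):
    """Poisson counts driven by a private rate plus a shared fluctuating input."""
    shared = rng.gamma(2.0, 1.0, size=n_bins)
    rates = 0.5 + shared_strength * shared[None, :] + 0.5 * rng.random((n_cells, n_bins))
    return rng.poisson(rates)

def mean_pairwise_corr(counts):
    C = np.corrcoef(counts)
    return C[np.triu_indices_from(C, k=1)].mean()

counts_white = poisson_counts(shared_strength=0.0)    # stand-in for white-noise stimulus
counts_natural = poisson_counts(shared_strength=1.0)  # stand-in for a correlated stimulus
print("mean pairwise correlation, 'white noise':", mean_pairwise_corr(counts_white))
print("mean pairwise correlation, 'correlated' :", mean_pairwise_corr(counts_natural))
```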
Affiliation(s)
- Kristina D. Simmons
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Jason S. Prentice
- Department of Physics, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Jan Homann
- Department of Physics, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Heather K. Yee
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois, United States of America
- Stephanie E. Palmer
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois, United States of America
- Philip C. Nelson
- Department of Physics, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Vijay Balasubramanian
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Department of Physics, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Laboratoire de Physique Théorique, École Normale Supérieure, Paris, France
- Initiative for the Theoretical Sciences, CUNY Graduate Center, 365 Fifth Avenue, New York, New York, United States of America