1
Zeldenrust F, Calcini N, Yan X, Bijlsma A, Celikel T. The tuning of tuning: How adaptation influences single cell information transfer. PLoS Comput Biol 2024; 20:e1012043. [PMID: 38739640] [PMCID: PMC11115315] [DOI: 10.1371/journal.pcbi.1012043]
Abstract
Sensory neurons reconstruct the world from action potentials (spikes) impinging on them. To transfer information about the stimulus effectively to the next processing level, a neuron needs to be able to adapt its working range to the properties of the stimulus. Here, we focus on the intrinsic neural properties that influence information transfer in cortical neurons and ask how tightly these properties need to be tuned to the stimulus statistics to be effective. We start by measuring the intrinsic information encoding properties of putative excitatory and inhibitory neurons in L2/3 of the mouse barrel cortex. Excitatory neurons show high thresholds and strong adaptation, making them fire sparsely and resulting in a strong compression of information, whereas inhibitory neurons that favour fast spiking transfer more information. Next, we turn to computational modelling and ask how two properties influence information transfer: 1) spike-frequency adaptation and 2) the shape of the IV-curve. We find that subthreshold (but not threshold) adaptation, the 'h-current', and a properly tuned leak conductance can increase the information transfer of a neuron, whereas threshold adaptation can increase its working range. Finally, we verify the effect of the IV-curve slope in our experimental recordings and show that excitatory neurons form a more heterogeneous population than inhibitory neurons. These previously unquantified relationships between intrinsic neural features and neural coding will aid computational, theoretical and systems neuroscientists in understanding how neuronal populations can alter their coding properties, for example through the impact of neuromodulators. Why the variability of intrinsic properties is larger in excitatory than in inhibitory neurons is an exciting question for future research.
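The compressive effect of spike-frequency adaptation that this abstract describes can be sketched with a leaky integrate-and-fire neuron carrying a spike-triggered adaptation current; all parameter values below are illustrative choices, not values from the paper:

```python
import numpy as np

def lif_spike_count(I, dt=1e-4, tau_m=0.02, v_th=1.0, v_reset=0.0,
                    tau_a=0.2, b=0.0):
    """Leaky integrate-and-fire neuron with a spike-triggered
    adaptation current `a`; b is the increment per spike (b=0
    recovers a non-adapting LIF). Parameters are illustrative."""
    v, a, n_spikes = 0.0, 0.0, 0
    for i_t in I:
        v += dt * (-v - a + i_t) / tau_m
        a += dt * (-a / tau_a)
        if v >= v_th:
            v = v_reset
            a += b          # adaptation builds up with each spike
            n_spikes += 1
    return n_spikes

rng = np.random.default_rng(0)
drive = 1.5 + 0.3 * rng.standard_normal(20_000)  # 2 s of noisy suprathreshold drive
n_plain = lif_spike_count(drive, b=0.0)
n_adapt = lif_spike_count(drive, b=0.5)
print(n_plain, n_adapt)
```

With the adaptation increment b > 0 the same noisy drive elicits far fewer spikes, i.e., a sparser, more compressed representation, as reported for the excitatory neurons.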
Affiliation(s)
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Niccolò Calcini
- Maastricht Centre for Systems Biology (MaCSBio), University of Maastricht, Maastricht, the Netherlands
- Xuan Yan
- Institute of Neuroscience, Chinese Academy of Sciences, Beijing, China
- Ate Bijlsma
- Department of Population Health Sciences / Department of Biology, Universiteit Utrecht, the Netherlands
- Tansu Celikel
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, United States of America
2
Philip P, Jainwal K, van Schaik A, Thakur CS. Tau-Cell-Based Analog Silicon Retina With Spatio-Temporal Filtering and Contrast Gain Control. IEEE Trans Biomed Circuits Syst 2024; 18:423-437. [PMID: 37956014] [DOI: 10.1109/tbcas.2023.3332117]
Abstract
Developing precise artificial retinas is crucial because they hold the potential to restore vision, improve visual prosthetics, and enhance computer vision systems. Emulating the luminance and contrast adaptation features of the retina is essential to improve visual perception and efficiency and to provide the user with a realistic representation of the environment. In this article, we introduce an artificial retina model that leverages its potent adaptation to luminance and contrast to enhance vision sensing and information processing. The model realizes both tonic and phasic cells in a simple manner. We implemented the retina model using 0.18 μm process technology and validated the accuracy of the hardware implementation through circuit simulation that closely matches the software retina model. Additionally, we characterized a single pixel fabricated using the same 0.18 μm process. This pixel demonstrates an 87.7% ratio of variance with the temporal software model and operates with a power consumption of 369 nW.
3
Parameshwarappa V, Norena AJ. The effects of acute and chronic noise trauma on stimulus-evoked activity across primary auditory cortex layers. J Neurophysiol 2024; 131:225-240. [PMID: 38198658] [DOI: 10.1152/jn.00427.2022]
Abstract
Exposure to intense noise environments is a major cause of sensorineural hearing loss and auditory perception disorders, such as tinnitus and hyperacusis, which may have a central origin. The effects of noise-induced hearing loss on the auditory cortex have been documented in many studies. One limitation of these studies, however, is that the effects of noise trauma have been mostly studied at the granular layer (i.e., the main cortical recipient of thalamic input), while the cortex is a very complex structure, with six different layers each having its own pattern of connectivity and role in sensory processing. The present study aims to investigate the effects of acute and chronic noise trauma on the laminar pattern of stimulus-evoked activity in the primary auditory cortex of the anesthetized guinea pig. We show that acute and chronic noise trauma are both followed by an increase in stimulus-evoked cortical responses, mostly in the granular and supragranular layers. Cortical responses are more monotonic as a function of intensity level after noise trauma. There was minimal change, if any, in local field potential (LFP) amplitude after acute noise trauma, whereas LFP amplitude was enhanced after chronic noise trauma. Finally, the LFP and current source density analysis suggest that acute, but more specifically chronic, noise trauma is associated with the emergence of a new sink in the supragranular layer, suggesting that the supragranular layers become a major input recipient. We discuss the possible mechanisms and functional implications of these changes.
NEW & NOTEWORTHY Our study shows that cortical activity is enhanced after trauma and that the sequence of cortical column activation during the stimulus-evoked response is altered, i.e., the supragranular layer becomes a major input recipient. We speculate that these large cortical changes may play a key role in the auditory hypersensitivity (hyperacusis) that can be triggered after noise trauma in human subjects.
Affiliation(s)
- Vinay Parameshwarappa
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Arnaud J Norena
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
4
Donati E, Valle G. Neuromorphic hardware for somatosensory neuroprostheses. Nat Commun 2024; 15:556. [PMID: 38228580] [PMCID: PMC10791662] [DOI: 10.1038/s41467-024-44723-3]
Abstract
In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that directly interface with the nervous system. However, restoring the natural tactile experience through electrical neural stimulation requires complex encoding strategies, and current approaches are limited by bandwidth constraints in effectively conveying or restoring tactile sensations. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch, potentially informing neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies.
Affiliation(s)
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.
- Giacomo Valle
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA.
5
Friedenberger Z, Harkin E, Tóth K, Naud R. Silences, spikes and bursts: Three-part knot of the neural code. J Physiol 2023; 601:5165-5193. [PMID: 37889516] [DOI: 10.1113/jp281510]
Abstract
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labelling action potentials emitted at a particularly high frequency with a metonym: bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. The burst coding hypothesis holds that the neural code has three syllables: silences, spikes and bursts. We review evidence supporting this ternary code in terms of dedicated mechanisms for burst generation, synaptic transmission and synaptic plasticity. We also review the learning and attention theories for which such a triad is beneficial.
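Reading out the ternary silence/spike/burst code presupposes a way to separate bursts from isolated spikes; a common operational definition uses an inter-spike-interval threshold. A minimal sketch (the 6 ms cutoff is an illustrative choice, not one advocated by the review):

```python
import numpy as np

def label_bursts(spike_times, isi_thresh=0.006):
    """Mark each spike as burst-member (True) if it lies within
    isi_thresh seconds of the previous or the next spike."""
    t = np.asarray(spike_times, dtype=float)
    isi = np.diff(t)
    close_prev = np.concatenate(([False], isi < isi_thresh))
    close_next = np.concatenate((isi < isi_thresh, [False]))
    return close_prev | close_next

# one isolated spike, a three-spike burst, another isolated spike
spikes = [0.010, 0.100, 0.103, 0.106, 0.400]
print(label_bursts(spikes).tolist())  # → [False, True, True, True, False]
```

Silences are then simply the stretches containing no spikes at all, giving the three "syllables" of the hypothesized code.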
Affiliation(s)
- Zachary Friedenberger
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Emerson Harkin
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Katalin Tóth
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Richard Naud
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
6
Salisbury JM, Palmer SE. A dynamic scale-mixture model of motion in natural scenes. bioRxiv 2023:2023.10.19.563101. [PMID: 37961311] [PMCID: PMC10634686] [DOI: 10.1101/2023.10.19.563101]
Abstract
Some of the most important tasks of visual and motor systems involve estimating the motion of objects and tracking them over time. Such systems evolved to meet the behavioral needs of the organism in its natural environment, and may therefore be adapted to the statistics of motion it is likely to encounter. By tracking the movement of individual points in videos of natural scenes, we begin to identify common properties of natural motion across scenes. As expected, objects in natural scenes move in a persistent fashion, with velocity correlations lasting hundreds of milliseconds. More subtly, we find that the observed velocity distributions are heavy-tailed and can be modeled as a Gaussian scale-mixture. Extending this model to the time domain leads to a dynamic scale-mixture model, consisting of a Gaussian process multiplied by a positive scalar quantity with its own independent dynamics. Dynamic scaling of velocity arises naturally as a consequence of changes in object distance from the observer, and may approximate the effects of changes in other parameters governing the motion in a given scene. This modeling and estimation framework has implications for the neurobiology of sensory and motor systems, which need to cope with these fluctuations in scale in order to represent motion efficiently and drive fast and accurate tracking behavior.
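The dynamic scale-mixture described here (a Gaussian process multiplied by a positive scalar with its own slow dynamics) can be sampled in a few lines, and doing so reproduces the heavy-tailed velocity distributions the authors report. The parameters below are illustrative, not fitted values from the preprint:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau, sigma = 100_000, 1.0, 50.0, 0.5  # illustrative parameters

# slow latent log-scale: a discretized Ornstein-Uhlenbeck process
log_s = np.zeros(n)
for t in range(1, n):
    log_s[t] = (log_s[t - 1] * (1 - dt / tau)
                + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())

# velocity = positive scale * Gaussian: a dynamic Gaussian scale-mixture
v = np.exp(log_s) * rng.standard_normal(n)

kurtosis = np.mean(v**4) / np.mean(v**2)**2
print(kurtosis > 3.0)  # heavy-tailed: exceeds the Gaussian value of 3
```

Conditioned on the scale the velocity is Gaussian, but marginally the slow scale fluctuations produce excess kurtosis, the signature of a scale-mixture.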
7
Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023; 9:293-311. [PMID: 37220331] [DOI: 10.1146/annurev-vision-112122-020941]
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
8
Angeloni CF, Młynarski W, Piasini E, Williams AM, Wood KC, Garami L, Hermundstad AM, Geffen MN. Dynamics of cortical contrast adaptation predict perception of signals in noise. Nat Commun 2023; 14:4817. [PMID: 37558677] [PMCID: PMC10412650] [DOI: 10.1038/s41467-023-40477-6]
Abstract
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
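The asymmetric dynamics reported here (detection improving slowly after a drop to low contrast, degrading rapidly after a rise to high contrast) can be caricatured as a gain variable relaxing toward the efficient target 1/contrast with state-dependent time constants. This is an illustrative sketch, not the authors' normative model; all parameters are invented:

```python
import numpy as np

def adapt_gain(contrast, tau_up=40.0, tau_down=4.0):
    """Gain g relaxes toward the efficient target 1/contrast: slowly
    when it must rise (low contrast), quickly when it must fall (high
    contrast). Time constants (in time steps) are illustrative."""
    g = 1.0 / contrast[0]
    gains = np.empty(len(contrast))
    for t, c in enumerate(contrast):
        target = 1.0 / c
        tau = tau_up if target > g else tau_down
        g += (target - g) / tau
        gains[t] = g
    return gains

# low -> high -> low contrast, 100 time steps per epoch
contrast = np.concatenate([np.full(100, 0.5), np.full(100, 2.0), np.full(100, 0.5)])
g = adapt_gain(contrast)
print(g[110] - 0.5, 2.0 - g[210])  # small residual (fast drop) vs large residual (slow recovery)
```

Ten steps after each switch, the gain has nearly finished falling in high contrast but has barely begun rising in low contrast, mirroring the asymmetry in behavioral detection.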
Affiliation(s)
- Christopher F Angeloni
- Psychology Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Wiktor Młynarski
- Faculty of Biology, Ludwig Maximilian University of Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, Italy
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Katherine C Wood
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Linda Garami
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
9
Harkin EF, Lynn MB, Payeur A, Boucher JF, Caya-Bissonnette L, Cyr D, Stewart C, Longtin A, Naud R, Béïque JC. Temporal derivative computation in the dorsal raphe network revealed by an experimentally driven augmented integrate-and-fire modeling framework. eLife 2023; 12:72951. [PMID: 36655738] [PMCID: PMC9977298] [DOI: 10.7554/elife.72951]
Abstract
By means of an expansive innervation, the serotonin (5-HT) neurons of the dorsal raphe nucleus (DRN) are positioned to enact coordinated modulation of circuits distributed across the entire brain in order to adaptively regulate behavior. Yet the network computations that emerge from the excitability and connectivity features of the DRN are still poorly understood. To gain insight into these computations, we began by carrying out a detailed electrophysiological characterization of genetically identified mouse 5-HT and somatostatin (SOM) neurons. We next developed a single-neuron modeling framework that combines the realism of Hodgkin-Huxley models with the simplicity and predictive power of generalized integrate-and-fire models. We found that feedforward inhibition of 5-HT neurons by heterogeneous SOM neurons implemented divisive inhibition, while endocannabinoid-mediated modulation of excitatory drive to the DRN increased the gain of 5-HT output. Our most striking finding was that the output of the DRN encodes a mixture of the intensity and temporal derivative of its input, and that the temporal derivative component dominates this mixture precisely when the input is increasing rapidly. This network computation primarily emerged from prominent adaptation mechanisms found in 5-HT neurons, including a previously undescribed dynamic threshold. By applying a bottom-up neural network modeling approach, our results suggest that the DRN is particularly apt to encode input changes over short timescales, reflecting one of the salient emerging computations that dominate its output to regulate behavior.
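One way to see how adaptation mechanisms such as a dynamic threshold yield temporal-derivative coding is a toy integrate-and-fire neuron whose threshold slowly tracks the input, so sustained drive is progressively discounted while rapid increases are emphasized. This is a deliberately simplified caricature of the augmented integrate-and-fire framework, with invented parameters:

```python
import numpy as np

def dynamic_threshold_spikes(I, dt=1e-3, tau_m=0.02, tau_th=0.5,
                             alpha=0.6, th0=0.5):
    """Integrate-and-fire neuron whose threshold theta relaxes toward
    th0 + alpha * input, discounting sustained drive (illustrative
    parameters, not fitted to 5-HT neurons)."""
    v = 0.0
    theta = th0 + alpha * I[0]          # start adapted to the baseline
    spike_times = []
    for t, i_t in enumerate(I):
        v += dt * (-v + i_t) / tau_m
        theta += dt * (th0 + alpha * i_t - theta) / tau_th
        if v >= theta:
            v = 0.0
            spike_times.append(t * dt)
    return np.array(spike_times)

# input steps from 0.8 to 2.0 at t = 0.5 s
I = np.concatenate([np.full(500, 0.8), np.full(2000, 2.0)])
s = dynamic_threshold_spikes(I)
early = np.sum((s >= 0.5) & (s < 1.0))   # transient right after the step
late = np.sum((s >= 2.0) & (s < 2.5))    # adapted steady state
print(early, late)
```

The burst of spikes just after the step, relaxing to a lower sustained rate, is the derivative-dominated response the abstract describes: output reflects input changes more than input level.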
Affiliation(s)
- Emerson F Harkin
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Michael B Lynn
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Alexandre Payeur
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Jean-François Boucher
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Léa Caya-Bissonnette
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Dominic Cyr
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Chloe Stewart
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- André Longtin
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Richard Naud
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Jean-Claude Béïque
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
10
After-image formation by adaptation to dynamic color gradients. Atten Percept Psychophys 2023; 85:174-187. [PMID: 36207667] [PMCID: PMC9546419] [DOI: 10.3758/s13414-022-02570-8]
Abstract
The eye's retinotopic exposure to an adapter typically produces an after-image. For example, an observer who fixates a red adapter on a gray background will see an illusory cyan after-image after the adapter is removed. The after-image's content, such as its color or intensity, gives insight into the mechanisms responsible for adaptation and the processing of a specific feature. To facilitate adaptation, vision scientists traditionally present stable, unchanging adapters for prolonged durations. How adaptation affects perception when features (e.g., color) change dynamically over time is not understood. To investigate adaptation to a dynamically changing feature, participants viewed a colored patch that changed from a color to gray, following either a direct or a curved path through the (roughly) equiluminant color plane of CIE LAB space. We varied the speed and curvature of color changes across trials and experiments. Results showed that dynamic adapters produce after-images vivid enough to be reported by the majority of participants. An after-image consisted of a color complementary to the average of the adapter's colors, with a small bias towards more recent rather than initial adapter colors. Modelling of the reported after-image colors further confirmed that adaptation sets in rapidly and dissipates gradually. A second experiment replicated these results and further showed that the probability of observing an after-image diminishes only slightly when the adapter displays transient (stepwise, abrupt) color transitions. We conclude that the visual system can adapt to dynamic colors, to a degree that is robust to the potential interference of transient changes in adapter content.
11
Forkosh O. Memoryless Optimality: Neurons Do Not Need Adaptation to Optimally Encode Stimuli With Arbitrarily Complex Statistics. Neural Comput 2022; 34:2374-2387. [DOI: 10.1162/neco_a_01543]
Abstract
Our neurons seem capable of handling any type of data, regardless of its scale or statistical properties. In this letter, we suggest that optimal coding may occur at the single-neuron level without requiring memory, adaptation, or evolutionary-driven fit to the stimuli. We refer to a neural circuit as optimal if it maximizes the mutual information between its inputs and outputs. We show that often encountered differentiator neurons, or neurons that respond mainly to changes in the input, are capable of using all their information capacity when handling samples of any statistical distribution. We demonstrate this optimality using both analytical methods and simulations. In addition to demonstrating the simplicity and elegance of neural processing, this result might provide a way to improve the handling of data by artificial neural networks.
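The core intuition, that a differentiator neuron can be agnostic to input statistics, shows up already in the simplest memoryless differentiator, the sign of consecutive differences: for any continuous i.i.d. input, P(x_t > x_{t-1}) = 0.5, so the binary response carries its maximal one bit regardless of the input distribution. A sketch (the sign nonlinearity is an illustrative simplification of the letter's analysis, not its actual model):

```python
import numpy as np

def sign_entropy(x):
    """Entropy (bits) of the binarized differentiator response
    r_t = sign(x_t - x_{t-1}) for an input stream x."""
    p = np.mean(np.diff(x) > 0)
    p = min(max(p, 1e-12), 1 - 1e-12)   # guard log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(2)
for draw in (rng.standard_normal,            # light-tailed
             rng.standard_exponential,       # skewed
             lambda n: rng.pareto(1.5, n)):  # heavy-tailed
    print(round(sign_entropy(draw(100_000)), 3))  # ~1 bit each time
```

Three wildly different input distributions yield the same near-maximal response entropy, with no memory, adaptation, or fitting involved.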
Affiliation(s)
- Oren Forkosh
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Jerusalem 919050, Israel
- Department of Animal Sciences, Hebrew University of Jerusalem, Rehovot 7612001, Israel
12
Attractive and repulsive effects of sensory history concurrently shape visual perception. BMC Biol 2022; 20:247. [DOI: 10.1186/s12915-022-01444-7]
Abstract
Background
Sequential effects of environmental stimuli are ubiquitous in most behavioral tasks involving magnitude estimation, memory, decision making, and emotion. The human visual system exploits continuity in the visual environment, which induces two contrasting perceptual phenomena shaping visual perception. Previous work reported that perceptual estimation of a stimulus may be influenced either by attractive serial dependencies or repulsive aftereffects, with a number of experimental variables suggested as factors determining the direction and magnitude of sequential effects. Recent studies have theorized that these two effects concurrently arise in perceptual processing, but empirical evidence that directly supports this hypothesis is lacking, and it remains unclear whether and how attractive and repulsive sequential effects interact in a trial. Here we show that the two effects concurrently modulate estimation behavior in a typical sequence of perceptual tasks.
Results
We first demonstrate that observers’ estimation error as a function of both the previous stimulus and response cannot be fully described by either attractive or repulsive bias but is instead well captured by a summation of repulsion from the previous stimulus and attraction toward the previous response. We then reveal that the repulsive bias is centered on the observer’s sensory encoding of the previous stimulus, which is again repelled away from its own preceding trial, whereas the attractive bias is centered precisely on the previous response, which is the observer’s best prediction about the incoming stimuli.
Conclusions
Our findings provide strong evidence that sensory encoding is shaped by dynamic tuning of the system to the past stimuli, inducing repulsive aftereffects, and followed by inference incorporating the prediction from the past estimation, leading to attractive serial dependence.
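The decomposition the authors arrive at, repulsion away from the previous stimulus plus attraction toward the previous response, is commonly modeled with derivative-of-Gaussian bias curves. A sketch with illustrative amplitudes and widths (not the paper's fitted values):

```python
import numpy as np

def dog(delta, amp, width):
    """Derivative-of-Gaussian bias curve: zero bias at delta = 0,
    peak bias at moderate feature differences, decaying for large ones."""
    return amp * delta * np.exp(-delta**2 / (2 * width**2))

def predicted_error(prev_stim, prev_resp, cur_stim,
                    a_rep=-1.5, w_rep=20.0, a_att=1.0, w_att=15.0):
    """Estimation error (deg) = repulsion away from the previous
    stimulus + attraction toward the previous response.
    Amplitudes/widths are illustrative placeholders."""
    repulsion = dog(prev_stim - cur_stim, a_rep, w_rep)
    attraction = dog(prev_resp - cur_stim, a_att, w_att)
    return repulsion + attraction

# previous stimulus 20 deg above the current one, previous response accurate:
# the two biases pull in opposite directions
print(predicted_error(prev_stim=60, prev_resp=60, cur_stim=40))
```

Because the two component curves are centered on different quantities (stimulus vs. response), the summed model can fit error patterns that neither pure attraction nor pure repulsion captures, which is the paper's central point.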
13
Conti D, Mora T. Nonequilibrium dynamics of adaptation in sensory systems. Phys Rev E 2022; 106:054404. [PMID: 36559478] [DOI: 10.1103/physreve.106.054404]
Abstract
Adaptation is used by biological sensory systems to respond to a wide range of environmental signals, by adapting their response properties to the statistics of the stimulus in order to maximize information transmission. We derive rules of optimal adaptation to changes in the mean and variance of a continuous stimulus in terms of Bayesian filters and map them onto stochastic equations that couple the state of the environment to an internal variable controlling the response function. We calculate numerical and exact results for the speed and accuracy of adaptation and its impact on information transmission. We find that, in the regime of efficient adaptation, the speed of adaptation scales sublinearly with the rate of change of the environment. Finally, we exploit the mathematical equivalence between adaptation and stochastic thermodynamics to quantitatively relate adaptation to the irreversibility of the adaptation time course, defined by the rate of entropy production. Our results suggest a means to empirically quantify adaptation in a model-free and nonparametric way.
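The mapping from Bayesian filtering to an internal variable that adapts the response can be sketched with a Kalman-style filter tracking a stimulus mean that jumps at a fixed hazard rate. This is an illustrative toy, not the paper's derivation; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# environment: a latent mean that occasionally jumps to a new value
T, hazard, obs_sd, prior_sd = 2000, 0.005, 2.0, 5.0
mu_env = np.zeros(T)
for t in range(1, T):
    mu_env[t] = rng.normal(0, prior_sd) if rng.random() < hazard else mu_env[t - 1]
obs = mu_env + obs_sd * rng.standard_normal(T)

# Bayesian filter: the internal state (mu_hat, var_hat) plays the role
# of the variable controlling the response function; the predictive
# diffusion term couples it to the environment's rate of change
mu_hat, var_hat = 0.0, prior_sd**2
est = np.empty(T)
for t in range(T):
    var_pred = var_hat + hazard * prior_sd**2   # the mean may have jumped
    gain = var_pred / (var_pred + obs_sd**2)    # Kalman gain
    mu_hat += gain * (obs[t] - mu_hat)
    var_hat = (1 - gain) * var_pred
    est[t] = mu_hat

print(np.mean((est - mu_env)**2) < np.mean((obs - mu_env)**2))
```

Raising the hazard rate raises the steady-state gain, i.e., faster environments demand faster (but noisier) adaptation, consistent with the speed-accuracy trade-off the paper quantifies.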
Affiliation(s)
- Daniele Conti
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
- Thierry Mora
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
14
Seenivasan P, Narayanan R. Efficient information coding and degeneracy in the nervous system. Curr Opin Neurobiol 2022; 76:102620. [PMID: 35985074] [PMCID: PMC7613645] [DOI: 10.1016/j.conb.2022.102620]
Abstract
Efficient information coding (EIC) is a universal biological framework rooted in the fundamental principle that system responses should match their natural stimulus statistics for maximizing environmental information. Quantitatively assessed through information theory, such adaptation to the environment occurs at all biological levels and timescales. The context dependence of environmental stimuli and the need for stable adaptations make EIC a daunting task. We argue that biological complexity is the principal architect that subserves deft execution of stable EIC. Complexity in a system is characterized by several functionally segregated subsystems that show a high degree of functional integration when they interact with each other. Complex biological systems manifest heterogeneities and degeneracy, wherein structurally different subsystems could interact to yield the same functional outcome. We argue that complex systems offer several choices that effectively implement EIC and homeostasis for each of the different contexts encountered by the system.
Affiliation(s)
- Pavithraa Seenivasan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
15
Impact of walking speed and motion adaptation on optokinetic nystagmus-like head movements in the blowfly Calliphora. Sci Rep 2022; 12:11540. [PMID: 35799051] [PMCID: PMC9262929] [DOI: 10.1038/s41598-022-15740-3]
Abstract
The optokinetic nystagmus is a gaze-stabilizing mechanism reducing motion blur by rapid eye rotations against the direction of visual motion, followed by slower syndirectional eye movements minimizing retinal slip speed. Flies control their gaze through head turns controlled by neck motor neurons receiving input directly, or via descending neurons, from well-characterized directional-selective interneurons sensitive to visual wide-field motion. Locomotion increases the gain and speed sensitivity of these interneurons, while visual motion adaptation in walking animals has the opposite effects. To find out whether flies perform an optokinetic nystagmus, and how it may be affected by locomotion and visual motion adaptation, we recorded head movements of blowflies on a trackball stimulated by progressive and rotational visual motion. Flies flexibly responded to rotational stimuli with optokinetic nystagmus-like head movements, independent of their locomotor state. The temporal frequency tuning of these movements, though matching that of the upstream directional-selective interneurons, was only mildly modulated by walking speed or visual motion adaptation. Our results suggest flies flexibly control their gaze to compensate for rotational wide-field motion by a mechanism similar to an optokinetic nystagmus. Surprisingly, the mechanism is less state-dependent than the response properties of directional-selective interneurons providing input to the neck motor system.
16
Price BH, Gavornik JP. Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions. Front Comput Neurosci 2022; 16:929348. [PMID: 35874317 PMCID: PMC9298461 DOI: 10.3389/fncom.2022.929348] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 06/13/2022] [Indexed: 01/16/2023] Open
Abstract
While it is universally accepted that the brain makes predictions, there is little agreement about how this is accomplished and under which conditions. Accurate prediction requires neural circuits to learn and store spatiotemporal patterns observed in the natural environment, but it is not obvious how such information should be stored, or encoded. Information theory provides a mathematical formalism that can be used to measure the efficiency and utility of different coding schemes for data transfer and storage. This theory shows that codes become efficient when they remove predictable, redundant spatial and temporal information. Efficient coding has been used to understand retinal computations and may also be relevant to understanding more complicated temporal processing in visual cortex. However, the literature on efficient coding in cortex is varied and can be confusing since the same terms are used to mean different things in different experimental and theoretical contexts. In this work, we attempt to provide a clear summary of the theoretical relationship between efficient coding and temporal prediction, and review evidence that efficient coding principles explain computations in the retina. We then apply the same framework to computations occurring in early visuocortical areas, arguing that data from rodents is largely consistent with the predictions of this model. Finally, we review and respond to criticisms of efficient coding and suggest ways that this theory might be used to design future experiments, with particular focus on understanding the extent to which neural circuits make predictions from efficient representations of environmental statistics.
17
Carriot J, McAllister G, Hooshangnejad H, Mackrous I, Cullen KE, Chacron MJ. Sensory adaptation mediates efficient and unambiguous encoding of natural stimuli by vestibular thalamocortical pathways. Nat Commun 2022; 13:2612. [PMID: 35551186 PMCID: PMC9098492 DOI: 10.1038/s41467-022-30348-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 04/26/2022] [Indexed: 11/09/2022] Open
Abstract
Sensory systems must continuously adapt to optimally encode stimuli encountered within the natural environment. The prevailing view is that such optimal coding comes at the cost of increased ambiguity, yet to date, prior studies have focused on artificial stimuli. Accordingly, here we investigated whether such a trade-off between optimality and ambiguity exists in the encoding of natural stimuli in the vestibular system. We recorded vestibular nuclei and their target vestibular thalamocortical neurons during naturalistic and artificial self-motion stimulation. Surprisingly, we found no trade-off between optimality and ambiguity. Using computational methods, we demonstrate that thalamocortical neural adaptation in the form of contrast gain control actually reduces coding ambiguity without compromising the optimality of coding under naturalistic but not artificial stimulation. Thus, taken together, our results challenge the common wisdom that adaptation leads to ambiguity and instead suggest an essential role in underlying unambiguous optimized encoding of natural stimuli.
Affiliation(s)
- Jerome Carriot
- Department of Physiology, McGill University, Montréal, Canada
- Hamed Hooshangnejad
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
- Kathleen E Cullen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, USA; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, USA
18
Steinmetz ST, Layton OW, Powell NV, Fajen BR. A Dynamic Efficient Sensory Encoding Approach to Adaptive Tuning in Neural Models of Optic Flow Processing. Front Comput Neurosci 2022; 16:844289. [PMID: 35431848 PMCID: PMC9011806 DOI: 10.3389/fncom.2022.844289] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 02/10/2022] [Indexed: 11/13/2022] Open
Abstract
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
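The retuning principle summarized above can be sketched in a few lines: place the preferred values of a unit population at quantiles of the recently observed stimulus distribution, so units are densest where stimuli are most common. This is a hedged illustration of the general efficient-encoding idea, not the authors' model; the exponential speed distributions, unit count, and function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def retune(recent_stimuli, n_units=8):
    """Place unit preferred values at quantiles of recently observed
    stimuli, so tuning curves are densest where stimuli are common."""
    qs = (np.arange(n_units) + 0.5) / n_units
    return np.quantile(recent_stimuli, qs)

# Two epochs of optic-flow speeds with different statistics (illustrative).
slow_epoch = rng.exponential(scale=1.0, size=500)
fast_epoch = rng.exponential(scale=5.0, size=500)

centers_slow = retune(slow_epoch)   # clustered at low speeds
centers_fast = retune(fast_epoch)   # spread toward higher speeds
```

As the speed distribution shifts, the same rule keeps the population sensitive over the currently relevant range, which is the behaviour the model above exploits for heading estimation.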
Affiliation(s)
- Scott T. Steinmetz
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Oliver W. Layton
- Computer Science Department, Colby College, Waterville, ME, United States
- Nathaniel V. Powell
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Brett R. Fajen
- Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
19
Bias-free estimation of information content in temporally sparse neuronal activity. PLoS Comput Biol 2022; 18:e1009832. [PMID: 35148310 PMCID: PMC8836373 DOI: 10.1371/journal.pcbi.1009832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Accepted: 01/13/2022] [Indexed: 11/20/2022] Open
Abstract
Applying information theoretic measures to neuronal activity data enables the quantification of neuronal encoding quality. However, when the sample size is limited, a naïve estimation of the information content typically contains a systematic overestimation (upward bias), which may lead to misinterpretation of coding characteristics. This bias is exacerbated in Ca2+ imaging because of the temporal sparsity of elevated Ca2+ signals. Here, we introduce methods to correct for the bias in the naïve estimation of information content from limited sample sizes and temporally sparse neuronal activity. We demonstrate the higher accuracy of our methods over previous ones, when applied to Ca2+ imaging data recorded from the mouse hippocampus and primary visual cortex, as well as to simulated data with matching tuning properties and firing statistics. Our bias-correction methods allowed an accurate estimation of the information place cells carry about the animal’s position (spatial information) and uncovered the spatial resolution of hippocampal coding. Furthermore, using our methods, we found that cells with higher peak firing rates carry higher spatial information per spike and exposed differences between distinct hippocampal subfields in the long-term evolution of the spatial code. These results could be masked by the bias when applying the commonly used naïve calculation of information content. Thus, a bias-free estimation of information content can uncover otherwise overlooked properties of the neural code. Neuroscientists interested in understanding the nature of the neural code often apply methods derived from the mathematical framework of information theory to quantify the statistical relationship between neuronal activity and a certain variable of interest. For instance, when studying the neural basis for spatial navigation, it is useful to estimate how much information hippocampal neurons carry about the position of an animal within a specific environment. 
However, the standard measures for estimating information content suffer from an upward bias when applied to small sample sizes, which may lead to misinterpretation of the data. This bias is more pronounced in data from calcium imaging–a widely used technique for recording neuronal activity–because the activity extracted from the measured calcium signal is sparse in time. In this work, we introduce new methods to correct the bias in the naïve estimation of information content from limited sample sizes and such temporally sparse neuronal activity. We show that our bias-correction methods allow an accurate estimation of the information content carried by the activity obtained from calcium imaging data in both hippocampal and cortical neurons, and help uncover differences in the way information content changes during learning across neural circuits.
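The upward bias of the naïve (plug-in) estimator, and a shuffle-style correction, can be illustrated with a toy calculation. This is a hedged sketch of the general phenomenon, not the paper's estimator: the 8-bin discretization, sample size, and shuffle correction below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_mi(x, y):
    """Naive (plug-in) mutual information in bits between two
    discrete sequences, from empirical joint and marginal frequencies."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Stimulus and response are independent, so the true MI is 0 bits,
# but the plug-in estimate on a small sample is biased upward.
stim = rng.integers(0, 8, size=60)
resp = rng.integers(0, 8, size=60)   # sparse "activity", unrelated to stim
naive = plugin_mi(stim, resp)

# Shuffle correction: estimate the bias by breaking the stim-response
# pairing while preserving both marginal distributions.
shuffled = np.mean([plugin_mi(rng.permutation(stim), resp)
                    for _ in range(100)])
corrected = naive - shuffled
```

With only 60 samples over an 8 x 8 contingency table, the naive estimate is far above zero even though the variables are independent; subtracting the shuffled estimate removes most of that bias, which is the kind of correction the work above develops rigorously for temporally sparse activity.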
20
Audet DJ, Gray WO, Brown AD. Audiovisual training rapidly reduces potentially hazardous perceptual errors caused by earplugs. Hear Res 2022; 414:108394. [PMID: 34911017 PMCID: PMC8761180 DOI: 10.1016/j.heares.2021.108394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 10/27/2021] [Accepted: 11/09/2021] [Indexed: 02/03/2023]
Abstract
Our ears capture sound from all directions but do not encode directional information explicitly. Instead, subtle acoustic features associated with unique sound source locations must be learned through experience. Surprisingly, aspects of this mapping process remain highly plastic throughout adulthood: Adult human listeners can accommodate acutely modified acoustic inputs ("new ears") over a period of a few weeks to recover near-normal sound localization, and this process can be accelerated with explicit training. Here we evaluated the extent of such plasticity given only transient exposure to distorted inputs. Distortions were produced via earplugs, which severely degrade sound localization performance, constraining their usability in real-world settings that require accurate directional hearing. Localization was measured over a period of ten weeks. Provision of feedback via simple paired auditory and visual stimuli led to a rapid decrease in the occurrence of large errors (responses >|±30°| from target) despite only once-weekly exposure to the altered inputs. Moreover, training effects generalized to untrained sound source locations. Lesser but qualitatively similar improvements were observed in a group of subjects that did not receive explicit feedback. In total, data demonstrate that even transient exposure to altered spatial acoustic information is sufficient for meaningful perceptual improvement (i.e., chronic exposure is not required), offering insight on the nature and time course of perceptual learning in the context of spatial hearing. Data also suggest that the large and potentially hazardous errors in localization caused by earplugs can be mitigated with appropriate training, offering a practical means to increase their usability.
Affiliation(s)
- David J Audet
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States
- William O Gray
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States; Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA 98195, United States.
21
Jia S, Xing D, Yu Z, Liu JK. Dissecting cascade computational components in spiking neural networks. PLoS Comput Biol 2021; 17:e1009640. [PMID: 34843460 PMCID: PMC8659421 DOI: 10.1371/journal.pcbi.1009640] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 12/09/2021] [Accepted: 11/14/2021] [Indexed: 01/15/2023] Open
Abstract
Finding out the physical structure of the neuronal circuits that governs neuronal responses is an important goal for brain research. With fast advances in large-scale recording techniques, identifying a neuronal circuit with multiple neurons and stages or layers has become both possible and highly sought after. Although methods for mapping the connection structure of circuits have been greatly developed in recent years, they are mostly limited to simple scenarios of a few neurons in a pairwise fashion, and dissecting dynamical circuits, particularly mapping out a complete functional circuit that converges onto a single neuron, remains a challenging problem. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is an effective method for dissecting functional connections within a circuit. Using spiking activities recorded at neurons of the output layer, STNMF can recover a complete circuit consisting of all cascade computational components of presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF could provide a useful approach for investigating neuronal systems by leveraging recorded functional neuronal activity. It is well known that the computation of neuronal circuits is carried out through the staged and cascaded structure of different types of neurons. Nevertheless, information, particularly sensory information, is processed in a network primarily with feedforward connections through different pathways. A notable example is the early visual system, where light is transduced by retinal cells, routed through the lateral geniculate nucleus, and relayed to the primary visual cortex. 
A major interest in recent years has been to map out the physical structures of these neuronal pathways. However, most methods so far are limited to taking snapshots of a static view of connections between neurons. It remains unclear how to obtain a functional and dynamical neuronal circuit beyond simple scenarios of a few randomly sampled neurons. Using simulated spiking neural networks of visual pathways with different scenarios of multiple stages, mixed cell types, and natural image stimuli, we demonstrate that a recent computational tool, named spike-triggered non-negative matrix factorization, can resolve these issues. It enables us to recover the entire set of structural components of neural networks underlying the computation, together with the functional components of each individual neuron. Applying it to complex cells of the primary visual cortex allows us to reveal the underpinnings of their nonlinear computation. Our results, together with other recent experimental and computational efforts, show that it is possible to systematically dissect neural circuitry into detailed structural and functional components.
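The core factorization step can be sketched with a minimal multiplicative-update NMF applied to a toy spike-triggered ensemble. This is a hedged illustration of the principle, not the authors' STNMF implementation; the two "subunit" filters, the noise level, and the iteration count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F."""
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy spike-triggered ensemble: each "spike-triggered stimulus" is a noisy
# non-negative mixture of two hypothetical presynaptic subunit filters.
subunits = np.array([[1.0, 1, 0, 0, 0, 0],
                     [0.0, 0, 0, 0, 1, 1]])          # 2 subunits x 6 pixels
weights = rng.random((500, 2))                        # per-spike activations
V = weights @ subunits + 0.01 * rng.random((500, 6))  # 500 spikes x 6 pixels
W, H = nmf(V, k=2)
# Rows of H approximate the subunit filters (up to order and scale).
```

The idea is that factorizing the stimulus ensemble that precedes the output neuron's spikes decomposes its response into candidate presynaptic components, which is the operation STNMF applies to simulated and recorded circuits above.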
Affiliation(s)
- Shanshan Jia
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
- Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zhaofei Yu
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
- Jian K. Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
22
Regev TI, Markusfeld G, Deouell LY, Nelken I. Context Sensitivity across Multiple Time Scales with a Flexible Frequency Bandwidth. Cereb Cortex 2021; 32:158-175. [PMID: 34289019 DOI: 10.1093/cercor/bhab200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 05/29/2021] [Accepted: 06/07/2021] [Indexed: 12/15/2022] Open
Abstract
Everyday auditory streams are complex, including spectro-temporal content that varies at multiple timescales. Using EEG, we investigated the sensitivity of human auditory cortex to the content of past stimulation in unattended sequences of equiprobable tones. In 3 experiments including 82 participants overall, we found that neural responses measured at different latencies after stimulus onset were sensitive to frequency intervals computed over distinct timescales. Importantly, early responses were sensitive to a longer history of stimulation than later responses. To account for these results, we tested a model consisting of neural populations with frequency-specific but broad tuning that undergo adaptation with exponential recovery. We found that the coexistence of neural populations with distinct recovery rates can explain our results. Furthermore, the adaptation bandwidth of these populations depended on spectral context: it was wider when the stimulation sequence had a wider frequency range. Our results provide electrophysiological evidence as well as a possible mechanistic explanation for dynamic and multiscale context-dependent auditory processing in the human cortex.
Affiliation(s)
- Tamar I Regev
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; MIT Department of Brain and Cognitive Sciences, Cambridge, MA 02139, USA
- Geffen Markusfeld
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
- Leon Y Deouell
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Department of Psychology, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
- Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Department of Neurobiology, The Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
23
Saccomanno V, Love H, Sylvester A, Li WC. The early development and physiology of Xenopus laevis tadpole lateral line system. J Neurophysiol 2021; 126:1814-1830. [PMID: 34705593 DOI: 10.1152/jn.00618.2020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Xenopus laevis has a lateral line mechanosensory system throughout its full life cycle, and a previous study on prefeeding-stage tadpoles revealed that it may play a role in motor responses to both water suction and water jets. Here, we investigated the physiology of the anterior lateral line system in newly hatched tadpoles and the motor outputs induced by its activation in response to brief suction stimuli. High-speed video recordings showed that tadpoles tended to turn and swim away when strong suction was applied close to the head. The lateral line neuromasts were revealed by DASPEI staining, and their inactivation with neomycin eliminated tadpole motor responses to suction. In immobilized preparations, suction or electrical stimulation of the anterior lateral line nerve reliably initiated swimming, but the motor nerve discharges associated with turning were observed only occasionally. The same stimulation applied during ongoing fictive swimming produced a halting response. The anterior lateral line nerve showed spontaneous afferent discharges at rest and increased activity during stimulation. Efferent activities were only recorded during tadpole fictive swimming and were largely synchronous with the ipsilateral motor nerve discharges. Finally, calcium imaging identified neurons in the hindbrain and midbrain with fluorescence increases time-locked to suction stimulation. A cluster of neurons at the entry point of the anterior lateral line nerve in the dorsolateral hindbrain had the shortest response latency, supporting their potential identity as sensory interneurons. Future studies need to reveal how lateral line sensory information is processed by the central circuit to determine tadpole motor behavior.
NEW & NOTEWORTHY We studied Xenopus tadpole motor responses to anterior lateral line stimulation using high-speed video, electrophysiology, and calcium imaging. Activating the lateral line reliably started swimming. At high stimulation intensities, turning was observed behaviorally, but the corresponding motor nerve discharges were seen only occasionally in immobilized tadpoles. Suction applied during swimming produced a halting response. We analyzed afferent and efferent activities of the tadpole anterior lateral line nerve and located sensory interneurons using calcium imaging.
Affiliation(s)
- Valentina Saccomanno
- School of Psychology and Neuroscience, University of St Andrews, Fife, United Kingdom; Department of Life Sciences, University of Trieste, Trieste, Italy
- Heather Love
- School of Psychology and Neuroscience, University of St Andrews, Fife, United Kingdom
- Amy Sylvester
- School of Psychology and Neuroscience, University of St Andrews, Fife, United Kingdom
- Wen-Chang Li
- School of Psychology and Neuroscience, University of St Andrews, Fife, United Kingdom
24
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
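The link between tuning and Fisher information can be made concrete for the textbook case of independent Poisson neurons with Gaussian tuning curves, where I_F(s) = sum_i f_i'(s)^2 / f_i(s). This is a standard result, not taken from the review above; the population parameters below are illustrative.

```python
import numpy as np

# Population of Poisson neurons with Gaussian tuning curves (illustrative
# parameters): f_i(s) = rmax * exp(-(s - c_i)^2 / (2 * sigma^2)).
centers = np.linspace(-2.0, 2.0, 21)   # preferred stimuli c_i
sigma, rmax = 0.5, 10.0

def rates(s):
    """Mean firing rates of the whole population at stimulus s."""
    return rmax * np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))

def fisher(s):
    """Fisher information for independent Poisson units:
    I_F(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = rates(s)
    fprime = f * (centers - s) / sigma ** 2   # analytic derivative
    return np.sum(fprime ** 2 / f)
```

Where the tuning curves tile the stimulus densely, Fisher information is high and an ideal observer's discrimination threshold is low; outside the covered range it falls off, illustrating how tuning induces the sensitivity that the representational geometry then summarizes.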
25
Adibi M, Lampl I. Sensory Adaptation in the Whisker-Mediated Tactile System: Physiology, Theory, and Function. Front Neurosci 2021; 15:770011. [PMID: 34776857 PMCID: PMC8586522 DOI: 10.3389/fnins.2021.770011] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 09/30/2021] [Indexed: 12/03/2022] Open
Abstract
In the natural environment, organisms are constantly exposed to a continuous stream of sensory input. The dynamics of this sensory input change with the organism's behaviour and the environmental context. Contextual variations may induce a >100-fold change in the parameters of the stimulation that an animal experiences. Thus, it is vital for the organism to adapt to the new diet of stimulation. The response properties of neurons, in turn, dynamically adjust to the prevailing properties of sensory stimulation, a process known as "neuronal adaptation." Neuronal adaptation is a ubiquitous phenomenon across all sensory modalities and occurs at different stages of processing, from the periphery to cortex. In spite of the wealth of research on contextual modulation and neuronal adaptation in the visual and auditory systems, the neuronal and computational basis of sensory adaptation in the somatosensory system is less well understood. Here, we summarise recent findings and views about neuronal adaptation in the rodent whisker-mediated tactile system, along with the functional effects of neuronal adaptation on the response dynamics and encoding efficiency of neurons, at the single-cell and population levels, along the whisker-mediated touch system in rodents. Based on the direct and indirect pieces of evidence presented here, we suggest that sensory adaptation provides context-dependent functional mechanisms for noise reduction in sensory processing, salience processing and deviant-stimulus detection, shifts between integration and coincidence detection, band-pass frequency filtering, adjustment of neuronal receptive fields, enhancement of neural coding and improved discriminability around adapting stimuli, energy conservation, and disambiguation of the encoding of principal features of tactile stimuli.
Affiliation(s)
- Mehdi Adibi
- Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Department of Neuroscience and Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Ilan Lampl
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
26
Abstract
Identical physical inputs do not always evoke identical percepts. To investigate the role of stimulus history in tactile perception, we designed a task in which rats had to judge each vibrissal vibration, in a long series, as strong or weak depending on its mean speed. After a low-speed stimulus (trial n - 1), rats were more likely to report the next stimulus (trial n) as strong, and after a high-speed stimulus, they were more likely to report the next stimulus as weak, a repulsive effect that did not depend on choice or reward on trial n - 1. This effect could be tracked over several preceding trials (i.e., n - 2 and earlier) and was characterized by an exponential decay function, reflecting a trial-by-trial incorporation of sensory history. Surprisingly, the influence of trial n - 1 strengthened as the time interval between n - 1 and n grew. Human subjects receiving fingertip vibrations showed these same key findings. We are able to account for the repulsive stimulus history effect, and its detailed time scale, through a single-parameter model, wherein each new stimulus gradually updates the subject's decision criterion. This model points to mechanisms underlying how the past affects the ongoing subjective experience.
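The single-parameter model described above can be sketched as a trial-by-trial criterion update. This is a hedged reconstruction of the general idea, not the authors' fitted model; the update rule, the parameter value, and the stimulus values are illustrative.

```python
import numpy as np

def judge(stimuli, lam=0.8):
    """Judge each stimulus as strong (True) or weak (False) against a
    criterion that is an exponentially weighted average of past stimuli,
    so each new stimulus gradually updates the decision criterion."""
    stimuli = np.asarray(stimuli, dtype=float)
    criterion = stimuli.mean()   # start at the long-run mean speed
    choices = []
    for s in stimuli:
        choices.append(s > criterion)
        criterion = lam * criterion + (1 - lam) * s   # trial-by-trial update
    return choices

# Repulsive history effect: an identical speed-5 stimulus is judged "weak"
# after a fast trial but "strong" after a slow one.
after_fast = judge([10.0, 5.0])[1]   # criterion pulled up by the 10
after_slow = judge([1.0, 5.0])[1]    # criterion pulled down by the 1
```

Because the criterion is an exponentially weighted running average, the influence of trial n - 1 on trial n decays geometrically over further trials, matching the exponential decay of the history effect reported above.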
27
Li J, Niemeier M, Kern R, Egelhaaf M. Disentangling of Local and Wide-Field Motion Adaptation. Front Neural Circuits 2021; 15:713285. [PMID: 34531728 PMCID: PMC8438216 DOI: 10.3389/fncir.2021.713285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Accepted: 08/11/2021] [Indexed: 11/21/2022] Open
Abstract
In flying insects, motion adaptation has been attributed a pivotal functional role in spatial vision based on optic flow. Ongoing motion enhances the representation, in the visual pathway, of spatial discontinuities, which manifest themselves as velocity discontinuities in the retinal optic flow pattern during translational locomotion. There is evidence for different spatial scales of motion adaptation at the different visual processing stages. Motion adaptation is thought to take place, on the one hand, on a retinotopic basis at the level of local motion-detecting neurons and, on the other hand, at the level of wide-field neurons pooling the output of many of these local motion detectors. So far, local and wide-field adaptation could not be analyzed separately, since conventional motion stimuli jointly affect both adaptive processes. Therefore, we designed a novel stimulus paradigm based on two types of motion stimuli that had the same overall strength but differed in that one led to local motion adaptation while the other did not. We recorded intracellularly the activity of a particular wide-field motion-sensitive neuron, the horizontal system equatorial cell (HSE), in blowflies. The experimental data were interpreted based on a computational model of the visual motion pathway, which included the spatially pooling HSE-cell. By comparing the difference between the recorded and modeled HSE-cell responses induced by the two types of motion adaptation, the major characteristics of local and wide-field adaptation could be pinpointed. Wide-field adaptation was shown to depend strongly on the activation level of the cell and, thus, on the direction of motion. In contrast, the response gain is reduced by local motion adaptation to a similar extent independent of the direction of motion.
This direction-independent adaptation differs fundamentally from the well-known adaptive adjustment of response gain according to the prevailing overall stimulus level, which is considered essential for an efficient signal representation by neurons with a limited operating range. Direction-independent adaptation is argued to result from the joint activity of local motion-sensitive neurons with different preferred directions and to lead to a representation of the local motion direction that is independent of the overall direction of global motion.
Affiliation(s)
- Jinglin Li
- Neurobiology, Bielefeld University, Bielefeld, Germany
- Roland Kern
- Neurobiology, Bielefeld University, Bielefeld, Germany

28
Meirhaeghe N, Sohn H, Jazayeri M. A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex. Neuron 2021; 109:2995-3011.e5. [PMID: 34534456 PMCID: PMC9737059 DOI: 10.1016/j.neuron.2021.08.025] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 07/02/2021] [Accepted: 08/18/2021] [Indexed: 12/14/2022]
Abstract
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
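The mechanism summarized above — neural dynamics whose speed is set inversely to the expected mean interval, so that the state reached at stimulus onset encodes the deviation from expectation — can be caricatured in a few lines. This is a toy sketch of our own, not the paper's fitted model; the bound, mean interval, and sample times are arbitrary illustrative choices.

```python
# Toy sketch (our construction): a latent variable ramps toward a fixed bound,
# with ramp speed adapted to the expected mean stimulus time; the state reached
# when the stimulus actually arrives then encodes its deviation from expectation.
def ramp_state(t_stim, t_mean, bound=1.0):
    """Latent state at stimulus arrival for a ramp tuned to t_mean."""
    speed = bound / t_mean       # slower ramp for longer expected intervals
    return speed * t_stim

t_mean = 0.8                     # expected stimulus time (s), an assumed value
deviations = {t: ramp_state(t, t_mean) - 1.0 for t in (0.6, 0.8, 1.0)}
# An on-time stimulus yields zero deviation; early and late stimuli yield
# negative and positive deviations, rescaled by the expectation.
```

Because the speed is retuned whenever the mean of the temporal statistics changes, the same readout automatically re-centers on the new expectation, mirroring the covert-change result described above.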
Affiliation(s)
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

29
Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. [PMID: 34341586 PMCID: PMC8728393 DOI: 10.1038/s41593-021-00899-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 06/25/2021] [Indexed: 02/06/2023]
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.
30
Martelli C, Storace DA. Stimulus Driven Functional Transformations in the Early Olfactory System. Front Cell Neurosci 2021; 15:684742. [PMID: 34413724 PMCID: PMC8369031 DOI: 10.3389/fncel.2021.684742] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 07/06/2021] [Indexed: 11/17/2022] Open
Abstract
Olfactory stimuli are encountered across a wide range of odor concentrations in natural environments. Defining the neural computations that support concentration invariant odor perception, odor discrimination, and odor-background segmentation across a wide range of stimulus intensities remains an open question in the field. In principle, adaptation could allow the olfactory system to adjust sensory representations to the current stimulus conditions, a well-known process in other sensory systems. However, surprisingly little is known about how adaptation changes olfactory representations and affects perception. Here we review the current understanding of how adaptation impacts processing in the first two stages of the vertebrate olfactory system, olfactory receptor neurons (ORNs), and mitral/tufted cells.
Affiliation(s)
- Carlotta Martelli
- Institute of Developmental Biology and Neurobiology, University of Mainz, Mainz, Germany
- Douglas Anthony Storace
- Department of Biological Science, Florida State University, Tallahassee, FL, United States
- Program in Neuroscience, Florida State University, Tallahassee, FL, United States

31
Roy A, Narayanan R. Spatial information transfer in hippocampal place cells depends on trial-to-trial variability, symmetry of place-field firing, and biophysical heterogeneities. Neural Netw 2021; 142:636-660. [PMID: 34399375 PMCID: PMC7611579 DOI: 10.1016/j.neunet.2021.07.026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 03/25/2021] [Accepted: 07/21/2021] [Indexed: 11/19/2022]
Abstract
The relationship between the feature-tuning curve and information transfer profile of individual neurons provides vital insights about neural encoding. However, the relationship between the spatial tuning curve and spatial information transfer of hippocampal place cells remains unexplored. Here, employing a stochastic search procedure spanning thousands of models, we arrived at 127 conductance-based place-cell models that exhibited signature electrophysiological characteristics and sharp spatial tuning, with parametric values that exhibited neither clustering nor strong pairwise correlations. We introduced trial-to-trial variability in responses and computed model tuning curves and information transfer profiles, using stimulus-specific (SSI) and mutual (MI) information metrics, across locations within the place field. We found spatial information transfer to be heterogeneous across models, but to decrease consistently with increasing levels of variability. Importantly, whereas reliable low-variability responses implied that maximal information transfer occurred at high-slope regions of the tuning curve, an increase in variability resulted in maximal transfer occurring at the peak-firing location in a subset of models. Moreover, experience-dependent asymmetry in place-field firing introduced asymmetries in the information transfer computed through MI, but not SSI, and the impact of activity-dependent variability on information transfer was minimal compared to activity-independent variability. We unveiled ion-channel degeneracy in the regulation of spatial information transfer, and demonstrated critical roles for N-methyl-d-aspartate receptors, transient potassium and dendritic sodium channels in regulating information transfer. Our results demonstrate that trial-to-trial variability, tuning-curve shape and biological heterogeneities critically regulate the relationship between the spatial tuning curve and spatial information transfer in hippocampal place cells.
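The SSI and MI metrics named above have standard discrete forms that can be sketched on a toy place field. Everything below (the Gaussian tuning curve, the Gaussian spike-count noise model, and all parameter values) is an illustrative assumption of ours, not one of the paper's conductance-based models.

```python
import numpy as np

# Toy discretization: locations within a place field and spike-count responses.
positions = np.linspace(-1.0, 1.0, 21)        # stimulus s: location in the field
rates = 20.0 * np.exp(-positions**2 / 0.18)   # bell-shaped tuning curve (assumed)
counts = np.arange(0, 41)                     # response r: spike-count bins
p_s = np.full(len(positions), 1.0 / len(positions))  # uniform location prior

def ssi_and_mi(sigma):
    """SSI(s) = E_{r|s}[H(S) - H(S|r)] and MI = E_s[SSI(s)] under count noise sigma."""
    # p(r|s): discretized Gaussian trial-to-trial variability around the tuning curve
    p_r_given_s = np.exp(-(counts[None, :] - rates[:, None])**2 / (2 * sigma**2))
    p_r_given_s /= p_r_given_s.sum(axis=1, keepdims=True)
    p_r = p_s @ p_r_given_s                   # marginal response distribution
    p_s_given_r = p_r_given_s * p_s[:, None] / p_r[None, :]
    h_s = -np.sum(p_s * np.log2(p_s))
    # clip avoids 0 * log(0) = NaN for unreachable (location, count) pairs
    h_s_given_r = -np.sum(p_s_given_r * np.log2(np.clip(p_s_given_r, 1e-300, None)), axis=0)
    i_specific = h_s - h_s_given_r            # information carried by each response
    ssi = p_r_given_s @ i_specific            # stimulus-specific information per location
    return ssi, p_s @ ssi                     # SSI profile and overall MI

ssi_low, mi_low = ssi_and_mi(sigma=1.0)       # reliable responses
ssi_high, mi_high = ssi_and_mi(sigma=5.0)     # strong trial-to-trial variability
```

Consistent with the trend reported above, raising the trial-to-trial variability (`sigma`) consistently lowers the mutual information in this sketch.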
Affiliation(s)
- Ankit Roy
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India; Undergraduate program, Indian Institute of Science, Bangalore, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India

32
Abstract
The ability to adapt to changes in stimulus statistics is a hallmark of sensory systems. Here, we developed a theoretical framework that can account for the dynamics of adaptation from an information processing perspective. We use this framework to optimize and analyze adaptive sensory codes, and we show that codes optimized for stationary environments can suffer from prolonged periods of poor performance when the environment changes. To mitigate the adverse effects of these environmental changes, sensory systems must navigate tradeoffs between the ability to accurately encode incoming stimuli and the ability to rapidly detect and adapt to changes in the distribution of these stimuli. We derive families of codes that balance these objectives, and we demonstrate their close match to experimentally observed neural dynamics during mean and variance adaptation. Our results provide a unifying perspective on adaptation across a range of sensory systems, environments, and sensory tasks.
33
Hsu WMM, Kastner DB, Baccus SA, Sharpee TO. How inhibitory neurons increase information transmission under threshold modulation. Cell Rep 2021; 35:109158. [PMID: 34038717 PMCID: PMC8846953 DOI: 10.1016/j.celrep.2021.109158] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 02/14/2021] [Accepted: 04/29/2021] [Indexed: 11/28/2022] Open
Abstract
Modulation of neuronal thresholds is ubiquitous in the brain. Phenomena such as figure-ground segmentation, motion detection, stimulus anticipation, and shifts in attention all involve changes in a neuron’s threshold based on signals from larger scales than its primary inputs. However, this modulation reduces the accuracy with which neurons can represent their primary inputs, creating a mystery as to why threshold modulation is so widespread in the brain. We find that modulation is less detrimental than other forms of neuronal variability and that its negative effects can be nearly completely eliminated if modulation is applied selectively to sparsely responding neurons in a circuit by inhibitory neurons. We verify these predictions in the retina, where we find that inhibitory amacrine cells selectively deliver modulation signals to sparsely responding ganglion cell types. Our findings elucidate the central role that inhibitory neurons play in maximizing information transmission under modulation.

In brief: Modulation of neuronal thresholds is ubiquitous in the brain but reduces the accuracy of neural signaling. Hsu et al. show that the negative impact of threshold modulation can be almost completely eliminated when modulation is not delivered uniformly to all neurons but only to a subset, and via inhibitory neurons.
Affiliation(s)
- Wei-Mien M Hsu
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA
- David B Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, School of Medicine, San Francisco, CA, USA; Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Stephen A Baccus
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Physics, University of California, San Diego, La Jolla, CA, USA

34
Makkeh A, Gutknecht AJ, Wibral M. Introducing a differentiable measure of pointwise shared information. Phys Rev E 2021; 103:032149. [PMID: 33862718 DOI: 10.1103/physreve.103.032149] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 02/19/2021] [Indexed: 11/07/2022]
Abstract
Partial information decomposition of the multivariate mutual information describes the distinct ways in which a set of source variables contains information about a target variable. The groundbreaking work of Williams and Beer has shown that this decomposition cannot be determined from classic information theory without making additional assumptions, and several candidate measures have been proposed, often drawing on principles from related fields such as decision theory. None of these measures is differentiable with respect to the underlying probability mass function. We here present a measure that satisfies this property, emerges solely from information-theoretic principles, and has the form of a local mutual information. We show how the measure can be understood from the perspective of exclusions of probability mass, a principle that is foundational to the original definition of mutual information by Fano. Since our measure is well defined for individual realizations of random variables it lends itself, for example, to local learning in artificial neural networks. We also show that it has a meaningful Möbius inversion on a redundancy lattice and obeys a target chain rule. We give an operational interpretation of the measure based on the decisions that an agent should take if given only the shared information.
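The "local mutual information" form referred to above can be illustrated with the classic pointwise mutual information of a single realization, i(s; t) = log2 p(s, t) / (p(s) p(t)), which the paper's shared-information measure is designed to parallel. The 2x2 joint distribution below is an arbitrary example of ours, not taken from the paper.

```python
import numpy as np

# Hypothetical joint distribution p(s, t) over a binary source S and target T.
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)          # marginal over the source
p_t = p_st.sum(axis=0)          # marginal over the target

# Pointwise (local) mutual information of each realization (s, t):
# i(s; t) = log2[ p(s, t) / (p(s) p(t)) ]
i_local = np.log2(p_st / np.outer(p_s, p_t))

# Averaging the local values over the joint recovers the classic MI.
mi = float(np.sum(p_st * i_local))
```

Note that individual local values can be negative (a "misinformative" realization), while their average, the classic mutual information, is always non-negative; being defined per realization is exactly the property that makes such measures usable for local learning.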
Affiliation(s)
- Abdullah Makkeh
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
- Aaron J Gutknecht
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
- Michael Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany

35
Röth K, Shao S, Gjorgjieva J. Efficient population coding depends on stimulus convergence and source of noise. PLoS Comput Biol 2021; 17:e1008897. [PMID: 33901195 PMCID: PMC8075262 DOI: 10.1371/journal.pcbi.1008897] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 03/19/2021] [Indexed: 11/30/2022] Open
Abstract
Sensory organs transmit information to downstream brain circuits using a neural code composed of spikes from multiple neurons. According to the prominent efficient coding framework, the properties of sensory populations have evolved to encode maximum information about stimuli given biophysical constraints. How information coding depends on the way sensory signals from multiple channels converge downstream is still unknown, especially in the presence of noise which corrupts the signal at different points along the pathway. Here, we calculated the optimal information transfer of a population of nonlinear neurons under two scenarios. First, a lumped-coding channel, where the information from different inputs converges to a single channel, thus reducing the number of neurons. Second, an independent-coding channel, where different inputs contribute independent information without convergence. In each case, we investigated information loss when the sensory signal was corrupted by two sources of noise. We determined critical noise levels at which the optimal number of distinct thresholds of individual neurons in the population changes. Comparing our system to classical physical systems, these changes correspond to first- or second-order phase transitions for the lumped- or the independent-coding channel, respectively. We relate our theoretical predictions to coding in a population of auditory nerve fibers recorded experimentally, and find signatures of efficient coding. Our results yield important insights into the diverse coding strategies used by neural populations to optimally integrate sensory stimuli in the presence of distinct sources of noise.
Affiliation(s)
- Kai Röth
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
- Shuai Shao
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Donders Institute and Faculty of Science, Radboud University, Nijmegen, Netherlands
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany

36
Zeldenrust F, Gutkin B, Denève S. Efficient and robust coding in heterogeneous recurrent networks. PLoS Comput Biol 2021; 17:e1008673. [PMID: 33930016 PMCID: PMC8115785 DOI: 10.1371/journal.pcbi.1008673] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 05/12/2021] [Accepted: 04/07/2021] [Indexed: 11/19/2022] Open
Abstract
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online in a close-to-optimal manner, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally, and that networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) integrators should show strong correlations with other integrators, and resonators with other resonators, whereas correlations between neurons with different coding properties should be much weaker; and 2) 'type 2' neurons should be more coherent with the overall network activity than 'type 1' neurons.
Affiliation(s)
- Fleur Zeldenrust
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Boris Gutkin
- Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France
- Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
- Sophie Denève
- Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France

37
Tabas A, von Kriegstein K. Adjudicating Between Local and Global Architectures of Predictive Processing in the Subcortical Auditory Pathway. Front Neural Circuits 2021; 15:644743. [PMID: 33776657 PMCID: PMC7994860 DOI: 10.3389/fncir.2021.644743] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/16/2021] [Indexed: 11/13/2022] Open
Abstract
Predictive processing, a leading theoretical framework for sensory processing, suggests that the brain constantly generates predictions about the sensory world and that perception emerges from the comparison between these predictions and the actual sensory input. This requires two distinct neural elements: generative units, which encode the model of the sensory world, and prediction error units, which compare these predictions against the sensory input. Although predictive processing is generally portrayed as a theory of cerebral cortex function, animal and human studies over the last decade have robustly shown the ubiquitous presence of prediction error responses in several nuclei of the auditory, somatosensory, and visual subcortical pathways. In the auditory modality, prediction error is typically elicited using so-called oddball paradigms, in which sequences of repeated pure tones of the same pitch are substituted, at unpredictable intervals, by a tone of deviant frequency. Repeated sounds become predictable promptly and elicit decreasing prediction error; deviant tones break these predictions and elicit large prediction errors. The simplicity of the rules inducing predictability makes oddball paradigms agnostic about the origin of the predictions. Here, we introduce two possible models of the organizational topology of the predictive processing auditory network: (1) the global view, which assumes that predictions on the sensory input are generated at high-order levels of the cerebral cortex and transmitted in a cascade of generative models to the subcortical sensory pathways; and (2) the local view, which assumes that independent local models, computed using local information, are used to perform predictions at each processing stage. In the global view, information encoding is optimized globally but biases sensory representations along the entire brain according to the subjective views of the observer. The local view results in diminished coding efficiency, but guarantees in return a robust encoding of the features of the sensory input at each processing stage. Although most experimental results to date are ambiguous in this respect, recent evidence favors the global model.
Affiliation(s)
- Alejandro Tabas
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

38
Gutierrez GJ, Rieke F, Shea-Brown ET. Nonlinear convergence boosts information coding in circuits with parallel outputs. Proc Natl Acad Sci U S A 2021; 118:e1921882118. [PMID: 33593894 PMCID: PMC7923546 DOI: 10.1073/pnas.1921882118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Neural circuits are structured with layers of converging and diverging connectivity and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.
Affiliation(s)
- Gabrielle J Gutierrez
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Eric T Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195

39
Malaguti G, ten Wolde PR. Theory for the optimal detection of time-varying signals in cellular sensing systems. eLife 2021; 10:e62574. [PMID: 33594978 PMCID: PMC7946427 DOI: 10.7554/elife.62574] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 02/12/2021] [Indexed: 11/29/2022] Open
Abstract
Living cells often need to measure chemical concentrations that vary in time, yet how accurately they can do so is poorly understood. Here, we present a theory that fully specifies, without any adjustable parameters, the optimal design of a canonical sensing system in terms of two elementary design principles: (1) there exists an optimal integration time, which is determined by the input statistics and the number of receptors; and (2) in the optimally designed system, the number of independent concentration measurements as set by the number of receptors and the optimal integration time equals the number of readout molecules that store these measurements and equals the work to store these measurements reliably; no resource is then in excess and hence wasted. Applying our theory to the Escherichia coli chemotaxis system indicates that its integration time is not only optimal for sensing shallow gradients but also necessary to enable navigation in these gradients.
40
Tabas A, Mihai G, Kiebel S, Trampel R, von Kriegstein K. Abstract rules drive adaptation in the subcortical sensory pathway. eLife 2020; 9:e64501. [PMID: 33289479 PMCID: PMC7785290 DOI: 10.7554/elife.64501] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 12/03/2020] [Indexed: 01/19/2023] Open
Abstract
The subcortical sensory pathways are the fundamental channels for mapping the outside world to our minds. Sensory pathways efficiently transmit information by adapting neural responses to the local statistics of the sensory input. The long-standing mechanistic explanation for this adaptive behaviour is that neural activity decreases with increasing regularities in the local statistics of the stimuli. An alternative account is that neural coding is directly driven by expectations of the sensory input. Here, we used abstract rules to manipulate expectations independently of local stimulus statistics. The ultra-high-field functional MRI data show that abstract expectations can drive the response amplitude to tones in the human auditory pathway. These results provide the first unambiguous evidence of abstract processing in a subcortical sensory pathway. They indicate that the neural representation of the outside world is altered by our prior beliefs even at initial points of the processing hierarchy.
Affiliation(s)
- Alejandro Tabas
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Research Group Neural Mechanism of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Glad Mihai
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Research Group Neural Mechanism of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Stefan Kiebel
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Robert Trampel
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Research Group Neural Mechanism of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

41
Dupan SSG, McNeill Z, Brunton E, Nazarpour K. Temporal Modulation of Transcutaneous Electrical Nerve Stimulation Influences Sensory Perception. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:3885-3888. [PMID: 33018849 DOI: 10.1109/embc44109.2020.9176472] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The incorporation of sensory feedback in prosthetics can lead to a range of benefits, such as improved hand control, increased prosthesis embodiment, and the reduction of phantom limb pain. However, the creation of reliable sensory feedback is complicated by the temporal modulation of the nervous system. Sensory fibres in the hand are primed to react to changing conditions, firing when discrete mechanical events occur. In this study, we investigate the minimal possible stimulation needed to distinguish different sensory patterns that can be used to indicate events. We presented a two-alternative forced-choice task of transcutaneous electrical nerve stimuli to 10 participants. The results showed that different stimuli can be distinguished when double pulses have an inter-stimulus-interval of 10 ms. Additionally, providing a pause of at least 350 ms between stimuli increases the discrimination of the perception. These results suggest that humans can distinguish different patterns of transcutaneous electrical nerve stimulation with as little as two stimuli, illustrating the possibility of providing event-related stimulation.
42
van Gendt MJ, Siebrecht M, Briaire JJ, Bohte SM, Frijns JHM. Short and long-term adaptation in the auditory nerve stimulated with high-rate electrical pulse trains are better described by a power law. Hear Res 2020; 398:108090. [PMID: 33070033 DOI: 10.1016/j.heares.2020.108090] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 09/02/2020] [Accepted: 09/22/2020] [Indexed: 01/18/2023]
Abstract
Despite the introduction of many new sound-coding strategies, speech perception outcomes in cochlear implant listeners have leveled off. Computer models may help speed up the evaluation of new sound-coding strategies, but most existing models of auditory nerve responses to electrical stimulation include limited temporal detail, as the effects of longer stimulation, such as adaptation, are not well-studied. Measured neural responses to stimulation with both short (400 ms) and long (10 min) duration high-rate (5 kpps) pulse trains were compared in terms of spike rate and vector strength (VS) with model outcomes obtained with different forms of adaptation. A previously published model combining biophysical and phenomenological approaches was adjusted with adaptation modeled as a single decaying exponential, multiple exponentials, and a power law. For the long duration data, power-law adaptation by far outperforms the single-exponential model, especially when it is optimized per fiber. For the short duration data, all tested models performed comparably well, with slightly better performance of the single-exponential model for VS and of the power-law model for the spike rates. The power-law parameter sets obtained when fitted to the long duration data also yielded adequate predictions for short duration stimulation, and vice versa. The power-law function can be approximated with multiple exponentials, which is physiologically more viable. The number of required exponentials depends on the duration of the simulation; the 400 ms data were well-replicated by two exponentials (23 and 212 ms), whereas the 10-minute data required at least seven exponentials (ranging from 4 ms to 600 s). Adaptation of the auditory nerve to high-rate electrical stimulation can best be described by a power law or a sum of exponentials. This gives an adequate fit for both short and long duration stimuli, such as CI speech segments.
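The central modelling point above — that a power-law adaptation profile is tracked far better by a bank of log-spaced exponentials than by any single exponential — can be sketched numerically. The power-law exponent, time scale, and time-constant grid below are illustrative choices of ours, not the paper's fitted parameters.

```python
import numpy as np

t = np.linspace(0.001, 10.0, 2000)       # time since stimulus onset (s)
t0, a = 0.05, 0.6                        # assumed power-law parameters
target = (1.0 + t / t0) ** (-a)          # power-law adaptation profile

def exp_fit_error(taus):
    """Mean-squared error of the best least-squares fit with a fixed bank of exponential decays."""
    basis = np.exp(-t[:, None] / np.asarray(taus)[None, :])
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return float(np.mean((basis @ w - target) ** 2))

err_single = exp_fit_error([0.5])                    # one exponential decay
err_multi = exp_fit_error(np.logspace(-2, 1, 7))     # seven log-spaced exponentials
# The seven-exponential bank tracks the power law far more closely.
```

The design choice mirrors the abstract's observation: a power law has no single characteristic time scale, so covering the stimulus duration with log-spaced time constants (the longer the stimulus, the more exponentials) is what makes the physiologically viable sum-of-exponentials approximation work.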
Affiliation(s)
- M J van Gendt
- ENT-Department, Leiden University Medical Centre, PO Box 9600, Leiden 2300 RC, The Netherlands
- M Siebrecht
- ENT-Department, Leiden University Medical Centre, PO Box 9600, Leiden 2300 RC, The Netherlands
- J J Briaire
- ENT-Department, Leiden University Medical Centre, PO Box 9600, Leiden 2300 RC, The Netherlands
- S M Bohte
- CWI, Center for Mathematics and Informatics, Amsterdam, The Netherlands
- J H M Frijns
- ENT-Department, Leiden University Medical Centre, PO Box 9600, Leiden 2300 RC, The Netherlands
|
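The model comparison in this abstract lends itself to a short numerical sketch: a power-law adaptation curve can be approximated by a least-squares weighted sum of log-spaced exponentials. All parameters below (exponent, time constants, time range) are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

def power_law(t, t0=1e-3, alpha=0.5):
    # Power-law adaptation kernel: decays as (1 + t/t0)^(-alpha).
    return (1.0 + t / t0) ** -alpha

# Time axis spanning three decades (1 ms to 1 s), log-spaced samples.
t = np.logspace(-3, 0, 200)
target = power_law(t)

# Seven log-spaced time constants, one basis exponential each
# (the abstract notes the 10-minute data needed at least seven).
taus = np.logspace(-3, 0, 7)
basis = np.exp(-t[:, None] / taus[None, :])   # shape (200, 7)

# Least-squares fit of the exponential weights to the power-law curve.
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ weights
max_err = np.max(np.abs(approx - target))     # worst-case deviation
```

With roughly one exponential per decade of time, the sum tracks the power law closely, which is why a multi-exponential model can serve as a physiologically plausible stand-in for power-law adaptation.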
43
|
Seenivasan P, Narayanan R. Efficient phase coding in hippocampal place cells. Phys Rev Res 2020; 2:033393. [PMID: 32984841 PMCID: PMC7116119 DOI: 10.1103/physrevresearch.2.033393] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Neural codes have been postulated to build efficient representations of the external world. The hippocampus, an encoding system, employs neuronal firing rates and spike phases to encode external space. Although the biophysical origin of such codes is at a single neuronal level, the role of neural components in efficient coding is not understood. The complexity of this problem lies in the dimensionality of the parametric space encompassing neural components, and is amplified by the enormous biological heterogeneity observed in each parameter. A central question that spans encoding systems therefore is how neurons arrive at efficient codes in the face of widespread biological heterogeneities. To answer this, we developed a conductance-based spiking model for phase precession, a phase code of external space exhibited by hippocampal place cells. Our model accounted for several experimental observations on place cell firing and electrophysiology: the emergence of phase precession from exact spike timings of conductance-based models with neuron-specific ion channels and receptors; biological heterogeneities in neural components and excitability; the emergence of subthreshold voltage ramp, increased firing rate, enhanced theta power within the place field; a signature reduction in extracellular theta frequency compared to its intracellular counterpart; and experience-dependent asymmetry in firing-rate profile. We formulated phase-coding efficiency, using Shannon's information theory, as an information maximization problem with spike phase as the response and external space within a single place field as the stimulus. We employed an unbiased stochastic search spanning an 11-dimensional neural space, involving thousands of iterations that accounted for the biophysical richness and neuron-to-neuron heterogeneities. We found a small subset of models that exhibited efficient spatial information transfer through the phase code, and investigated the distinguishing features of this subpopulation at the parametric and functional scales. At the parametric scale, which spans the molecular components that defined the neuron, several nonunique parametric combinations with weak pairwise correlations yielded models with similar high phase-coding efficiency. Importantly, placing additional constraints on these models in terms of matching other aspects of hippocampal neural responses did not hamper parametric degeneracy. We provide quantitative evidence demonstrating this parametric degeneracy to be a consequence of a many-to-one relationship between the different parameters and phase-coding efficiency. At the functional scale, involving the cellular-scale neural properties, our analyses revealed an important higher-order constraint that was exclusive to models exhibiting efficient phase coding. Specifically, we found a counterbalancing negative correlation between neuronal gain and the strength of external synaptic inputs as a critical functional constraint for the emergence of efficient phase coding. These observations implicate intrinsic neural properties as important contributors in effectuating such counterbalance, which can be achieved by recruiting nonunique parametric combinations. Finally, we show that a change in afferent statistics, manifesting as input asymmetry onto these neuronal models, induced an adaptive shift in the phase code that preserved its efficiency. Together, our analyses unveil parametric degeneracy as a mechanism to harness widespread neuron-to-neuron heterogeneity towards accomplishing stable and efficient encoding, provided specific higher-order functional constraints on the relationship of neural gain to external inputs are satisfied.
|
44
|
Liu X, Engel SA. Higher-Level Meta-Adaptation Mitigates Visual Distortions Produced by Lower-Level Adaptation. Psychol Sci 2020; 31:654-662. [PMID: 32348188 DOI: 10.1177/0956797620907090] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The visual system adapts to the environment, changing neural responses to aid efficiency and improve perception. However, these changes sometimes lead to negative consequences: If neurons at later processing stages fail to account for adaptation at earlier stages, perceptual errors result, including common visual illusions. These negative effects of adaptation have been termed the coding catastrophe. How does the visual system resolve them? We hypothesized that higher-level adaptation can correct errors arising from the coding catastrophe by changing what appears normal, a common form of adaptation across domains. Observers (N = 15) viewed flickering checkerboards that caused a normal face to appear distorted. We tested whether the visual system can adapt to this adaptation-distorted face through repeated viewing. Results from two experiments show that such meta-adaptation does occur and that it makes the distorted face gradually appear more normal. Meta-adaptation may be a general strategy to correct negative consequences of low-level adaptation.
Affiliation(s)
- Xinyu Liu
- Department of Psychology, University of Minnesota
|
45
|
Charkhkar H, Christie BP, Triolo RJ. Sensory neuroprosthesis improves postural stability during Sensory Organization Test in lower-limb amputees. Sci Rep 2020; 10:6984. [PMID: 32332861 PMCID: PMC7181811 DOI: 10.1038/s41598-020-63936-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Accepted: 04/08/2020] [Indexed: 12/26/2022] Open
Abstract
To maintain postural stability, unilateral lower-limb amputees (LLAs) heavily rely on visual and vestibular inputs, and somatosensory cues from their intact leg to compensate for missing somatosensory information from the amputated limb. When any of these resources are compromised, LLAs exhibit poor balance control compared to able-bodied individuals. We hypothesized that restoring somatosensation related to the missing limb via direct activation of the sensory nerves in the residuum would improve the standing stability of LLAs. We developed a closed-loop sensory neuroprosthesis utilizing non-penetrating multi-contact cuff electrodes implanted around the residual nerves to elicit perceptions of the location and intensity of plantar pressures under the prosthetic feet of two transtibial amputees. Effects of the sensory neuroprosthesis on balance were quantified with the Sensory Organization Test and other posturographic measures of sway. In both participants, the sensory neuroprosthesis improved equilibrium and sway when somatosensation from the intact leg and visual inputs were perturbed simultaneously. One participant also showed improvement with the sensory neuroprosthesis whenever somatosensation in the intact leg was compromised via perturbations of the platform. These observations suggest the sensory feedback elicited by neural stimulation can significantly improve the standing stability of LLAs, particularly when other sensory inputs are depleted or otherwise compromised.
Affiliation(s)
- Hamid Charkhkar
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Affairs Medical Center, 10701 East Boulevard, Cleveland, OH, 44106, USA
- Breanne P Christie
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Affairs Medical Center, 10701 East Boulevard, Cleveland, OH, 44106, USA
- Ronald J Triolo
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Affairs Medical Center, 10701 East Boulevard, Cleveland, OH, 44106, USA
|
46
|
Abstract
The visual system optimizes its functioning for a given environment through processes collectively called adaptation. It is currently unknown, however, whether adaptation is affected by the particular task the observer performs within that environment. Two experiments tested whether this is the case. Observers adapted to high contrast grating patterns, and the decay of adaptation was measured using a version of the tilt-aftereffect, while they performed two different secondary tasks. One task involved judging the luminance of a small circular spot at fixation, and was expected to be unaffected by adaptation. The other secondary task involved judging a low contrast grating, and adaptation was expected to make this task difficult by reducing the visibility of the grating. Identical displays containing both a fixation spot and a grating were used for both tasks. Tilt-aftereffects were smaller when subjects concurrently performed the grating task than when they performed the fixation task. These results suggest that the control of adaptation, in this case its decay, is sensitive to the nature of the task the observer is performing. Adaptation may attempt to optimize vision with respect to many different criteria simultaneously; task is likely one of the criteria included in this process.
Affiliation(s)
- Mark Vergeer
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States of America
- Stephen A. Engel
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States of America
|
47
|
Matulis CA, Chen J, Gonzalez-Suarez AD, Behnia R, Clark DA. Heterogeneous Temporal Contrast Adaptation in Drosophila Direction-Selective Circuits. Curr Biol 2020; 30:222-236.e6. [PMID: 31928874 PMCID: PMC7003801 DOI: 10.1016/j.cub.2019.11.077] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 11/06/2019] [Accepted: 11/26/2019] [Indexed: 11/23/2022]
Abstract
In visual systems, neurons adapt both to the mean light level and to the range of light levels, or the contrast. Contrast adaptation has been studied extensively, but it remains unclear how it is distributed among neurons in connected circuits, and how early adaptation affects subsequent computations. Here, we investigated temporal contrast adaptation in neurons across Drosophila's visual motion circuitry. Several ON-pathway neurons showed strong adaptation to changes in contrast over time. One of these neurons, Mi1, showed almost complete adaptation on fast timescales, and experiments ruled out several potential mechanisms for its adaptive properties. When contrast adaptation reduced the gain in ON-pathway cells, it was accompanied by decreased motion responses in downstream direction-selective cells. Simulations show that contrast adaptation can substantially improve motion estimates in natural scenes. The benefits are larger for ON-pathway adaptation, which helps explain the heterogeneous distribution of contrast adaptation in these circuits.
Affiliation(s)
- Catherine A Matulis
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA
- Rudy Behnia
- Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Damon A Clark
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, 260 Whitney Avenue, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, 333 Cedar Street, New Haven, CT 06510, USA
|
48
|
Zamani Y, Nategh N. A Convolutional Neural Network-based Model of Neural Pathways in the Retina. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:6906-6909. [PMID: 31947427 DOI: 10.1109/embc.2019.8857278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
A variety of the visual functions of the vertebrate retina rely on the interactions between excitation and inhibition within specialized retinal circuitry. The nonlinear properties of these interactions, as well as a large number of cell types and pathways involved, pose a challenge for characterizing many of these retinal functions. To address these challenges, we develop a computational model in the convolutional neural network framework, along with an efficient, robust learning procedure that enables incorporating both excitatory and inhibitory interactions between the network layers of the retina through feedforward and lateral connections. The model successfully describes the spiking rate of retinal ganglion cells, for both real and simulated validation data. The recovered biologically-plausible model parameters represent the linear and nonlinear computations across retinal network layers. This novel model of retinal responses and the associated learning algorithm provide a powerful tool for understanding the complex visual computations in the retina and for replicating naturalistic visual functions of the retina for retinal prostheses applications.
|
49
|
Lohse M, Bajo VM, King AJ, Willmore BDB. Neural circuits underlying auditory contrast gain control and their perceptual implications. Nat Commun 2020; 11:324. [PMID: 31949136 PMCID: PMC6965083 DOI: 10.1038/s41467-019-14163-5] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 12/19/2019] [Indexed: 11/09/2022] Open
Abstract
Neural adaptation enables sensory information to be represented optimally in the brain despite large fluctuations over time in the statistics of the environment. Auditory contrast gain control represents an important example, which is thought to arise primarily from cortical processing. Here we show that neurons in the auditory thalamus and midbrain of mice show robust contrast gain control, and that this is implemented independently of cortical activity. Although neurons at each level exhibit contrast gain control to similar degrees, adaptation time constants become longer at later stages of the processing hierarchy, resulting in progressively more stable representations. We also show that auditory discrimination thresholds in human listeners compensate for changes in contrast, and that the strength of this perceptual adaptation can be predicted from physiological measurements. Contrast adaptation is therefore a robust property of both the subcortical and cortical auditory system and accounts for the short-term adaptability of perceptual judgments.
Affiliation(s)
- Michael Lohse
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Victoria M Bajo
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, OX1 3PT, UK
|
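Contrast gain control of the kind described here is commonly summarized as divisive normalization. The sketch below is a minimal illustration of that idea, not the thalamic or midbrain mechanism measured in the paper; the semi-saturation constant `sigma` is a hypothetical parameter.

```python
import numpy as np

def contrast_gain(stimulus, sigma=0.2):
    # Divisive gain control: scale the response by a gain that falls
    # as stimulus contrast (standard deviation of the input) rises.
    contrast = np.std(stimulus)
    gain = 1.0 / (sigma + contrast)
    return gain * stimulus

rng = np.random.default_rng(0)
low = rng.normal(0.0, 0.1, 10_000)    # low-contrast stimulus
high = rng.normal(0.0, 1.0, 10_000)   # 10x higher contrast

out_low = contrast_gain(low)
out_high = contrast_gain(high)

# A 10x spread in input contrast is compressed at the output,
# i.e. the response range is partially stabilized across conditions.
ratio = np.std(out_high) / np.std(out_low)
```

The same divisive form, with a slower estimate of `contrast`, also captures the longer adaptation time constants reported at later processing stages: the slower the contrast estimate updates, the more stable the output representation.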
50
|
Gordon U, Marom S, Brenner N. Visual detection of time-varying signals: Opposing biases and their timescales. PLoS One 2019; 14:e0224256. [PMID: 31725731 PMCID: PMC6855422 DOI: 10.1371/journal.pone.0224256] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 10/10/2019] [Indexed: 11/23/2022] Open
Abstract
Human visual perception is a complex, dynamic and fluctuating process. In addition to the incoming visual stimulus, it is affected by many other factors including temporal context, both external and internal to the observer. In this study we investigate the dynamic properties of psychophysical responses to a continuous stream of visual near-threshold detection tasks. We manipulate the incoming signals to have temporal structures with various characteristic timescales. Responses of human observers to these signals are analyzed using tools that highlight their dynamical features as well. Our experiments show two opposing biases that shape perceptual decision making simultaneously: positive recency, biasing towards a repeated response, and adaptation, entailing an increased probability of a changed response. While both these effects have been reported in previous work, our results shed new light on the timescales involved in these effects, and on their interplay with varying inputs. We find that positive recency is a short-term bias, inversely correlated with response time, suggesting it can be compensated by afterthought. Adaptation, in contrast, reflects trends over longer times, possibly including multiple previous trials. Our entire dataset, which includes different input signal temporal structures, is consistent with a simple model with the two biases characterized by a fixed parameter set. These results suggest that perceptual biases are inherent features that are not flexibly tuned to input signals.
Affiliation(s)
- Urit Gordon
- Faculty of Medicine, Technion, Haifa, Israel
- Network Biology Research Lab, Lorry Lockey Interdisciplinary Center for Life Science and Engineering, Technion, Haifa, Israel
- Shimon Marom
- Faculty of Medicine, Technion, Haifa, Israel
- Network Biology Research Lab, Lorry Lockey Interdisciplinary Center for Life Science and Engineering, Technion, Haifa, Israel
- Naama Brenner
- Faculty of Chemical Engineering, Technion, Haifa, Israel
- Network Biology Research Lab, Lorry Lockey Interdisciplinary Center for Life Science and Engineering, Technion, Haifa, Israel
|