1.
Brudner S, Zhou B, Jayaram V, Santana GM, Clark DA, Emonet T. Fly navigational responses to odor motion and gradient cues are tuned to plume statistics. bioRxiv 2025:2025.03.31.646361. PMID: 40235995; PMCID: PMC11996313; DOI: 10.1101/2025.03.31.646361.
Abstract
Odor cues guide animals to food and mates. Different environmental conditions can create differently patterned odor plumes, making navigation more challenging. Prior work has shown that animals turn upwind when they detect odor and cast crosswind when they lose it. Animals with bilateral olfactory sensors can also detect directional odor cues, such as odor gradients and odor motion. It remains unknown how animals use these two directional odor cues to guide crosswind navigation in odor plumes with distinct statistics. Here, we investigate this problem theoretically and experimentally. We show that these directional odor cues provide complementary information for navigation in different plume environments. We numerically analyzed real plumes to show that odor gradient cues are more informative about crosswind directions in relatively smooth odor plumes, while odor motion cues are more informative in turbulent or complex plumes. Neural networks trained to optimize crosswind turning converge to distinctive network structures that are tuned to odor gradient cues in smooth plumes and to odor motion cues in complex plumes. These trained networks improve the performance of artificial agents navigating plume environments that match the training environment. By recording Drosophila fruit flies as they navigated different odor plume environments, we verified that flies show the same correspondence between informative cues and plume types. Fly turning in the crosswind direction is correlated with odor gradients in smooth plumes and with odor motion in complex plumes. Overall, these results demonstrate that these directional odor cues are complementary across environments, and that animals exploit this relationship.
Significance
Many animals use smell to find food and mates, often navigating complex odor plumes shaped by environmental conditions. While upwind movement upon odor detection is well established, less is known about how animals steer crosswind to stay in the plume. We show that directional odor cues (gradients and motion) guide crosswind navigation differently depending on plume structure. Gradients carry more information in smooth plumes, while motion dominates in turbulent ones. Neural networks trained to optimize crosswind navigation reflect this distinction, developing gradient sensitivity in smooth environments and motion sensitivity in complex ones. Experimentally, fruit flies adjust their turning behavior to prioritize the most informative cue in each context. These findings likely generalize to other animals navigating similarly structured odor plumes.
2.
Wu N, Zhou B, Agrochao M, Clark DA. Broken time-reversal symmetry in visual motion detection. Proc Natl Acad Sci U S A 2025; 122:e2410768122. PMID: 40048271; PMCID: PMC11912477; DOI: 10.1073/pnas.2410768122.
Abstract
Our intuition suggests that when a movie is played in reverse, our perception of motion at each location in the reversed movie will be perfectly inverted compared to the original. This intuition is also reflected in classical theoretical and practical models of motion estimation, in which velocity flow fields invert when inputs are reversed in time. However, here we report that this symmetry of motion perception upon time reversal is broken in real visual systems. We designed a set of visual stimuli to investigate time reversal symmetry breaking in the fruit fly Drosophila's well-studied optomotor rotation behavior. We identified a suite of stimuli with a wide variety of properties that can uncover broken time reversal symmetry in fly behavioral responses. We then trained neural network models to predict the velocity of scenes with both natural and artificial contrast distributions. Training with naturalistic contrast distributions yielded models that broke time reversal symmetry, even when the training data themselves were time reversal symmetric. We show analytically and numerically that the breaking of time reversal symmetry in the model responses can arise from contrast asymmetry in the training data, but can also arise from other features of the contrast distribution. Furthermore, shallower neural network models can exhibit stronger symmetry breaking than deeper ones, suggesting that less flexible neural networks may be more prone to time reversal symmetry breaking. Overall, these results reveal a surprising feature of biological motion detectors and suggest that it could arise from constrained optimization in natural environments.
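As a concrete illustration of the symmetry at stake (a sketch assuming a textbook Hassenstein-Reichardt correlator, not the networks trained in the paper): the classical opponent correlator's time-averaged output inverts exactly when its input movie is played backward, whereas even a simple output nonlinearity is enough to break that inversion.

```python
import numpy as np

def hrc(stim, delay=1):
    """Time-averaged output of a two-point opponent Hassenstein-Reichardt correlator.

    stim: shape (T, 2), contrast at two neighboring points over time.
    Each arm multiplies a delayed copy of one input with the other input;
    the two mirror-symmetric arms are subtracted (circular delay for simplicity).
    """
    a, b = stim[:, 0], stim[:, 1]
    return np.mean(np.roll(a, delay) * b - np.roll(b, delay) * a)

def rectified_hrc(stim, delay=1):
    """The same correlator followed by half-wave rectification of its output signal."""
    a, b = stim[:, 0], stim[:, 1]
    signal = np.roll(a, delay) * b - np.roll(b, delay) * a
    return np.mean(np.maximum(signal, 0.0))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
# rightward drift: the second point sees the first point's contrast one step later
stim = np.stack([x, np.roll(x, 1)], axis=1)
reversed_stim = stim[::-1]  # the movie played backward

# the bilinear correlator is time-reversal antisymmetric: its response inverts exactly
assert np.isclose(hrc(stim), -hrc(reversed_stim))
# the rectified variant is not: both outputs are non-negative, so they cannot invert
assert not np.isclose(rectified_hrc(stim), -rectified_hrc(reversed_stim))
```

The second assertion is the sense in which a nonlinearity downstream of the correlation breaks time-reversal symmetry, the phenomenon the paper probes in fly behavior and trained models.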
Affiliation(s)
- Baohua Zhou
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Margarida Agrochao
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Damon A. Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Department of Physics, Yale University, New Haven, CT 06511
- Department of Neuroscience, Yale University, New Haven, CT 06511
- Quantitative Biology Institute, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
3.
Pang MM, Chen F, Xie M, Druckmann S, Clandinin TR, Yang HH. A recurrent neural circuit in Drosophila temporally sharpens visual inputs. Curr Biol 2025; 35:333-346.e6. PMID: 39706173; PMCID: PMC11769683; DOI: 10.1016/j.cub.2024.11.064.
Abstract
A critical goal of vision is to detect changes in light intensity, even when these changes are blurred by the spatial resolution of the eye and the motion of the animal. Here, we describe a recurrent neural circuit in Drosophila that compensates for blur and thereby selectively enhances the perceived contrast of moving edges. Using in vivo two-photon voltage imaging, we measured the temporal response properties of L1 and L2, two cell types that receive direct synaptic input from photoreceptors. These neurons have biphasic responses to brief flashes of light, a hallmark of cells that encode changes in stimulus intensity. However, the second phase was often much larger in area than the first, creating an unusual temporal filter. Genetic dissection revealed that recurrent neural circuitry strongly shapes the second phase of the response, informing the structure of a dynamical model. By applying this model to moving natural images, we demonstrate that rather than veridically representing stimulus changes, this temporal processing strategy systematically enhances them, amplifying and sharpening responses. Comparing the measured responses of L2 to model predictions across both artificial and natural stimuli revealed that L2 tunes its properties as the model predicts to temporally sharpen visual inputs. Since this strategy is tunable to behavioral context, generalizable to any time-varying sensory input, and implementable with a common circuit motif, we propose that it could be broadly used to selectively enhance sharp and salient changes.
Affiliation(s)
- Michelle M Pang
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Feng Chen
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Marjorie Xie
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Chan Zuckerberg Biohub, San Francisco, CA, USA
- Helen H Yang
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
4.
Zhang S, Li K, Luo Z, Xu M, Zheng S. A Bio-Inspired Visual Neural Model for Robustly and Steadily Detecting Motion Directions of Translating Objects Against Variable Contrast in the Figure-Ground and Noise Interference. Biomimetics (Basel) 2025; 10:51. PMID: 39851767; PMCID: PMC11761596; DOI: 10.3390/biomimetics10010051.
Abstract
(1) Background: Bio-inspired visual neural models have made significant progress in detecting the motion direction of translating objects. However, variable contrast in the figure-ground and environmental noise interference strongly degrade existing models. The responses of the lobula plate tangential cell (LPTC) neurons of Drosophila remain robust and stable under variable figure-ground contrast and environmental noise, providing an excellent paradigm for addressing these challenges. (2) Methods: To address these challenges, we propose a bio-inspired visual neural model consisting of four stages. Firstly, the photoreceptors (R1-R6) perceive changes in luminance. Secondly, the change in luminance is divided into parallel ON and OFF pathways based on the lamina monopolar cell (LMC); spatial denoising and spatio-temporal lateral inhibition (LI) mechanisms suppress environmental noise and sharpen motion boundaries, respectively. Thirdly, a non-linear instantaneous feedback mechanism in divisive contrast normalization reduces local contrast sensitivity, and the parallel ON and OFF contrast pathways are activated. Finally, the parallel motion and contrast pathways converge on the LPTC in the lobula complex. (3) Results: Comparing numerous experimental simulations with state-of-the-art (SotA) bio-inspired models supports four conclusions. Firstly, an ablation study verifies the effectiveness of the contrast neural computation and the spatial denoising mechanism. Secondly, the model robustly detects the motion direction of translating objects against variable figure-ground contrast and environmental noise: the average detection success rate increased by 5.38% and 5.30% on the pure and real-world complex noise datasets, respectively. Thirdly, the model reduces fluctuations in its response under the same interference, demonstrating its stability: the average inter-quartile range of the coefficient of variation on the pure and real-world complex noise datasets was reduced by 38.77% and 47.84%, respectively, and the average decline ratio of the sum of the coefficient of variation was 57.03% and 67.47%, respectively. Finally, comparisons with other early visual pre-processing mechanisms and engineering denoising methods further verify the model's robustness and stability. (4) Conclusions: This model can robustly and steadily detect the motion direction of translating objects under variable figure-ground contrast and environmental noise interference.
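The ON/OFF split in the second stage can be pictured as half-wave rectification of the temporal luminance change. This is a generic sketch of an LMC-style decomposition under that assumption, not the authors' exact filters:

```python
import numpy as np

def on_off_split(luminance):
    """Split a luminance time series into parallel ON and OFF signals.

    The ON pathway carries brightening (positive temporal change), the OFF
    pathway darkening; together they preserve the signed change without loss.
    """
    change = np.diff(luminance)     # temporal luminance change after the photoreceptor stage
    on = np.maximum(change, 0.0)    # ON pathway: half-wave rectified brightening
    off = np.maximum(-change, 0.0)  # OFF pathway: half-wave rectified darkening
    return on, off

lum = np.array([0.2, 0.5, 0.4, 0.9, 0.3])
on, off = on_off_split(lum)
assert np.allclose(on - off, np.diff(lum))  # the signed change is recoverable as ON - OFF
```

Splitting the signal this way lets each downstream pathway apply its own denoising and normalization, as the model's later stages do, without losing the original signed change.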
Affiliation(s)
- Sheng Zhang
- College of Information Science and Engineering, Hohai University, Nanjing 211100, China
- Ke Li
- School of Mechanical and Electrical Engineering, Nanchang Institute of Technology, Nanchang 330044, China
- Zhonghua Luo
- School of Mechanical and Electrical Engineering, Nanchang Institute of Technology, Nanchang 330044, China
- Mengxi Xu
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing 211167, China
- Shengnan Zheng
- College of Information Science and Engineering, Hohai University, Nanjing 211100, China
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing 211167, China
5.
Clark DA, Fitzgerald JE. Optimization in Visual Motion Estimation. Annu Rev Vis Sci 2024; 10:23-46. PMID: 38663426; PMCID: PMC11998607; DOI: 10.1146/annurev-vision-101623-025432.
Abstract
Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.
Affiliation(s)
- Damon A Clark
- Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, Connecticut, USA
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
6.
Cai LT, Krishna VS, Hladnik TC, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Arrenberg AB, Thiele TR, Cooper EA. Spatiotemporal visual statistics of aquatic environments in the natural habitats of zebrafish. Sci Rep 2023; 13:12028. PMID: 37491571; PMCID: PMC10368656; DOI: 10.1038/s41598-023-36099-z.
Abstract
Animal sensory systems are tightly adapted to the demands of their environment. In the visual domain, research has shown that many species have circuits and systems that exploit statistical regularities in natural visual signals. The zebrafish is a popular model animal in visual neuroscience, but relatively little quantitative data is available about the visual properties of the aquatic habitats where zebrafish reside, as compared to terrestrial environments. Improving our understanding of the visual demands of the aquatic habitats of zebrafish can enhance the insights about sensory neuroscience yielded by this model system. We analyzed a video dataset of zebrafish habitats captured by a stationary camera and compared this dataset to videos of terrestrial scenes in the same geographic area. Our analysis of the spatiotemporal structure in these videos suggests that zebrafish habitats are characterized by low visual contrast and strong motion when compared to terrestrial environments. Similar to terrestrial environments, zebrafish habitats tended to be dominated by dark contrasts, particularly in the lower visual field. We discuss how these properties of the visual environment can inform the study of zebrafish visual behavior and neural processing and, by extension, can inform our understanding of the vertebrate brain.
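Two of the summary statistics discussed here, contrast and motion strength, can be estimated from a frame stack with simple formulas. A minimal sketch on synthetic stand-in clips (the authors' actual analysis pipeline is not reproduced here):

```python
import numpy as np

def rms_contrast(frames):
    """RMS contrast: standard deviation of luminance divided by mean luminance."""
    return frames.std() / frames.mean()

def motion_energy(frames):
    """Mean squared frame-to-frame luminance difference, a crude index of motion."""
    return np.mean(np.diff(frames, axis=0) ** 2)

rng = np.random.default_rng(1)
# synthetic stand-ins, illustrative only: a low-contrast clip whose pixels change
# every frame versus a high-contrast clip that is static
low_contrast_dynamic = 0.5 + 0.05 * rng.standard_normal((50, 32, 32))
high_contrast_static = 0.5 + 0.2 * rng.standard_normal((1, 32, 32)).repeat(50, axis=0)

# the qualitative signature reported for aquatic vs. terrestrial scenes:
# lower contrast but stronger motion
assert rms_contrast(low_contrast_dynamic) < rms_contrast(high_contrast_static)
assert motion_energy(low_contrast_dynamic) > motion_energy(high_contrast_static)
```

Real scene analysis adds luminance calibration and spatiotemporal spectra, but these two numbers already capture the low-contrast, high-motion character the abstract attributes to zebrafish habitats.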
Affiliation(s)
- Lanya T Cai
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA
- Venkatesh S Krishna
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Tim C Hladnik
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
- Graduate Training Centre for Neuroscience, University of Tübingen, Tübingen, Germany
- Nicholas C Guilbeault
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Chinnian Vijayakumar
- Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Muthukumarasamy Arunachalam
- Department of Zoology, School of Biological Sciences, Central University of Kerala, Kasaragod, Kerala, India
- Centre for Inland Fishes and Conservation, St. Andrew's College, Gorakhpur, Uttar Pradesh, India
- Scott A Juntti
- Department of Biology, University of Maryland, College Park, MD, USA
- Aristides B Arrenberg
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
- Tod R Thiele
- Department of Biological Sciences, University of Toronto, Scarborough, ON, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, Canada
- Emily A Cooper
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, CA, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
7.
Currier TA, Pang MM, Clandinin TR. Visual processing in the fly, from photoreceptors to behavior. Genetics 2023; 224:iyad064. PMID: 37128740; PMCID: PMC10213501; DOI: 10.1093/genetics/iyad064.
Abstract
Originally a genetic model organism, the experimental use of Drosophila melanogaster has grown to include quantitative behavioral analyses, sophisticated perturbations of neuronal function, and detailed sensory physiology. A highlight of these developments can be seen in the context of vision, where pioneering studies have uncovered fundamental and generalizable principles of sensory processing. Here we begin with an overview of vision-guided behaviors and common methods for probing visual circuits. We then outline the anatomy and physiology of brain regions involved in visual processing, beginning at the sensory periphery and ending with descending motor control. Areas of focus include contrast and motion detection in the optic lobe, circuits for visual feature selectivity, computations in support of spatial navigation, and contextual associative learning. Finally, we look to the future of fly visual neuroscience and discuss promising topics for further study.
Affiliation(s)
- Timothy A Currier
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Michelle M Pang
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
8.
Fu Q, Li Z, Peng J. Harmonizing motion and contrast vision for robust looming detection. Array 2023. DOI: 10.1016/j.array.2022.100272.
9.
Zhou B, Li Z, Kim S, Lafferty J, Clark DA. Shallow neural networks trained to detect collisions recover features of visual loom-selective neurons. eLife 2022; 11:e72067. PMID: 35023828; PMCID: PMC8849349; DOI: 10.7554/eLife.72067.
Abstract
Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli and reproduces canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
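Looming stimuli of the kind such networks are trained on are commonly parameterized by the visual angle of an object approaching at constant speed. A standard geometric sketch (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def loom_half_angle(r, v, t, t_collision):
    """Half-angle subtended by an object of radius r approaching at constant speed v.

    theta(t) = arctan(r / (v * (t_collision - t))), which grows slowly at first
    and then diverges near collision -- the signature loom detectors respond to.
    """
    return np.arctan2(r, v * (t_collision - t))

t = np.linspace(0.0, 0.9, 10)
theta = loom_half_angle(r=0.1, v=1.0, t=t, t_collision=1.0)
assert np.all(np.diff(theta) > 0)  # angular size grows monotonically toward collision
```

A useful property of this parameterization is that the expansion profile depends on r and v only through the ratio r/v, which is why loom responses are often reported as a function of that ratio.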
Affiliation(s)
- Baohua Zhou
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Zifan Li
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Sunnie Kim
- Department of Statistics and Data Science, Yale University, New Haven, United States
- John Lafferty
- Department of Statistics and Data Science, Yale University, New Haven, United States
- Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
10.
Mano O, Creamer MS, Badwan BA, Clark DA. Predicting individual neuron responses with anatomically constrained task optimization. Curr Biol 2021; 31:4062-4075.e4. PMID: 34324832; DOI: 10.1016/j.cub.2021.06.090.
Abstract
Artificial neural networks trained to solve sensory tasks can develop statistical representations that match those in biological circuits. However, it remains unclear whether they can reproduce properties of individual neurons. Here, we investigated how artificial networks predict individual neuron properties in the visual motion circuits of the fruit fly Drosophila. We trained anatomically constrained networks to predict movement in natural scenes, solving the same inference problem as fly motion detectors. Units in the artificial networks adopted many properties of analogous individual neurons, even though they were not explicitly trained to match these properties. Among these properties was the split into ON and OFF motion detectors, which is not predicted by classical motion detection models. The match between model and neurons was closest when models were trained to be robust to noise. These results demonstrate how anatomical, task, and noise constraints can explain properties of individual neurons in a small neural network.
Affiliation(s)
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA
11.
Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. PMID: 34341586; PMCID: PMC8728393; DOI: 10.1038/s41593-021-00899-1.
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.
12.
Maximally efficient prediction in the early fly visual system may support evasive flight maneuvers. PLoS Comput Biol 2021; 17:e1008965. PMID: 34014926; PMCID: PMC8136689; DOI: 10.1371/journal.pcbi.1008965.
Abstract
The visual system must make predictions to compensate for inherent delays in its processing. Yet little is known, mechanistically, about how prediction aids natural behaviors. Here, we show that despite a 20-30 ms intrinsic processing delay, the vertical motion sensitive (VS) network of the blowfly achieves maximally efficient prediction. This prediction enables the fly to fine-tune its complex, yet brief, evasive flight maneuvers according to its initial ego-rotation at the time of detection of the visual threat. Combining a rich database of behavioral recordings with detailed compartmental modeling of the VS network, we further show that the VS network has axonal gap junctions that are critical for optimal prediction. During evasive maneuvers, a VS subpopulation that directly innervates the neck motor center can convey predictive information about the fly's future ego-rotation, potentially crucial for ongoing flight control. These results suggest a novel sensory-motor pathway that links sensory prediction to behavior.

Survival-critical behaviors shape neural circuits to translate sensory information into strikingly fast predictions, e.g., in escaping from a predator faster than the system's processing delay. We show that the fly visual system implements fast and accurate prediction of its visual experience. This provides crucial information for directing fast evasive maneuvers that unfold over just 40 ms. Our work shows how this fast prediction is implemented, mechanistically, and suggests the existence of a novel sensory-motor pathway from the fly visual system to a wing steering motor neuron. Echoing and amplifying previous work in the retina, our work hypothesizes that the efficient encoding of predictive information is a universal design principle supporting fast, natural behaviors.
13.
Agrochao M, Tanaka R, Salazar-Gatzimas E, Clark DA. Mechanism for analogous illusory motion perception in flies and humans. Proc Natl Acad Sci U S A 2020; 117:23044-23053. PMID: 32839324; PMCID: PMC7502748; DOI: 10.1073/pnas.2002937117.
Abstract
Visual motion detection is one of the most important computations performed by visual circuits. Yet, we perceive vivid illusory motion in stationary, periodic luminance gradients that contain no true motion. This illusion is shared by diverse vertebrate species, but theories proposed to explain this illusion have remained difficult to test. Here, we demonstrate that in the fruit fly Drosophila, the illusory motion percept is generated by unbalanced contributions of direction-selective neurons' responses to stationary edges. First, we found that flies, like humans, perceive sustained motion in the stationary gradients. The percept was abolished when the elementary motion detector neurons T4 and T5 were silenced. In vivo calcium imaging revealed that T4 and T5 neurons encode the location and polarity of stationary edges. Furthermore, our proposed mechanistic model allowed us to predictably manipulate both the magnitude and direction of the fly's illusory percept by selectively silencing either T4 or T5 neurons. Interestingly, human brains possess the same mechanistic ingredients that drive our model in flies. When we adapted human observers to moving light edges or dark edges, we could manipulate the magnitude and direction of their percepts as well, suggesting that mechanisms similar to the fly's may also underlie this illusion in humans. By taking a comparative approach that exploits Drosophila neurogenetics, our results provide a causal, mechanistic account for a long-known visual illusion. These results argue that this illusion arises from architectures for motion detection that are shared across phyla.
Affiliation(s)
- Margarida Agrochao
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511
| | - Ryosuke Tanaka
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
| | | | - Damon A Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511;
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
- Department of Physics, Yale University, New Haven, CT 06511
- Department of Neuroscience, Yale University, New Haven, CT 06511
14
Zavatone-Veth JA, Badwan BA, Clark DA. A minimal synaptic model for direction selective neurons in Drosophila. J Vis 2020; 20:2. [PMID: 32040161 PMCID: PMC7343402 DOI: 10.1167/jov.20.2.2] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomic and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results but have rarely been compared across experiments. Here we use the wealth of available anatomical and physiological data to construct a minimal, biophysically inspired synaptic model for Drosophila’s first-order direction-selective T4 cells. We show how this model relates mathematically to classical models of motion detection, including the Hassenstein-Reichardt correlator model. We used numerical simulation to test how well this synaptic model could reproduce measurements of T4 cells across many datasets and stimulus modalities. These comparisons include responses to sinusoid gratings, to apparent motion stimuli, to stochastic stimuli, and to natural scenes. Without fine-tuning this model, it sufficed to reproduce many, but not all, response properties of T4 cells. Since this model is flexible and based on straightforward biophysical properties, it provides an extensible framework for developing a mechanistic understanding of T4 neural response properties. Moreover, it can be used to assess the sufficiency of simple biophysical mechanisms to describe features of the direction-selective computation and identify where our understanding must be improved.
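As an illustrative aside (not code from the paper), the classical Hassenstein-Reichardt correlator that this synaptic model is related to can be sketched in a few lines: each arm low-pass filters (delays) one input and multiplies it with the undelayed neighbor, and the two mirror-symmetric arms are subtracted. The filter time constant and stimulus parameters below are arbitrary choices for the demonstration:

```python
import numpy as np

def lowpass(x, tau, dt):
    """First-order (exponential) low-pass filter: the delay stage of the HRC."""
    y = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (x[k] - y[k - 1])
    return y

def hrc(left, right, tau=0.05, dt=0.001):
    """Hassenstein-Reichardt correlator: delay-and-multiply in each arm,
    then subtract the mirror-symmetric arm. Sign reports motion direction."""
    return lowpass(left, tau, dt) * right - lowpass(right, tau, dt) * left

# A sinusoid drifting from the left input toward the right input
dt = 0.001
t = np.arange(0, 2, dt)
left = np.sin(2 * np.pi * 2 * t)
right = np.sin(2 * np.pi * 2 * t - np.pi / 4)  # right input lags: rightward motion
print(np.sign(hrc(left, right).mean()))  # rightward -> positive mean output
print(np.sign(hrc(right, left).mean()))  # reversed  -> negative mean output
```

The time-averaged output is proportional to sin of the inter-input phase difference, so reversing the two inputs flips its sign.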
15
Cafaro J, Zylberberg J, Field GD. Global Motion Processing by Populations of Direction-Selective Retinal Ganglion Cells. J Neurosci 2020; 40:5807-5819. [PMID: 32561674 PMCID: PMC7380974 DOI: 10.1523/jneurosci.0564-20.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Revised: 06/09/2020] [Accepted: 06/12/2020] [Indexed: 11/21/2022] Open
Abstract
Simple stimuli have been critical to understanding neural population codes in sensory systems. Yet it remains necessary to determine the extent to which this understanding generalizes to more complex conditions. To examine this problem, we measured how populations of direction-selective ganglion cells (DSGCs) from the retinas of male and female mice respond to a global motion stimulus with its direction and speed changing dynamically. We then examined the encoding and decoding of motion direction in both individual and populations of DSGCs. Individual cells integrated global motion over ∼200 ms, and responses were tuned to direction. However, responses were sparse and broadly tuned, which severely limited decoding performance from small DSGC populations. In contrast, larger populations compensated for response sparsity, enabling decoding with high temporal precision (<100 ms). At these timescales, correlated spiking was minimal and had little impact on decoding performance, unlike results obtained using simpler local motion stimuli decoded over longer timescales. We use these data to define different DSGC population decoding regimes that use or mitigate correlated spiking to achieve high-spatial versus high-temporal resolution. SIGNIFICANCE STATEMENT: ON-OFF direction-selective ganglion cells (ooDSGCs) in the mammalian retina are typically thought to signal local motion to the brain. However, several recent studies suggest they may signal global motion. Here we analyze the fidelity of encoding and decoding global motion in a natural scene across large populations of ooDSGCs. We show that large populations of DSGCs are capable of signaling rapid changes in global motion.
Affiliation(s)
- Jon Cafaro
- Department of Neurobiology, Duke University, Durham, North Carolina, 27710
- Joel Zylberberg
- Department of Physics and Astronomy, York University, Toronto, Ontario, M3J 1P3
- Greg D Field
- Department of Neurobiology, Duke University, Durham, North Carolina, 27710
16
Yildizoglu T, Riegler C, Fitzgerald JE, Portugues R. A Neural Representation of Naturalistic Motion-Guided Behavior in the Zebrafish Brain. Curr Biol 2020; 30:2321-2333.e6. [PMID: 32386533 DOI: 10.1016/j.cub.2020.04.043] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2018] [Revised: 03/13/2020] [Accepted: 04/20/2020] [Indexed: 11/20/2022]
Abstract
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Affiliation(s)
- Tugce Yildizoglu
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany
- Clemens Riegler
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Neurobiology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
- Ruben Portugues
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany; Institute of Neuroscience, Technical University of Munich, Munich 80802, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich 80802, Germany.
17
Matulis CA, Chen J, Gonzalez-Suarez AD, Behnia R, Clark DA. Heterogeneous Temporal Contrast Adaptation in Drosophila Direction-Selective Circuits. Curr Biol 2020; 30:222-236.e6. [PMID: 31928874 PMCID: PMC7003801 DOI: 10.1016/j.cub.2019.11.077] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 11/06/2019] [Accepted: 11/26/2019] [Indexed: 11/23/2022]
Abstract
In visual systems, neurons adapt both to the mean light level and to the range of light levels, or the contrast. Contrast adaptation has been studied extensively, but it remains unclear how it is distributed among neurons in connected circuits, and how early adaptation affects subsequent computations. Here, we investigated temporal contrast adaptation in neurons across Drosophila's visual motion circuitry. Several ON-pathway neurons showed strong adaptation to changes in contrast over time. One of these neurons, Mi1, showed almost complete adaptation on fast timescales, and experiments ruled out several potential mechanisms for its adaptive properties. When contrast adaptation reduced the gain in ON-pathway cells, it was accompanied by decreased motion responses in downstream direction-selective cells. Simulations show that contrast adaptation can substantially improve motion estimates in natural scenes. The benefits are larger for ON-pathway adaptation, which helps explain the heterogeneous distribution of contrast adaptation in these circuits.
Affiliation(s)
- Catherine A Matulis
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA
- Rudy Behnia
- Department of Neuroscience, Columbia University, 3227 Broadway, New York, NY 10027, USA
- Damon A Clark
- Department of Physics, Yale University, 217 Prospect Street, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06510, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, 260 Whitney Avenue, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, 333 Cedar Street, New Haven, CT 06510, USA.
18
Dynamic Signal Compression for Robust Motion Vision in Flies. Curr Biol 2020; 30:209-221.e8. [PMID: 31928873 DOI: 10.1016/j.cub.2019.10.035] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 09/17/2019] [Accepted: 10/18/2019] [Indexed: 12/16/2022]
Abstract
Sensory systems need to reliably extract information from highly variable natural signals. Flies, for instance, use optic flow to guide their course and are remarkably adept at estimating image velocity regardless of image statistics. Current circuit models, however, cannot account for this robustness. Here, we demonstrate that the Drosophila visual system reduces input variability by rapidly adjusting its sensitivity to local contrast conditions. We exhaustively map functional properties of neurons in the motion detection circuit and find that local responses are compressed by surround contrast. The compressive signal is fast, integrates spatially, and derives from neural feedback. Training convolutional neural networks on estimating the velocity of natural stimuli shows that this dynamic signal compression can close the performance gap between model and organism. Overall, our work represents a comprehensive mechanistic account of how neural systems attain the robustness to carry out survival-critical tasks in challenging real-world environments.
19
Chen J, Mandel HB, Fitzgerald JE, Clark DA. Asymmetric ON-OFF processing of visual motion cancels variability induced by the structure of natural scenes. eLife 2019; 8:e47579. [PMID: 31613221 PMCID: PMC6884396 DOI: 10.7554/elife.47579] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Accepted: 10/12/2019] [Indexed: 02/05/2023] Open
Abstract
Animals detect motion using a variety of visual cues that reflect regularities in the natural world. Experiments in animals across phyla have shown that motion percepts incorporate both pairwise and triplet spatiotemporal correlations that could theoretically benefit motion computation. However, it remains unclear how visual systems assemble these cues to build accurate motion estimates. Here, we used systematic behavioral measurements of fruit fly motion perception to show how flies combine local pairwise and triplet correlations to reduce variability in motion estimates across natural scenes. By generating synthetic images with statistics controlled by maximum entropy distributions, we show that the triplet correlations are useful only when images have light-dark asymmetries that mimic natural ones. This suggests that asymmetric ON-OFF processing is tuned to the particular statistics of natural scenes. Since all animals encounter the world's light-dark asymmetries, many visual systems are likely to use asymmetric ON-OFF processing to improve motion estimation.
Affiliation(s)
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Holly B Mandel
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Department of Physics, Yale University, New Haven, United States
- Department of Neuroscience, Yale University, New Haven, United States
20
Dynamic nonlinearities enable direction opponency in Drosophila elementary motion detectors. Nat Neurosci 2019; 22:1318-1326. [PMID: 31346296 PMCID: PMC6748873 DOI: 10.1038/s41593-019-0443-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Accepted: 06/03/2019] [Indexed: 12/13/2022]
Abstract
Direction-selective neurons respond to visual motion in a preferred direction. They are direction-opponent if they are also inhibited by motion in the opposite direction. In flies and vertebrates, direction opponency has been observed in second-order direction-selective neurons, which achieve this opponency by subtracting signals from first-order direction-selective cells with opposite directional tunings. Here, we report direction opponency in Drosophila that emerges in first-order direction-selective neurons, the elementary motion detectors T4 and T5. This opponency persists when synaptic output from these cells is blocked, suggesting that it arises from feedforward, not feedback, computations. These observations exclude a broad class of linear-nonlinear models that have been proposed to describe direction-selective computations. However, they are consistent with models that include dynamic nonlinearities. Simulations of opponent models suggest that direction opponency in first-order motion detectors improves motion discriminability by suppressing noise generated by the local structure of natural scenes.
21
Neural mechanisms of contextual modulation in the retinal direction selective circuit. Nat Commun 2019; 10:2431. [PMID: 31160566 PMCID: PMC6547848 DOI: 10.1038/s41467-019-10268-z] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Accepted: 04/26/2019] [Indexed: 01/07/2023] Open
Abstract
Contextual modulation of neuronal responses by surrounding environments is a fundamental attribute of sensory processing. In the mammalian retina, responses of On–Off direction selective ganglion cells (DSGCs) are modulated by motion contexts. However, the underlying mechanisms are unknown. Here, we show that posterior-preferring DSGCs (pDSGCs) are sensitive to discontinuities of moving contours owing to contextually modulated cholinergic excitation from starburst amacrine cells (SACs). Using a combination of synapse-specific genetic manipulations, patch clamp electrophysiology and connectomic analysis, we identified distinct circuit motifs upstream of On and Off SACs that are required for the contextual modulation of pDSGC activity for bright and dark contrasts. Furthermore, our results reveal a class of wide-field amacrine cells (WACs) with straight, unbranching dendrites that function as “continuity detectors” of moving contours. Therefore, divergent circuit motifs in the On and Off pathways extend the information encoding of On-Off DSGCs beyond their direction selectivity during complex stimuli. The mechanisms of contextual modulation in direction selective ganglion cells in the retina remain unclear. Here, the authors find that On-Off direction-selective ganglion cells are differentially sensitive to discontinuities of dark and bright moving edges in the visual environment and, using synapse-specific genetic manipulations with functional measurements, reveal the microcircuits underlying this contextual sensitivity.
22
Image statistics of the environment surrounding freely behaving hoverflies. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 205:373-385. [PMID: 30937518 PMCID: PMC6579776 DOI: 10.1007/s00359-019-01329-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Revised: 02/12/2019] [Accepted: 03/14/2019] [Indexed: 12/04/2022]
Abstract
Natural scenes are not as random as they might appear, but are constrained in both space and time. The 2-dimensional spatial constraints can be described by quantifying the image statistics of photographs. Human observers perceive images with naturalistic image statistics as more pleasant to view, and both fly and vertebrate peripheral and higher order visual neurons are tuned to naturalistic image statistics. However, for a given animal, what is natural differs depending on the behavior, and even if we have a broad understanding of image statistics, we know less about the scenes relevant for particular behaviors. To mitigate this, we here investigate the image statistics surrounding Episyrphus balteatus hoverflies, where the males hover in sun shafts created by surrounding trees, producing a rich and dense background texture and also intricate shadow patterns on the ground. We quantified the image statistics of photographs of the ground and the surrounding panorama, as the ventral and lateral visual field is particularly important for visual flight control, and found differences in spatial statistics in photos where the hoverflies were hovering compared to where they were flying. Our results can, in the future, be used to create more naturalistic stimuli for experimenter-controlled experiments in the laboratory.
23
Dyakova O, Rångtell FH, Tan X, Nordström K, Benedict C. Acute sleep loss induces signs of visual discomfort in young men. J Sleep Res 2019; 28:e12837. [PMID: 30815934 PMCID: PMC6900002 DOI: 10.1111/jsr.12837] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Revised: 01/22/2019] [Accepted: 01/31/2019] [Indexed: 01/24/2023]
Abstract
Acute sleep loss influences visual processes in humans, such as recognizing facial emotions. However, to the best of our knowledge, no study to date has examined whether acute sleep loss alters visual comfort when looking at images. One image statistic that can be used to investigate the level of visual comfort experienced under visual encoding is the slope of the amplitude spectrum, also referred to as the slope constant. The slope constant describes the spatial distribution of pixel intensities, and deviations from the natural slope constant can induce visual discomfort. In the present counterbalanced crossover design study, 11 young men with normal or corrected-to-normal vision participated in two experimental conditions: one night of sleep loss and one night of sleep. In the morning after each intervention, subjects performed a computerized psychophysics task. Specifically, they were required to adjust the slope constant of images depicting natural landscapes and close-ups with a randomly chosen initial slope constant until they perceived each image as most natural looking. Subjects also rated the pleasantness of each selected image. Our analysis showed that following sleep loss, higher slope constants were perceived as most natural looking when viewing images of natural landscapes. Images with a higher slope constant are generally perceived as blurrier. The selected images were also rated as less pleasant after sleep loss. No such differences between the experimental conditions were noted for images of close-ups. The results suggest that sleep loss induces signs of visual discomfort in young men. Possible implications of these findings are discussed.
Affiliation(s)
- Olga Dyakova
- Department of Neuroscience, Uppsala University, Uppsala, Sweden
- Xiao Tan
- Department of Neuroscience, Uppsala University, Uppsala, Sweden
- Karin Nordström
- Department of Neuroscience, Uppsala University, Uppsala, Sweden; Centre for Neuroscience, Flinders University, Adelaide, South Australia, Australia
24
Creamer MS, Mano O, Clark DA. Visual Control of Walking Speed in Drosophila. Neuron 2018; 100:1460-1473.e6. [PMID: 30415994 PMCID: PMC6405217 DOI: 10.1016/j.neuron.2018.10.028] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Revised: 08/29/2018] [Accepted: 10/16/2018] [Indexed: 10/27/2022]
Abstract
An animal's self-motion generates optic flow across its retina, and it can use this visual signal to regulate its orientation and speed through the world. While orientation control has been studied extensively in Drosophila and other insects, much less is known about the visual cues and circuits that regulate translational speed. Here, we show that flies regulate walking speed with an algorithm that is tuned to the speed of visual motion, causing them to slow when visual objects are nearby. This regulation does not depend strongly on the spatial structure or the direction of visual stimuli, making it algorithmically distinct from the classic computation that controls orientation. Despite the different algorithms, the visual circuits that regulate walking speed overlap with those that regulate orientation. Taken together, our findings suggest that walking speed is controlled by a hierarchical computation that combines multiple motion detectors with distinct tunings. VIDEO ABSTRACT.
Affiliation(s)
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Omer Mano
- Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
25
Cyr A, Thériault F, Ross M, Berberian N, Chartier S. Spiking Neurons Integrating Visual Stimuli Orientation and Direction Selectivity in a Robotic Context. Front Neurorobot 2018; 12:75. [PMID: 30524261 PMCID: PMC6256284 DOI: 10.3389/fnbot.2018.00075] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Accepted: 10/31/2018] [Indexed: 11/13/2022] Open
Abstract
Visual motion detection is essential for the survival of many species. The phenomenon includes several spatial properties, not fully understood at the level of a neural circuit. This paper proposes a computational model of a visual motion detector that integrates direction and orientation selectivity features. A recent experiment in the Drosophila model highlights that stimulus orientation influences the neural response of direction cells. However, this interaction and the significance at the behavioral level are currently unknown. As such, another objective of this article is to study the effect of merging these two visual processes when contextualized in a neuro-robotic model and an operant conditioning procedure. In this work, the learning task was solved using an artificial spiking neural network, acting as the brain controller for virtual and physical robots, showing a behavior modulation from the integration of both visual processes.
Affiliation(s)
- André Cyr
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Frédéric Thériault
- Department of Computer Science, Cégep du Vieux Montréal, Montreal, QC, Canada
- Matthew Ross
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Nareg Berberian
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Sylvain Chartier
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
26
Salazar-Gatzimas E, Agrochao M, Fitzgerald JE, Clark DA. The Neuronal Basis of an Illusory Motion Percept Is Explained by Decorrelation of Parallel Motion Pathways. Curr Biol 2018; 28:3748-3762.e8. [PMID: 30471993 DOI: 10.1016/j.cub.2018.10.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 09/28/2018] [Accepted: 10/02/2018] [Indexed: 10/27/2022]
Abstract
Both vertebrates and invertebrates perceive illusory motion, known as "reverse-phi," in visual stimuli that contain sequential luminance increments and decrements. However, increment (ON) and decrement (OFF) signals are initially processed by separate visual neurons, and parallel elementary motion detectors downstream respond selectively to the motion of light or dark edges, often termed ON- and OFF-edges. It remains unknown how and where ON and OFF signals combine to generate reverse-phi motion signals. Here, we show that each of Drosophila's elementary motion detectors encodes motion by combining both ON and OFF signals. Their pattern of responses reflects combinations of increments and decrements that co-occur in natural motion, serving to decorrelate their outputs. These results suggest that the general principle of signal decorrelation drives the functional specialization of parallel motion detection channels, including their selectivity for moving light or dark edges.
Affiliation(s)
- Emilio Salazar-Gatzimas
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA
- Margarida Agrochao
- Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, 333 Cedar Street, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, 219 Prospect Street, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
27
Yang HH, Clandinin TR. Elementary Motion Detection in Drosophila: Algorithms and Mechanisms. Annu Rev Vis Sci 2018; 4:143-163.
Abstract
Motion in the visual world provides critical information to guide the behavior of sighted animals. Furthermore, as visual motion estimation requires comparisons of signals across inputs and over time, it represents a paradigmatic and generalizable neural computation. Focusing on the Drosophila visual system, where an explosion of technological advances has recently accelerated experimental progress, we review our understanding of how, algorithmically and mechanistically, motion signals are first computed.
Affiliation(s)
- Helen H Yang
- Department of Neurobiology, Stanford University, Stanford, California 94305, USA; Current affiliation: Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University, Stanford, California 94305, USA
28
Gruntman E, Romani S, Reiser MB. Simple integration of fast excitation and offset, delayed inhibition computes directional selectivity in Drosophila. Nat Neurosci 2018; 21:250-257. [PMID: 29311742 PMCID: PMC5967973 DOI: 10.1038/s41593-017-0046-4] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 11/06/2017] [Indexed: 02/07/2023]
Abstract
A neuron that extracts directionally selective motion information from upstream signals lacking this selectivity must compare visual responses from spatially offset inputs. Distinguishing among prevailing algorithmic models for this computation requires measuring fast neuronal activity and inhibition. In the Drosophila melanogaster visual system, a fourth-order neuron-T4-is the first cell type in the ON pathway to exhibit directionally selective signals. Here we use in vivo whole-cell recordings of T4 to show that directional selectivity originates from simple integration of spatially offset fast excitatory and slow inhibitory inputs, resulting in a suppression of responses to the nonpreferred motion direction. We constructed a passive, conductance-based model of a T4 cell that accurately predicts the neuron's response to moving stimuli. These results connect the known circuit anatomy of the motion pathway to the algorithmic mechanism by which the direction of motion is computed.
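As a hedged illustration of the algorithm this abstract describes (not the authors' actual model, which was fit to recorded T4 data), the integration of fast excitation with spatially offset, slower inhibition can be sketched with a passive, conductance-based membrane. All amplitudes, time constants, and reversal potentials below are made-up round numbers chosen only to show the direction-dependent suppression:

```python
import numpy as np

def t4_like_response(preferred, dt=1e-4):
    """Passive membrane driven by a fast excitatory conductance and an offset,
    slower inhibitory conductance. In the preferred direction the stimulus
    reaches the excitatory input first; in the null direction the inhibition
    leads and shunts the later excitation. Returns peak depolarization (mV)."""
    t = np.arange(0.0, 0.4, dt)
    pulse = lambda onset, tau: np.where(t >= onset, np.exp(-(t - onset) / tau), 0.0)
    delay = 0.05  # time for the stimulus to cross between the two inputs (s)
    if preferred:
        g_e = 5.0 * pulse(0.10, 0.02)           # fast excitation arrives first
        g_i = 10.0 * pulse(0.10 + delay, 0.08)  # slow inhibition follows
    else:
        g_i = 10.0 * pulse(0.10, 0.08)          # inhibition leads (null direction)
        g_e = 5.0 * pulse(0.10 + delay, 0.02)
    E_l, E_e, E_i = -60.0, 0.0, -65.0  # leak / excitatory / inhibitory reversal (mV)
    g_l, C = 1.0, 0.01
    V = np.full_like(t, E_l)
    for k in range(1, len(t)):  # forward-Euler integration of the membrane equation
        I = (g_l * (E_l - V[k-1]) + g_e[k-1] * (E_e - V[k-1])
             + g_i[k-1] * (E_i - V[k-1]))
        V[k] = V[k-1] + dt * I / C
    return V.max() - E_l

# Sequencing alone produces direction selectivity: the same two conductances,
# played in the opposite order, yield a much smaller depolarization.
print(t4_like_response(True) > t4_like_response(False))
```

Because the inhibitory reversal potential sits near the leak potential, leading inhibition suppresses mostly by shunting rather than by hyperpolarizing, which is the flavor of nonlinearity such conductance models exploit.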
Affiliation(s)
- Eyal Gruntman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Michael B Reiser
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
29
Bialek W. Perspectives on theory at the interface of physics and biology. REPORTS ON PROGRESS IN PHYSICS. PHYSICAL SOCIETY (GREAT BRITAIN) 2018; 81:012601. [PMID: 29214982 DOI: 10.1088/1361-6633/aa995b] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Theoretical physics is the search for simple and universal mathematical descriptions of the natural world. In contrast, much of modern biology is an exploration of the complexity and diversity of life. For many, this contrast is prima facie evidence that theory, in the sense that physicists use the word, is impossible in a biological context. For others, this contrast serves to highlight a grand challenge. I am an optimist, and believe (along with many colleagues) that the time is ripe for the emergence of a more unified theoretical physics of biological systems, building on successes in thinking about particular phenomena. In this essay I try to explain the reasons for my optimism, through a combination of historical and modern examples.
Affiliation(s)
- William Bialek
- Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ 08544, USA; Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, 365 Fifth Ave, New York, NY 10016, USA
30
Clark DA, Demb JB. Parallel Computations in Insect and Mammalian Visual Motion Processing. Curr Biol 2016; 26:R1062-R1072. [PMID: 27780048 DOI: 10.1016/j.cub.2016.08.003] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Sensory systems use receptors to extract information from the environment and neural circuits to perform subsequent computations. These computations may be described as algorithms composed of sequential mathematical operations. Comparing these operations across taxa reveals how different neural circuits have evolved to solve the same problem, even when using different mechanisms to implement the underlying math. In this review, we compare how insect and mammalian neural circuits have solved the problem of motion estimation, focusing on the fruit fly Drosophila and the mouse retina. Although the two systems implement computations with grossly different anatomy and molecular mechanisms, the underlying circuits transform light into motion signals with strikingly similar processing steps. These similarities run from photoreceptor gain control and spatiotemporal tuning to ON and OFF pathway structures, motion detection, and computed motion signals. The parallels between the two systems suggest that a limited set of algorithms for estimating motion satisfies both the needs of sighted creatures and the constraints imposed on them by metabolism, anatomy, and the structure and regularities of the visual world.
Affiliation(s)
- Damon A Clark
- Department of Molecular, Cellular, and Developmental Biology and Department of Physics, Yale University, New Haven, CT 06511, USA.
- Jonathan B Demb
- Department of Ophthalmology and Visual Science and Department of Cellular and Molecular Physiology, Yale University, New Haven, CT 06511, USA.
31
Dyakova O, Nordström K. Image statistics and their processing in insect vision. CURRENT OPINION IN INSECT SCIENCE 2017; 24:7-14. [PMID: 29208226 DOI: 10.1016/j.cois.2017.08.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 08/17/2017] [Accepted: 08/24/2017] [Indexed: 06/07/2023]
Abstract
Natural scenes may appear random, but they are constrained in space and time and show strong spatial and temporal correlations. Spatial constraints and correlations can be described by quantifying image statistics, which include intuitive measures such as contrast, color and luminance, but also parameters that require some transformation of the image. In this review we discuss some common tools used to quantify spatial and temporal parameters of naturalistic visual input, and how these tools have informed us about visual processing in insects. In particular, we review findings that would not have been possible using conventional, experimenter-defined stimuli.
Affiliation(s)
- Olga Dyakova
- Department of Neuroscience, Uppsala University, Box 593, 751 24 Uppsala, Sweden
- Karin Nordström
- Department of Neuroscience, Uppsala University, Box 593, 751 24 Uppsala, Sweden; Centre for Neuroscience, Flinders University, GPO Box 2100, Adelaide, SA 5001, Australia.
32
Salazar-Gatzimas E, Chen J, Creamer MS, Mano O, Mandel HB, Matulis CA, Pottackal J, Clark DA. Direct Measurement of Correlation Responses in Drosophila Elementary Motion Detectors Reveals Fast Timescale Tuning. Neuron 2017; 92:227-239. [PMID: 27710784 DOI: 10.1016/j.neuron.2016.09.017] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2016] [Revised: 05/22/2016] [Accepted: 08/29/2016] [Indexed: 10/20/2022]
Abstract
Animals estimate visual motion by integrating light intensity information over time and space. The integration requires nonlinear processing, which makes motion estimation circuitry sensitive to specific spatiotemporal correlations that signify visual motion. Classical models of motion estimation weight these correlations to produce direction-selective signals. However, the correlational algorithms they describe have not been directly measured in elementary motion-detecting neurons (EMDs). Here, we employed stimuli to directly measure responses to pairwise correlations in Drosophila's EMD neurons, T4 and T5. Activity in these neurons was required for behavioral responses to pairwise correlations and was predictive of those responses. The pattern of neural responses in the EMDs was inconsistent with one classical model of motion detection, and the timescale and selectivity of correlation responses constrained the temporal filtering properties in potential models. These results reveal how neural responses to pairwise correlations drive visual behavior in this canonical motion-detecting circuit.
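The classical correlational model this abstract references, the Hassenstein-Reichardt correlator, weights pairwise spatiotemporal correlations to produce a signed, direction-selective output. A minimal sketch, in which the delay is a first-order low-pass filter with an arbitrary time constant (both assumptions for illustration):

```python
import numpy as np

def lowpass(x, tau, dt=1.0):
    """First-order low-pass filter used as the correlator's delay line."""
    out = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for t in range(1, len(x)):
        out[t] = out[t - 1] + a * (x[t] - out[t - 1])
    return out

def hrc_response(stimulus, tau=5.0):
    """Hassenstein-Reichardt correlator summed over a 1-D photoreceptor
    array (stimulus is time x space). Each arm multiplies a delayed input
    with its undelayed neighbor; subtracting the mirror-symmetric arms
    yields a signed, direction-selective output."""
    T, X = stimulus.shape
    resp = 0.0
    for i in range(X - 1):
        a_del = lowpass(stimulus[:, i], tau)
        b_del = lowpass(stimulus[:, i + 1], tau)
        resp += np.sum(a_del * stimulus[:, i + 1] - b_del * stimulus[:, i])
    return resp

# A bright bar sweeping rightward yields a positive response;
# the mirror-image leftward sweep yields the negated response.
T, X = 30, 10
rightward = np.zeros((T, X))
leftward = np.zeros((T, X))
for t in range(X):
    rightward[t, t] = 1.0
    leftward[t, X - 1 - t] = 1.0
```

The output is sensitive only to the pairwise space-time correlations in the stimulus, which is the property the experiments above probe directly in T4 and T5.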
Affiliation(s)
- Juyue Chen
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Matthew S Creamer
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Omer Mano
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Holly B Mandel
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Joseph Pottackal
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA.
33
Strother JA, Wu ST, Wong AM, Nern A, Rogers EM, Le JQ, Rubin GM, Reiser MB. The Emergence of Directional Selectivity in the Visual Motion Pathway of Drosophila. Neuron 2017; 94:168-182.e10. [PMID: 28384470 DOI: 10.1016/j.neuron.2017.03.010] [Citation(s) in RCA: 114] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2016] [Revised: 12/22/2016] [Accepted: 03/08/2017] [Indexed: 01/19/2023]
Abstract
The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. In Drosophila, the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models.
Affiliation(s)
- James A Strother
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Shiuan-Tze Wu
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Allan M Wong
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Edward M Rogers
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Jasmine Q Le
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Gerald M Rubin
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
- Michael B Reiser
- Janelia Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA.
34
Abstract
Motion signals are a rich source of information used in many everyday tasks, such as segregation of objects from background and navigation. Motion analysis by biological systems is generally considered to consist of two stages: extraction of local motion signals followed by spatial integration. Studies using synthetic stimuli show that there are many kinds and subtypes of local motion signals. When presented in isolation, these stimuli elicit behavioral and neurophysiological responses in a wide range of species, from insects to mammals. However, these mathematically distinct varieties of local motion signals typically co-exist in natural scenes. This study focuses on interactions between two kinds of local motion signals: Fourier and glider. Fourier signals are typically associated with translation, while glider signals occur when an object approaches or recedes. Here, using a novel class of synthetic stimuli, we ask how distinct kinds of local motion signals interact and whether context influences sensitivity to Fourier motion. We report that local motion signals of different types interact at the perceptual level, and that this interaction can include subthreshold summation and, in some subjects, subtle context-dependent changes in sensitivity. We discuss the implications of these observations, and the factors that may underlie them.
Affiliation(s)
- Eyal I Nitzany
- Program in Computational Biology & Medicine, Cornell University, Ithaca, NY, USA; Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
- Maren E Loe
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
- Stephanie E Palmer
- Department of Organismal Biology and Anatomy and Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA. http://pondside.uchicago.edu/oba/faculty/palmer_s.html
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA. http://www-users.med.cornell.edu/~jdvicto/jdvonweb.html
35
Li J, Lindemann JP, Egelhaaf M. Peripheral Processing Facilitates Optic Flow-Based Depth Perception. Front Comput Neurosci 2016; 10:111. [PMID: 27818631 PMCID: PMC5073142 DOI: 10.3389/fncom.2016.00111] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2016] [Accepted: 10/04/2016] [Indexed: 12/19/2022] Open
Abstract
Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and navigating through space. Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question is addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
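The LMC-like band-pass step described above can be illustrated with a toy difference-of-low-pass filter: the overall light level drops out while fast contrast transients survive. The time constants below are arbitrary assumptions for illustration, not the paper's fitted peripheral model.

```python
import numpy as np

def lmc_bandpass(signal, tau_fast=2.0, tau_slow=20.0, dt=1.0):
    """LMC-like band-pass built as a difference of two first-order
    low-pass filters. A constant (DC) light level passes through both
    branches identically and cancels, while fast transients do not."""
    def lowpass(x, tau):
        out = np.zeros_like(x, dtype=float)
        out[0] = x[0]  # start adapted to the initial light level
        a = dt / (tau + dt)
        for t in range(1, len(x)):
            out[t] = out[t - 1] + a * (x[t] - out[t - 1])
        return out
    return lowpass(signal, tau_fast) - lowpass(signal, tau_slow)

# The same contrast step on two very different background light levels
# produces identical band-passed signals: the overall light level is
# eliminated before the EMD stage.
t = np.arange(200, dtype=float)
edge = (t > 100).astype(float)   # a contrast step at t = 101
dim = 1.0 + edge
bright = 100.0 + edge            # 100x brighter background
out_dim = lmc_bandpass(dim)
out_bright = lmc_bandpass(bright)
```

Because both branches are linear, the background luminance cancels exactly, which is the property that makes downstream EMD responses less dependent on absolute light level.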
Affiliation(s)
- Jinglin Li
- Department of Neurobiology and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
36
Leong JCS, Esch JJ, Poole B, Ganguli S, Clandinin TR. Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression. J Neurosci 2016; 36:8078-92. [PMID: 27488629 PMCID: PMC4971360 DOI: 10.1523/jneurosci.1272-16.2016] [Citation(s) in RCA: 61] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2016] [Revised: 05/22/2016] [Accepted: 05/25/2016] [Indexed: 01/12/2023] Open
Abstract
Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein-Reichardt correlator and the Barlow-Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation.
SIGNIFICANCE STATEMENT Feature-selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premier experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges.
Affiliation(s)
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California 94305