1. Brudner S, Zhou B, Jayaram V, Santana GM, Clark DA, Emonet T. Fly navigational responses to odor motion and gradient cues are tuned to plume statistics. bioRxiv 2025:2025.03.31.646361. PMID: 40235995; PMCID: PMC11996313; DOI: 10.1101/2025.03.31.646361. Indexed: 04/17/2025.
Abstract
Odor cues guide animals to food and mates. Different environmental conditions can create differently patterned odor plumes, making navigation more challenging. Prior work has shown that animals turn upwind when they detect odor and cast crosswind when they lose it. Animals with bilateral olfactory sensors can also detect directional odor cues, such as odor gradient and odor motion. It remains unknown how animals use these two directional odor cues to guide crosswind navigation in odor plumes with distinct statistics. Here, we investigate this problem theoretically and experimentally. We show that these directional odor cues provide complementary information for navigation in different plume environments. We numerically analyzed real plumes to show that odor gradient cues are more informative about crosswind directions in relatively smooth odor plumes, while odor motion cues are more informative in turbulent or complex plumes. Neural networks trained to optimize crosswind turning converge to distinctive network structures that are tuned to odor gradient cues in smooth plumes and to odor motion cues in complex plumes. These trained networks improve the performance of artificial agents navigating plume environments that match the training environment. By recording Drosophila fruit flies as they navigated different odor plume environments, we verified that flies show the same correspondence between informative cues and plume types. Fly turning in the crosswind direction is correlated with odor gradients in smooth plumes and with odor motion in complex plumes. Overall, these results demonstrate that these directional odor cues are complementary across environments, and that animals exploit this relationship.

Significance

Many animals use smell to find food and mates, often navigating complex odor plumes shaped by environmental conditions. While upwind movement upon odor detection is well established, less is known about how animals steer crosswind to stay in the plume. We show that directional odor cues (gradients and motion) guide crosswind navigation differently depending on plume structure. Gradients carry more information in smooth plumes, while motion dominates in turbulent ones. Neural networks trained to optimize crosswind navigation reflect this distinction, developing gradient sensitivity in smooth environments and motion sensitivity in complex ones. Experimentally, fruit flies adjust their turning behavior to prioritize the most informative cue in each context. These findings likely generalize to other animals navigating similarly structured odor plumes.
2. Wu N, Zhou B, Agrochao M, Clark DA. Broken time-reversal symmetry in visual motion detection. Proc Natl Acad Sci U S A 2025; 122:e2410768122. PMID: 40048271; PMCID: PMC11912477; DOI: 10.1073/pnas.2410768122. Received: 05/30/2024; Accepted: 01/09/2025; Indexed: 03/12/2025.
Abstract
Our intuition suggests that when a movie is played in reverse, our perception of motion at each location in the reversed movie will be perfectly inverted compared to the original. This intuition is also reflected in classical theoretical and practical models of motion estimation, in which velocity flow fields invert when inputs are reversed in time. However, here we report that this symmetry of motion perception upon time reversal is broken in real visual systems. We designed a set of visual stimuli to investigate time reversal symmetry breaking in the fruit fly Drosophila's well-studied optomotor rotation behavior. We identified a suite of stimuli with a wide variety of properties that can uncover broken time reversal symmetry in fly behavioral responses. We then trained neural network models to predict the velocity of scenes with both natural and artificial contrast distributions. Training with naturalistic contrast distributions yielded models that broke time reversal symmetry, even when the training data themselves were time reversal symmetric. We show analytically and numerically that the breaking of time reversal symmetry in the model responses can arise from contrast asymmetry in the training data, but can also arise from other features of the contrast distribution. Furthermore, shallower neural network models can exhibit stronger symmetry breaking than deeper ones, suggesting that less flexible neural networks may be more prone to time reversal symmetry breaking. Overall, these results reveal a surprising feature of biological motion detectors and suggest that it could arise from constrained optimization in natural environments.
Affiliation(s)
- Baohua Zhou
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511
- Margarida Agrochao
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511
- Damon A. Clark
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511
  - Department of Physics, Yale University, New Haven, CT 06511
  - Department of Neuroscience, Yale University, New Haven, CT 06511
  - Quantitative Biology Institute, Yale University, New Haven, CT 06511
  - Wu Tsai Institute, Yale University, New Haven, CT 06511
3. Hosseini E, Casto C, Zaslavsky N, Conwell C, Richardson M, Fedorenko E. Universality of representation in biological and artificial neural networks. bioRxiv 2024:2024.12.26.629294. PMID: 39764030; PMCID: PMC11703180; DOI: 10.1101/2024.12.26.629294. Indexed: 01/12/2025.
Abstract
Many artificial neural networks (ANNs) trained with ecologically plausible objectives on naturalistic data align with behavior and neural representations in biological systems. Here, we show that this alignment is a consequence of convergence onto the same representations by high-performing ANNs and by brains. We developed a method to identify stimuli that systematically vary the degree of inter-model representation agreement. Across language and vision, we then showed that stimuli from high- and low-agreement sets predictably modulated model-to-brain alignment. We also examined which stimulus features distinguish high- from low-agreement sentences and images. Our results establish representation universality as a core component of model-to-brain alignment and provide a new approach for using ANNs to uncover the structure of biological representations and computations.
Affiliation(s)
- Eghbal Hosseini
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Colton Casto
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA
  - Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Allston, MA, USA
- Noga Zaslavsky
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Psychology, New York University, New York, NY, USA
- Colin Conwell
  - Department of Psychology, Harvard University, Cambridge, MA, USA
- Mark Richardson
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA
  - Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Evelina Fedorenko
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA
4. Lappalainen JK, Tschopp FD, Prakhya S, McGill M, Nern A, Shinomiya K, Takemura SY, Gruntman E, Macke JH, Turaga SC. Connectome-constrained networks predict neural activity across the fly visual system. Nature 2024; 634:1132-1140. PMID: 39261740; PMCID: PMC11525180; DOI: 10.1038/s41586-024-07939-3. Received: 03/16/2023; Accepted: 08/09/2024; Indexed: 09/13/2024.
Abstract
We can now measure the connectivity of every neuron in a neural circuit [1-9], but we cannot measure other biological details, including the dynamical characteristics of each neuron. The degree to which measurements of connectivity alone can inform the understanding of neural computation is an open question [10]. Here we show that with experimental measurements of only the connectivity of a biological neural network, we can predict the neural activity underlying a specified neural computation. We constructed a model neural network with the experimentally determined connectivity for 64 cell types in the motion pathways of the fruit fly optic lobe [1-5] but with unknown parameters for the single-neuron and single-synapse properties. We then optimized the values of these unknown parameters using techniques from deep learning [11], to allow the model network to detect visual motion [12]. Our mechanistic model makes detailed, experimentally testable predictions for each neuron in the connectome. We found that model predictions agreed with experimental measurements of neural activity across 26 studies. Our work demonstrates a strategy for generating detailed hypotheses about the mechanisms of neural circuit function from connectivity measurements. We show that this strategy is more likely to be successful when neurons are sparsely connected, a universally observed feature of biological neural networks across species and brain regions.
Affiliation(s)
- Janne K Lappalainen
  - Machine Learning in Science, Tübingen University, Tübingen, Germany
  - Tübingen AI Center, Tübingen, Germany
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Fabian D Tschopp
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Sridhama Prakhya
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Mason McGill
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Computation and Neural Systems, California Institute of Technology, Pasadena, CA, USA
- Aljoscha Nern
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Kazunori Shinomiya
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Shin-Ya Takemura
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Eyal Gruntman
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Department of Biological Sciences, University of Toronto Scarborough, Toronto, Ontario, Canada
- Jakob H Macke
  - Machine Learning in Science, Tübingen University, Tübingen, Germany
  - Tübingen AI Center, Tübingen, Germany
  - Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Srinivas C Turaga
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
5. Clark DA, Fitzgerald JE. Optimization in Visual Motion Estimation. Annu Rev Vis Sci 2024; 10:23-46. PMID: 38663426; PMCID: PMC11998607; DOI: 10.1146/annurev-vision-101623-025432. Indexed: 01/25/2025.
Abstract
Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near-universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.
Affiliation(s)
- Damon A Clark
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, Connecticut, USA
- James E Fitzgerald
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
  - Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
6. Idrees S, Manookin MB, Rieke F, Field GD, Zylberberg J. Biophysical neural adaptation mechanisms enable artificial neural networks to capture dynamic retinal computation. Nat Commun 2024; 15:5957. PMID: 39009568; PMCID: PMC11251147; DOI: 10.1038/s41467-024-50114-5. Received: 06/19/2023; Accepted: 06/28/2024; Indexed: 07/17/2024.
Abstract
Adaptation is a universal aspect of neural systems that changes circuit computations to match prevailing inputs. These changes facilitate efficient encoding of sensory inputs while avoiding saturation. Conventional artificial neural networks (ANNs) have limited adaptive capabilities, hindering their ability to reliably predict neural output under dynamic input conditions. Can embedding neural adaptive mechanisms in ANNs improve their performance? To answer this question, we develop a new deep learning model of the retina that incorporates the biophysics of photoreceptor adaptation at the front end of conventional convolutional neural networks (CNNs). These conventional CNNs build on 'Deep Retina,' a previously developed model of retinal ganglion cell (RGC) activity. CNNs that include this new photoreceptor layer outperform conventional CNN models at predicting male and female primate and rat RGC responses to naturalistic stimuli that include dynamic local intensity changes and large changes in the ambient illumination. These improved predictions result directly from adaptation within the phototransduction cascade. This research underscores the potential of embedding models of neural adaptation in ANNs and using them to determine how neural circuits manage the complexities of encoding natural inputs that are dynamic and span a large range of light levels.
Affiliation(s)
- Saad Idrees
  - Department of Physics and Astronomy, York University, Toronto, ON, Canada
  - Centre for Vision Research, York University, Toronto, ON, Canada
- Fred Rieke
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Greg D Field
  - Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Joel Zylberberg
  - Department of Physics and Astronomy, York University, Toronto, ON, Canada
  - Centre for Vision Research, York University, Toronto, ON, Canada
  - Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, ON, Canada
7. Wu N, Zhou B, Agrochao M, Clark DA. Broken time reversal symmetry in visual motion detection. bioRxiv 2024:2024.06.08.598068. PMID: 38915608; PMCID: PMC11195140; DOI: 10.1101/2024.06.08.598068. Indexed: 06/26/2024.
Abstract
Our intuition suggests that when a movie is played in reverse, our perception of motion in the reversed movie will be perfectly inverted compared to the original. This intuition is also reflected in many classical theoretical and practical models of motion detection. However, here we demonstrate that this symmetry of motion perception upon time reversal is often broken in real visual systems. In this work, we designed a set of visual stimuli to investigate how stimulus symmetries affect time reversal symmetry breaking in the fruit fly Drosophila's well-studied optomotor rotation behavior. We discovered a suite of new stimuli with a wide variety of properties that can lead to broken time reversal symmetries in fly behavioral responses. We then trained neural network models to predict the velocity of scenes with both natural and artificial contrast distributions. Training with naturalistic contrast distributions yielded models that break time reversal symmetry, even when the training data were time reversal symmetric. We show analytically and numerically that the breaking of time reversal symmetry in the model responses can arise from contrast asymmetry in the training data, but can also arise from other features of the contrast distribution. Furthermore, shallower neural network models can exhibit stronger symmetry breaking than deeper ones, suggesting that less flexible neural networks promote some forms of time reversal symmetry breaking. Overall, these results reveal a surprising feature of biological motion detectors and suggest that it could arise from constrained optimization in natural environments.
Affiliation(s)
- Nathan Wu
  - Yale College, New Haven, CT 06511, USA
- Baohua Zhou
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Margarida Agrochao
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Damon A. Clark
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
  - Department of Physics, Yale University, New Haven, CT 06511, USA
  - Department of Neuroscience, Yale University, New Haven, CT 06511, USA
  - Quantitative Biology Institute, Yale University, New Haven, CT 06511, USA
  - Wu Tsai Institute, Yale University, New Haven, CT 06511, USA
8. Cowley BR, Calhoun AJ, Rangarajan N, Ireland E, Turner MH, Pillow JW, Murthy M. Mapping model units to visual neurons reveals population code for social behaviour. Nature 2024; 629:1100-1108. PMID: 38778103; PMCID: PMC11136655; DOI: 10.1038/s41586-024-07451-8. Received: 07/08/2022; Accepted: 04/19/2024; Indexed: 05/25/2024.
Abstract
The rich variety of behaviours observed in animals arises through the interplay between sensory processing and motor control. To understand these sensorimotor transformations, it is useful to build models that predict not only neural responses to sensory input [1-5] but also how each neuron causally contributes to behaviour [6,7]. Here we demonstrate a novel modelling approach to identify a one-to-one mapping between internal units in a deep neural network and real neurons by predicting the behavioural changes that arise from systematic perturbations of more than a dozen neuronal cell types. A key ingredient that we introduce is 'knockout training', which involves perturbing the network during training to match the perturbations of the real neurons during behavioural experiments. We apply this approach to model the sensorimotor transformations of Drosophila melanogaster males during a complex, visually guided social behaviour [8-11]. The visual projection neurons at the interface between the optic lobe and central brain form a set of discrete channels [12], and prior work indicates that each channel encodes a specific visual feature to drive a particular behaviour [13,14]. Our model reaches a different conclusion: combinations of visual projection neurons, including those involved in non-social behaviours, drive male interactions with the female, forming a rich population code for behaviour. Overall, our framework consolidates behavioural effects elicited from various neural perturbations into a single, unified model, providing a map from stimulus to neuronal cell type to behaviour, and enabling future incorporation of wiring diagrams of the brain [15] into the model.
Affiliation(s)
- Benjamin R Cowley
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
  - Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Adam J Calhoun
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Elise Ireland
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Maxwell H Turner
  - Department of Neurobiology, Stanford University, Stanford, CA, USA
- Jonathan W Pillow
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Mala Murthy
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
9. Ramdya P. AI networks reveal how flies find a mate. Nature 2024; 629:1010-1011. PMID: 38778186; DOI: 10.1038/d41586-024-01320-0. Indexed: 05/25/2024.
10. Tanaka R, Zhou B, Agrochao M, Badwan BA, Au B, Matos NCB, Clark DA. Neural mechanisms to incorporate visual counterevidence in self-movement estimation. Curr Biol 2023; 33:4960-4979.e7. PMID: 37918398; PMCID: PMC10848174; DOI: 10.1016/j.cub.2023.10.011. Received: 07/29/2023; Revised: 10/07/2023; Accepted: 10/09/2023; Indexed: 11/04/2023.
Abstract
In selecting appropriate behaviors, animals should weigh sensory evidence both for and against specific beliefs about the world. For instance, animals measure optic flow to estimate and control their own rotation. However, existing models of flow detection can be spuriously triggered by visual motion created by objects moving in the world. Here, we show that stationary patterns on the retina, which constitute evidence against observer rotation, suppress inappropriate stabilizing rotational behavior in the fruit fly Drosophila. In silico experiments show that artificial neural networks (ANNs) that are optimized to distinguish observer movement from external object motion similarly detect stationarity and incorporate negative evidence. Employing neural measurements and genetic manipulations, we identified components of the circuitry for stationary pattern detection, which runs parallel to the fly's local motion and optic-flow detectors. Our results show how the fly brain incorporates negative evidence to improve heading stability, exemplifying how a compact brain exploits geometrical constraints of the visual world.
Affiliation(s)
- Ryosuke Tanaka
  - Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Baohua Zhou
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
  - Department of Statistics and Data Science, Yale University, New Haven, CT 06511, USA
- Margarida Agrochao
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
  - School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Braedyn Au
  - Department of Physics, Yale University, New Haven, CT 06511, USA
- Natalia C B Matos
  - Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Damon A Clark
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
  - Department of Physics, Yale University, New Haven, CT 06511, USA
  - Department of Neuroscience, Yale University, New Haven, CT 06511, USA
  - Wu Tsai Institute, Yale University, New Haven, CT 06511, USA
  - Quantitative Biology Institute, Yale University, New Haven, CT 06511, USA
11. Chen J, Gish CM, Fransen JW, Salazar-Gatzimas E, Clark DA, Borghuis BG. Direct comparison reveals algorithmic similarities in fly and mouse visual motion detection. iScience 2023; 26:107928. PMID: 37810236; PMCID: PMC10550730; DOI: 10.1016/j.isci.2023.107928. Received: 06/27/2023; Revised: 08/07/2023; Accepted: 09/12/2023; Indexed: 10/10/2023.
Abstract
Evolution has equipped vertebrates and invertebrates with neural circuits that selectively encode visual motion. While similarities in the computations performed by these circuits in mouse and fruit fly have been noted, direct experimental comparisons have been lacking. Because molecular mechanisms and neuronal morphology in the two species are distinct, we directly compared motion encoding in these two species at the algorithmic level, using matched stimuli and focusing on a pair of analogous neurons, the mouse ON starburst amacrine cell (ON SAC) and Drosophila T4 neurons. We find that the cells share similar spatiotemporal receptive field structures, sensitivity to spatiotemporal correlations, and tuning to sinusoidal drifting gratings, but differ in their responses to apparent motion stimuli. Both neuron types showed a response to summed sinusoids that deviates from models for motion processing in these cells, underscoring the similarities in their processing and identifying response features that remain to be explained.
Affiliation(s)
- Juyue Chen
  - Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
- Caitlin M Gish
  - Department of Physics, Yale University, New Haven, CT 06511, USA
- James W Fransen
  - Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA
- Damon A Clark
  - Interdepartmental Neurosciences Program, Yale University, New Haven, CT 06511, USA
  - Department of Physics, Yale University, New Haven, CT 06511, USA
  - Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, CT 06511, USA
  - Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- Bart G Borghuis
  - Department of Anatomical Sciences and Neurobiology, University of Louisville, Louisville, KY 40202, USA
12. Hayashi M, Kazawa T, Tsunoda H, Kanzaki R. The Understanding of ON-Edge Motion Detection Through the Simulation Based on the Connectome of Drosophila's Optic Lobe. Journal of Robotics and Mechatronics 2022. DOI: 10.20965/jrm.2022.p0795. Indexed: 11/09/2022.
Abstract
The optic lobe of the fly is a prominent model system for studying the neural mechanisms of motion detection. Although many computational models have treated the fly optic lobe as a moving-object detector, it remains unclear how a fly, living in highly varied natural visual environments, processes information from at most a few thousand ommatidia in its neural circuits to detect moving objects. Here we attempted to elucidate the mechanisms of ON-edge motion detection through a simulation approach based on the TEM connectome of Drosophila. Our simulation model of the optic lobe, built with the NEURON simulator and covering the full complement of ommatidia, reproduced the response characteristics of the receptor neurons, lamina monopolar neurons, and T4 cells in the lobula. The contribution of each neuron can be estimated by changing synaptic connection strengths in the simulation and measuring the response to the motion stimulus. The results show that the parallel pathways providing motion detection in the fly optic lobe are more robust and more sophisticated than a simple combination of HR and BL systems.
13. Zhou B, Li Z, Kim S, Lafferty J, Clark DA. Shallow neural networks trained to detect collisions recover features of visual loom-selective neurons. eLife 2022; 11:e72067. PMID: 35023828; PMCID: PMC8849349; DOI: 10.7554/eLife.72067. Received: 07/08/2021; Accepted: 01/11/2022; Indexed: 11/13/2022.
Abstract
Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli, and reproduces canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
Affiliation(s)
- Baohua Zhou
  - Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
- Zifan Li
  - Department of Statistics and Data Science, Yale University, New Haven, United States
- Sunnie Kim
  - Department of Statistics and Data Science, Yale University, New Haven, United States
- John Lafferty
  - Department of Statistics and Data Science, Yale University, New Haven, United States
- Damon A Clark
  - Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, United States
14. Turner MH, Clandinin TR. Neuroscience: Convergence of biological and artificial networks. Curr Biol 2021; 31:R1079-R1081. PMID: 34582814; DOI: 10.1016/j.cub.2021.07.051. Indexed: 11/15/2022.
Abstract
A new study shows that an artificial neural network trained to predict visual motion reproduces key properties of motion detecting circuits in the fruit fly.
Affiliation(s)
- Maxwell H Turner
  - Department of Neurobiology, Stanford University, Stanford, CA 94103, USA
- Thomas R Clandinin
  - Department of Neurobiology, Stanford University, Stanford, CA 94103, USA