1
Vélez-Fort M, Cossell L, Porta L, Clopath C, Margrie TW. Motor and vestibular signals in the visual cortex permit the separation of self versus externally generated visual motion. Cell 2025; 188:2175-2189.e15. [PMID: 39978344] [DOI: 10.1016/j.cell.2025.01.032] [Received: 07/11/2023] [Revised: 01/06/2025] [Accepted: 01/24/2025]
Abstract
Knowing whether we are moving or something in the world is moving around us is possibly the most critical sensory discrimination we need to perform. How the brain and, in particular, the visual system solves this motion-source separation problem is not known. Here, we find that motor, vestibular, and visual motion signals are used by the mouse primary visual cortex (VISp) to differentially represent the same visual flow information according to whether the head is stationary or experiencing passive versus active translation. During locomotion, we find that running suppresses running-congruent translation input and that translation signals dominate VISp activity when running and translation speed become incongruent. This cross-modal interaction between the motor and vestibular systems was found throughout the cortex, indicating that running and translation signals provide a brain-wide egocentric reference frame for computing the internally generated and actual speed of self when moving through and sensing the external world.
Affiliation(s)
- Mateo Vélez-Fort
- Sainsbury Wellcome Centre, University College London, London, UK
- Lee Cossell
- Sainsbury Wellcome Centre, University College London, London, UK
- Laura Porta
- Sainsbury Wellcome Centre, University College London, London, UK
- Claudia Clopath
- Sainsbury Wellcome Centre, University College London, London, UK; Bioengineering Department, Imperial College London, London, UK
- Troy W Margrie
- Sainsbury Wellcome Centre, University College London, London, UK
2
Meier AM, D'Souza RD, Ji W, Han EB, Burkhalter A. Interdigitating Modules for Visual Processing During Locomotion and Rest in Mouse V1. bioRxiv [Preprint] 2025:2025.02.21.639505. [PMID: 40060542] [PMCID: PMC11888233] [DOI: 10.1101/2025.02.21.639505]
Abstract
Layer 1 of V1 has been shown to receive locomotion-related signals from the dorsal lateral geniculate (dLGN) and lateral posterior (LP) thalamic nuclei (Roth et al., 2016). Inputs from the dLGN terminate in M2+ patches while inputs from LP target M2- interpatches (D'Souza et al., 2019) suggesting that motion related signals are processed in distinct networks. Here, we investigated by calcium imaging in head-fixed awake mice whether L2/3 neurons underneath L1 M2+ and M2- modules are differentially activated by locomotion, and whether distinct networks of feedback connections from higher cortical areas to L1 may contribute to these differences. We found that strongly locomotion-modulated cell clusters during visual stimulation were aligned with M2- interpatches, while weakly modulated cells clustered under M2+ patches. Unlike M2+ patch cells, pairs of M2- interpatch cells showed increased correlated variability of calcium transients when the sites in the visuotopic map were far apart, suggesting that activity is integrated across large parts of the visual field. Pathway tracing further suggests that strong locomotion modulation in L2/3 M2- interpatch cells of V1 relies on looped, like-to-like networks between apical dendrites of MOs-, PM- and RSP-projecting neurons and feedback input from these areas to L1. M2- interpatches receive strong inputs from SST neurons, suggesting that during locomotion these interneurons influence the firing of specific subnetworks by controlling the excitability of apical dendrites in M2- interpatches.
Affiliation(s)
- A M Meier
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- R D D'Souza
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- W Ji
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- E B Han
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- A Burkhalter
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
3
Wallace DJ, Voit KM, Martin Machado D, Bahadorian M, Sawinski J, Greenberg DS, Stahr P, Holmgren CD, Bassetto G, Rosselli FB, Koseska A, Fitzpatrick D, Kerr JND. Eye saccades align optic flow with retinal specializations during object pursuit in freely moving ferrets. Curr Biol 2025; 35:761-775.e10. [PMID: 39909033] [DOI: 10.1016/j.cub.2024.12.032] [Received: 08/31/2024] [Revised: 11/13/2024] [Accepted: 12/11/2024]
Abstract
During prey pursuit, how eye rotations, such as saccades, enable continuous tracking of erratically moving targets while enabling an animal to navigate through the environment is unknown. To better understand this, we measured head and eye rotations in freely running ferrets during pursuit behavior. By also tracking the target and all environmental features, we reconstructed the animal's visual fields and their relationship to retinal structures. In the reconstructed visual fields, the target position clustered on and around the high-acuity retinal area location, the area centralis, and surprisingly, this cluster was not significantly shifted by digital removal of either eye saccades, exclusively elicited when the ferrets made turns, or head rotations that were tightly synchronized with the saccades. Here, we show that, while the saccades did not fixate the moving target with the area centralis, they instead aligned the area centralis with the intended direction of travel. This also aligned the area centralis with features of the optic flow pattern, such as flow direction and focus of expansion, used for navigation by many species. While saccades initially rotated the eyes in the same direction as the head turn, saccades were followed by eye rotations countering the ongoing head rotation, which reduced image blur and limited information loss across the visual field during head turns. As we measured the same head and eye rotational relationship in freely moving tree shrews, rats, and mice, we suggest that these saccades and counter-rotations are a generalized mechanism enabling mammals to navigate complex environments during pursuit.
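The abstract's claim that saccades align the area centralis with the focus of expansion (FoE) of the optic flow invites a concrete illustration. Below is a minimal, hypothetical sketch (not the authors' code) of how an FoE can be recovered from sampled flow vectors by least squares, under the simplifying assumption of pure translational flow, so that every flow vector's line of action passes through the FoE.

```python
def focus_of_expansion(points, flows):
    """Least-squares focus-of-expansion (FoE) estimate from optic flow.

    Under pure forward translation every flow vector's line of action
    passes through the FoE, so a sample point (px, py) carrying flow
    (vx, vy) constrains the unknown FoE (ex, ey) by
        vy * ex - vx * ey = vy * px - vx * py.
    Stacking one such row per sample gives 2x2 normal equations.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        r = vy * px - vx * py
        a11 += vy * vy          # design row for this sample is [vy, -vx]
        a12 += -vy * vx
        a22 += vx * vx
        b1 += vy * r
        b2 += -vx * r
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)

# Noise-free radial flow expanding from (2.0, -1.0) recovers that point.
pts = [(0.0, 0.0), (5.0, 3.0), (-4.0, 2.0), (1.0, 7.0), (6.0, -5.0)]
flo = [((px - 2.0) * 0.3, (py + 1.0) * 0.3) for px, py in pts]
```

Rotational flow components, which the ferrets' counter-rotations suppress, would violate the radial-flow assumption and bias this estimate.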
Affiliation(s)
- Damian J Wallace
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Kay-Michael Voit
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Daniela Martin Machado
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Mohammadreza Bahadorian
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany; Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Juergen Sawinski
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- David S Greenberg
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Paul Stahr
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Carl D Holmgren
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Giacomo Bassetto
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany; Machine Learning in Science, Eberhard Karls University of Tübingen, 72074 Tübingen, Germany
- Federica B Rosselli
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- Aneta Koseska
- Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
- David Fitzpatrick
- Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, FL 33458, USA
- Jason N D Kerr
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, 53175 Bonn, Germany
4
Singh VP, Li J, Dawson K, Mitchell JF, Miller CT. Active vision in freely moving marmosets using head-mounted eye tracking. Proc Natl Acad Sci U S A 2025; 122:e2412954122. [PMID: 39899712] [PMCID: PMC11831172] [DOI: 10.1073/pnas.2412954122] [Received: 06/28/2024] [Accepted: 12/19/2024]
Abstract
Our understanding of how vision functions as primates actively navigate the real world is remarkably sparse. As most data have been limited to chaired and typically head-restrained animals, the synergistic interactions of different motor actions/plans inherent to active sensing (e.g., eyes, head, posture, movement) on visual perception are largely unknown. To address this considerable gap in knowledge, we developed an innovative wireless head-mounted eye-tracking system that performs Chair-free Eye-Recording using Backpack mounted micROcontrollers (CEREBRO) for small mammals, such as marmoset monkeys. Because eye illumination and environment lighting change continuously in natural contexts, we developed a segmentation artificial neural network to perform robust pupil tracking in these conditions. Leveraging this innovative system to investigate active vision, we demonstrate that although freely moving marmosets exhibit frequent compensatory eye movements equivalent to other primates, including humans, the predictability of the visual behavior (gaze) is higher when animals are freely moving than when they are head-fixed. Moreover, despite increases in eye/head motion during locomotion, gaze stabilization remains steady because of an increase in vestibulo-ocular reflex gain during locomotion. These results demonstrate the efficient, dynamic visuo-motor mechanisms and related behaviors that enable stable, high-resolution foveal vision in primates as they explore the natural world.
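As a toy illustration of the vestibulo-ocular reflex (VOR) gain this abstract invokes (a sketch under simplified assumptions, not the paper's analysis pipeline), the gain can be estimated as the negative slope of eye velocity regressed on head velocity:

```python
from statistics import mean

def vor_gain(head_velocity, eye_velocity):
    """VOR gain as the negative regression slope of eye on head velocity.

    A perfectly compensatory vestibulo-ocular reflex rotates the eyes
    exactly opposite to the head (eye ~ -head), giving a gain of 1.0;
    smaller gains mean the retinal image slips during head movements.
    """
    mh, me = mean(head_velocity), mean(eye_velocity)
    cov = sum((h - mh) * (e - me)
              for h, e in zip(head_velocity, eye_velocity))
    var = sum((h - mh) ** 2 for h in head_velocity)
    return -cov / var

# Eyes counter-rotating at 90% of head speed give a gain of 0.9.
head = [0.0, 10.0, -5.0, 20.0, -15.0, 3.0]
eye = [-0.9 * h for h in head]
```

A gain rising toward 1.0 during locomotion is what keeps gaze stable despite larger head movements.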
Affiliation(s)
- Vikram Pal Singh
- Department of Psychology, Cortical Systems and Behavior Lab, University of California San Diego, San Diego, CA 92093
- Jingwen Li
- Department of Psychology, Cortical Systems and Behavior Lab, University of California San Diego, San Diego, CA 92093
- Kana Dawson
- Department of Psychology, Cortical Systems and Behavior Lab, University of California San Diego, San Diego, CA 92093
- Jude F. Mitchell
- Department of Brain and Cognitive Science, University of Rochester, Rochester, NY 14627
- Cory T. Miller
- Department of Psychology, Cortical Systems and Behavior Lab, University of California San Diego, San Diego, CA 92093
- Department of Psychology, Neurosciences Graduate Program, University of California San Diego, San Diego, CA 92093
5
Shapcott KA, Weigand M, Glukhova M, Havenith MN, Schölvinck ML. DomeVR: Immersive virtual reality for primates and rodents. PLoS One 2025; 20:e0308848. [PMID: 39820059] [PMCID: PMC11737658] [DOI: 10.1371/journal.pone.0308848] [Received: 11/06/2023] [Accepted: 07/30/2024]
Abstract
Immersive virtual reality (VR) environments are a powerful tool to explore cognitive processes ranging from memory and navigation to visual processing and decision making, and to do so in a naturalistic yet controlled setting. As such, they have been employed across different species, and by a diverse range of research groups. Unfortunately, designing and implementing behavioral tasks in such environments often proves complicated. To tackle this challenge, we created DomeVR, an immersive VR environment built using Unreal Engine 4 (UE4). UE4 is a powerful game engine supporting photo-realistic graphics and containing a visual scripting language designed for use by non-programmers. As a result, virtual environments are easily created using drag-and-drop elements. DomeVR aims to make these features accessible to neuroscience experiments. This includes a logging and synchronization system to solve timing uncertainties inherent in UE4; an interactive GUI for scientists to observe subjects during experiments and adjust task parameters on the fly; and a dome projection system for full task immersion in non-human subjects. These key features are modular and can easily be added individually into other UE4 projects. Finally, we present proof-of-principle data highlighting the functionality of DomeVR in three different species: human, macaque and mouse.
Affiliation(s)
- Katharine A. Shapcott
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main, Germany
- Marvin Weigand
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main, Germany
- Mina Glukhova
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main, Germany
- Martha N. Havenith
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main, Germany
- Marieke L. Schölvinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with the Max Planck Society, Frankfurt-am-Main, Germany
6
Fei Y, Luh M, Ontiri A, Ghauri D, Hu W, Liang L. Coordination of distinct sources of excitatory inputs enhances motion selectivity in the mouse visual thalamus. bioRxiv [Preprint] 2025:2025.01.08.631826. [PMID: 39829841] [PMCID: PMC11741327] [DOI: 10.1101/2025.01.08.631826]
Abstract
Multiple sources innervate the visual thalamus to influence image-forming vision prior to the cortex, yet it remains unclear how non-retinal and retinal input coordinate to shape thalamic visual selectivity. Using dual-color two-photon calcium imaging in the thalamus of awake mice, we observed similar coarse-scale retinotopic organization between axons of superior colliculus neurons and retinal ganglion cells, both providing strong converging excitatory input to thalamic neurons. At a fine scale of ∼10 µm, collicular boutons often shared visual feature preferences with nearby retinal boutons. Inhibiting collicular input significantly suppressed visual responses in thalamic neurons and specifically reduced motion selectivity in neurons preferring nasal-to-temporal motion. The reduction in motion selectivity could be the result of silencing sharply tuned direction-selective colliculogeniculate input. These findings suggest that the thalamus is not merely a relay but selectively integrates inputs from multiple regions to build stimulus selectivity and shape the information transmitted to the cortex.
HIGHLIGHTS
- Chronic dual-color calcium imaging reveals diverse visual tuning of collicular axonal boutons.
- Nearby collicular and retinal boutons often share feature preferences at ∼10 µm scale.
- Silencing of collicular input suppresses visual responses in the majority of thalamic neurons.
- Silencing of collicular input reduces motion selectivity in thalamic neurons.
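For readers unfamiliar with how motion (direction) selectivity is quantified, one standard convention is a vector-sum direction selectivity index; this is an illustrative sketch, and the preprint's exact metric may differ:

```python
import math

def direction_selectivity(directions_deg, responses):
    """Vector-sum direction selectivity index (DSI).

    Each response is treated as a vector pointing in its stimulus
    direction; the resultant length, normalised by the summed
    responses, is 1.0 for a cell driven by a single direction and
    0.0 for a cell responding equally to all directions.
    """
    x = sum(r * math.cos(math.radians(d))
            for d, r in zip(directions_deg, responses))
    y = sum(r * math.sin(math.radians(d))
            for d, r in zip(directions_deg, responses))
    return math.hypot(x, y) / sum(responses)
```

A drop in this index after silencing collicular input would correspond to the reduced motion selectivity the preprint reports.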
7
Kang I, Talluri BC, Yates JL, Niell CM, Nienborg H. Is the impact of spontaneous movements on early visual cortex species specific? Trends Neurosci 2025; 48:7-21. [PMID: 39701910] [PMCID: PMC11741931] [DOI: 10.1016/j.tins.2024.11.006] [Received: 07/20/2024] [Revised: 10/22/2024] [Accepted: 11/20/2024]
Abstract
Recent studies in non-human primates do not find pronounced signals related to the animal's own body movements in the responses of neurons in the visual cortex. This is notable because such pronounced signals have been widely observed in the visual cortex of mice. Here, we discuss factors that may contribute to the differences observed between species, such as state, slow neural drift, eccentricity, and changes in retinal input. The interpretation of movement-related signals in the visual cortex also exemplifies the challenge of identifying the sources of correlated variables. Dissecting these sources is central for understanding the functional roles of movement-related signals. We suggest a functional classification of the possible sources, aimed at facilitating cross-species comparative approaches to studying the neural mechanisms of vision during natural behavior.
Affiliation(s)
- Incheol Kang
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Bharath Chandra Talluri
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jacob L Yates
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Cristopher M Niell
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Hendrikje Nienborg
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
8
Haley SP, Surinach DA, Nietz AK, Carter RE, Zecker LS, Popa LS, Kodandaramaiah SB, Ebner TJ. Cortex-wide characterization of decision-making neural dynamics during spatial navigation. bioRxiv [Preprint] 2024:2024.10.23.619896. [PMID: 39484475] [PMCID: PMC11526902] [DOI: 10.1101/2024.10.23.619896]
Abstract
Decision-making during freely moving behaviors involves complex interactions among many cortical and subcortical regions. However, the spatiotemporal coordination across regions to generate a decision is less understood. Using a head-mounted widefield microscope, cortex-wide calcium dynamics were recorded in mice expressing GCaMP7f as they navigated an 8-maze under two paradigms. The first was an alternating pattern that required short-term memory of the previous trial to make the correct decision; the second, after a rule change, was a fixed path in which rewards were delivered only on the left side. Identification of cortex-wide activation states revealed differences between the two paradigms. There was a higher probability of a visual/retrosplenial cortical state during the alternating paradigm and a higher probability of a secondary motor and posterior parietal state during left-only. Three state sequences (motifs) illustrated both anterior and posterior activity propagations across the cortex. The anterior propagating motifs had the highest probability around the decision, and posterior propagating motifs peaked following the decision. The latter, likely reflecting internal feedback to influence future actions, were more common in the left-only paradigm. Therefore, the probabilities and sequences of cortical states differ when working memory is required versus a fixed-trajectory reward paradigm.
9
Martins DM, Manda JM, Goard MJ, Parker PRL. Building egocentric models of local space from retinal input. Curr Biol 2024; 34:R1185-R1202. [PMID: 39626632] [PMCID: PMC11620475] [DOI: 10.1016/j.cub.2024.10.057]
Abstract
Determining the location of objects relative to ourselves is essential for interacting with the world. Neural activity in the retina is used to form a vision-independent model of the local spatial environment relative to the body. For example, when an animal navigates through a forest, it rapidly shifts its gaze to identify the position of important objects, such as a tree obstructing its path. This seemingly trivial behavior belies a sophisticated neural computation. Visual information entering the brain in a retinocentric reference frame must be transformed into an egocentric reference frame to guide motor planning and action. This, in turn, allows the animal to extract the location of the tree and plan a path around it. In this review, we explore the anatomical, physiological, and computational implementation of retinocentric-to-egocentric reference frame transformations - a research area undergoing rapid progress stimulated by an ever-expanding molecular, physiological, and computational toolbox for probing neural circuits. We begin by summarizing evidence for retinocentric and egocentric reference frames in the brains of diverse organisms, from insects to primates. Next, we cover how distance estimation contributes to creating a three-dimensional representation of local space. We then review proposed implementations of reference frame transformations across different biological and artificial neural networks. Finally, we discuss how an internal egocentric model of the environment is maintained independently of the sensory inputs from which it is derived. By comparing findings across a variety of nervous systems and behaviors, we aim to inspire new avenues for investigating the neural basis of reference frame transformation, a canonical computation critical for modeling the external environment and guiding goal-directed behavior.
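A minimal, illustrative version of the retinocentric-to-egocentric transformation this review describes, reduced to a single azimuth angle plus a distance estimate (the function and parameter names are hypothetical, not from the review):

```python
import math

def egocentric_position(retinal_azimuth_deg, distance, gaze_azimuth_deg):
    """Map a retinocentric direction plus distance to egocentric x, y.

    The retinal azimuth is measured from the current gaze line; adding
    the gaze azimuth (eye-in-head plus head-on-body rotation) rotates it
    into a body-centred frame, from which Cartesian coordinates follow.
    """
    angle = math.radians(retinal_azimuth_deg + gaze_azimuth_deg)
    return distance * math.cos(angle), distance * math.sin(angle)

# A tree imaged on the fovea (retinal 0 deg) while gazing 90 deg off the
# body axis lies 2 m along the body's lateral axis.
x, y = egocentric_position(0.0, 2.0, 90.0)
```

The full three-dimensional problem adds torsion, eye translation, and distance estimation, but the chaining of rotations is the same idea.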
Affiliation(s)
- Dylan M Martins
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Joy M Manda
- Behavioral and Systems Neuroscience, Department of Psychology, Rutgers University, New Brunswick, NJ 08854, USA
- Michael J Goard
- Department of Psychological and Brain Sciences and Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Philip R L Parker
- Behavioral and Systems Neuroscience, Department of Psychology, Rutgers University, New Brunswick, NJ 08854, USA
10
Singh VP, Li J, Dawson K, Mitchell JF, Miller CT. Active vision in freely moving marmosets using head-mounted eye tracking. bioRxiv [Preprint] 2024:2024.05.11.593707. [PMID: 38766147] [PMCID: PMC11100783] [DOI: 10.1101/2024.05.11.593707]
Abstract
Our understanding of how vision functions as primates actively navigate the real world is remarkably sparse. As most data have been limited to chaired and typically head-restrained animals, the synergistic interactions of different motor actions/plans inherent to active sensing (e.g. eyes, head, posture, movement) on visual perception are largely unknown. To address this considerable gap in knowledge, we developed an innovative wireless head-mounted eye tracking system called CEREBRO for small mammals, such as marmoset monkeys. Our system performs Chair-free Eye-Recording using Backpack mounted micROcontrollers. Because eye illumination and environment lighting change continuously in natural contexts, we developed a segmentation artificial neural network to perform robust pupil tracking in these conditions. Leveraging this innovative system to investigate active vision, we demonstrate that although freely-moving marmosets exhibit frequent compensatory eye movements equivalent to other primates, including humans, the predictability of the visual behavior (gaze) is higher when animals are freely-moving relative to when they are head-fixed. Moreover, despite increases in eye/head-motion during locomotion, gaze stabilization remains steady because of an increase in VOR gain during locomotion. These results demonstrate the efficient, dynamic visuo-motor mechanisms and related behaviors that enable stable, high-resolution foveal vision in primates as they explore the natural world.
Affiliation(s)
- Vikram Pal Singh
- Cortical Systems & Behavior Lab, University of California San Diego, San Diego, California, USA
- Jingwen Li
- Cortical Systems & Behavior Lab, University of California San Diego, San Diego, California, USA
- Kana Dawson
- Cortical Systems & Behavior Lab, University of California San Diego, San Diego, California, USA
- Jude F. Mitchell
- Department of Brain and Cognitive Science, University of Rochester, Rochester, New York, USA
- Cory T. Miller
- Cortical Systems & Behavior Lab, University of California San Diego, San Diego, California, USA
- Neurosciences Graduate Program, University of California San Diego, San Diego, California, USA
11
Stringer C, Pachitariu M. Analysis methods for large-scale neuronal recordings. Science 2024; 386:eadp7429. [PMID: 39509504] [DOI: 10.1126/science.adp7429] [Received: 06/08/2024] [Accepted: 09/27/2024]
Abstract
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
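Among the mathematically simple exploratory methods this review covers is dimensionality reduction. As an illustrative sketch (not code from the paper), the leading principal component of a neurons-by-timepoints rate matrix can be found by power iteration on the covariance matrix:

```python
import random

def leading_component(rates, iters=200, seed=0):
    """Leading principal component of a neurons x timepoints matrix.

    Power iteration on the neuron-by-neuron covariance matrix converges
    to the population activity pattern capturing the most shared
    variance across the recording.
    """
    n, t = len(rates), len(rates[0])
    means = [sum(row) / t for row in rates]
    c = [[v - m for v in row] for row, m in zip(rates, means)]
    # Covariance between every pair of neurons (mean-subtracted rows).
    cov = [[sum(a * b for a, b in zip(c[i], c[j])) / (t - 1)
            for j in range(n)] for i in range(n)]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two neurons sharing a strong common signal dominate the first PC.
rates = [[1.0, -1.0, 2.0, -2.0, 3.0, -3.0],
         [1.0, -1.0, 2.0, -2.0, 3.0, -3.0],
         [0.1, 0.1, -0.1, -0.1, 0.0, 0.0]]
pc1 = leading_component(rates)
```

In practice one would use an optimized SVD, but the fixed point is the same; the review's warnings about statistical pitfalls (e.g. overfitting components to noise) apply to either route.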
Affiliation(s)
- Carsen Stringer
- Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA
- Marius Pachitariu
- Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA
12
Li J, Aoi MC, Miller CT. Representing the dynamics of natural marmoset vocal behaviors in frontal cortex. Neuron 2024; 112:3542-3550.e3. [PMID: 39317185] [PMCID: PMC11560606] [DOI: 10.1016/j.neuron.2024.08.020] [Received: 03/19/2024] [Revised: 07/26/2024] [Accepted: 08/28/2024]
Abstract
Here, we tested the respective contributions of primate premotor and prefrontal cortex to support vocal behavior. We applied a model-based generalized linear model (GLM) analysis that better accounts for the inherent variance in natural, continuous behaviors to characterize the activity of neurons throughout the frontal cortex as freely moving marmosets engaged in conversational exchanges. While analyses revealed functional clusters of neural activity related to the different processes involved in the vocal behavior, these clusters did not map to subfields of prefrontal or premotor cortex, as has been observed in more conventional task-based paradigms. Our results suggest a distributed functional organization for the myriad neural mechanisms underlying natural social interactions and have implications for our concepts of the role that frontal cortex plays in governing ethological behaviors in primates.
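GLM analyses of spiking data of the kind referenced here commonly model spike counts as Poisson with a log link. Below is a self-contained toy fit by gradient ascent; the covariates are hypothetical stand-ins (an intercept plus one behavioral regressor), and this is an illustration of the model class, not the authors' implementation:

```python
import math

def fit_poisson_glm(X, y, lr=0.05, steps=3000):
    """Toy Poisson GLM: spike counts ~ Poisson(exp(X @ w)).

    Plain gradient ascent on the log-likelihood; each weight links one
    covariate (e.g. vocal onset, movement speed) to the neuron's log
    firing rate.
    """
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            rate = math.exp(sum(a * b for a, b in zip(xi, w)))
            for k, xk in enumerate(xi):
                grad[k] += (yi - rate) * xk
        w = [wk + lr * g / len(X) for wk, g in zip(w, grad)]
    return w

# Counts generated from weights (0.5, 1.0) are recovered by the fit.
xs = (-1.0, -0.5, 0.0, 0.5, 1.0)
X = [[1.0, x] for x in xs]
y = [math.exp(0.5 + x) for x in xs]
w = fit_poisson_glm(X, y)
```

Real pipelines add regularization and basis functions for temporal filters; the likelihood and gradient are unchanged in form.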
Affiliation(s)
- Jingwen Li
- Cortical Systems & Behavior Lab, University of California, San Diego, La Jolla, CA 92093, USA
- Mikio C Aoi
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA; Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA 92093, USA; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
- Cory T Miller
- Cortical Systems & Behavior Lab, University of California, San Diego, La Jolla, CA 92093, USA; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
13
Forli A, Yartsev MM. Understanding the neural basis of natural intelligence. Cell 2024; 187:5833-5837. [PMID: 39423802] [DOI: 10.1016/j.cell.2024.07.049] [Received: 05/11/2024] [Revised: 07/26/2024] [Accepted: 07/26/2024]
Abstract
Understanding the neural basis of natural intelligence necessitates a paradigm shift: from strict reductionism toward embracing complexity and diversity. New tools and theories enable us to tackle this challenge, providing unprecedented access to neural dynamics and behavior across time, contexts, and species. Principles for intelligent behavior and learning in the natural world are now, more than ever, within reach.
Affiliation(s)
- Angelo Forli
- Department of Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
- Michael M Yartsev
- Department of Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA; Department of Neuroscience, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; Howard Hughes Medical Institute, University of California, Berkeley, Berkeley, CA, USA
14
Ding Z, Fahey PG, Papadopoulos S, Wang EY, Celii B, Papadopoulos C, Chang A, Kunin AB, Tran D, Fu J, Ding Z, Patel S, Ntanavara L, Froebe R, Ponder K, Muhammad T, Alexander Bae J, Bodor AL, Brittain D, Buchanan J, Bumbarger DJ, Castro MA, Cobos E, Dorkenwald S, Elabbady L, Halageri A, Jia Z, Jordan C, Kapner D, Kemnitz N, Kinn S, Lee K, Li K, Lu R, Macrina T, Mahalingam G, Mitchell E, Mondal SS, Mu S, Nehoran B, Popovych S, Schneider-Mizell CM, Silversmith W, Takeno M, Torres R, Turner NL, Wong W, Wu J, Yin W, Yu SC, Yatsenko D, Froudarakis E, Sinz F, Josić K, Rosenbaum R, Sebastian Seung H, Collman F, da Costa NM, Clay Reid R, Walker EY, Pitkow X, Reimer J, Tolias AS. Functional connectomics reveals general wiring rule in mouse visual cortex. bioRxiv [Preprint] 2024:2023.03.13.531369. [PMID: 36993398] [PMCID: PMC10054929] [DOI: 10.1101/2023.03.13.531369]
Abstract
Understanding the relationship between circuit connectivity and function is crucial for uncovering how the brain implements computation. In the mouse primary visual cortex (V1), excitatory neurons with similar response properties are more likely to be synaptically connected, but previous studies have been limited to within V1, leaving much unknown about broader connectivity rules. In this study, we leverage the millimeter-scale MICrONS dataset to analyze synaptic connectivity and functional properties of individual neurons across cortical layers and areas. Our results reveal that neurons with similar responses are preferentially connected both within and across layers and areas, including feedback connections, suggesting that 'like-to-like' connectivity is universal across the visual hierarchy. Using a validated digital twin model, we separated neuronal tuning into feature (what neurons respond to) and spatial (receptive field location) components. We found that only the feature component predicts fine-scale synaptic connections, beyond what could be explained by the physical proximity of axons and dendrites. We also found a higher-order rule where postsynaptic neuron cohorts downstream of individual presynaptic cells show greater functional similarity than predicted by a pairwise like-to-like rule. Notably, recurrent neural networks (RNNs) trained on a simple classification task develop connectivity patterns mirroring both pairwise and higher-order rules, with magnitudes similar to those in the MICrONS data. Lesion studies in these RNNs reveal that disrupting 'like-to-like' connections has a significantly greater impact on performance than disrupting random connections. These findings suggest that these connectivity principles may play a functional role in sensory processing and learning, highlighting shared principles between biological and artificial systems.
Affiliation(s)
- Zhuokun Ding: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Paul G Fahey: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Stelios Papadopoulos: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Eric Y Wang: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Brendan Celii: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, USA
- Christos Papadopoulos: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Andersen Chang: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Alexander B Kunin: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Mathematics, Creighton University, Omaha, USA
- Dat Tran: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Jiakun Fu: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Zhiwei Ding: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Saumil Patel: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Lydia Ntanavara: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Rachel Froebe: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Kayla Ponder: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Taliah Muhammad: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- J Alexander Bae: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Electrical and Computer Engineering Department, Princeton University, Princeton, USA
- Manuel A Castro: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Erick Cobos: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Sven Dorkenwald: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- Akhilesh Halageri: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Zhen Jia: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- Chris Jordan: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Dan Kapner: Allen Institute for Brain Science, Seattle, USA
- Nico Kemnitz: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Sam Kinn: Allen Institute for Brain Science, Seattle, USA
- Kisuk Lee: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, USA
- Kai Li: Computer Science Department, Princeton University, Princeton, USA
- Ran Lu: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Thomas Macrina: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- Eric Mitchell: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Shanka Subhra Mondal: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Electrical and Computer Engineering Department, Princeton University, Princeton, USA
- Shang Mu: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Barak Nehoran: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- Sergiy Popovych: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- Marc Takeno: Allen Institute for Brain Science, Seattle, USA
- Nicholas L Turner: Princeton Neuroscience Institute, Princeton University, Princeton, USA; Computer Science Department, Princeton University, Princeton, USA
- William Wong: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Jingpeng Wu: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Wenjing Yin: Allen Institute for Brain Science, Seattle, USA
- Szi-Chieh Yu: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Dimitri Yatsenko: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; DataJoint Inc., Houston, TX, USA
- Emmanouil Froudarakis: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Greece; Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology Hellas, Heraklion, Greece
- Fabian Sinz: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Institute for Bioinformatics and Medical Informatics, University Tübingen, Tübingen, Germany; Institute for Computer Science and Campus Institute Data Science, University Göttingen, Göttingen, Germany
- Krešimir Josić: Departments of Mathematics, Biology and Biochemistry, University of Houston, Houston, USA
- Robert Rosenbaum: Departments of Applied and Computational Mathematics and Statistics and Biological Sciences, University of Notre Dame, Notre Dame, USA
- H Sebastian Seung: Princeton Neuroscience Institute, Princeton University, Princeton, USA
- R Clay Reid: Allen Institute for Brain Science, Seattle, USA
- Edgar Y Walker: Department of Neurobiology & Biophysics, University of Washington, Seattle, USA; Computational Neuroscience Center, University of Washington, Seattle, USA
- Xaq Pitkow: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, USA; Department of Computer Science, Rice University, Houston, TX, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA
- Jacob Reimer: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Andreas S Tolias: Department of Neuroscience & Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Department of Electrical and Computer Engineering, Rice University, Houston, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
15
Skyberg RJ, Niell CM. Natural visual behavior and active sensing in the mouse. Curr Opin Neurobiol 2024; 86:102882. PMID: 38704868; PMCID: PMC11254345; DOI: 10.1016/j.conb.2024.102882.
Abstract
In the natural world, animals use vision for a wide variety of behaviors not reflected in most laboratory paradigms. Although mice have low-acuity vision, they use their vision for many natural behaviors, including predator avoidance, prey capture, and navigation. They also perform active sensing, moving their head and eyes to achieve behavioral goals and acquire visual information. These aspects of natural vision result in visual inputs and corresponding behavioral outputs that are outside the range of conventional vision studies but are essential aspects of visual function. Here, we review recent studies in mice that have tapped into natural behavior and active sensing to reveal the computational logic of neural circuits for vision.
Affiliation(s)
- Rolf J Skyberg: Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA. https://twitter.com/SkybergRolf
- Cristopher M Niell: Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA.
16
Oesch LT, Ryan MB, Churchland AK. From innate to instructed: A new look at perceptual decision-making. Curr Opin Neurobiol 2024; 86:102871. PMID: 38569230; PMCID: PMC11162954; DOI: 10.1016/j.conb.2024.102871.
Abstract
Understanding how subjects perceive sensory stimuli in their environment and use this information to guide appropriate actions is a major challenge in neuroscience. To study perceptual decision-making in animals, researchers use tasks that either probe spontaneous responses to stimuli (often described as "naturalistic") or train animals to associate stimuli with experimenter-defined responses. Spontaneous decisions rely on animals' pre-existing knowledge, while trained tasks offer greater versatility, albeit often at the cost of extensive training. Here, we review emerging approaches to investigate perceptual decision-making using both spontaneous and trained behaviors, highlighting their strengths and limitations. Additionally, we propose how trained decision-making tasks could be improved to achieve faster learning and a more generalizable understanding of task rules.
Affiliation(s)
- Lukas T Oesch: Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, USA
- Michael B Ryan: Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, USA. https://twitter.com/NeuroMikeRyan
- Anne K Churchland: Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, USA.
17
Ambrad Giovannetti E, Rancz E. Behind mouse eyes: The function and control of eye movements in mice. Neurosci Biobehav Rev 2024; 161:105671. PMID: 38604571; DOI: 10.1016/j.neubiorev.2024.105671.
Abstract
The mouse visual system has become the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements only started to be appreciated recently. Eye movements provide a basis for predictive sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of various eye movements in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
Affiliation(s)
- Ede Rancz: INMED, INSERM, Aix-Marseille University, Marseille, France.
18
Clayton KK, Stecyk KS, Guo AA, Chambers AR, Chen K, Hancock KE, Polley DB. Sound elicits stereotyped facial movements that provide a sensitive index of hearing abilities in mice. Curr Biol 2024; 34:1605-1620.e5. PMID: 38492568; PMCID: PMC11043000; DOI: 10.1016/j.cub.2024.02.057.
Abstract
Sound elicits rapid movements of muscles in the face, ears, and eyes that protect the body from injury and trigger brain-wide internal state changes. Here, we performed quantitative facial videography from mice resting atop a piezoelectric force plate and observed that broadband sounds elicited rapid and stereotyped facial twitches. Facial motion energy (FME) adjacent to the whisker array was 30 dB more sensitive than the acoustic startle reflex and offered greater inter-trial and inter-animal reliability than sound-evoked pupil dilations or movement of other facial and body regions. FME tracked the low-frequency envelope of broadband sounds, providing a means to study behavioral discrimination of complex auditory stimuli, such as speech phonemes in noise. Approximately 25% of layer 5-6 units in the auditory cortex (ACtx) exhibited firing rate changes during facial movements. However, FME facilitation during ACtx photoinhibition indicated that sound-evoked facial movements were mediated by a midbrain pathway and modulated by descending corticofugal input. FME and auditory brainstem response (ABR) thresholds were closely aligned after noise-induced sensorineural hearing loss, yet FME growth slopes were disproportionately steep at spared frequencies, reflecting a central plasticity that matched commensurate changes in ABR wave 4. Sound-evoked facial movements were also hypersensitive in Ptchd1 knockout mice, highlighting the use of FME for identifying sensory hyper-reactivity phenotypes after adult-onset hyperacusis and inherited deficiencies in autism risk genes. These findings present a sensitive and integrative measure of hearing while also highlighting that even low-intensity broadband sounds can elicit a complex mixture of auditory, motor, and reafferent somatosensory neural activity.
Affiliation(s)
- Kameron K Clayton: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA.
- Kamryn S Stecyk: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Anna A Guo: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Anna R Chambers: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Ke Chen: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Kenneth E Hancock: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Daniel B Polley: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
19
Li J, Aoi MC, Miller CT. Representing the dynamics of natural marmoset vocal behaviors in frontal cortex. bioRxiv 2024:2024.03.17.585423. PMID: 38559173; PMCID: PMC10979968; DOI: 10.1101/2024.03.17.585423.
Abstract
Here we tested the respective contributions of primate premotor and prefrontal cortex to vocal behavior. We applied a model-based GLM analysis that better accounts for the inherent variance in natural, continuous behaviors to characterize the activity of neurons throughout frontal cortex as freely moving marmosets engaged in conversational exchanges. While these analyses revealed functional clusters of neural activity related to the different processes involved in vocal behavior, the clusters did not map onto subfields of prefrontal or premotor cortex, as has been observed in more conventional task-based paradigms. Our results suggest a distributed functional organization for the myriad neural mechanisms underlying natural social interactions and have implications for our concepts of the role that frontal cortex plays in governing ethological behaviors in primates.
20
Megemont M, Tortorelli LS, McBurney-Lin J, Cohen JY, O'Connor DH, Yang H. Simultaneous recordings of pupil size variation and locus coeruleus activity in mice. STAR Protoc 2024; 5:102785. PMID: 38127625; PMCID: PMC10772391; DOI: 10.1016/j.xpro.2023.102785.
Abstract
An extensive literature describes how pupil size reflects the activity of neuromodulatory systems, including the noradrenergic system. Here, we present a protocol for the simultaneous recording of optogenetically identified locus coeruleus (LC) units and pupil diameter in mice under different conditions. We describe steps for building an optrode, performing surgery to implant the optrode and headpost, searching for opto-tagged LC units, and performing dual LC-pupil recording. We then detail procedures for data processing and analysis. For complete details on the use and execution of this protocol, please refer to Megemont et al. (ref. 1).
Affiliation(s)
- Marine Megemont: Department of Molecular, Cell and Systems Biology, University of California, Riverside, Riverside, CA 92521, USA.
- Lucas S Tortorelli: Department of Molecular, Cell and Systems Biology, University of California, Riverside, Riverside, CA 92521, USA
- Jim McBurney-Lin: Department of Molecular, Cell and Systems Biology, University of California, Riverside, Riverside, CA 92521, USA; Neuroscience Graduate Program, University of California, Riverside, Riverside, CA 92521, USA
- Jeremiah Y Cohen: Solomon H. Snyder Department of Neuroscience & Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Daniel H O'Connor: Solomon H. Snyder Department of Neuroscience & Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Hongdian Yang: Department of Molecular, Cell and Systems Biology, University of California, Riverside, Riverside, CA 92521, USA; Neuroscience Graduate Program, University of California, Riverside, Riverside, CA 92521, USA.
21
Xu A, Hou Y, Niell CM, Beyeler M. Multimodal Deep Learning Model Unveils Behavioral Dynamics of V1 Activity in Freely Moving Mice. Adv Neural Inf Process Syst 2023; 36:15341-15357. PMID: 39005944; PMCID: PMC11242920.
Abstract
Despite their immense success as a model of macaque visual cortex, deep convolutional neural networks (CNNs) have struggled to predict activity in visual cortex of the mouse, which is thought to be strongly dependent on the animal's behavioral state. Furthermore, most computational models focus on predicting neural responses to static images presented under head fixation, which are dramatically different from the dynamic, continuous visual stimuli that arise during movement in the real world. Consequently, it is still unknown how natural visual input and different behavioral variables may integrate over time to generate responses in primary visual cortex (V1). To address this, we introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We show that the model achieves state-of-the-art predictions of V1 activity during free exploration and demonstrate the importance of each component in an extensive ablation study. Analyzing our model using maximally activating stimuli and saliency maps, we reveal new insights into cortical function, including the prevalence of mixed selectivity for behavioral variables in mouse V1. In summary, our model offers a comprehensive deep-learning framework for exploring the computational principles underlying V1 neurons in freely-moving animals engaged in natural behavior.
Affiliation(s)
- Aiwen Xu: Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93117
- Yuchen Hou: Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93117
- Cristopher M Niell: Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR 97403
- Michael Beyeler: Department of Computer Science and Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93117
22
Parker PRL, Martins DM, Leonard ESP, Casey NM, Sharp SL, Abe ETT, Smear MC, Yates JL, Mitchell JF, Niell CM. A dynamic sequence of visual processing initiated by gaze shifts. Nat Neurosci 2023; 26:2192-2202. PMID: 37996524; PMCID: PMC11270614; DOI: 10.1038/s41593-023-01481-7.
Abstract
Animals move their head and eyes as they explore the visual scene. Neural correlates of these movements have been found in rodent primary visual cortex (V1), but their sources and computational roles are unclear. We addressed this by combining head and eye movement measurements with neural recordings in freely moving mice. V1 neurons responded primarily to gaze shifts, where head movements are accompanied by saccadic eye movements, rather than to head movements where compensatory eye movements stabilize gaze. A variety of activity patterns followed gaze shifts and together these formed a temporal sequence that was absent in darkness. Gaze-shift responses resembled those evoked by sequentially flashed stimuli, suggesting a large component corresponds to onset of new visual input. Notably, neurons responded in a sequence that matches their spatial frequency bias, consistent with coarse-to-fine processing. Recordings in freely gazing marmosets revealed a similar sequence following saccades, also aligned to spatial frequency preference. Our results demonstrate that active vision in both mice and marmosets consists of a dynamic temporal sequence of neural activity associated with visual sampling.
Affiliation(s)
- Philip R L Parker: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA; Behavioral and Systems Neuroscience, Department of Psychology, Rutgers University, New Brunswick, NJ, USA
- Dylan M Martins: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Emmalyn S P Leonard: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Nathan M Casey: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Shelby L Sharp: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Elliott T T Abe: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA
- Matthew C Smear: Institute of Neuroscience and Department of Psychology, University of Oregon, Eugene, OR, USA
- Jacob L Yates: Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA; Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, CA, USA
- Jude F Mitchell: Department of Brain and Cognitive Sciences and Center for Visual Sciences, University of Rochester, Rochester, NY, USA.
- Cristopher M Niell: Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, OR, USA.
23
Solbach MD, Tsotsos JK. The psychophysics of human three-dimensional active visuospatial problem-solving. Sci Rep 2023; 13:19967. PMID: 37968501; PMCID: PMC10651907; DOI: 10.1038/s41598-023-47188-4.
Abstract
Our understanding of how visual systems detect, analyze and interpret visual stimuli has advanced greatly. However, the visual systems of all animals do much more; they enable visual behaviours. How well the visual system performs while interacting with the visual environment and how vision is used in the real world is far from fully understood, especially in humans. It has been suggested that comparison is the most primitive of psychophysical tasks. Thus, as a probe into these active visual behaviours, we use a same-different task: Are two physical 3D objects visually the same? This task probes a fundamental cognitive ability. We pose this question to human subjects who are free to move about and examine two real objects in a physical 3D space. The experimental design ensures that all behaviours are directed to viewpoint change. Without any training, our participants achieved a mean accuracy of 93.82%. No learning effect was observed on accuracy after many trials, but some effect was seen for response time, number of fixations and extent of head movement. Our probe task, even though easily executed at high-performance levels, uncovered a surprising variety of complex strategies for viewpoint control, suggesting that solutions were developed dynamically and deployed in a seemingly directed hypothesize-and-test manner tailored to the specific task. Subjects need not acquire task-specific knowledge; instead, they formulate effective solutions right from the outset, and as they engage in a series of attempts, those solutions progressively refine, becoming more efficient without compromising accuracy.
Affiliation(s)
- Markus D Solbach: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada.
- John K Tsotsos: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada
24
Li JY, Glickfeld LL. Input-specific synaptic depression shapes temporal integration in mouse visual cortex. Neuron 2023; 111:3255-3269.e6. PMID: 37543037; PMCID: PMC10592405; DOI: 10.1016/j.neuron.2023.07.003.
Abstract
Efficient sensory processing requires the nervous system to adjust to ongoing features of the environment. In primary visual cortex (V1), neuronal activity strongly depends on recent stimulus history. Existing models can explain effects of prolonged stimulus presentation but remain insufficient for explaining effects observed after shorter durations commonly encountered under natural conditions. We investigated the mechanisms driving adaptation in response to brief (100 ms) stimuli in L2/3 V1 neurons by performing in vivo whole-cell recordings to measure membrane potential and synaptic inputs. We find that rapid adaptation is generated by stimulus-specific suppression of excitatory and inhibitory synaptic inputs. Targeted optogenetic experiments reveal that these synaptic effects are due to input-specific short-term depression of transmission between layers 4 and 2/3. Thus, brief stimulus presentation engages a distinct adaptation mechanism from that previously reported in response to prolonged stimuli, enabling flexible control of sensory encoding across a wide range of timescales.
Affiliation(s)
- Jennifer Y Li, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA
- Lindsey L Glickfeld, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA
25
Pennartz CMA, Oude Lohuis MN, Olcese U. How 'visual' is the visual cortex? The interactions between the visual cortex and other sensory, motivational and motor systems as enabling factors for visual perception. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220336. [PMID: 37545313] [PMCID: PMC10404929] [DOI: 10.1098/rstb.2022.0336]
Abstract
The definition of the visual cortex is primarily based on the evidence that lesions of this area impair visual perception. However, this does not exclude that the visual cortex may process more than information of retinal origin alone, or that other brain structures contribute to vision. Indeed, research across the past decades has shown that non-visual information, such as neural activity related to reward expectation and value, locomotion, working memory and other sensory modalities, can modulate primary visual cortical responses to retinal inputs. Nevertheless, the function of this non-visual information is poorly understood. Here we review recent evidence, coming primarily from studies in rodents, arguing that non-visual and motor effects in the visual cortex play a role in visual processing itself, for instance by disentangling direct auditory effects on the visual cortex from the effects of sound-evoked orofacial movement. These findings are placed in a broader framework casting vision in terms of predictive processing under the control of frontal, reward- and motor-related systems. In contrast to the prevalent notion that vision is constructed exclusively by the visual cortical system, we propose that visual percepts are generated by a larger network (the extended visual system) spanning other sensory cortices, supramodal areas and frontal systems. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Cyriel M. A. Pennartz, Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands
- Matthijs N. Oude Lohuis, Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Champalimaud Research, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Umberto Olcese, Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands
26
Saleem AB, Busse L. Interactions between rodent visual and spatial systems during navigation. Nat Rev Neurosci 2023; 24:487-501. [PMID: 37380885] [DOI: 10.1038/s41583-023-00716-7]
Abstract
Many behaviours that are critical for animals to survive and thrive rely on spatial navigation. Spatial navigation, in turn, relies on internal representations about one's spatial location, one's orientation or heading direction and the distance to objects in the environment. Although the importance of vision in guiding such internal representations has long been recognized, emerging evidence suggests that spatial signals can also modulate neural responses in the central visual pathway. Here, we review the bidirectional influences between visual and navigational signals in the rodent brain. Specifically, we discuss reciprocal interactions between vision and the internal representations of spatial position, explore the effects of vision on representations of an animal's heading direction and vice versa, and examine how the visual and navigational systems work together to assess the relative distances of objects and other features. Throughout, we consider how technological advances and novel ethological paradigms that probe rodent visuo-spatial behaviours allow us to advance our understanding of how brain areas of the central visual pathway and the spatial systems interact and enable complex behaviours.
Affiliation(s)
- Aman B Saleem, UCL Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
- Laura Busse, Division of Neuroscience, Faculty of Biology, LMU Munich, Munich, Germany; Bernstein Centre for Computational Neuroscience Munich, Munich, Germany
27
Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023; 46:301-320. [PMID: 37428601] [PMCID: PMC7616138] [DOI: 10.1146/annurev-neuro-120722-100503]
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation and how they might be relied upon for sensory-driven decision-making, during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi, The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Mateo Velez-Fort, The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Troy W Margrie, The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
28
Yates JL, Coop SH, Sarch GH, Wu RJ, Butts DA, Rucci M, Mitchell JF. Detailed characterization of neural selectivity in free viewing primates. Nat Commun 2023; 14:3656. [PMID: 37339973] [PMCID: PMC10282080] [DOI: 10.1038/s41467-023-38564-9]
Abstract
Fixation constraints in visual tasks are ubiquitous in visual and cognitive neuroscience. Despite their widespread use, they require trained subjects, are limited by the accuracy of fixational eye movements, and ignore the role of eye movements in shaping visual input. To overcome these limitations, we developed a suite of hardware and software tools to study vision during natural behavior in untrained subjects. We measured visual receptive fields and tuning properties from multiple cortical areas of marmoset monkeys that freely viewed full-field noise stimuli. The resulting receptive fields and tuning curves from primary visual cortex (V1) and area MT match the selectivity reported in the literature using conventional approaches. We then combined free viewing with high-resolution eye tracking to make the first detailed 2D spatiotemporal measurements of foveal receptive fields in V1. These findings demonstrate the power of free viewing to characterize neural responses in untrained animals while simultaneously studying the dynamics of natural behavior.
Affiliation(s)
- Jacob L Yates, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA; Herbert Wertheim School of Optometry and Vision Science, UC Berkeley, Berkeley, CA, USA
- Shanna H Coop, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA; Neurobiology, Stanford University, Stanford, CA, USA
- Gabriel H Sarch, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Ruei-Jr Wu, Center for Visual Science, University of Rochester, Rochester, NY, USA; Institute of Optics, University of Rochester, Rochester, NY, USA
- Daniel A Butts, Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Michele Rucci, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Jude F Mitchell, Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
29
Shaw L, Wang KH, Mitchell J. Fast prediction in marmoset reach-to-grasp movements for dynamic prey. Curr Biol 2023; 33:2557-2565.e4. [PMID: 37279754] [PMCID: PMC10330526] [DOI: 10.1016/j.cub.2023.05.032]
Abstract
Primates have evolved sophisticated, visually guided reaching behaviors for interacting with dynamic objects, such as insects, during foraging.1,2,3,4,5 Reaching control in dynamic natural conditions requires active prediction of the target's future position to compensate for visuo-motor processing delays and to enhance online movement adjustments.6,7,8,9,10,11,12 Past reaching research in non-human primates mainly focused on seated subjects engaged in repeated ballistic arm movements to either stationary targets or targets that instantaneously change position during the movement.13,14,15,16,17 However, those approaches impose task constraints that limit the natural dynamics of reaching. A recent field study highlights predictive aspects of visually guided reaching during insect prey capture among wild marmoset monkeys.5 To examine the dynamics of similar natural behavior within a laboratory context, we developed an ecologically motivated, unrestrained reach-to-grasp task involving live crickets. We used multiple high-speed video cameras to capture the movements of common marmosets (Callithrix jacchus) and crickets stereoscopically and applied machine vision algorithms for marker-free object and hand tracking. Contrary to estimates under traditional constrained reaching paradigms, we find that reaching for dynamic targets can operate at remarkably short visuo-motor delays of around 80 ms, rivaling speeds typical of the oculomotor system during closed-loop visual pursuit.18 Multivariate linear regression modeling of the kinematic relationships between hand and cricket velocity revealed that predictions of the expected future location can compensate for visuo-motor delays during fast reaching. These results suggest a critical role of visual prediction in facilitating online movement adjustments for dynamic prey.
Affiliation(s)
- Luke Shaw, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY 14642, USA
- Kuan Hong Wang, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY 14642, USA
- Jude Mitchell, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14611, USA
30
St-Amand D, Baker CL. Model-Based Approach Shows ON Pathway Afferents Elicit a Transient Decrease of V1 Responses. J Neurosci 2023; 43:1920-1932. [PMID: 36759194] [PMCID: PMC10027028] [DOI: 10.1523/jneurosci.1220-22.2023]
Abstract
Neurons in the primary visual cortex (V1) receive excitation and inhibition from distinct parallel pathways processing lightness (ON) and darkness (OFF). V1 neurons overall respond more strongly to dark than light stimuli, consistent with a preponderance of darker regions in natural images, as well as with human psychophysics. However, it has been unclear whether this "dark-dominance" is due to more excitation from the OFF pathway or more inhibition from the ON pathway. To understand the mechanisms behind dark-dominance, we record electrophysiological responses of individual simple-type V1 neurons to natural image stimuli and then train biologically inspired convolutional neural networks to predict the neurons' responses. Analysis of a sample of 71 neurons (in anesthetized, paralyzed cats of either sex) revealed their responses to be more driven by dark than light stimuli, consistent with previous investigations. We show that this asymmetry is predominantly due to slower inhibition to dark stimuli rather than to stronger excitation from the thalamocortical OFF pathway. Consistent with dark-dominant neurons having faster responses than light-dominant neurons, we find dark-dominance to occur solely in the early latencies of neurons' responses. Neurons that are strongly dark-dominated also tend to be less orientation-selective. This novel approach gives us new insight into the dark-dominance phenomenon and provides an avenue to address new questions about excitatory and inhibitory integration in cortical neurons.

SIGNIFICANCE STATEMENT Neurons in the early visual cortex respond on average more strongly to dark than to light stimuli, but the mechanisms behind this bias have been unclear. Here we address this issue by combining single-unit electrophysiology with a novel machine learning model to analyze neurons' responses to natural image stimuli in primary visual cortex. Using these techniques, we find slower inhibition to light than to dark stimuli to be the leading mechanism behind stronger dark responses. This slower inhibition to light might help explain other empirical findings, such as why orientation selectivity is weaker at earlier response latencies. These results demonstrate how imbalances in excitation versus inhibition can give rise to response asymmetries in cortical neuron responses.
Affiliation(s)
- David St-Amand, McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec H3G 1A4, Canada
- Curtis L Baker, McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec H3G 1A4, Canada
31
Li JY, Glickfeld LL. Input-specific synaptic depression shapes temporal integration in mouse visual cortex. bioRxiv 2023:2023.01.30.526211. [PMID: 36778279] [PMCID: PMC9915496] [DOI: 10.1101/2023.01.30.526211]
Abstract
Efficient sensory processing requires the nervous system to adjust to ongoing features of the environment. In primary visual cortex (V1), neuronal activity strongly depends on recent stimulus history. Existing models can explain effects of prolonged stimulus presentation, but remain insufficient for explaining effects observed after shorter durations commonly encountered under natural conditions. We investigated the mechanisms driving adaptation in response to brief (100 ms) stimuli in L2/3 V1 neurons by performing in vivo whole-cell recordings to measure membrane potential and synaptic inputs. We find that rapid adaptation is generated by stimulus-specific suppression of excitatory and inhibitory synaptic inputs. Targeted optogenetic experiments reveal that these synaptic effects are due to input-specific short-term depression of transmission between layers 4 and 2/3. Thus, distinct mechanisms are engaged following brief and prolonged stimulus presentation and together enable flexible control of sensory encoding across a wide range of time scales.
Affiliation(s)
- Jennifer Y Li, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA
- Lindsey L Glickfeld, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA
32
Noel JP, Balzani E, Avila E, Lakshminarasimhan KJ, Bruni S, Alefantis P, Savin C, Angelaki DE. Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 2022; 11:e80280. [PMID: 36282071] [PMCID: PMC9668339] [DOI: 10.7554/elife.80280]
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, the parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the area traditionally considered sensory (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, and not of these areas with 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy, wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained functional subnetworks may be dynamically established to subserve (embodied) task strategies.
Affiliation(s)
- Jean-Paul Noel, Center for Neural Science, New York University, New York City, United States
- Edoardo Balzani, Center for Neural Science, New York University, New York City, United States
- Eric Avila, Center for Neural Science, New York University, New York City, United States
- Kaushik J Lakshminarasimhan, Center for Neural Science, New York University, New York City, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States
- Stefania Bruni, Center for Neural Science, New York University, New York City, United States
- Panos Alefantis, Center for Neural Science, New York University, New York City, United States
- Cristina Savin, Center for Neural Science, New York University, New York City, United States
- Dora E Angelaki, Center for Neural Science, New York University, New York City, United States