1. Zhu Z, Kim B, Doudlah R, Chang TY, Rosenberg A. Differential clustering of visual and choice- and saccade-related activity in macaque V3A and CIP. J Neurophysiol 2024;131:709-722. PMID: 38478896; PMCID: PMC11305645; DOI: 10.1152/jn.00285.2023.
Abstract
Neurons in sensory and motor cortices tend to aggregate in clusters with similar functional properties. Within the primate dorsal ("where") pathway, an important interface between three-dimensional (3-D) visual processing and motor-related functions consists of two hierarchically organized areas: V3A and the caudal intraparietal (CIP) area. In these areas, 3-D visual information, choice-related activity, and saccade-related activity converge, often at the single-neuron level. Characterizing the clustering of functional properties in areas with mixed selectivity, such as these, may help reveal organizational principles that support sensorimotor transformations. Here we quantified the clustering of visual feature selectivity, choice-related activity, and saccade-related activity by performing correlational and parametric comparisons of the responses of well-isolated, simultaneously recorded neurons in macaque monkeys. Each functional domain showed statistically significant clustering in both areas. However, there were also domain-specific differences in the strength of clustering across the areas. Visual feature selectivity and saccade-related activity were more strongly clustered in V3A than in CIP. In contrast, choice-related activity was more strongly clustered in CIP than in V3A. These differences in clustering may reflect the areas' roles in sensorimotor processing. Stronger clustering of visual and saccade-related activity in V3A may reflect a greater role in within-domain processing, as opposed to cross-domain synthesis. In contrast, stronger clustering of choice-related activity in CIP may reflect a greater role in synthesizing information across functional domains to bridge perception and action.

NEW & NOTEWORTHY: The occipital and parietal cortices of macaque monkeys are bridged by hierarchically organized areas V3A and CIP. These areas support 3-D visual transformations, carry choice-related activity during 3-D perceptual tasks, and possess saccade-related activity. This study quantifies the functional clustering of neuronal response properties within V3A and CIP for each of these domains. The findings reveal domain-specific cross-area differences in clustering that may reflect the areas' roles in sensorimotor processing.
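For a concrete sense of how functional clustering of simultaneously recorded neurons can be quantified, the minimal Python sketch below correlates tuning curves across neuron pairs and tests the observed clustering against a condition-shuffled null. It is an illustrative analysis under generic assumptions (synthetic data, median pairwise correlation as the clustering statistic), not the authors' exact method.

```python
# Illustrative sketch (not the authors' exact analysis): one common way to
# quantify functional clustering is to correlate the tuning curves of
# simultaneously recorded neuron pairs and compare the observed correlations
# against a condition-shuffled null distribution.
import numpy as np

rng = np.random.default_rng(0)

def pairwise_tuning_correlations(tuning):
    """tuning: (n_neurons, n_conditions) array of mean responses per condition."""
    n = tuning.shape[0]
    return np.array([np.corrcoef(tuning[i], tuning[j])[0, 1]
                     for i in range(n) for j in range(i + 1, n)])

def clustering_test(tuning, n_shuffles=1000):
    """Median pairwise correlation plus a permutation p-value obtained by
    shuffling condition labels independently per neuron (breaks shared tuning)."""
    observed = np.median(pairwise_tuning_correlations(tuning))
    null = np.array([
        np.median(pairwise_tuning_correlations(
            np.array([rng.permutation(row) for row in tuning])))
        for _ in range(n_shuffles)])
    return observed, np.mean(null >= observed)

# Demo with synthetic data: 12 neurons sharing a weak common tuning component
# across 8 stimulus conditions.
tuning = rng.normal(size=(12, 8)) + np.sin(np.linspace(0, 2 * np.pi, 8))
print(clustering_test(tuning, n_shuffles=200))
```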
Affiliation(s)
- Zikang Zhu
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
2. Thompson LW, Kim B, Rokers B, Rosenberg A. Hierarchical computation of 3D motion across macaque areas MT and FST. Cell Rep 2023;42:113524. PMID: 38064337; PMCID: PMC10791528; DOI: 10.1016/j.celrep.2023.113524.
Abstract
Computing behaviorally relevant representations of three-dimensional (3D) motion from two-dimensional (2D) retinal signals is critical for survival. To ascertain where and how the primate visual system performs this computation, we recorded from the macaque middle temporal (MT) area and its downstream target, the fundus of the superior temporal sulcus (area FST). Area MT is a key site of 2D motion processing, but its role in 3D motion processing is controversial, and the functions of FST remain largely unexplored. To distinguish representations of 3D motion from those of 2D retinal motion, we contrasted responses to multiple motion cues during a motion discrimination task. The results reveal a hierarchical transformation whereby many FST, but not MT, neurons are selective for 3D motion. Modeling results further show how generalized, cue-invariant representations of 3D motion in FST may be created by selectively integrating the output of 2D motion-selective MT neurons.
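For orientation on how 3D motion relates to 2D retinal signals, the standard viewing-geometry approximation (a textbook relation, not a result of this paper) links motion in depth to the difference between the two eyes' retinal velocities and lateral motion to their average. To first order, for a small object near the midline at distance $Z$ with interocular separation $I$:

$$ \dot{Z} \approx \frac{Z^{2}}{I}\,\big(\dot{\alpha}_{L} - \dot{\alpha}_{R}\big), \qquad \dot{X} \approx \frac{Z}{2}\,\big(\dot{\alpha}_{L} + \dot{\alpha}_{R}\big), $$

where $\dot{\alpha}_{L}$ and $\dot{\alpha}_{R}$ are the left- and right-eye angular velocities and the signs depend on the chosen convention. Selectivity for quantities like $\dot{Z}$, rather than for retinal velocity per se, is the kind of 3D motion representation at issue here.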
Affiliation(s)
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
- Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI 53705, USA
3. Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. PMID: 36734278; DOI: 10.1093/cercor/bhac541.
Abstract
Gaze changes can misalign the spatial reference frames that encode visual and vestibular signals in cortex, which may affect heading discrimination. Here, we systematically manipulated eye-in-head and head-on-body positions to change subjects' gaze direction and tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task, in which subjects controlled when to respond. We found that gaze changes induced substantial biases in perceived heading and increased discrimination thresholds and reaction times in all stimulus conditions. For visual stimuli, the gaze effects were driven by changes in eye-in-world position, and perceived heading was biased opposite to the direction of gaze. In contrast, the vestibular gaze effects were driven by changes in eye-in-head position, and perceived heading was biased in the same direction as gaze. Although the bias was reduced when visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and suggest that transformations of spatial reference frames underlie these effects.
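To make the benchmark concrete, a generic bounded-accumulation (drift-diffusion) sketch of reliability-weighted multisensory evidence accumulation, offered here only as a simplified stand-in for the extended model referenced above, is

$$ dx(t) = \big(w_{\mathrm{vis}}\,\mu_{\mathrm{vis}}(t) + w_{\mathrm{ves}}\,\mu_{\mathrm{ves}}(t)\big)\,dt + \sigma\,dW(t), \qquad w_{i} \propto \frac{1}{\sigma_{i}^{2}}, $$

where $x(t)$ is the accumulated evidence, $\mu_{i}(t)$ and $\sigma_{i}^{2}$ are the momentary evidence and its variance for each modality, and a choice is made when $|x(t)|$ reaches a bound. For independent cues combined in this way, combined-condition sensitivity is predicted to exceed that of either single cue (approximately $d'^{2}_{\mathrm{comb}} \approx d'^{2}_{\mathrm{vis}} + d'^{2}_{\mathrm{ves}}$ at matched decision times); the deviation reported above is measured against predictions of this kind.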
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song
- Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi
- Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
4. Doudlah R, Chang TY, Thompson LW, Kim B, Sunkara A, Rosenberg A. Parallel processing, hierarchical transformations, and sensorimotor associations along the ‘where’ pathway. eLife 2022;11:e78712. PMID: 35950921; PMCID: PMC9439678; DOI: 10.7554/eLife.78712.
Abstract
Visually guided behaviors require the brain to transform ambiguous retinal images into object-level spatial representations and implement sensorimotor transformations. These processes are supported by the dorsal ‘where’ pathway. However, the specific functional contributions of areas along this pathway remain elusive due in part to methodological differences across studies. We previously showed that macaque caudal intraparietal (CIP) area neurons possess robust 3D visual representations, carry choice- and saccade-related activity, and exhibit experience-dependent sensorimotor associations (Chang et al., 2020b). Here, we used a common experimental design to reveal parallel processing, hierarchical transformations, and the formation of sensorimotor associations along the ‘where’ pathway by extending the investigation to V3A, a major feedforward input to CIP. Higher-level 3D representations and choice-related activity were more prevalent in CIP than V3A. Both areas contained saccade-related activity that predicted the direction/timing of eye movements. Intriguingly, the time course of saccade-related activity in CIP aligned with the temporally integrated V3A output. Sensorimotor associations between 3D orientation and saccade direction preferences were stronger in CIP than V3A, and moderated by choice signals in both areas. Together, the results explicate parallel representations, hierarchical transformations, and functional associations of visual and saccade-related signals at a key juncture in the ‘where’ pathway.
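One simple way to formalize the temporally integrated V3A output mentioned above, offered purely as an illustration with the time constant $\tau$ as a free parameter rather than as the authors' model, is a leaky integrator of the V3A population response:

$$ \tau\,\frac{d r_{\mathrm{CIP}}(t)}{dt} = -\,r_{\mathrm{CIP}}(t) + r_{\mathrm{V3A}}(t) \quad\Longleftrightarrow\quad r_{\mathrm{CIP}}(t) = \frac{1}{\tau}\int_{-\infty}^{t} e^{-(t-s)/\tau}\, r_{\mathrm{V3A}}(s)\, ds, $$

so the putative CIP signal is a low-pass-filtered, temporally smoothed version of the V3A signal.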
Affiliation(s)
- Raymond Doudlah
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Ting-Yu Chang
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Lowell W Thompson
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Byounghoon Kim
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
- Ari Rosenberg
- Department of Neuroscience, University of Wisconsin-Madison, Madison, United States
5. Akam T, Lustig A, Rowland JM, Kapanaiah SKT, Esteve-Agraz J, Panniello M, Márquez C, Kohl MM, Kätzel D, Costa RM, Walton ME. Open-source, Python-based, hardware and software for controlling behavioural neuroscience experiments. eLife 2022;11:e67846. PMID: 35043782; PMCID: PMC8769647; DOI: 10.7554/eLife.67846.
Abstract
Laboratory behavioural tasks are an essential research tool. As questions asked of behaviour and brain activity become more sophisticated, the ability to specify and run richly structured tasks becomes more important. An increasing focus on reproducibility also necessitates accurate communication of task logic to other researchers. To these ends, we developed pyControl, a system of open-source hardware and software for controlling behavioural experiments comprising a simple yet flexible Python-based syntax for specifying tasks as extended state machines, hardware modules for building behavioural setups, and a graphical user interface designed for efficiently running high-throughput experiments on many setups in parallel, all with extensive online documentation. These tools make it quicker, easier, and cheaper to implement rich behavioural tasks at scale. As important, pyControl facilitates communication and reproducibility of behavioural experiments through a highly readable task definition syntax and self-documenting features. Here, we outline the system's design and rationale, present validation experiments characterising system performance, and demonstrate example applications in freely moving and head-fixed mouse behaviour.
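To illustrate what specifying "tasks as extended state machines" means in practice, here is a minimal, self-contained Python sketch of a two-state reward task. It deliberately does not use pyControl's actual API (see the pyControl documentation for the real syntax); the class and method names below are invented for illustration only.

```python
# Minimal extended-state-machine sketch of a behavioural task.
# NOTE: illustrative only -- this is NOT pyControl's API; names are invented.
import time

class Task:
    """A state machine with state-entry handlers, event handlers, and a timer."""
    def __init__(self, initial_state):
        self.state = initial_state
        self.timer_deadline = None

    def goto(self, state):
        print(f"-> {state}")
        self.state = state
        getattr(self, "enter_" + state, lambda: None)()

    def set_timer(self, seconds):
        # In a real framework this would later emit a 'timer' event; in this
        # sketch the 'timer' event is simply supplied by the simulated stream.
        self.timer_deadline = time.monotonic() + seconds

    def process(self, event):
        # Dispatch the event to the current state's handler, e.g. in_wait_for_poke().
        getattr(self, "in_" + self.state, lambda e: None)(event)

class FixedRatioTask(Task):
    """Poke -> reward -> inter-trial interval -> back to waiting."""
    def enter_deliver_reward(self):
        print("open reward valve for 50 ms")   # hardware call would go here
        self.set_timer(2.0)                    # inter-trial interval

    def in_wait_for_poke(self, event):
        if event == "poke":
            self.goto("deliver_reward")

    def in_deliver_reward(self, event):
        if event == "timer":
            self.goto("wait_for_poke")

task = FixedRatioTask("wait_for_poke")
for ev in ["poke", "timer", "poke"]:   # simulated event stream
    task.process(ev)
```

Keeping the task logic in one short, readable definition of states, events, and transitions is what makes this style of specification easy to communicate and reproduce across labs.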
Affiliation(s)
- Thomas Akam
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Champalimaud Neuroscience Program, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Andy Lustig
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- James M Rowland
- Department of Physiology Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Joan Esteve-Agraz
- Instituto de Neurociencias (Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas), Sant Joan d’Alacant, Spain
- Mariangela Panniello
- Department of Physiology Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Cristina Márquez
- Instituto de Neurociencias (Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas), Sant Joan d’Alacant, Spain
- Michael M Kohl
- Department of Physiology Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Dennis Kätzel
- Institute of Applied Physiology, Ulm University, Ulm, Germany
- Rui M Costa
- Champalimaud Neuroscience Program, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Department of Neuroscience and Neurology, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Mark E Walton
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
6. Thompson LW, Kim B, Zhu Z, Rokers B, Rosenberg A. Perspective Cues Make Eye-specific Contributions to 3-D Motion Perception. J Cogn Neurosci 2021;34:192-208. PMID: 34813655; PMCID: PMC8692976; DOI: 10.1162/jocn_a_01781.
Abstract
Robust 3-D visual perception is achieved by integrating stereoscopic and perspective cues. The canonical model describing the integration of these cues assumes that perspective signals sensed by the left and right eyes are indiscriminately pooled into a single representation that contributes to perception. Here, we show that this model fails to account for 3-D motion perception. We measured the sensitivity of male macaque monkeys to 3-D motion signaled by left-eye perspective cues, right-eye perspective cues, stereoscopic cues, and all three cues combined. The monkeys exhibited idiosyncratic differences in their biases and sensitivities for each cue, including left- and right-eye perspective cues, suggesting that the signals undergo at least partially separate neural processing. Importantly, sensitivity to combined cue stimuli was greater than predicted by the canonical model, which previous studies found to account for the perception of 3-D orientation in both humans and monkeys. Instead, 3-D motion sensitivity was best explained by a model in which stereoscopic cues were integrated with left- and right-eye perspective cues whose representations were at least partially independent. These results indicate that the integration of perspective and stereoscopic cues is a shared computational strategy across 3-D processing domains. However, they also reveal a fundamental difference in how left- and right-eye perspective signals are represented for 3-D orientation versus motion perception. This difference results in more effective use of available sensory information in the processing of 3-D motion than orientation and may reflect the temporal urgency of avoiding and intercepting moving objects.
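To see why the two architectures make different quantitative predictions, consider the standard optimal (inverse-variance-weighted) combination rule, shown here only as a generic illustration rather than the authors' exact formulation. Pooling the eyes' perspective signals into one channel predicts

$$ \frac{1}{\sigma_{\mathrm{comb}}^{2}} = \frac{1}{\sigma_{\mathrm{stereo}}^{2}} + \frac{1}{\sigma_{\mathrm{persp}}^{2}}, $$

whereas the fully independent limit of the eye-specific architecture predicts

$$ \frac{1}{\sigma_{\mathrm{comb}}^{2}} = \frac{1}{\sigma_{\mathrm{stereo}}^{2}} + \frac{1}{\sigma_{L}^{2}} + \frac{1}{\sigma_{R}^{2}}, $$

where smaller $\sigma$ corresponds to higher sensitivity. Keeping the left- and right-eye perspective representations at least partially separate therefore allows combined-cue sensitivity to exceed the canonical prediction, as observed.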
7. Kapanaiah SKT, van der Veen B, Strahnen D, Akam T, Kätzel D. A low-cost open-source 5-choice operant box system optimized for electrophysiology and optophysiology in mice. Sci Rep 2021;11:22279. PMID: 34782697; PMCID: PMC8593009; DOI: 10.1038/s41598-021-01717-1.
Abstract
Operant boxes enable the application of complex behavioural paradigms to support circuit neuroscience and drug discovery research. However, commercial operant box systems are expensive and often not optimized for combining behaviour with neurophysiology. Here we introduce a fully open-source, Python-based operant box system in a 5-choice design (pyOS-5) that enables assessment of multiple cognitive and affective functions. It is optimized for fast turnover between animals and for testing tethered mice during simultaneous physiological recordings or optogenetic manipulation. For reward delivery, we developed peristaltic and syringe pumps based on a stepper motor and 3D-printed parts. Tasks are specified using a Python-based syntax implemented on custom-designed printed circuit boards that are commercially available at low cost. We developed an open-source graphical user interface (GUI) and task definition scripts for assays of operant learning, attention, impulsivity, working memory, and cognitive flexibility, so end users do not need programming skills. All behavioural events are recorded with millisecond resolution, and TTL outputs and inputs allow straightforward integration with physiological recordings and closed-loop manipulations. This combination of features realizes a cost-effective, nose-poke-based operant box system for reliable circuit-neuroscience experiments investigating correlates of cognition and emotion in large cohorts of subjects.
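As an illustration of the kind of TTL-based synchronization this design targets (a generic sketch, not code shipped with pyOS-5), behavioural event times can be mapped onto the electrophysiology clock by fitting a line through the timestamps of shared sync pulses recorded on both systems:

```python
# Generic sketch: align behavioural timestamps to an ephys clock via shared
# sync TTL pulses recorded by both systems. Not part of pyOS-5 itself.
import numpy as np

def align_clocks(sync_behaviour_s, sync_ephys_s, event_times_behaviour_s):
    """Fit ephys_time = a * behaviour_time + b from paired sync-pulse times,
    then convert behavioural event times into the ephys time base."""
    a, b = np.polyfit(sync_behaviour_s, sync_ephys_s, deg=1)
    return a * np.asarray(event_times_behaviour_s) + b

# Example with made-up numbers: ephys clock runs 100 ppm fast with a 2.5 s offset.
sync_b = np.arange(0, 600, 10.0)          # sync pulses every 10 s on the box
sync_e = 2.5 + sync_b * 1.0001            # the same pulses on the ephys clock
print(align_clocks(sync_b, sync_e, [12.3, 45.6, 301.2]))
```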
Affiliation(s)
- Daniel Strahnen
- Institute of Applied Physiology, Ulm University, Ulm, Germany
- Thomas Akam
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Dennis Kätzel
- Institute of Applied Physiology, Ulm University, Ulm, Germany
8. Optimized but Not Maximized Cue Integration for 3D Visual Perception. eNeuro 2020;7:ENEURO.0411-19.2019. PMID: 31836597; PMCID: PMC6948924; DOI: 10.1523/eneuro.0411-19.2019.
Abstract
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates. Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left-eye perspective cues, and right-eye perspective cues were represented by three independent populations yielded roughly twice the precision exhibited by the monkeys. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
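For reference, the textbook reliability-weighted cue-integration rule against which such dynamic reweighting is typically evaluated (a standard formulation, not the authors' specific neural model) is

$$ \hat{S} = \sum_{i} w_{i}\,\hat{S}_{i}, \qquad w_{i} = \frac{1/\sigma_{i}^{2}}{\sum_{j} 1/\sigma_{j}^{2}}, \qquad \sigma_{\mathrm{comb}}^{2} = \Big(\sum_{i} \sigma_{i}^{-2}\Big)^{-1}, $$

so as a cue's pose-dependent reliability $1/\sigma_{i}^{2}$ changes, its weight in the combined estimate changes with it, and the combined-cue variance is never worse than that of the best single cue.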
9. Chang TY, Doudlah R, Kim B, Sunkara A, Thompson LW, Lowe ME, Rosenberg A. Functional links between sensory representations, choice activity, and sensorimotor associations in parietal cortex. eLife 2020;9:e57968. PMID: 33078705; PMCID: PMC7641584; DOI: 10.7554/eLife.57968.
Abstract
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
Affiliation(s)
- Ting-Yu Chang
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Byounghoon Kim
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Meghan E Lowe
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, United States