1. Pickard K, Davidson MJ, Kim S, Alais D. Incongruent active head rotations increase visual motion detection thresholds. Neurosci Conscious 2024; 2024:niae019. PMID: 38757119; PMCID: PMC11097904; DOI: 10.1093/nc/niae019.
Abstract
Attributing a visual motion signal to its correct source, be that external object motion, self-motion, or some combination of both, seems effortless, yet often involves disentangling a complex web of motion signals. The existing literature focuses on either translational motion (heading) or eye movements, leaving much to be learnt about the influence of a wider range of self-motions, such as active head rotations, on visual motion perception. This study investigated how active head rotations affect visual motion detection thresholds, comparing conditions where visual motion and head-turn direction were either congruent or incongruent. Participants judged the direction of a visual motion stimulus while rotating their head or remaining stationary, using a fixation-locked virtual-reality display with integrated head-movement recordings. Thresholds to perceive visual motion were higher in both active head-rotation conditions than when stationary, though no differences were found between the congruent and incongruent conditions. Participants also showed a significant bias to report visual motion travelling in the same direction as the head rotation. Together, these results demonstrate that active head rotations increase visual motion perceptual thresholds, particularly in cases of incongruent visual and active vestibular stimulation.
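The two effects reported here, higher thresholds and a directional bias, map onto the two parameters of a standard psychometric-function fit. As a hypothetical sketch (not the authors' analysis code; parameter names and values are invented for illustration), a cumulative-Gaussian model separates sensitivity from bias:

```python
import math

def p_report_right(signal, pse, sigma):
    """Cumulative-Gaussian psychometric function: probability of
    reporting 'rightward' motion given a signed motion signal.
    pse shifts the curve laterally (response bias), while sigma
    sets its slope, so a larger sigma corresponds to a higher
    detection threshold."""
    z = (signal - pse) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# Illustrative (invented) parameters: head rotation raises sigma
# (higher threshold) and shifts the PSE so that an ambiguous
# stimulus is more often reported as moving with the head.
p_stationary = p_report_right(0.0, pse=0.0, sigma=0.05)    # unbiased: 0.5
p_rotating = p_report_right(0.0, pse=-0.04, sigma=0.12)    # biased above 0.5
```

In this framing, the study's threshold result is a change in sigma across conditions, and the reported bias is a PSE shift in the direction of the head turn.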
Affiliation(s)
- Kate Pickard: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Matthew J Davidson: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Sujin Kim: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- David Alais: School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
2. Liu Y, Tian J, Martin-Gomez A, Arshad Q, Armand M, Kheradmand A. Autokinesis Reveals a Threshold for Perception of Visual Motion. Neuroscience 2024; 543:101-107. PMID: 38432549; PMCID: PMC10965040; DOI: 10.1016/j.neuroscience.2024.02.001.
Abstract
In natural viewing conditions, the brain optimally integrates retinal and extraretinal signals to maintain a stable visual perception. These mechanisms, however, may fail in circumstances where extraction of a motion signal is less viable, such as in impoverished visual scenes. This can result in a phenomenon known as autokinesis, in which one may experience apparent motion of a small visual stimulus in an otherwise completely dark environment. In this study, we examined the effect of autokinesis on visual perception of motion in human observers, using a novel method with optical tracking in which the visual motion was reported manually by the observer. The results show that at lower speeds of motion, the perceived direction of motion was more aligned with the effect of autokinesis, whereas in the light, or at higher speeds in the dark, it was more aligned with the actual direction of motion. These findings have important implications for understanding how the stability of visual representation in the brain affects accurate perception of motion signals.
Affiliation(s)
- Yihao Liu: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, MD 21218, USA
- Jing Tian: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Alejandro Martin-Gomez: Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, MD 21218, USA
- Qadeer Arshad: Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Mehran Armand: Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, MD 21218, USA
- Amir Kheradmand: Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Neuroscience, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
3. Frattini D, Rosén N, Wibble T. A Proposed Mechanism for Visual Vertigo: Post-Concussion Patients Have Higher Gain From Visual Input Into Subcortical Gaze Stabilization. Invest Ophthalmol Vis Sci 2024; 65:26. PMID: 38607620; PMCID: PMC11018265; DOI: 10.1167/iovs.65.4.26.
Abstract
Purpose: Post-concussion syndrome (PCS) is commonly associated with dizziness and visual motion sensitivity. This case-control study explored altered motion processing in PCS by measuring gaze stabilization as a reflection of the brain's capacity to integrate motion, aiming to uncover mechanisms of injury where invasive subcortical recordings are not feasible. Methods: A total of 554 eye movements were analyzed in 10 PCS patients and nine healthy controls across 171 trials. Optokinetic and vestibulo-ocular reflexes were recorded using a head-mounted eye tracker while participants were exposed to visual, vestibular, and visuo-vestibular motion stimulation in the roll plane. Torsional and vergence eye movements were analyzed in terms of slow-phase velocities, gain, nystagmus frequency, and sensory-specific contributions toward gaze stabilization. Results: Participants expressed eye-movement responses consistent with expected gaze stabilization: slow phases were fastest for visuo-vestibular trials and slowest for visual stimulation (P < 0.001), and increased with stimulus acceleration (P < 0.001). Concussed patients demonstrated increased gain from visual input to gaze stabilization (P = 0.005), faster slow phases (P = 0.013), earlier nystagmus beats (P = 0.003), and higher relative visual influence over the gaze-stabilizing response (P = 0.001), with robust effect sizes despite the limited sample size. Conclusions: The enhanced neural responsiveness to visual motion in PCS, combined with semi-intact visuo-vestibular integration, points to a subcortical hierarchy for altered gaze stabilization. Drawing on comparable animal studies, the findings suggest that concussed patients may suffer from diffuse injuries to inhibitory pathways for optokinetic information, likely early in the visuo-vestibular hierarchy of sensorimotor integration. These findings offer context for common but elusive symptoms, presenting a neurological explanation for motion sensitivity and visual vertigo in PCS.
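The central measure in this study, gain, is simply the ratio of the compensatory eye response to the stimulus driving it. A minimal sketch (illustrative only, not the authors' analysis pipeline; the numbers are invented):

```python
def stabilization_gain(slow_phase_velocities, stimulus_velocity):
    """Gaze-stabilization gain: mean slow-phase eye velocity divided
    by stimulus velocity. A gain of 1.0 means the eye fully
    compensates the stimulus; a gain elevated relative to controls,
    as reported for PCS patients, indicates heightened visual drive
    into the gaze-stabilizing response."""
    mean_velocity = sum(slow_phase_velocities) / len(slow_phase_velocities)
    return mean_velocity / stimulus_velocity

# Hypothetical example: slow phases of 5 and 7 deg/s against a
# 10 deg/s roll stimulus give a gain of 0.6.
example_gain = stabilization_gain([5.0, 7.0], 10.0)
```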
Affiliation(s)
- Davide Frattini: Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik Eye Hospital, Karolinska Institutet, Stockholm, Sweden
- Niklas Rosén: Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik Eye Hospital, Karolinska Institutet, Stockholm, Sweden
- Tobias Wibble: Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik Eye Hospital, Karolinska Institutet, Stockholm, Sweden
4. Hu Y, Wang H, Joshua M, Yang Y. Sensorimotor-linked reward modulates smooth pursuit eye movements in monkeys. Front Neurosci 2024; 17:1297914. PMID: 38264498; PMCID: PMC10803645; DOI: 10.3389/fnins.2023.1297914.
Abstract
Reward is essential for shaping behavior. Using sensory cues to signal forthcoming rewards, previous studies have demonstrated powerful effects of reward on behavior. Nevertheless, the impact of reward on the sensorimotor transformation, particularly when reward is linked to behavior, remains uncertain. In this study, we investigated how reward modulates smooth pursuit eye movements in monkeys. Three distinct associations between reward and eye movements were tested in independent blocks. Results indicated that reward increased eye velocity during steady-state pursuit, rather than during initiation. The influence depended on the particular association between behavior and reward: faster eye velocity was linked with reward. Neither rewarding slower eye movements nor randomizing rewards had a significant effect on behavior. The findings support the existence of distinct mechanisms in the initiation and steady-state phases of pursuit, and contribute to a deeper understanding of how reward interacts with these two periods of pursuit.
Affiliation(s)
- Yongxiang Hu: Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Huan Wang: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Mati Joshua: Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yan Yang: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
5. Rowe EG, Zhang Y, Garrido MI. Evidence for adaptive myelination of subcortical shortcuts for visual motion perception in healthy adults. Hum Brain Mapp 2023; 44:5641-5654. PMID: 37608684; PMCID: PMC10619379; DOI: 10.1002/hbm.26467.
Abstract
Conscious visual motion information follows a cortical pathway from the retina to the lateral geniculate nucleus (LGN) and on to the primary visual cortex (V1) before arriving at the middle temporal visual area (MT/V5). Alternative subcortical pathways that bypass V1 are thought to convey unconscious visual information. One flows from the retina to the pulvinar (PUL) and on to MT, while the other directly connects the LGN to MT. Evidence for these pathways comes from non-human primates and from modest-sized studies in humans with brain lesions. The aim of the current study was therefore to reconstruct these pathways in a large sample of neurotypical individuals and to determine the degree to which they are myelinated, which would suggest rapid information flow. We used the publicly available 7T (N = 98; 'discovery') and 3T (N = 381; 'validation') diffusion magnetic resonance imaging datasets from the Human Connectome Project to reconstruct the PUL-MT (including all subcompartments of the PUL) and LGN-MT pathways. We found more fibre tracts with greater density in the left hemisphere. Although the left PUL-MT path was denser, the bilateral LGN-MT tracts were more heavily myelinated, suggesting faster signal transduction. We suggest that this apparent discrepancy may be due to 'adaptive myelination': more frequent use of the LGN-MT pathway leads to greater myelination and faster overall signal transmission.
Affiliation(s)
- Elise G. Rowe: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Yubing Zhang: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Marta I. Garrido: Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Victoria, Australia; Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Parkville, Victoria, Australia
6. Mano O, Choi M, Tanaka R, Creamer MS, Matos NCB, Shomar JW, Badwan BA, Clandinin TR, Clark DA. Long-timescale anti-directional rotation in Drosophila optomotor behavior. eLife 2023; 12:e86076. PMID: 37751469; PMCID: PMC10522332; DOI: 10.7554/elife.86076.
Abstract
Locomotor movements cause visual images to be displaced across the eye, a retinal slip that is counteracted by stabilizing reflexes in many animals. In insects, optomotor turning causes the animal to turn in the direction of rotating visual stimuli, thereby reducing retinal slip and stabilizing trajectories through the world. This behavior has formed the basis for extensive dissections of motion vision. Here, we report that under certain stimulus conditions, two Drosophila species, including the widely studied Drosophila melanogaster, can suppress and even reverse the optomotor turning response over several seconds. Such 'anti-directional turning' is most strongly evoked by long-lasting, high-contrast, slow-moving visual stimuli that are distinct from those that promote syn-directional optomotor turning. Anti-directional turning, like the syn-directional optomotor response, requires the local motion-detecting neurons T4 and T5. A subset of lobula plate tangential cells, the CH cells, shows involvement in these responses. Imaging from a variety of direction-selective cells in the lobula plate shows no evidence of dynamics that match the behavior, suggesting that the observed inversion in turning direction emerges downstream of the lobula plate. Further, anti-directional turning declines with age and exposure to light. These results show that Drosophila optomotor turning behaviors contain rich, stimulus-dependent dynamics that are inconsistent with simple reflexive stabilization responses.
Affiliation(s)
- Omer Mano: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, United States
- Minseung Choi: Department of Neurobiology, Stanford University, Stanford, United States
- Ryosuke Tanaka: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Matthew S Creamer: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Natalia CB Matos: Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Joseph W Shomar: Department of Physics, Yale University, New Haven, United States
- Bara A Badwan: Department of Chemical Engineering, Yale University, New Haven, United States
- Damon A Clark: Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, United States; Interdepartmental Neuroscience Program, Yale University, New Haven, United States; Department of Physics, Yale University, New Haven, United States; Department of Neuroscience, Yale University, New Haven, United States
7. Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. PMID: 37725387; PMCID: PMC10513114; DOI: 10.1167/jov.23.10.11.
Abstract
Two-frame kinematograms have been extensively used to study motion perception in human vision. Measurements of the direction-discrimination performance limits (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how the variability in the spatial frequency content of individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink noise kinematograms, in which images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward-rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, 100% correct perceptual direction-discrimination performance was observed when the difference between the central spatial frequencies of the first and second images was small, deteriorating rapidly to chance when increased further. Perceptual dependencies moved closer to the OFR ones when subjects were allowed to grade the strength of perceived motion. Response asymmetries common for perceptual judgments and the OFRs suggest that they rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model which combined two factors: (1) an excitatory drive determined by a power law sum of stimulus Fourier components' contributions, scaled by (2) a contrast normalization mechanism. 
Thus, in addition to traditional studies relying on perceptual reports, the OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
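The two-factor model described in the final sentence can be sketched directly: an excitatory drive computed as a power-law sum of Fourier-component contributions, divided by a contrast-normalization term. This is a schematic reading of the abstract, with invented parameter names and values, not the paper's fitted model:

```python
def ofr_response(component_amplitudes, component_weights, p=2.0, sigma=0.1):
    """Two-factor OFR model sketch: (1) an excitatory drive given by
    a power-law sum of weighted Fourier-component amplitudes, scaled
    by (2) a divisive contrast-normalization term that pools total
    stimulus energy. The exponent p, weights, and semi-saturation
    constant sigma are illustrative placeholders."""
    drive = sum(w * (a ** p)
                for a, w in zip(component_amplitudes, component_weights))
    energy = sum(a ** 2 for a in component_amplitudes)
    return drive / (sigma ** 2 + energy)
```

The divisive denominator captures the characteristic signature of normalization: doubling all component contrasts raises the response by much less than the power-law drive alone would predict.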
Affiliation(s)
- Boris M Sheliga: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
8. Wallisch P, Mackey WE, Karlovich MW, Heeger DJ. The visible gorilla: Unexpectedly fast, not physically salient, objects are noticeable. Proc Natl Acad Sci U S A 2023; 120:e2214930120. PMID: 37216543; PMCID: PMC10235989; DOI: 10.1073/pnas.2214930120.
Abstract
It is widely believed that observers can fail to notice clearly visible unattended objects, even if they are moving. Here, we created parametric tasks to test this belief and report the results of three high-powered experiments (total n = 4,493) indicating that this effect is strongly modulated by the speed of the unattended object. Specifically, fast, but not slow, objects are readily noticeable, whether they are attended or not. These results suggest that fast motion serves as a potent exogenous cue that overrides task-focused attention, showing that fast speeds, not long exposure duration or physical salience, strongly diminish inattentional blindness effects.
Affiliation(s)
- Pascal Wallisch: Department of Psychology, New York University, New York, NY 10003
- Wayne E. Mackey: Department of Psychology, New York University, New York, NY 10003
- David J. Heeger: Department of Psychology, New York University, New York, NY 10003
9. Gaglianese A, Fracasso A, Fernandes FG, Harvey B, Dumoulin SO, Petridou N. Mechanisms of speed encoding in the human middle temporal cortex measured by 7T fMRI. Hum Brain Mapp 2023; 44:2050-2061. PMID: 36637226; PMCID: PMC9980888; DOI: 10.1002/hbm.26193.
Abstract
Perception of dynamic scenes in our environment results from the evaluation of visual features such as the fundamental spatial and temporal frequency components of a moving object. The ratio between these two components represents the object's speed of motion. The human middle temporal cortex (hMT+) plays a crucial biological role in the direct encoding of object speed. However, the link between hMT+ speed encoding and the spatiotemporal frequency components of a moving object is still underexplored. Here, we recorded high-resolution 7T blood-oxygen-level-dependent (BOLD) responses to different visual motion stimuli as a function of their fundamental spatial and temporal frequency components. We fitted each hMT+ BOLD response with a 2D Gaussian model allowing for two different speed-encoding mechanisms: (1) distinct and independent selectivity for the spatial and temporal frequencies of the visual motion stimuli; (2) pure tuning for the speed of motion. We show that both mechanisms occur but in different neuronal groups within hMT+, with the largest subregion of the complex showing separable tuning for the spatial and temporal frequency of the visual stimuli. Both mechanisms were highly reproducible within participants, reconciling single-cell recordings from MT in animals that have shown both encoding mechanisms. Our findings confirm that a more complex process is involved in the perception of speed than initially thought and suggest that hMT+ plays a primary role in the evaluation of the spatial features of the moving visual input.
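The two candidate mechanisms contrasted in this study can be written down explicitly. Using log-Gaussian tuning, a common convention in this literature (the functional form, bandwidths, and preferred values below are illustrative assumptions, not the paper's fitted 2D Gaussian model): a separable mechanism peaks at one fixed spatial and temporal frequency, while a speed-tuned mechanism responds to their ratio (speed = TF/SF) regardless of the individual components.

```python
import math

def separable_tuning(sf, tf, sf_pref, tf_pref, bw=1.0):
    """Independent spatial- and temporal-frequency selectivity:
    the response peaks only at the joint point (sf_pref, tf_pref)."""
    d_sf = math.log2(sf / sf_pref)
    d_tf = math.log2(tf / tf_pref)
    return math.exp(-(d_sf ** 2 + d_tf ** 2) / (2.0 * bw ** 2))

def speed_tuning(sf, tf, speed_pref, bw=1.0):
    """Pure speed tuning: the response depends only on tf / sf
    (deg/s), so it is constant along iso-speed lines in the
    SF-TF plane."""
    d = math.log2((tf / sf) / speed_pref)
    return math.exp(-(d ** 2) / (2.0 * bw ** 2))
```

The diagnostic difference: a speed-tuned unit preferring 8 deg/s responds equally to (1 c/deg, 8 Hz) and (2 c/deg, 16 Hz), both 8 deg/s stimuli, whereas a separable unit tuned to (1 c/deg, 8 Hz) responds less to the second combination.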
Affiliation(s)
- Anna Gaglianese: The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands; Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
- Alessio Fracasso: Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands; School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK; Spinoza Center for Neuroimaging, Amsterdam, Netherlands
- Francisco G. Fernandes: Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands
- Ben Harvey: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Serge O. Dumoulin: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Natalia Petridou: Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
10.
Abstract
Self-motion perception involves an interaction between vestibular and visual brain regions. In the lateral brain, it includes the parieto-insular vestibular cortex and the posterior insular cortex. In the medial cortex, the cingulate sulcus visual (CSv) area is known to process visual-vestibular cues. Here, we show that the vestibular-visual network of the medial cortex extends beyond area CSv. We examined brain activations of 36 healthy right-handed participants by functional magnetic resonance imaging (fMRI) during stimulation with caloric vestibular, thermal or visual motion cues. Consistent with previous research, we found that area CSv responded to both vestibular and visual cues but not to thermal cues. Moreover, the V6 complex and the precuneus motion (PcM) area responded primarily to (laminar-translational) visual motion cues. However, we also observed a region inferior to CSv within the pericallosal sulcus (vicinity of anterior retrosplenial) that primarily responded to vestibular cues. This vestibular pericallosal sulcus (vPCS) region did not respond to either visual or thermal cues. It was also distinct from a more posterior motion-sensitive region in the retrosplenial complex (mRSC) that responded to (radial) visual motion but not to vestibular and thermal cues. Together, our results suggest that the vestibular-visual network in the medial cortex not only includes areas CSv, PcM, and the V6 complex, but two additional brain regions adjacent to the callosum. These two brain regions exhibit similarities in terms of their locations and responses to vestibular and visual cues with self-motion related brain regions recently described in non-human primates.
Affiliation(s)
- Anton L Beer: Psychology, University of Regensburg, Regensburg, Germany
- Markus Becker: Psychology, University of Regensburg, Regensburg, Germany
11. Levi AJ, Zhao Y, Park IM, Huk AC. Sensory and Choice Responses in MT Distinct from Motion Encoding. J Neurosci 2023; 43:2090-2103. PMID: 36781221; PMCID: PMC10042117; DOI: 10.1523/jneurosci.0267-22.2023.
Abstract
The macaque middle temporal (MT) area is well known for its visual motion selectivity and relevance to motion perception, but the possibility of it also reflecting higher-level cognitive functions has largely been ignored. We tested for effects of task performance distinct from sensory encoding by manipulating subjects' temporal evidence-weighting strategy during a direction discrimination task while performing electrophysiological recordings from groups of MT neurons in rhesus macaques (one male, one female). This revealed multiple components of MT responses that were, surprisingly, not interpretable as behaviorally relevant modulations of motion encoding, or as bottom-up consequences of the readout of motion direction from MT. The time-varying motion-driven responses of MT were strongly affected by our strategic manipulation-but with time courses opposite the subjects' temporal weighting strategies. Furthermore, large choice-correlated signals were represented in population activity distinct from its motion responses, with multiple phases that lagged psychophysical readout and even continued after the stimulus (but which preceded motor responses). In summary, a novel experimental manipulation of strategy allowed us to control the time course of readout to challenge the correlation between sensory responses and choices, and population-level analyses of simultaneously recorded ensembles allowed us to identify strong signals that were so distinct from direction encoding that conventional, single-neuron-centric analyses could not have revealed or properly characterized them. 
Together, these approaches revealed multiple cognitive contributions to MT responses that are task related but not functionally relevant to encoding or decoding of motion for psychophysical direction discrimination, providing a new perspective on the assumed status of MT as a simple sensory area.
Significance Statement: This study extends understanding of the middle temporal (MT) area beyond its representation of visual motion. Combining multineuron recordings, population-level analyses, and controlled manipulation of task strategy, we exposed signals that depended on changes in temporal weighting strategy, but did not manifest as feedforward effects on behavior. This was demonstrated by (1) an inverse relationship between temporal dynamics of behavioral readout and sensory encoding, (2) a choice-correlated signal that always lagged the stimulus time points most correlated with decisions, and (3) a distinct choice-correlated signal after the stimulus. These findings invite re-evaluation of MT for functions outside of its established sensory role and highlight the power of experimenter-controlled changes in temporal strategy, coupled with recording and analysis approaches that transcend the single-neuron perspective.
Affiliation(s)
- Aaron J Levi: Center for Perceptual Systems, Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas 78705
- Yuan Zhao: Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794
- Il Memming Park: Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794
- Alexander C Huk: Center for Perceptual Systems, Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas 78705; Fuster Laboratory, University of California Los Angeles, Los Angeles, CA 90095
12. Yurt P, Calapai A, Mundry R, Treue S. Assessing cognitive flexibility in humans and rhesus macaques with visual motion and neutral distractors. Front Psychol 2022; 13:1047292. PMID: 36605264; PMCID: PMC9807625; DOI: 10.3389/fpsyg.2022.1047292.
Abstract
Introduction: Cognitive flexibility is the ability of an individual to make behavioral adjustments in response to internal and/or external changes. While it has been reported in a wide variety of species, established paradigms to assess cognitive flexibility vary between humans and non-human animals, making systematic comparisons difficult to interpret. Methods: We developed a computer-based paradigm to assess cognitive flexibility in humans and non-human primates. Our paradigm (1) uses a classical reversal-learning structure in combination with a set-shifting approach (4 stimuli and 3 rules) to assess flexibility at various levels; (2) employs motion as one of three possible contextual rules; (3) comprises elements that allow foraging-like and random interaction, i.e., instances where the animals operate the task without following a strategy, to potentially minimize frustration in favor of more positive engagement. Results and Discussion: We show that motion can be used as a feature dimension (in addition to the commonly used shape and color) to assess cognitive flexibility. Given the way motion is processed in the primate brain, we argue that this dimension is an ideal candidate in situations where a non-binary rule set is needed and where participants might not be able to fully grasp other visual information in the stimulus (e.g., quantity in the Wisconsin Card Sorting Test). All participants in our experiment flexibly shifted to and from motion-based rules as well as color- and shape-based rules, but did so with different proficiencies. Overall, we believe that with such an approach it is possible to better characterize the evolution of cognitive flexibility in primates, as well as to develop more efficient tools to diagnose and treat various executive-function deficits.
Affiliation(s)
- Pinar Yurt
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; Georg-August University School of Science, Goettingen, Germany
- Antonino Calapai
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Roger Mundry
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Cognitive Ethology Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Department for Primate Cognition, Georg-August University, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
13
Yu Y, Stirman JN, Dorsett CR, Smith SL. Selective representations of texture and motion in mouse higher visual areas. Curr Biol 2022; 32:2810-2820.e5. [PMID: 35609609 DOI: 10.1016/j.cub.2022.04.091]
Abstract
The mouse visual cortex contains interconnected higher visual areas, but their functional specializations are unclear. Here, we used a data-driven approach to examine the representations of complex visual stimuli by L2/3 neurons across mouse higher visual areas, measured using large-field-of-view two-photon calcium imaging. Using specialized stimuli, we found higher fidelity representations of texture in area LM, compared to area AL. Complementarily, we found higher fidelity representations of motion in area AL, compared to area LM. We also observed this segregation of information in response to naturalistic videos. Finally, we explored how receptive field models of visual cortical neurons could produce the segregated representations of texture and motion we observed. These selective representations could aid in behaviors such as visually guided navigation.
Affiliation(s)
- Yiyi Yu
- Department of Electrical & Computer Engineering, Center for BioEngineering, Neuroscience Research Institute, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Jeffrey N Stirman
- Neuroscience Research Institute, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Christopher R Dorsett
- Neuroscience Research Institute, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Spencer L Smith
- Department of Electrical & Computer Engineering, Center for BioEngineering, Neuroscience Research Institute, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
14
Abstract
Online motor control is often required to make rapid corrective adjustments during reaching movements. It is well established that the initial arm trajectory during reaching is corrected in response to a target displacement. Since this corrective response occurs without perception of the target perturbation, it is regarded as automatic. In daily life, however, an object rarely "jumps"; rather, it typically "moves", i.e., its position changes continuously over time, producing visual motion. The purpose of this study was therefore to investigate whether an implicit visuomotor response is induced by target motion stimuli and to clarify the effects of target motion velocity on the initial arm trajectory. Participants were asked to move a cursor from a start circle to a visual target. The target moved either leftward or rightward when the cursor passed 20 mm from the start circle. Four target velocities (10, 20, 30, 40 deg/s) were presented in random order. Our results showed that the initial velocity (first 50 ms) of the fast corrective response increased with target velocity. This indicates that the fast corrective response is induced by a target motion stimulus at short latency and that its amplitude depends on target velocity.
Affiliation(s)
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Japan
- Tomohiro Kizuka
- Faculty of Health and Sport Sciences, University of Tsukuba, Tsukuba, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, Tsukuba, Japan
15
Gaede AH, Baliga VB, Smyth G, Gutiérrez-Ibáñez C, Altshuler DL, Wylie DR. Response properties of optic flow neurons in the accessory optic system of hummingbirds versus zebra finches and pigeons. J Neurophysiol 2022; 127:130-144. [PMID: 34851761 DOI: 10.1152/jn.00437.2021]
Abstract
Optokinetic responses function to maintain retinal image stabilization by minimizing optic flow that occurs during self-motion. The hovering ability of hummingbirds is an extreme example of this behavior. Optokinetic responses are mediated by direction-selective neurons with large receptive fields in the accessory optic system (AOS) and pretectum. Recent studies in hummingbirds showed that, compared with other bird species, 1) the pretectal nucleus lentiformis mesencephali (LM) is hypertrophied, 2) LM has a unique distribution of direction preferences, and 3) LM neurons are more tightly tuned to stimulus velocity. In this study, we sought to determine if there are concomitant changes in the nucleus of the basal optic root (nBOR) of the AOS. We recorded the visual response properties of nBOR neurons to large-field drifting random dot patterns and sine-wave gratings in Anna's hummingbirds and zebra finches and compared these with archival data from pigeons. We found no differences with respect to the distribution of direction preferences: Neurons responsive to upward, downward, and nasal-to-temporal motion were equally represented in all three species, and neurons responsive to temporal-to-nasal motion were rare or absent (<5%). Compared with zebra finches and pigeons, however, hummingbird nBOR neurons were more tightly tuned to stimulus velocity of random dot stimuli. Moreover, in response to drifting gratings, hummingbird nBOR neurons are more tightly tuned in the spatiotemporal domain. These results, in combination with specialization in LM, support a hypothesis that hummingbirds have evolved to be "optic flow specialists" to cope with the optomotor demands of sustained hovering flight.
NEW & NOTEWORTHY Hummingbirds have specialized response properties to optic flow in the pretectal nucleus lentiformis mesencephali (LM). The LM works with the nucleus of the basal optic root (nBOR) of the accessory optic system (AOS) to process global visual motion, but whether the neural response specializations observed in the LM extend to the nBOR was unknown. Hummingbird nBOR neurons are more tightly tuned to visual stimulus velocity, and in the spatiotemporal domain, compared with two nonhovering species.
Affiliation(s)
- Andrea H Gaede
- Structure and Motion Laboratory, Royal Veterinary College, University of London, Hertfordshire, United Kingdom; Department of Zoology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
- Vikram B Baliga
- Department of Zoology, University of British Columbia, Vancouver, British Columbia, Canada
- Graham Smyth
- Department of Zoology, University of British Columbia, Vancouver, British Columbia, Canada
- Douglas L Altshuler
- Department of Zoology, University of British Columbia, Vancouver, British Columbia, Canada
- Douglas R Wylie
- Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada
16
Ding J, Chen A, Chung J, Acaron Ledesma H, Wu M, Berson DM, Palmer SE, Wei W. Spatially displaced excitation contributes to the encoding of interrupted motion by a retinal direction-selective circuit. eLife 2021; 10:e68181. [PMID: 34096504 PMCID: PMC8211448 DOI: 10.7554/elife.68181]
Abstract
Spatially distributed excitation and inhibition collectively shape a visual neuron's receptive field (RF) properties. In the direction-selective circuit of the mammalian retina, the role of strong null-direction inhibition of On-Off direction-selective ganglion cells (On-Off DSGCs) on their direction selectivity is well-studied. However, how excitatory inputs influence the On-Off DSGC's visual response is underexplored. Here, we report that On-Off DSGCs have a spatially displaced glutamatergic receptive field along their horizontal preferred-null motion axes. This displaced receptive field contributes to DSGC null-direction spiking during interrupted motion trajectories. Theoretical analyses indicate that population responses during interrupted motion may help populations of On-Off DSGCs signal the spatial location of moving objects in complex, naturalistic visual environments. Our study highlights that the direction-selective circuit exploits separate sets of mechanisms under different stimulus conditions, and these mechanisms may help encode multiple visual features.
Affiliation(s)
- Jennifer Ding
- Committee on Neurobiology Graduate Program, The University of Chicago, Chicago, United States
- Department of Neurobiology, The University of Chicago, Chicago, United States
- Albert Chen
- Department of Organismal Biology, The University of Chicago, Chicago, United States
- Janet Chung
- Department of Neurobiology, The University of Chicago, Chicago, United States
- Hector Acaron Ledesma
- Graduate Program in Biophysical Sciences, The University of Chicago, Chicago, United States
- Mofei Wu
- Department of Neurobiology, The University of Chicago, Chicago, United States
- David M Berson
- Department of Neuroscience and Carney Institute for Brain Science, Brown University, Providence, United States
- Stephanie E Palmer
- Committee on Neurobiology Graduate Program, The University of Chicago, Chicago, United States
- Department of Organismal Biology, The University of Chicago, Chicago, United States
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, United States
- Wei Wei
- Committee on Neurobiology Graduate Program, The University of Chicago, Chicago, United States
- Department of Neurobiology, The University of Chicago, Chicago, United States
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, United States
17
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
18
Abstract
The processing of visual motion is conducted by dedicated pathways in the primate brain. These pathways originate with populations of direction-selective neurons in the primary visual cortex, which projects to dorsal structures like the middle temporal (MT) and medial superior temporal (MST) areas. Anatomical and imaging studies have suggested that area V3A might also be specialized for motion processing, but there have been very few studies of single-neuron direction selectivity in this area. We have therefore performed electrophysiological recordings from V3A neurons in two macaque monkeys (one male and one female) and measured responses to a large battery of motion stimuli that includes translation motion, as well as more complex optic flow patterns. For comparison, we simultaneously recorded the responses of MT neurons to the same stimuli. Surprisingly, we find that overall levels of direction selectivity are similar in V3A and MT and moreover that the population of V3A neurons exhibits somewhat greater selectivity for optic flow patterns. These results suggest that V3A should be considered as part of the motion processing machinery of the visual cortex, in both human and non-human primates.
19
Abstract
Previously, we found that in the mammalian retina, inhibitory inputs onto starburst amacrine cells (SACs) are required for robust direction selectivity of On-Off direction-selective ganglion cells (On-Off DSGCs) against noisy backgrounds (Chen et al., 2016). However, the source of the inhibitory inputs to SACs and how this inhibition confers noise resilience of DSGCs are unknown. Here, we show that when visual noise is present in the background, the motion-evoked inhibition to an On-Off DSGC is preserved by a disinhibitory motif consisting of a serially connected network of neighboring SACs presynaptic to the DSGC. This preservation of inhibition by a disinhibitory motif arises from the interaction between visually evoked network dynamics and short-term synaptic plasticity at the SAC-DSGC synapse. Although the disinhibitory microcircuit is well studied for its disinhibitory function in brain circuits, our results highlight the algorithmic flexibility of this motif beyond disinhibition due to the mutual influence between network and synaptic plasticity mechanisms.
Affiliation(s)
- Qiang Chen
- Committee on Computational Neuroscience, University of Chicago, Chicago, United States
- Robert G Smith
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
- Xiaolin Huang
- Committee on Neurobiology, University of Chicago, Chicago, United States
- Wei Wei
- Committee on Computational Neuroscience, University of Chicago, Chicago, United States; Committee on Neurobiology, University of Chicago, Chicago, United States; Department of Neurobiology, The University of Chicago, Chicago, United States; Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, University of Chicago, Chicago, United States
20
Abstract
Real-world tasks, such as avoiding obstacles, require a sequence of interdependent choices to reach accurate motor actions. Yet, most studies on primate decision making involve simple one-step choices. Here we analyze motor actions to investigate how sensorimotor decisions develop over time. In a go/no-go interception task human observers (n = 42) judged whether a briefly presented moving target would pass (interceptive hand movement required) or miss (no hand movement required) a strike box while their eye and hand movements were recorded. Go/no-go decision formation had to occur within the first few hundred milliseconds to allow time-critical interception. We found that the earliest time point at which eye movements started to differentiate actions (go versus no-go) preceded hand movement onset. Moreover, eye movements were related to different stages of decision making. Whereas higher eye velocity during smooth pursuit initiation was related to more accurate interception decisions (whether or not to act), faster pursuit maintenance was associated with more accurate timing decisions (when to act). These results indicate that pursuit initiation and maintenance are continuously linked to ongoing sensorimotor decision formation.
NEW & NOTEWORTHY Here we show that eye movements are a continuous indicator of decision processes underlying go/no-go actions. We link different stages of decision formation to distinct oculomotor events during open- and closed-loop smooth pursuit. Critically, the earliest time point at which eye movements differentiate actions preceded hand movement onset, suggesting shared sensorimotor processing for eye and hand movements. These results emphasize the potential of studying eye movements as a readout of cognitive processes.
Affiliation(s)
- Jolande Fooken
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
21
Abstract
Smooth pursuit eye movements are used by primates to track moving objects. They are initiated by sensory estimates of target speed represented in the middle temporal (MT) area of extrastriate visual cortex and then supported by motor feedback to maintain steady-state eye speed at target speed. Here, we show that reducing the coherence in a patch of dots for a tracking target degrades the eye speed both at the initiation of pursuit and during steady-state tracking, when eye speed reaches an asymptote well below target speed. The deficits are quantitatively different between the motor-supported steady-state of pursuit and the sensory-driven initiation of pursuit, suggesting separate mechanisms. The deficit in visually guided pursuit initiation could not explain the deficit in steady-state tracking. Pulses of target speed during steady-state tracking revealed lower sensitivities to image motion across the retina for lower values of dot coherence. However, sensitivity was not zero, implying that visual motion should still be driving eye velocity toward target velocity. When we changed dot coherence from 100% to lower values during accurate steady-state pursuit, we observed larger eye decelerations for lower coherences, as expected if motor feedback was reduced in gain. A simple pursuit model accounts for our data based on separate modulation of the strength of visual-motor transmission and motor feedback. We suggest that reduced dot coherence allows us to observe evidence for separate modulations of the gain of visual-motor transmission during pursuit initiation and of the motor corollary discharges that comprise eye velocity memory and support steady-state tracking.
NEW & NOTEWORTHY We exploit low-coherence patches of dots to control the initiation and steady state of smooth pursuit eye movements and show that these two phases of movement are modulated separately by the reliability of visual motion signals. We conclude that the neural circuit for pursuit includes separate modulation of the strength of visual-motor transmission for movement initiation and of eye velocity positive feedback to support steady-state tracking.
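The "simple pursuit model" invoked in this abstract, with separately adjustable visual-motor and motor-feedback gains, can be caricatured as a first-order loop. The gain values, plant time constant, and update rule below are illustrative assumptions, not the authors' fitted model:

```python
def simulate_pursuit(target_speed, g_visual, g_feedback, dt=0.01, t_end=2.0):
    """Eye velocity driven by retinal slip (visual-motor pathway, gain
    g_visual) plus an eye-velocity memory (positive motor feedback, gain
    g_feedback), low-pass filtered by a simple oculomotor plant."""
    tau = 0.05  # assumed plant time constant (s)
    eye = 0.0
    for _ in range(int(t_end / dt)):
        slip = target_speed - eye                 # retinal image motion
        command = g_visual * slip + g_feedback * eye
        eye += dt * (command - eye) / tau         # first-order dynamics
    return eye

# With full feedback gain, steady-state eye speed settles at target speed;
# reducing only the feedback gain (as low dot coherence appears to do)
# leaves steady-state eye speed below target speed even though slip persists.
print(simulate_pursuit(20.0, 1.0, 1.0))
print(simulate_pursuit(20.0, 1.0, 0.8))
```

The steady state of this loop is `g_visual * target / (1 + g_visual - g_feedback)`, so weakening the feedback gain alone reproduces the below-target asymptote described above.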
Affiliation(s)
- Stuart Behling
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
- Stephen G Lisberger
- Department of Neurobiology, Duke University School of Medicine, Durham, North Carolina
22
Sheliga BM, Quaia C, FitzGibbon EJ, Cumming BG. Short-latency ocular-following responses: Weighted nonlinear summation predicts the outcome of a competition between two sine wave gratings moving in opposite directions. J Vis 2020; 20:1. [PMID: 31995136 PMCID: PMC7239641 DOI: 10.1167/jov.20.1.1]
Abstract
We recorded horizontal ocular-following responses to pairs of superimposed vertical sine wave gratings moving in opposite directions in human subjects. This configuration elicits a nonlinear interaction: when the relative contrast of the gratings is changed, the response transitions abruptly between the responses elicited by either grating alone. We explore this interaction in pairs of gratings that differ in spatial and temporal frequency and show that all cases can be described as a weighted sum of the responses to each grating presented alone, where the weights are a nonlinear function of stimulus contrast: a nonlinear weighted summation model. The weights depended on the spatial and temporal frequency of the component grating. In many cases the dominant component was not the one that produced the strongest response when presented alone, implying that the neuronal circuits assigning weights precede the stages at which motor responses to visual motion are generated. When the stimulus area was reduced, the relationship between spatial frequency and weight shifted to higher frequencies. This finding may reflect a contribution from surround suppression. The nonlinear interaction is strongest when the two components have similar spatial frequencies, suggesting that the nonlinearity may reflect interactions within single spatial frequency channels. This framework can be extended to stimuli composed of more than two components: our model was able to predict the responses to stimuli composed of three gratings. That this relatively simple model successfully captures the ocular-following responses over a wide range of spatial/temporal frequency and contrast parameters suggests that these interactions reflect a simple mechanism.
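The weighted nonlinear summation rule can be sketched as follows; the hyperbolic-ratio weighting and exponent here are assumptions chosen to reproduce the qualitative behavior (an abrupt transition with relative contrast), not the fitted parameters from the study:

```python
def weighted_summation(r1, r2, c1, c2, n=2.0):
    """Response to two superimposed gratings as a weighted sum of the
    single-grating responses r1 and r2, with weights that depend
    nonlinearly on the contrasts c1 and c2 (assumed functional form)."""
    w1 = c1**n / (c1**n + c2**n)
    w2 = c2**n / (c1**n + c2**n)
    return w1 * r1 + w2 * r2

# Opposite-direction gratings (single-grating responses +1 and -1): at equal
# contrast the drives cancel, while a modest contrast imbalance pushes the
# combined response sharply toward the higher-contrast grating alone.
print(weighted_summation(1.0, -1.0, 0.5, 0.5))
print(weighted_summation(1.0, -1.0, 0.45, 0.15))
```

Raising the exponent `n` sharpens the transition, which is the winner-take-all-like competition the abstract describes.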
23
Abstract
Humans are rather poor at judging the right speed of video scenes. For example, a soccer match may be sped up so as to last only 80 min without observers noticing it. However, both adults and children seem to have a systematic, though often biased, notion of what the right speed of a given video scene should be. We therefore explored cortical responsiveness to video speed manipulations in search of possible differences between explicit and implicit speed processing. We applied sinusoidal speed modulations to a video clip depicting a naturalistic scene as well as to a traditional laboratory visual stimulus (random dot kinematogram, RDK), and measured both perceptual sensitivity and cortical responses (steady-state visual evoked potentials, SSVEPs) to speed modulations. In five observers, we found a clear perceptual sensitivity increase and a moderate SSVEP amplitude increase with increasing speed modulation strength. Cortical responses were also found with weak, undetected speed modulations. These preliminary findings suggest that the cortex responds globally to periodic video speed modulations, even when observers do not notice them. This entrainment mechanism may be the basis of automatic resonance to the rhythms of the external world.
24
Abstract
Motion discrimination is a well-established model system for investigating how sensory signals are used to form perceptual decisions. Classic studies relating single-neuron activity in the middle temporal area (MT) to perceptual decisions have suggested that a simple linear readout could underlie motion discrimination behavior. A theoretically optimal readout, in contrast, would take into account the correlations between neurons and the sensitivity of individual neurons at each time point. However, it remains unknown how sophisticated the readout needs to be to support actual motion-discrimination behavior or to approach optimal performance. In this study, we evaluated the performance of various neurally plausible decoders, trained to discriminate motion direction from small ensembles of simultaneously recorded MT neurons. We found that decoding the stimulus without knowledge of the interneuronal correlations was sufficient to match an optimal (correlation-aware) decoder. Additionally, a decoder could match the psychophysical performance of the animals with flat integration of up to half the stimulus and inherited temporal dynamics from the time-varying MT responses. These results demonstrate that simple, linear decoders operating on small ensembles of neurons can match both psychophysical performance and optimal sensitivity without taking correlations into account and that such simple read-out mechanisms can exhibit complex temporal properties inherited from the sensory dynamics themselves.
NEW & NOTEWORTHY Motion perception depends on the ability to decode the activity of neurons in the middle temporal area. Theoretically optimal decoding requires knowledge of the sensitivity of neurons and interneuronal correlations. We report that a simple correlation-blind decoder performs as well as the optimal decoder for coarse motion discrimination. Additionally, the decoder could match the psychophysical performance with moderate temporal integration and dynamics inherited from sensory responses.
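A correlation-blind linear readout of the kind compared in this study can be sketched on simulated data. The tuning model, ensemble size, and noise statistics below are invented for illustration; only the decoding rule (weights from single-neuron sensitivity, with no knowledge of interneuronal correlations) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 20, 500
pref = rng.uniform(-1, 1, n_neurons)  # signed direction preference (assumed)

def simulate(direction):
    """Poisson spike counts whose mean rate depends on the match between
    each neuron's preference and the motion direction (+1 or -1)."""
    mean = 10 + 3 * pref * direction
    return rng.poisson(mean, size=(n_trials, n_neurons))

counts = np.vstack([simulate(+1), simulate(-1)])
labels = np.array([+1] * n_trials + [-1] * n_trials)

# Correlation-blind weights: each neuron's mean-rate difference divided by
# its own variance, ignoring the off-diagonal covariance entirely.
w = (counts[labels == +1].mean(0) - counts[labels == -1].mean(0)) / counts.var(0)
proj = counts @ w
decision = np.where(proj > proj.mean(), +1, -1)
accuracy = (decision == labels).mean()
print(f"correlation-blind decoding accuracy: {accuracy:.2f}")
```

On independent Poisson noise, as simulated here, this diagonal readout is already near-optimal; the study's point is that it remained sufficient even for real, correlated MT ensembles.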
Affiliation(s)
- Jacob L Yates
- Brain and Cognitive Science, University of Rochester, Rochester, New York; Center for Perceptual Systems, University of Texas at Austin, Austin, Texas; Department of Neuroscience, University of Texas at Austin, Austin, Texas
- Leor N Katz
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas; Department of Neuroscience, University of Texas at Austin, Austin, Texas; Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Aaron J Levi
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas; Department of Neuroscience, University of Texas at Austin, Austin, Texas; Department of Psychology, University of Texas at Austin, Austin, Texas
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey; Department of Psychology, Princeton University, Princeton, New Jersey
- Alexander C Huk
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas; Department of Neuroscience, University of Texas at Austin, Austin, Texas; Department of Psychology, University of Texas at Austin, Austin, Texas
25
Abstract
Perception is produced by "reading out" the representation of a sensory stimulus contained in the activity of a population of neurons. To examine experimentally how populations code information, a common approach is to decode a linearly weighted sum of the neurons' spike counts. This approach is popular because of the biological plausibility of weighted, linear integration. For neurons recorded in vivo, weights are highly variable when derived through optimization methods, but it is unclear how the variability affects decoding performance in practice. To address this, we recorded from neurons in the middle temporal area (MT) of anesthetized marmosets (Callithrix jacchus) viewing stimuli comprising a sheet of dots that moved coherently in 1 of 12 different directions. We found that high peak response and direction selectivity both predicted that a neuron would be weighted more highly in an optimized decoding model. Although learned weights differed markedly from weights chosen according to a priori rules based on a neuron's tuning profile, decoding performance was only marginally better for the learned weights. In the models with a priori rules, selectivity is the best predictor of weighting, and defining weights according to a neuron's preferred direction and selectivity improves decoding performance to very near the maximum level possible, as defined by the learned weights.
NEW & NOTEWORTHY We examined which aspects of a neuron's tuning account for its contribution to sensory coding. Strongly direction-selective neurons are weighted most highly by optimal decoders trained to discriminate motion direction. Models with predefined decoding weights demonstrate that this weighting scheme causally improved direction representation by a neuronal population. Optimizing decoders (using a generalized linear model or Fisher's linear discriminant) led to only marginally better performance than decoders based purely on a neuron's preferred direction and selectivity.
Affiliation(s)
- Elizabeth Zavitz
- Department of Physiology, Monash University, Clayton, Victoria, Australia; Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
- Nicholas S C Price
- Department of Physiology, Monash University, Clayton, Victoria, Australia; Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
26
Orekhova EV, Stroganova TA, Schneiderman JF, Lundström S, Riaz B, Sarovic D, Sysoeva OV, Brant G, Gillberg C, Hadjikhani N. Neural gain control measured through cortical gamma oscillations is associated with sensory sensitivity. Hum Brain Mapp 2019; 40:1583-1593. [PMID: 30549144 DOI: 10.1002/hbm.24469]
Abstract
Gamma oscillations facilitate information processing by shaping the excitatory input/output of neuronal populations. Recent studies in humans and nonhuman primates have shown that strong excitatory drive to the visual cortex leads to suppression of induced gamma oscillations, which may reflect inhibitory-based gain control of network excitation. The efficiency of the gain control measured through gamma oscillations may in turn affect sensory sensitivity in everyday life. To test this prediction, we assessed the link between self-reported sensitivity and changes in magneto-encephalographic gamma oscillations as a function of motion velocity of high-contrast visual gratings. The induced gamma oscillations increased in frequency and decreased in power with increasing stimulation intensity. As expected, weaker suppression of the gamma response correlated with sensory hypersensitivity. Robustness of this result was confirmed by its replication in the two samples: neurotypical subjects and people with autism, who had generally elevated sensory sensitivity. We conclude that intensity-related suppression of gamma response is a promising biomarker of homeostatic control of the excitation-inhibition balance in the visual cortex.
Affiliation(s)
- Elena V Orekhova: Gillberg Neuropsychiatry Centre (GNC), University of Gothenburg, Gothenburg, Sweden; Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia; Autism Research Laboratory, Moscow State University of Psychology and Education, Moscow, Russia
- Tatiana A Stroganova: Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia; Autism Research Laboratory, Moscow State University of Psychology and Education, Moscow, Russia
- Justin F Schneiderman: Department of Clinical Neurophysiology, University of Gothenburg, Institute of Neuroscience & Physiology, Gothenburg, Sweden; Chalmers University of Technology and MedTech West, Gothenburg, Sweden
- Sebastian Lundström: Gillberg Neuropsychiatry Centre (GNC), University of Gothenburg, Gothenburg, Sweden
- Bushra Riaz: Department of Clinical Neurophysiology, University of Gothenburg, Institute of Neuroscience & Physiology, Gothenburg, Sweden
- Darko Sarovic: Gillberg Neuropsychiatry Centre (GNC), University of Gothenburg, Gothenburg, Sweden
- Olga V Sysoeva: Moscow State University of Psychology and Education, Center for Neurocognitive Research (MEG Center), Moscow, Russia; Autism Research Laboratory, Moscow State University of Psychology and Education, Moscow, Russia
- Georg Brant: Chalmers University of Technology and MedTech West, Gothenburg, Sweden
- Christopher Gillberg: Gillberg Neuropsychiatry Centre (GNC), University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani: Gillberg Neuropsychiatry Centre (GNC), University of Gothenburg, Gothenburg, Sweden; MGH/MIT/HST Martinos Center for Biomedical Imaging, Harvard Medical School, Charlestown, Massachusetts
27
Song Y, Wang H. Motion-induced position mis-localization predicts the severity of Alzheimer's disease. J Neuropsychol 2019; 14:333-345. [PMID: 30859737 DOI: 10.1111/jnp.12181]
Abstract
Patients with Alzheimer's disease (AD) often exhibit motion processing deficits. It is unclear whether the localization of moving objects - a perceptual process tightly linked to motion - is impaired or intact in AD. In this study, we used the phenomenon of illusory shift of position induced by motion as a behavioural paradigm to probe how spatial representation differs between AD patients and healthy elderly controls. We measured the magnitudes of motion-induced position shift in a group of AD participants (N = 24) and age-matched elderly observers (N = 24). We found that AD patients showed weakened position mis-localization, but only for motion stimuli of slow speeds. For fast motion, the position mis-localization did not differ significantly between groups. Furthermore, we showed that the magnitude of position mis-localization can predict the severity of AD; that is, patients with more severe symptoms showed weaker position mis-localization. Our results suggest that AD pathology impacts not only motion processing per se, but also perceptual processes related to motion, such as the localization of moving objects.
Affiliation(s)
- Yamin Song: Department of Neurology, Liaocheng People's Hospital, China
- Huiting Wang: Department of Neurology, Liaocheng People's Hospital, China
28
La Scaleia B, Lacquaniti F, Zago M. Body orientation contributes to modelling the effects of gravity for target interception in humans. J Physiol 2019; 597:2021-2043. [PMID: 30644996 DOI: 10.1113/jp277469]
Abstract
KEY POINTS
It is known that interception of targets accelerated by gravity involves internal models coupled with visual signals. Non-visual signals related to head and body orientation relative to gravity may also contribute, although their role is poorly understood. In a novel experiment, we asked pitched observers to hit a virtual target approaching with an acceleration that was either coherent or incoherent with their pitch-tilt. Initially, the timing errors were large and independent of the coherence between target acceleration and observer's pitch. With practice, however, the timing errors became substantially smaller in the coherent conditions. The results show that information about head and body orientation can contribute to modelling the effects of gravity on a moving target. Orientation cues from vestibular and somatosensory signals might be integrated with visual signals in the vestibular cortex, where the internal model of gravity is assumed to be encoded.
ABSTRACT
Interception of moving targets relies on visual signals and internal models. Less is known about the additional contribution of non-visual cues about head and body orientation relative to gravity. We took advantage of Galileo's law of motion along an incline to demonstrate the effects of vestibular and somatosensory cues about head and body orientation on interception timing. Participants were asked to hit a ball rolling in a gutter towards the eyes, resulting in image expansion. The scene was presented in a head-mounted display, without any visual information about gravity direction. In separate blocks of trials participants were pitched backwards by 20° or 60°, whereas ball acceleration was randomized across trials so as to be compatible with rolling down a slope of 20° or 60°.
Initially, the timing errors were large, independently of the coherence between ball acceleration and pitch angle, consistent with responses based exclusively on visual information because visual stimuli were identical at both tilts. At the end of the experiment, however, the timing errors were systematically smaller in the coherent conditions than the incoherent ones. Moreover, the responses were significantly (P = 0.007) earlier when participants were pitched by 60° than when they were pitched by 20°. Therefore, practice with the task led to incorporation of information about head and body orientation relative to gravity for response timing. Instead, posture did not affect response timing in a control experiment in which participants hit a static target in synchrony with the last of a predictable series of stationary audiovisual stimuli.
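The physics the task exploits can be sketched with Galileo's law for an incline. As a hedged illustration only (assuming a solid ball rolling without slipping; the gutter contact geometry used in the experiment changes the numerical factor), the acceleration along a slope of angle θ is:

```latex
% Ball rolling without slipping down an incline of angle \theta.
% For a solid sphere, I = \tfrac{2}{5} m r^{2}, giving the familiar 5/7 factor.
a = \frac{g \sin\theta}{1 + I/(m r^{2})} = \frac{5}{7}\, g \sin\theta
```

Since sin 20° ≈ 0.34 and sin 60° ≈ 0.87, the two tilt angles imply clearly distinct accelerations, which is what allows coherent and incoherent tilt-acceleration pairings to be dissociated behaviourally.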
Affiliation(s)
- Barbara La Scaleia: Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Francesco Lacquaniti: Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy; Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
- Myrka Zago: Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Civil Engineering and Computer Science Engineering, University of Rome Tor Vergata, Rome, Italy
29
Lange-Malecki B, Treue S, Rothenberger A, Albrecht B. Cognitive Control Over Visual Motion Processing - Are Children With ADHD Especially Compromised? A Pilot Study of Flanker Task Event-Related Potentials. Front Hum Neurosci 2018; 12:491. [PMID: 30568588 PMCID: PMC6290085 DOI: 10.3389/fnhum.2018.00491]
Abstract
Performance deficits and diminished brain activity during cognitive control and error processing are frequently reported in attention deficit/hyperactivity disorder (ADHD), indicating a “top-down” deficit in executive attention. So far, these findings are almost exclusively based on the processing of static visual forms, neglecting the importance of visual motion processing in everyday life as well as important attentional and neuroanatomical differences between processing static forms and visual motion. For the current study, we contrasted performance and electrophysiological parameters associated with cognitive control in two flanker tasks using static stimuli and moving random dot patterns. Behavioral data and event-related potentials were recorded from 16 boys with ADHD (combined type) and 26 controls (aged 8–15 years). The ADHD group showed lower accuracy, especially for moving stimuli, and prolonged response times for both stimulus types. Analyses of electrophysiological parameters of cognitive control revealed trends towards diminished N2 enhancements and smaller error negativities (medium effect sizes), and significantly lower error positivities (large effect sizes) compared to controls, similarly for both static and moving stimuli. Taken together, the study supports evidence that motion processing is not fully developed in childhood and that the cognitive control deficit in ADHD is of a higher order and independent of stimulus type.
Affiliation(s)
- Stefan Treue: German Primate Center - Leibniz Institute for Primate Research, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Faculty for Biology and Psychology, University of Göttingen, Göttingen, Germany
- Aribert Rothenberger: Department of Child and Adolescent Psychiatry and Psychotherapy, University Medical Center Göttingen, Göttingen, Germany
- Björn Albrecht: Department of Child and Adolescent Psychiatry and Psychotherapy, University Medical Center Göttingen, Göttingen, Germany
30
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. [PMID: 30416431 PMCID: PMC6212655 DOI: 10.3389/fncir.2018.00093]
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here we review the key cortical regions for motion processing, focussing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate auditory and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Marcello G P Rosa: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Leo L Lui: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
31
Yu X, Gu Y. Probing Sensory Readout via Combined Choice-Correlation Measures and Microstimulation Perturbation. Neuron 2018; 100:715-727.e5. [PMID: 30244884 DOI: 10.1016/j.neuron.2018.08.034]
Abstract
It is controversial whether covariation between neuronal activity and perceptual choice (i.e., choice correlation) reflects the functional readout of sensory signals. Here, we combined choice-correlation measures and electrical microstimulation on a site-to-site basis in the medial superior temporal area (MST), middle temporal area (MT), and ventral intraparietal area (VIP) while macaques discriminated between motion directions in both fine and coarse tasks. Microstimulation generated comparable effects between tasks but heterogeneous effects across and within brain regions. Within the MST and MT, microstimulation significantly biased an animal's choice toward the sensory preference, rather than the choice-related signals, of the stimulated units. This was particularly evident for sites with conflicting sensory and choice-related preferences. In the VIP, microstimulation failed to produce significant effects in either task despite the strong choice correlations present in this area. Our results suggest that sensory readout may not be inferred from choice-related signals during perceptual decision-making tasks.
Affiliation(s)
- Xuefei Yu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong Gu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
32
Handa T, Mikami A. Neuronal correlates of motion-defined shape perception in primate dorsal and ventral streams. Eur J Neurosci 2018; 48:3171-3185. [PMID: 30118167 DOI: 10.1111/ejn.14121]
Abstract
Human and non-human primates can readily perceive the shape of objects using visual motion. Classically, shape and motion are considered to be separately processed via ventral and dorsal cortical pathways, respectively. However, many lines of anatomical and physiological evidence have indicated that these two pathways are likely to be interconnected at some stage. For motion-defined shape perception, these two pathways should interact with each other because the ventral pathway must utilize motion, which the dorsal pathway processes, to extract the shape signal. However, it is unknown how interactions between cortical pathways are involved in the neural mechanisms underlying motion-defined shape perception. We review evidence from psychophysical, lesion, neuroimaging and physiological research on motion-defined shape perception and then discuss the effects of behavioral demands on neural activity in ventral and dorsal cortical areas. Further, we discuss the functions of two candidate processing levels: early and higher-order cortical areas. At the early level, the extrastriate area V4 and the middle temporal (MT) area, which are reciprocally connected, are plausible areas for extracting the shape and/or the constituent parts of shape from motion cues, because their neural dynamics differ from those during luminance-defined shape perception. On the other hand, among higher-order visual areas, the anterior superior temporal sulcus likely contributes to the processing of cue-invariant shape recognition rather than cue-dependent shape processing. We suggest that the sharing of information about motion and shape between the early visual areas in the dorsal and ventral pathways depends on visual cues and behavioral requirements, indicating an interplay between the pathways.
Affiliation(s)
- Takashi Handa: Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Inuyama, Japan; Department of Behavior and Brain Organization, Center of Advanced European Studies and Research (CAESAR), Bonn, Germany
- Akichika Mikami: Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Inuyama, Japan; Faculty of Nursing and Rehabilitation, Chubu Gakuin University, Seki, Japan
33
Shuffrey LC, Levinson L, Becerra A, Pak G, Moya Sepulveda D, Montgomery AK, Green HL, Froud K. Visually Evoked Response Differences to Contrast and Motion in Children with Autism Spectrum Disorder. Brain Sci 2018; 8:E160. [PMID: 30149500 DOI: 10.3390/brainsci8090160]
Abstract
High-density electroencephalography (EEG) was used to examine the utility of the P1 event-related potential (ERP) as a marker of visual motion sensitivity to luminance defined low-spatial frequency drifting gratings in 16 children with autism and 16 neurotypical children. Children with autism displayed enhanced sensitivity to large, high-contrast low-spatial frequency stimuli as indexed by significantly shorter P1 response latencies to large vs. small gratings. The current study also found that children with autism had larger amplitude responses to large gratings irrespective of contrast. A linear regression established that P1 adaptive mean amplitude for large, high-contrast sinusoidal gratings significantly predicted hyperresponsiveness item mean scores on the Sensory Experiences Questionnaire for children with autism, but not for neurotypical children. We conclude that children with autism have differences in the mechanisms that underlie low-level visual processing potentially related to altered visual spatial suppression or contrast gain control.
34
Abstract
Direction selectivity is a fundamental computation in the visual system and is first computed by the direction-selective circuit in the mammalian retina. Although landmark discoveries on the neural basis of direction selectivity have been made in the rabbit, many technological advances designed for the mouse have emerged, making this organism a favored model for investigating the direction-selective circuit at the molecular, synaptic, and network levels. Studies using diverse motion stimuli in the mouse retina demonstrate that retinal direction selectivity is implemented by multilayered mechanisms. This review begins with a set of central mechanisms that are engaged under a wide range of visual conditions and then focuses on additional layers of mechanisms that are dynamically recruited under different visual stimulus conditions. Together, recent findings allude to an emerging theme: robust motion detection in the natural environment requires flexible neural mechanisms.
Affiliation(s)
- Qiang Chen: Department of Neurobiology, The University of Chicago, Chicago, Illinois; Committee on Computational Neuroscience, The University of Chicago, Chicago, Illinois
- Wei Wei: Department of Neurobiology, The University of Chicago, Chicago, Illinois; Committee on Computational Neuroscience, The University of Chicago, Chicago, Illinois
35
Chang YCC, Khan S, Taulu S, Kuperberg G, Brown EN, Hämäläinen MS, Temereanca S. Left-Lateralized Contributions of Saccades to Cortical Activity During a One-Back Word Recognition Task. Front Neural Circuits 2018; 12:38. [PMID: 29867372 PMCID: PMC5964218 DOI: 10.3389/fncir.2018.00038]
Abstract
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.
Affiliation(s)
- Yu-Cherng C Chang: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Sheraz Khan: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Harvard University, Boston, MA, United States
- Samu Taulu: Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, United States; Department of Physics, University of Washington, Seattle, WA, United States
- Gina Kuperberg: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Harvard University, Boston, MA, United States; Department of Psychology, Tufts University, Medford, MA, United States
- Emery N Brown: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, United States; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States; Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, United States; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, United States
- Matti S Hämäläinen: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Harvard University, Boston, MA, United States
- Simona Temereanca: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Harvard University, Boston, MA, United States; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States; Department of Neuroscience, Brown University, Providence, RI, United States
36
Abstract
Over the past two decades, neurophysiological responses in the lateral intraparietal area (LIP) have received extensive study for insight into decision making. In a parallel manner, inferred cognitive processes have enriched interpretations of LIP activity. Because of this bidirectional interplay between physiology and cognition, LIP has served as fertile ground for developing quantitative models that link neural activity with decision making. These models stand as some of the most important frameworks for linking brain and mind, and they are now mature enough to be evaluated in finer detail and integrated with other lines of investigation of LIP function. Here, we focus on the relationship between LIP responses and known sensory and motor events in perceptual decision-making tasks, as assessed by correlative and causal methods. The resulting sensorimotor-focused approach offers an account of LIP activity as a multiplexed amalgam of sensory, cognitive, and motor-related activity, with a complex and often indirect relationship to decision processes. Our data-driven focus on multiplexing (and de-multiplexing) of various response components can complement decision-focused models and provides more detailed insight into how neural signals might relate to cognitive processes such as decision making.
Affiliation(s)
- Alexander C Huk: Center for Perceptual Systems, Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas 78712
- Leor N Katz: Center for Perceptual Systems, Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas 78712
- Jacob L Yates: Center for Perceptual Systems, Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas 78712
37
Raharijaona T, Mawonou R, Nguyen TV, Colonnier F, Boyron M, Diperi J, Viollet S. Local Positioning System Using Flickering Infrared LEDs. Sensors (Basel) 2017; 17:2518. [PMID: 29099743 PMCID: PMC5713101 DOI: 10.3390/s17112518]
Abstract
A minimalistic optical sensing device for indoor localization is proposed to estimate the relative position between the sensor and active markers using amplitude-modulated infrared light. The innovative insect-based sensor can measure azimuth and elevation angles with respect to two small and cheap active infrared light-emitting diodes (LEDs) flickering at two different frequencies. In comparison to a previous lensless visual sensor that we proposed for proximal localization (less than 30 cm), we implemented: (i) a minimalistic sensor in terms of small size (10 cm3), light weight (6 g) and low power consumption (0.4 W); (ii) an Arduino-compatible demodulator for fast analog signal processing requiring low computational resources; and (iii) an indoor positioning system for a mobile robotic application. Our results confirmed that the proposed sensor was able to estimate position at a distance of 2 m with an accuracy as small as 2 cm at a sampling frequency of 100 Hz. Our sensor is also suitable for implementation in a position feedback loop for indoor robotic applications in GPS-denied environments.
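The frequency-tagging idea in this abstract can be illustrated with a minimal digital lock-in sketch: because each LED flickers at its own frequency, the amplitude each LED contributes to a single photodiode trace can be recovered by quadrature demodulation. All numeric values below (frequencies, sampling rate, amplitudes, noise level) are illustrative assumptions, not parameters from the paper, and the paper's own demodulator is analog rather than this offline digital version.

```python
import numpy as np

# Assumed, illustrative parameters (not from the paper).
FS = 10_000            # sampling rate in Hz
F1, F2 = 123.0, 217.0  # flicker frequencies of the two LEDs in Hz

def lockin_amplitude(signal, freq, fs):
    """Estimate the amplitude of the sinusoidal component at `freq`
    via quadrature (I/Q) demodulation followed by averaging,
    i.e. a simple digital lock-in detector."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * freq * t))  # in-phase product
    q = np.mean(signal * np.sin(2 * np.pi * freq * t))  # quadrature product
    return 2.0 * np.hypot(i, q)  # amplitude, independent of phase

# Simulated photodiode trace: each LED contributes a sinusoid whose
# amplitude depends on its bearing relative to the sensor, plus noise.
t = np.arange(0, 0.5, 1.0 / FS)
a1_true, a2_true = 0.8, 0.3
rng = np.random.default_rng(0)
trace = (a1_true * np.sin(2 * np.pi * F1 * t)
         + a2_true * np.sin(2 * np.pi * F2 * t)
         + 0.1 * rng.normal(size=t.size))

a1_est = lockin_amplitude(trace, F1, FS)  # close to a1_true
a2_est = lockin_amplitude(trace, F2, FS)  # close to a2_true
```

In the actual sensor, per-photoreceptor amplitudes of this kind would then be combined to estimate azimuth and elevation toward each LED; the Arduino-compatible analog demodulator plays the role of `lockin_amplitude` here.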
Affiliation(s)
- Thanh Vu Nguyen: Aix Marseille University, CNRS, ISM, Marseille 13009, France
- Fabien Colonnier: Aix Marseille University, CNRS, ISM, Marseille 13009, France; Temasek Labs, National University of Singapore, Singapore 117411, Singapore
- Marc Boyron: Aix Marseille University, CNRS, ISM, Marseille 13009, France
- Julien Diperi: Aix Marseille University, CNRS, ISM, Marseille 13009, France
38
Abstract
We presented optic flow and real movement heading stimuli while recording MSTd neuronal activity. Monkeys were alternately engaged in three tasks: visual detection of optic flow heading perturbations, vestibular detection of real movement heading perturbations, and auditory detection of brief tones. Push-button reaction times (RTs) were fastest for tones and slower for visual and vestibular heading perturbations, suggesting that the tone detection task was easier. Neuronal heading selectivity was strongest during the tone detection task, and weaker during the visual and vestibular heading perturbation detection tasks, despite heading cues being presented only in the visual and vestibular modalities. We conclude that focusing on the self-movement transients of path perturbation distracted the monkeys from their heading and reduced neuronal responsiveness to heading direction.
NEW & NOTEWORTHY
Heading analysis is critical for steering and navigation. We recorded the activity of monkey cortical heading neurons during naturalistic self-movement. When the monkeys were required to respond to transient changes in their path, neuronal responses to heading direction were diminished. This suggests that the need to respond to momentary path perturbations reduces the ability to process heading direction.
Affiliation(s)
- William K Page: Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
- Charles J Duffy: Departments of Neurology, Neurobiology and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York
39
Yousif N, Fu RZ, Abou-El-Ela Bourquin B, Bhrugubanda V, Schultz SR, Seemungal BM. Dopamine Activation Preserves Visual Motion Perception Despite Noise Interference of Human V5/MT. J Neurosci 2016; 36:9303-9312. [PMID: 27605607 DOI: 10.1523/JNEUROSCI.4452-15.2016]
Abstract
When processing sensory signals, the brain must account for noise, both noise in the stimulus and that arising from within its own neuronal circuitry. Dopamine receptor activation is known to enhance both visual cortical signal-to-noise ratio (SNR) and visual perceptual performance; however, it is unknown whether these two dopamine-mediated phenomena are linked. To assess this, we used single-pulse transcranial magnetic stimulation (TMS) applied to visual cortical area V5/MT to reduce the SNR focally and thus disrupt visual motion discrimination performance for visual targets located in the same retinotopic space. The hypothesis that dopamine receptor activation enhances perceptual performance by improving cortical SNR predicts that dopamine activation should antagonize TMS disruption of visual perception. We assessed this hypothesis via a double-blinded, placebo-controlled study with the dopamine receptor agonists cabergoline (a D2 agonist) and pergolide (a D1/D2 agonist) administered in separate sessions (separated by 2 weeks) in 12 healthy volunteers in a Williams balance-order design. TMS degraded visual motion perception when the evoked phosphene and the visual stimulus overlapped in time and space in the placebo and cabergoline conditions, but not in the pergolide condition. This suggests that dopamine D1 or combined D1 and D2 receptor activation enhances cortical SNR to boost perceptual performance. That local visual cortical excitability was unchanged across drug conditions suggests the involvement of long-range intracortical interactions in this D1 effect. Because increased internal noise (and thus lower SNR) can impair visual perceptual learning, improving visual cortical SNR via D1/D2 agonist therapy may be useful in boosting rehabilitation programs involving visual perceptual training.
SIGNIFICANCE STATEMENT
In this study, we address the issue of whether dopamine activation improves visual perception despite increasing sensory noise in the visual cortex. We show specifically that dopamine D1 (or combined D1/D2) receptor activation enhances the cortical signal-to-noise ratio to boost perceptual performance. Together with the previously reported effects of dopamine upon brain plasticity and learning (Wolf et al., 2003; Hansen and Manahan-Vaughan, 2014), our results suggest that combining rehabilitation with dopamine agonists could enhance both the saliency of the training signal and the long-term effects on brain plasticity to boost rehabilitation regimens for brain injury.
40
Liu LD, Pack CC. The Contribution of Area MT to Visual Motion Perception Depends on Training. Neuron 2017; 95:436-446.e3. [PMID: 28689980] [DOI: 10.1016/j.neuron.2017.06.024]
Abstract
Perceptual decisions require the transformation of raw sensory inputs into cortical representations suitable for stimulus discrimination. One of the best-known examples of this transformation involves the middle temporal area (MT) of the primate visual cortex. Area MT provides a robust representation of stimulus motion, and previous work has shown that it contributes causally to performance on motion discrimination tasks. Here we report that the strength of this contribution can be highly plastic: depending on the recent training history, pharmacological inactivation of MT can severely impair motion discrimination, or it can have little detectable influence. Further analysis of neural and behavioral data suggests that training moves the readout of motion information between MT and lower-level cortical areas. These results show that the contribution of individual brain regions to conscious perception can shift flexibly depending on sensory experience.
Affiliation(s)
- Liu D Liu
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Christopher C Pack
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
41
Goddard E, Solomon SG, Carlson TA. Dynamic population codes of multiplexed stimulus features in primate area MT. J Neurophysiol 2017; 118:203-218. [PMID: 28381492] [DOI: 10.1152/jn.00954.2016]
Abstract
The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of the marmoset monkey. Using representational similarity analysis, we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in the area MT population response that may explain the stimulus-dependent dynamics of motion vision.
NEW & NOTEWORTHY Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from the middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in the area MT population response.
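The multivariate classification approach described in this abstract can be illustrated with a toy sketch: decoding motion direction from simulated population spike counts with a nearest-centroid rule, a deliberately simple stand-in for the classifiers the authors used. The neuron count, tunings, and firing rates below are invented for illustration, not taken from the study.

```python
import random

random.seed(0)

N_NEURONS = 20
# Each simulated "neuron" fires more for one of two directions (left/right).
preferred = [random.choice(["left", "right"]) for _ in range(N_NEURONS)]

def population_response(direction):
    """Simulated spike counts: higher mean rate when direction matches tuning."""
    return [random.gauss(10 if preferred[i] == direction else 5, 1.5)
            for i in range(N_NEURONS)]

def centroid(trials):
    """Mean response vector across trials."""
    return [sum(t[i] for t in trials) / len(trials) for i in range(N_NEURONS)]

def dist2(a, b):
    """Squared Euclidean distance between two response vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Train" one centroid per direction, then classify held-out trials by
# assigning each response to the nearest centroid.
train = {d: centroid([population_response(d) for _ in range(30)])
         for d in ("left", "right")}

def decode(resp):
    return min(train, key=lambda d: dist2(resp, train[d]))

test_trials = [(d, population_response(d)) for d in ("left", "right")
               for _ in range(20)]
accuracy = sum(decode(r) == d for d, r in test_trials) / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

The point of the sketch is only that direction information distributed across a population becomes linearly readable from the joint activity pattern, which univariate single-electrode analysis would miss.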
Affiliation(s)
- Erin Goddard
- School of Psychology, University of Sydney, Sydney, New South Wales, Australia; ARC Centre of Excellence in Cognition and its Disorders (CCD), Macquarie University, Sydney, New South Wales, Australia
- Samuel G Solomon
- Department of Experimental Psychology, University College London, London, United Kingdom
- Thomas A Carlson
- School of Psychology, University of Sydney, Sydney, New South Wales, Australia; ARC Centre of Excellence in Cognition and its Disorders (CCD), Macquarie University, Sydney, New South Wales, Australia
42
Duarte JV, Costa GN, Martins R, Castelo-Branco M. Pivotal role of hMT+ in long-range disambiguation of interhemispheric bistable surface motion. Hum Brain Mapp 2017; 38:4882-4897. [PMID: 28660667] [DOI: 10.1002/hbm.23701]
Abstract
It remains an open question whether long-range disambiguation of ambiguous surface motion is achieved in early visual cortex or in higher-level regions, a question that concerns object/surface segmentation and integration mechanisms. We used a bistable moving stimulus that can be perceived either as a pattern spanning both visual hemi-fields moving coherently downward or as two widely segregated, nonoverlapping component objects (one in each visual hemi-field) moving separately inward. This paradigm requires long-range integration across the vertical meridian, leading to interhemispheric binding. Our fMRI study (n = 30) revealed a close relation between activity in hMT+ and perceptual switches involving interhemispheric segregation/integration of motion signals, crucially under nonlocal conditions where components do not overlap and belong to distinct hemispheres. Higher signal changes were found in hMT+ in response to spatially segregated component (incoherent) percepts than to pattern (coherent) percepts. This did not occur in early visual cortex, unlike apparent motion, which does not entail surface segmentation. We also identified a role for top-down mechanisms in state transitions. Deconvolution analysis of switch-related changes revealed prefrontal, insula, and cingulate areas, with the right superior parietal lobule (SPL) being particularly involved. We observed that directed influences could emerge from either left or right hMT+ during bistable motion integration/segregation. SPL also exhibited significant directed functional connectivity with hMT+ during perceptual state maintenance (Granger causality analysis). Our results suggest that long-range interhemispheric binding of ambiguous motion representations mainly reflects bottom-up processes from hMT+ during perceptual state maintenance. In contrast, state transitions may be influenced by high-level regions such as the SPL.
Affiliation(s)
- João Valente Duarte
- CiBIT, ICNAS, Institute for Biomedical Imaging in Life Sciences (IBILI) - Faculty of Medicine, University of Coimbra, Portugal
- Gabriel Nascimento Costa
- CiBIT, ICNAS, Institute for Biomedical Imaging in Life Sciences (IBILI) - Faculty of Medicine, University of Coimbra, Portugal
- Ricardo Martins
- CiBIT, ICNAS, Institute for Biomedical Imaging in Life Sciences (IBILI) - Faculty of Medicine, University of Coimbra, Portugal
- Miguel Castelo-Branco
- CiBIT, ICNAS, Institute for Biomedical Imaging in Life Sciences (IBILI) - Faculty of Medicine, University of Coimbra, Portugal
43
Abstract
The perceived speed of a ring of equally spaced dots moving around a circular path appears faster as the number of dots increases (Ho & Anstis, 2013, Best Illusion of the Year contest). We measured this "spinner" effect with radial sinusoidal gratings, using a 2AFC procedure in which participants selected the faster of two briefly presented gratings of different spatial frequencies (SFs) rotating at various angular speeds. Compared with the reference stimulus of 4 c/rev (0.64 c/rad), participants consistently overestimated the angular speed of test stimuli with higher radial SFs but underestimated that of test stimuli with lower radial SFs. The spinner effect increased in magnitude but saturated rapidly as the test radial SF increased. Similar effects were observed with translating linear sinusoidal gratings of different SFs. Our results support the idea that human speed perception is biased by temporal frequency, which physically goes up as SF increases when speed is held constant. Hence, the more dots or lines, the greater the perceived speed when they move coherently within a defined area.
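The temporal-frequency account in this abstract rests on simple arithmetic: for rotation at a fixed angular speed, temporal frequency is the product of speed and radial spatial frequency, so higher-SF gratings modulate faster at the same speed. A minimal sketch (the function name and example values are illustrative, not taken from the study):

```python
def temporal_frequency(speed_rev_per_s: float, sf_cycles_per_rev: float) -> float:
    """Temporal frequency (Hz) of a rotating radial grating: the number of
    cycles passing a fixed point per second = angular speed x radial SF."""
    return speed_rev_per_s * sf_cycles_per_rev

# At the same 0.5 rev/s speed, a 4 c/rev grating modulates at 2 Hz,
# while a 16 c/rev grating modulates at 8 Hz.
print(temporal_frequency(0.5, 4))   # 2.0
print(temporal_frequency(0.5, 16))  # 8.0
```

If perceived speed tracks temporal frequency rather than angular speed, this product explains why gratings with more cycles (or rings with more dots) appear to spin faster.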
44
Venezia JH, Vaden KI, Rong F, Maddox D, Saberi K, Hickok G. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus. Front Hum Neurosci 2017; 11:174. [PMID: 28439236] [PMCID: PMC5383672] [DOI: 10.3389/fnhum.2017.00174]
Abstract
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli, in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual, and audiovisual stimuli produced the largest BOLD effects in anterior, posterior, and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects, and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as to speech compared with nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual to multisensory to auditory, moving from posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from the visual and auditory modalities.
Affiliation(s)
- Kenneth I Vaden
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, SC, USA
- Feng Rong
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Dale Maddox
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Kourosh Saberi
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
- Gregory Hickok
- Department of Cognitive Sciences, Center for Cognitive Neuroscience and Engineering, University of California, Irvine, CA, USA
45
Gaglianese A, Harvey BM, Vansteensel MJ, Dumoulin SO, Ramsey NF, Petridou N. Separate spatial and temporal frequency tuning to visual motion in human MT+ measured with ECoG. Hum Brain Mapp 2016; 38:293-307. [PMID: 27647579] [DOI: 10.1002/hbm.23361]
Abstract
The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of direction and speed of motion in visual stimuli. Here, we characterized how neuronal populations in hMT+ encode the speed of moving visual stimuli. We evaluated human intracranial electrocorticography (ECoG) responses elicited by square-wave dartboard moving stimuli with different spatial and temporal frequency to investigate whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. We extracted two components from the ECoG responses: (1) the power in the high-frequency band (HFB: 65-95 Hz) as a measure of the neuronal population spiking activity and (2) a specific spectral component that followed the frequency of the stimulus's contrast reversals (SCR responses). Our results revealed that HFB neuronal population responses to visual motion stimuli exhibit distinct and independent selectivity for spatial and temporal frequencies of the visual stimuli rather than direct speed tuning. The SCR responses did not encode the speed or the spatiotemporal frequency of the visual stimuli. We conclude that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning.
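The distinction the authors draw between direct speed tuning and separable spatial/temporal frequency tuning can be made concrete with a toy model (all tuning parameters below are invented for illustration): a speed-tuned response depends only on the ratio tf/sf, so it is constant along any constant-speed line in the (sf, tf) plane, whereas a separable response varies along that line.

```python
import math

def gauss(x, mu, sigma):
    """Unnormalized Gaussian, used here as a log-frequency tuning curve."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def separable_response(sf, tf, sf_pref=1.0, tf_pref=8.0, sigma=0.8):
    """Separable model: independent tuning over log SF and log TF."""
    return gauss(math.log2(sf), math.log2(sf_pref), sigma) * \
           gauss(math.log2(tf), math.log2(tf_pref), sigma)

def speed_tuned_response(sf, tf, speed_pref=8.0, sigma=0.8):
    """Speed-tuned model: tuning over log speed, where speed = tf / sf."""
    return gauss(math.log2(tf / sf), math.log2(speed_pref), sigma)

# Diagnostic implied by the abstract's logic: hold speed constant while
# covarying sf and tf. A speed-tuned cell responds identically; a
# separable cell does not.
same_speed = [(0.5, 4.0), (1.0, 8.0), (2.0, 16.0)]  # all tf/sf = 8
print([round(speed_tuned_response(sf, tf), 3) for sf, tf in same_speed])
print([round(separable_response(sf, tf), 3) for sf, tf in same_speed])
```

Responses that vary along constant-speed lines, as the hMT+ HFB responses did, are the signature of separable rather than direct speed tuning.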
Affiliation(s)
- Anna Gaglianese
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, 3584 CX, The Netherlands; Department of Radiology, University Medical Center Utrecht, Utrecht, 3584 CX, The Netherlands
- Ben M Harvey
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, 3584 CS, The Netherlands
- Mariska J Vansteensel
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, 3584 CX, The Netherlands
- Serge O Dumoulin
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, 3584 CS, The Netherlands
- Nick F Ramsey
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, 3584 CX, The Netherlands
- Natalia Petridou
- Department of Radiology, University Medical Center Utrecht, Utrecht, 3584 CX, The Netherlands
46
Forsberg LE, Bonde LH, Harvey MA, Roland PE. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex. Front Syst Neurosci 2016; 10:65. [PMID: 27582693] [PMCID: PMC4987378] [DOI: 10.3389/fnsys.2016.00065]
Abstract
Most neurons have a threshold separating the silent non-spiking state from the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking thus can easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients transform the trajectories to become outward directed and fast, crossing the threshold into the evoked state. Although the speeds of the evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state, as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes.
Affiliation(s)
- Lars E Forsberg
- Brain Research, Department of Neuroscience, Karolinska Institute, Solna, Sweden
- Lars H Bonde
- Signalling Lab, Department of Neuroscience, Faculty of Health Sciences, University of Copenhagen, Denmark
- Michael A Harvey
- Brain Research, Department of Neuroscience, Karolinska Institute, Solna, Sweden
- Per E Roland
- Signalling Lab, Department of Neuroscience, Faculty of Health Sciences, University of Copenhagen, Denmark
47
Abstract
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception.
SIGNIFICANCE STATEMENT To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception.
48
Abstract
It is well established that ongoing cognitive functions affect the trajectories of limb movements mediated by corticospinal circuits, suggesting an interaction between cognition and motor action. Although there are also many demonstrations that decision formation is reflected in the ongoing neural activity in oculomotor brain circuits, it is not known whether the decision-related activity in those oculomotor structures interacts with eye movements that are decision irrelevant. Here we tested for an interaction between decisions and instructed saccades unrelated to the perceptual decision. Observers performed a direction-discrimination decision-making task, but made decision-irrelevant saccades before registering their motion decision with a button press. Probing the oculomotor circuits with these decision-irrelevant saccades during decision making revealed that saccade reaction times and peak velocities were influenced in proportion to motion strength, and depended on the directional congruence between decisions about visual motion and decision-irrelevant saccades. These interactions disappeared when observers passively viewed the motion stimulus but still made the same instructed saccades, and when manual reaction times were measured instead of saccade reaction times, confirming that these interactions result from decision formation as opposed to visual stimulation, and are specific to the oculomotor system. Our results demonstrate that oculomotor function can be affected by decision formation, even when decisions are communicated without eye movements, and that this interaction has a directionally specific component. These results not only imply a continuous and interactive mixture of motor and decision signals in oculomotor structures, but also suggest nonmotor recruitment of oculomotor machinery in decision making.
49
Schwiedrzik CM, Bernstein B, Melloni L. Motion along the mental number line reveals shared representations for numerosity and space. eLife 2016; 5. [PMID: 26771249] [PMCID: PMC4764558] [DOI: 10.7554/elife.10806]
Abstract
Perception of number and space are tightly intertwined. It has been proposed that this is due to ‘cortical recycling’, where numerosity processing takes over circuits originally processing space. Do such ‘recycled’ circuits retain their original functionality? Here, we investigate interactions between numerosity and motion direction, two functions that both localize to parietal cortex. We describe a new phenomenon in which visual motion direction adapts nonsymbolic numerosity perception, giving rise to a repulsive aftereffect: motion to the left adapts small numbers, leading to overestimation of numerosity, while motion to the right adapts large numbers, resulting in underestimation. The reference frame of this effect is spatiotopic. Together with the tuning properties of the effect, this suggests that motion direction-numerosity cross-adaptation may occur in a homolog of area LIP. ‘Cortical recycling’ thus expands but does not obliterate the functions originally performed by the recycled circuit, allowing for shared computations across domains. DOI: http://dx.doi.org/10.7554/eLife.10806.001
Our sense of number is thought to have emerged from the circuits of cortical neurons in the brain that originally represent space, a process known as 'cortical recycling'. Accordingly, our perception of space and number are tightly intertwined: for example, people think about numbers on a mental number line, where smaller numbers are mapped to the left and larger numbers are mapped to the right. Also, damage to a brain region called the parietal cortex disrupts both space and number processing. If number processing recycles the neurons that encode space, what form does this appropriation take? Recycling could preserve the original behavior of the neurons (processing space), thus enriching the neurons' functional repertoire with a new capacity (processing number). Alternatively, the newly developed role could replace the original one, such that space and number cohabit the same brain area but use separate neurons. To disentangle these hypotheses, Schwiedrzik et al. used a technique called 'perceptual adaptation'. Here, continuously showing someone a particular feature eventually exhausts the neurons that respond to that feature. Neurons that respond to the opposite feature are, however, less exhausted and dominate perception. Consequently, people perceive the opposite of what they are adapted to. For example, after continuously seeing dots moving to the right, people perceive stationary dots as moving to the left. Similarly, after being exposed to large numbers of dots, they underestimate how many dots they see. If the same neurons process numbers and space, then adapting to movement in a particular direction should influence number perception. During Schwiedrzik et al.'s experiments, volunteers watched moving dots on a computer screen. After seeing dots move to the right, they underestimated the number of dots that then appeared on the screen. This is likely to be because larger numbers are mentally mapped to the right, and seeing rightward motion for a long time exhausted these neurons. This means that neurons that respond to smaller numbers (mentally mapped to the left) were more active when the new dots were presented, leading the volunteers to underestimate how many dots they saw. Adapting to leftward motion led to the opposite effect, with volunteers overestimating the number of dots. Thus, motion can literally move us up and down the number line. These results indicate that the same neurons encode both space and numbers. Cortical recycling does not erase the neurons' original behavior: instead, neurons may carry out the same computations when processing numbers or space. This would allow the brain to add new functionality without sacrificing any of the computational resources for processing space. DOI: http://dx.doi.org/10.7554/eLife.10806.002
Affiliation(s)
- Caspar M Schwiedrzik
- Laboratory of Neural Systems, The Rockefeller University, New York, United States
- Benjamin Bernstein
- Department of Psychology, Northwestern University, Evanston, United States
- Lucia Melloni
- Department of Neurophysiology, Max Planck Institute for Brain Research, Frankfurt am Main, Germany; Department of Neurosurgery, Columbia University College of Physicians and Surgeons, New York, United States; Department of Neurology, New York University Langone Medical Center, New York, United States
50
Abstract
When an object moves in the visual field, its motion evokes a streak of activity on the retina, and the incoming retinal signals lead to robust oculomotor commands: corrections are observed if the trajectory of the interceptive saccade is perturbed by microstimulation in the superior colliculus. The present study complements a previous perturbation study by investigating, in the head-restrained monkey, the generation of saccades toward a transient moving target (100-200 ms). We tested whether the saccades land on the average of antecedent target positions or beyond the location where the target disappeared. Using target motions with different speed profiles, we also examined the sensitivity of the process that converts time-varying retinal signals into saccadic oculomotor commands. The results show that, for identical overall target displacements on the visual display, saccades toward a faster target land beyond the endpoint of saccades toward a target moving more slowly. The rate of change in speed matters in the visuomotor transformation: in response to identical overall target displacements and durations, saccades have smaller amplitude when made toward an accelerating target than toward a decelerating one. Moreover, motion-related signals have different weights depending on their timing relative to target onset: early signals are more influential in the specification of saccade amplitude than later signals. We discuss the "predictive" properties of the visuo-saccadic system and the nature of the location where the saccades land, after providing some critical comments on the "hic-et-nunc" hypothesis (Fleuriet and Goffart, 2012).
SIGNIFICANCE STATEMENT Complementing the work of Fleuriet and Goffart (2012), this study contributes to the more general research effort aimed at understanding how ongoing action is dynamically and adaptively adjusted to the current spatiotemporal properties of its goal. Using the saccadic eye movement as a probe, we provide results that are critical for investigating and understanding the neural basis of motion extrapolation and prediction.
|