1. A Proposed Mechanism for Visual Vertigo: Post-Concussion Patients Have Higher Gain From Visual Input Into Subcortical Gaze Stabilization. Invest Ophthalmol Vis Sci 2024; 65:26. PMID: 38607620; PMCID: PMC11018265; DOI: 10.1167/iovs.65.4.26.
Abstract
Purpose: Post-concussion syndrome (PCS) is commonly associated with dizziness and visual motion sensitivity. This case-control study set out to explore altered motion processing in PCS by measuring gaze stabilization as a reflection of the capacity of the brain to integrate motion, and it aimed to uncover mechanisms of injury where invasive subcortical recordings are not feasible. Methods: A total of 554 eye movements were analyzed in 10 PCS patients and nine healthy controls across 171 trials. Optokinetic and vestibulo-ocular reflexes were recorded using a head-mounted eye tracker while participants were exposed to visual, vestibular, and visuo-vestibular motion stimulations in the roll plane. Torsional and vergence eye movements were analyzed in terms of slow-phase velocities, gain, nystagmus frequency, and sensory-specific contributions toward gaze stabilization. Results: Participants expressed eye-movement responses consistent with expected gaze stabilization; slow phases were fastest for visuo-vestibular trials and slowest for visual stimulations (P < 0.001) and increased with stimulus acceleration (P < 0.001). Concussed patients demonstrated increased gain from visual input to gaze stabilization (P = 0.005), faster slow phases (P = 0.013), earlier nystagmus beats (P = 0.003), and higher relative visual influence over the gaze-stabilizing response (P = 0.001), presenting robust effect sizes despite the limited sample size. Conclusions: The enhanced neural responsiveness to visual motion in PCS, combined with semi-intact visuo-vestibular integration, suggests a subcortical hierarchy for altered gaze stabilization. Drawing on comparable animal trials, the findings suggest that concussed patients may suffer from diffuse injuries to inhibitory pathways for optokinetic information, likely early in the visuo-vestibular hierarchy of sensorimotor integration. These findings offer context for common but elusive symptoms, presenting a neurological explanation for motion sensitivity and visual vertigo in PCS.
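The gain measure reported here is, in essence, the ratio of slow-phase eye velocity to stimulus velocity. A minimal sketch of that computation (the function name and velocity values are hypothetical, not taken from the paper):

```python
def gaze_stabilization_gain(slow_phase_velocities, stimulus_velocity):
    """Mean slow-phase gain: eye velocity expressed as a fraction of
    stimulus velocity (1.0 = perfect stabilization)."""
    if stimulus_velocity == 0:
        raise ValueError("stimulus velocity must be nonzero")
    return sum(v / stimulus_velocity for v in slow_phase_velocities) / len(slow_phase_velocities)

# Hypothetical slow phases tracking a 10 deg/s roll stimulus:
print(round(gaze_stabilization_gain([6.0, 7.0, 8.0], 10.0), 3))  # 0.7
```

An increased gain in patients, on this measure, means the eye tracks a purely visual motion stimulus at a larger fraction of its velocity.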
2. Pathway-selective optogenetics reveals the functional anatomy of top-down attentional modulation in the macaque visual cortex. Proc Natl Acad Sci U S A 2024; 121:e2304511121. PMID: 38194453; PMCID: PMC10801865; DOI: 10.1073/pnas.2304511121.
Abstract
Spatial attention represents a powerful top-down influence on sensory responses in primate visual cortical areas. The frontal eye field (FEF) has emerged as a key candidate area for the source of this modulation. However, it is unclear whether the FEF exerts its effects via its direct axonal projections to visual areas or indirectly through other brain areas and whether the FEF affects both the enhancement of attended and the suppression of unattended sensory responses. We used pathway-selective optogenetics in rhesus macaques performing a spatial attention task to inhibit the direct input from the FEF to area MT, an area along the dorsal visual pathway specialized for the processing of visual motion information. Our results show that the optogenetic inhibition of the FEF input specifically reduces attentional modulation in MT by about a third without affecting the neurons' sensory response component. We find that the direct FEF-to-MT pathway contributes to both the enhanced processing of target stimuli and the suppression of distractors. The FEF, thus, selectively modulates firing rates in visual area MT, and it does so via its direct axonal projections.
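The "reduced by about a third" result can be pictured with a standard rate-based attentional modulation index; the index form and the firing rates below are illustrative assumptions, not values from the paper:

```python
def attentional_modulation_index(rate_attended, rate_unattended):
    """Common rate-based modulation index: (att - unatt) / (att + unatt)."""
    return (rate_attended - rate_unattended) / (rate_attended + rate_unattended)

# Hypothetical MT firing rates (spikes/s):
baseline = attentional_modulation_index(30.0, 20.0)   # 0.2
# A reduction of the modulation by about a third, as with FEF-input inhibition:
reduced = baseline * (1.0 - 1.0 / 3.0)
print(round(baseline, 3), round(reduced, 3))
```

The key point of the study is that inhibiting the direct FEF input shrinks this modulation term while leaving the sensory response component of the rates intact.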
3. Visual Information Is Predictively Encoded in Occipital Alpha/Low-Beta Oscillations. J Neurosci 2023; 43:5537-5545. PMID: 37344235; PMCID: PMC10376931; DOI: 10.1523/jneurosci.0135-23.2023.
Abstract
Hierarchical predictive coding networks are a general model of sensory processing in the brain. Under neural delays, these networks have been suggested to naturally generate oscillatory activity in approximately the α frequency range (∼8-12 Hz). This suggests that α oscillations, a prominent feature of EEG recordings, may be a spectral "fingerprint" of predictive sensory processing. Here, we probed this possibility by investigating whether oscillations over the visual cortex predictively encode visual information. Specifically, we examined whether their power carries information about the position of a moving stimulus, in a temporally predictive fashion. In two experiments (N = 32, 18 female; N = 34, 17 female), participants viewed an apparent-motion stimulus moving along a circular path while EEG was recorded. To investigate the encoding of stimulus-position information, we developed a method of deriving probabilistic spatial maps from oscillatory power estimates. With this method, we demonstrate that it is possible to reconstruct the trajectory of a moving stimulus from α/low-β oscillations, tracking its position even across unexpected motion reversals. We also show that future position representations are activated in the absence of direct visual input, demonstrating that temporally predictive mechanisms manifest in α/β band oscillations. In a second experiment, we replicate these findings and show that the encoding of information in this range is not driven by visual entrainment. By demonstrating that occipital α/β oscillations carry stimulus-related information, in a temporally predictive fashion, we provide empirical evidence of these rhythms as a spectral "fingerprint" of hierarchical predictive processing in the human visual system.

SIGNIFICANCE STATEMENT: "Hierarchical predictive coding" is a general model of sensory information processing in the brain. When in silico predictive coding models are constrained by neural transmission delays, their activity naturally oscillates in roughly the α range (∼8-12 Hz). Using time-resolved EEG decoding, we show that neural rhythms in this approximate range (α/low-β) over the human visual cortex predictively encode the position of a moving stimulus. From the amplitude of these oscillations, we are able to reconstruct the stimulus' trajectory, revealing signatures of temporally predictive processing. This provides direct neural evidence linking occipital α/β rhythms to predictive visual processing, supporting the emerging view of such oscillations as a potential spectral "fingerprint" of hierarchical predictive processing in the human visual system.
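One simple way to turn power estimates into a probabilistic spatial map, in the spirit of the method described (the softmax-over-template-error scheme and all numbers here are assumptions for illustration, not the authors' exact procedure):

```python
import math

def spatial_probability_map(observed_power, templates):
    """Convert a single-trial oscillatory power estimate into a probability
    distribution over stimulus positions: smaller squared error against a
    per-position power template -> higher probability, via a softmax."""
    scores = [-(observed_power - t) ** 2 for t in templates]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Eight positions on the circular path; the observed power best matches
# the template at index 2 (all values hypothetical):
probs = spatial_probability_map(1.1, [0.2, 0.6, 1.0, 0.6, 0.2, 0.1, 0.05, 0.1])
print(probs.index(max(probs)))  # 2
```

Tracking the peak of such a map across time points is what lets a trajectory (including reversals) be reconstructed from band-limited power.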
4. Modeling the development of cortical responses in primate dorsal ("where") pathway to optic flow using hierarchical neural field models. Front Neurosci 2023; 17:1154252. PMID: 37284658; PMCID: PMC10239834; DOI: 10.3389/fnins.2023.1154252.
Abstract
Although there is a plethora of modeling literature dedicated to the object recognition processes of the ventral ("what") pathway of primate visual systems, modeling studies on motion-sensitive regions like the medial superior temporal area (MST) of the dorsal ("where") pathway are relatively scarce. Neurons in the MST area of the macaque monkey respond selectively to different types of optic flow sequences, such as radial and rotational flows. We present three models designed to simulate the computation of optic flow performed by MST neurons. Model-1 and model-2 are each composed of three stages: the Direction Selective Mosaic Network (DSMN), the Cell Plane Network (CPNW) or the Hebbian Network (HBNW), and the Optic Flow network (OF). The three stages roughly correspond to the V1-MT-MST areas, respectively, in the primate motion pathway. Both models are trained stage by stage using a biologically plausible variation of the Hebbian rule. The simulation results show that neurons in model-1 and model-2 (trained on translational, radial, and rotational sequences) develop responses that could account for MSTd cell properties found neurobiologically. Model-3, in contrast, consists of the Velocity Selective Mosaic Network (VSMN) followed by a convolutional neural network (CNN), which is trained on radial and rotational sequences using a supervised backpropagation algorithm. A quantitative comparison of response similarity matrices (RSMs), made from convolution-layer and last-hidden-layer responses, shows that model-3 neuron responses are consistent with the idea of a functional hierarchy in the macaque motion pathway. These results also suggest that deep learning models could offer a computationally elegant and biologically plausible way to simulate the development of cortical responses in the primate motion pathway.
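As a sketch of the stage-wise training idea, here is a normalized Hebbian update of the general kind the abstract calls "a biologically plausible variation of Hebbian rule"; the exact rule, learning rate, and architecture in the paper's models differ:

```python
import math

def hebbian_step(weights, pre, post, lr=0.1):
    """One step of a Hebbian rule with multiplicative normalization:
    weights grow in proportion to correlated pre/post activity, then are
    rescaled to unit norm so they stay bounded (one common biologically
    plausible variant)."""
    updated = [w + lr * x * post for w, x in zip(weights, pre)]
    norm = math.sqrt(sum(w * w for w in updated)) or 1.0
    return [w / norm for w in updated]

# A repeatedly co-active input (pre[0]) strengthens relative to an inactive one:
w = [0.6, 0.8]
for _ in range(20):
    w = hebbian_step(w, pre=[1.0, 0.0], post=1.0)
print(w[0] > w[1])  # True
```

Applied stage by stage, unsupervised updates of this kind let each layer develop selectivity (direction, then pattern, then flow type) from the statistics of its inputs alone.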
5. Smooth pursuit eye movements contribute to anticipatory force control during mechanical stopping of moving objects. J Neurophysiol 2023; 129:1293-1309. PMID: 37099016; DOI: 10.1152/jn.00075.2023.
Abstract
When stopping a closing door or catching an object, humans process the motion of inertial objects and apply reactive limb force over a short period to interact with them. One way in which the visual system processes motion is through extraretinal signals associated with smooth pursuit eye movements (SPEM). We conducted three experiments to investigate how SPEM contribute to anticipatory and reactive hand force modulation when interacting with a virtual object moving in the horizontal plane. We hypothesized that SPEM signals are critical for timing motor responses, anticipatory control of hand force, and task performance. Participants held a robotic manipulandum and attempted to stop an approaching simulated object by applying a force impulse (area under the force-time curve) that matched the object's virtual momentum upon contact. We manipulated the object's momentum by varying either its virtual mass or its speed under free-gaze or constrained-gaze conditions. We examined gaze variables, timing of hand motor responses, anticipatory force control, and overall task performance. Our results show that when participants fixated at a designated location instead of following the object with SPEM, anticipatory modulation of hand force prior to contact decreased. However, constraining gaze did not seem to affect the timing of the motor response or task performance. Together, these results suggest that SPEM may be important for anticipatory control of hand force prior to contact and may also play a critical role in anticipatory stabilization of limb posture when humans interact with moving objects.
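The task requirement, matching a force impulse to the object's virtual momentum, reduces to a simple equality that can be sketched directly (all numbers are hypothetical):

```python
def required_impulse(virtual_mass, speed):
    """Momentum of the simulated object at contact (N*s); equals the force
    impulse participants must apply to stop it."""
    return virtual_mass * speed

def applied_impulse(force_samples, dt):
    """Discrete area under a sampled force-time curve (N*s)."""
    return sum(force_samples) * dt

target = required_impulse(2.0, 0.5)                    # 1.0 N*s
applied = applied_impulse([2.0, 3.0, 3.0, 2.0], 0.1)   # 1.0 N*s
print(abs(applied - target) < 1e-9)  # True
```

Because momentum is the product mass x speed, varying either factor changes the required impulse, which is what let the study dissociate anticipatory scaling from reactive correction.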
6. Neurons in Primate Area MSTd Signal Eye Movement Direction Inferred from Dynamic Perspective Cues in Optic Flow. J Neurosci 2023; 43:1888-1904. PMID: 36725323; PMCID: PMC10027048; DOI: 10.1523/jneurosci.1885-22.2023.
Abstract
Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., efference copies of motor commands) carry critical information about eye velocity, the rotational optic flow field produced by eye rotations also carries valuable information. Although previous work has shown that dynamic perspective cues in optic flow can be used in computations that require estimates of eye velocity, it has remained unclear where and how the brain processes these visual cues and how they are integrated with extraretinal signals regarding eye rotation. We examined how neurons in the dorsal region of the medial superior temporal area (MSTd) of two male rhesus monkeys represent the direction of smooth pursuit eye movements based on both visual cues (dynamic perspective) and extraretinal signals. We find that most MSTd neurons have matched preferences for the direction of eye rotation based on visual and extraretinal signals. Moreover, neural responses to combinations of these signals are well predicted by a weighted linear summation model. These findings demonstrate a neural substrate for representing the velocity of smooth eye movements based on rotational optic flow and establish area MSTd as a key node for integrating visual and extraretinal signals into a more generalized representation of smooth eye movements.

SIGNIFICANCE STATEMENT: We frequently rotate our eyes to smoothly track objects of interest during self-motion. Information about eye velocity is crucial for a variety of computations performed by the brain, including depth perception and heading perception. Traditionally, information about eye rotation has been thought to arise mainly from extraretinal signals, such as efference copies of motor commands. Previous work shows that eye velocity can also be inferred from rotational optic flow that accompanies smooth eye movements, but the neural origins of these visual signals about eye rotation have remained unknown. We demonstrate that macaque neurons signal the direction of smooth eye rotation based on visual signals, and that they integrate both visual and extraretinal signals regarding eye rotation in a congruent fashion.
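The weighted linear summation model referred to above has a simple form; the weights and baseline below are illustrative stand-ins for the per-neuron fits reported in the paper:

```python
def mstd_response(r_visual, r_extraretinal, w_visual=0.6, w_extra=0.5, baseline=5.0):
    """Weighted linear summation of a visual (dynamic-perspective) direction
    signal and an extraretinal (pursuit) signal; weights and baseline are
    illustrative, fit per neuron in practice."""
    return baseline + w_visual * r_visual + w_extra * r_extraretinal

# Congruent signals sum; the combined response exceeds either alone:
alone_vis = mstd_response(10.0, 0.0)   # 11.0
alone_ext = mstd_response(0.0, 10.0)   # 10.0
both = mstd_response(10.0, 10.0)       # 16.0
print(both > alone_vis and both > alone_ext)  # True
```

Matched direction preferences across the two inputs are what make this linear combination yield a coherent, cue-invariant eye-rotation signal.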
7. Dissociation in neuronal encoding of object versus surface motion in the primate brain. Curr Biol 2023; 33:711-719.e5. PMID: 36738735; PMCID: PMC9992021; DOI: 10.1016/j.cub.2023.01.016.
Abstract
A paradox exists in our understanding of motion processing in the primate visual system: neurons in the dorsal motion processing stream often strikingly fail to encode long-range and perceptually salient jumps of a moving stimulus. Psychophysical studies suggest that such long-range motion, which requires integration over more distant parts of the visual field, may be based on higher-order motion processing mechanisms that rely on feature or object tracking. Here, we demonstrate that ventral visual area V4, long recognized as critical for processing static scenes, includes neurons that maintain direction selectivity for long-range motion, even when conflicting local motion is present. These V4 neurons exhibit specific selectivity for the motion of objects, i.e., targets with defined boundaries, rather than the motion of surfaces behind apertures, and are selective for the direction of motion over a broad range of spatial displacements and for objects defined by a variety of features. Motion direction at a range of speeds can be accurately decoded on single trials from the activity of just a few V4 neurons. Thus, our results identify a novel motion computation in the ventral stream that is strikingly different from, and complementary to, the well-established system in the dorsal stream, and they support the hypothesis that the ventral stream interacts with the dorsal stream to achieve the higher level of abstraction critical for tracking dynamic objects.
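Decoding direction on single trials from a few neurons can be illustrated with a nearest-template decoder; the decoder choice and the firing rates below are assumptions for illustration, not the authors' analysis:

```python
def decode_direction(rates, templates):
    """Nearest-template decoder: return the direction whose stored mean
    response pattern is closest (squared error) to the observed
    single-trial firing rates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda d: dist(rates, templates[d]))

# Mean responses of three hypothetical V4 neurons to four directions (spikes/s):
templates = {
    "up":    [20.0, 5.0, 5.0],
    "down":  [5.0, 20.0, 5.0],
    "left":  [5.0, 5.0, 20.0],
    "right": [12.0, 12.0, 12.0],
}
print(decode_direction([18.0, 6.0, 4.0], templates))  # up
```

The point such decoding makes is that direction information in these V4 populations is reliable enough to be read out trial by trial from only a handful of cells.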
8. Conserved circuits for direction selectivity in the primate retina. Curr Biol 2022; 32:2529-2538.e4. PMID: 35588744; DOI: 10.1016/j.cub.2022.04.056.
Abstract
The detection of motion direction is a fundamental visual function and a classic model for neural computation. In the non-primate retina, direction selectivity arises in starburst amacrine cell (SAC) dendrites, which provide selective inhibition to direction-selective retinal ganglion cells (dsRGCs). Although SACs are present in primates, their connectivity and the existence of dsRGCs remain open questions. Here, we present a connectomic reconstruction of the primate ON SAC circuit from a serial electron microscopy volume of the macaque central retina. We show that the structural basis for the SACs' ability to confer directional selectivity on postsynaptic neurons is conserved. SACs selectively target a candidate homolog to the mammalian ON-sustained dsRGCs that project to the accessory optic system (AOS) and contribute to gaze-stabilizing reflexes. These results indicate that the capacity to compute motion direction is present in the retina, earlier in the primate visual system than classically thought.
9. Direction selectivity in retinal bipolar cell axon terminals. Neuron 2021; 109:2928-2942.e8. PMID: 34390651; PMCID: PMC8478419; DOI: 10.1016/j.neuron.2021.07.008.
Abstract
The ability to encode the direction of image motion is fundamental to our sense of vision. Direction selectivity along the four cardinal directions is thought to originate in direction-selective ganglion cells (DSGCs) because of directionally tuned GABAergic suppression by starburst cells. Here, by utilizing two-photon glutamate imaging to measure synaptic release, we reveal that direction selectivity along all four directions arises earlier than expected, at bipolar cell outputs. Individual bipolar cells contained four distinct populations of axon terminal boutons with different preferred directions. We further show that this bouton-specific tuning relies on cholinergic excitation from starburst cells and GABAergic inhibition from wide-field amacrine cells. DSGCs received both tuned, directionally aligned inputs and untuned inputs from among heterogeneously tuned glutamatergic bouton populations. Thus, directional tuning in the excitatory visual pathway is incrementally refined at the bipolar cell axon terminals and their recipient DSGC dendrites by two different neurotransmitters co-released from starburst cells. Highlights: (1) Cardinal direction selectivity emerges at types 7 and 2 bipolar cell axon terminals. (2) Starburst amacrine cells are necessary for direction selectivity in bipolar cells. (3) Cholinergic excitation and GABAergic inhibition are integrated at axon terminals. (4) Direction-selective ganglion cells receive directionally aligned glutamate inputs.
10. Prior information use and response caution in perceptual decision-making: No evidence for a relationship with autistic-like traits. Q J Exp Psychol (Hove) 2021; 74:1953-1965. PMID: 33998332; PMCID: PMC8450985; DOI: 10.1177/17470218211019939.
Abstract
Interpreting the world around us requires integrating incoming sensory signals with prior information. Autistic individuals have been proposed to rely less on prior information and to make more cautious responses than non-autistic individuals. Here, we investigated whether these purported features of autistic perception vary as a function of autistic-like traits in the general population. We used a diffusion model framework, whereby decisions are modelled as noisy evidence accumulation processes towards one of two bounds. Within this framework, prior information can bias the starting point of the evidence accumulation process. Our pre-registered hypotheses were that higher autistic-like traits would relate to reduced starting point bias caused by prior information and to increased response caution (wider boundary separation). A total of 222 participants discriminated the direction of coherent motion stimuli as quickly and accurately as possible. Stimuli were preceded by a neutral cue (square) or a directional cue (arrow); 80% of the directional cues validly predicted the upcoming motion direction. We modelled accuracy and response time data using a hierarchical Bayesian model in which the starting point varied with cue condition. We found no evidence for our hypotheses, with starting point bias and response caution seemingly unrelated to Adult Autism Spectrum Quotient (AQ) scores. Alongside future research applying this paradigm to autistic individuals, our findings will help refine theories regarding the role of prior information and altered decision-making strategies in autistic perception. Our study also has implications for models of bias in perceptual decision-making, as the most plausible model was one that incorporated bias in both decision-making and sensory processing.
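A minimal simulation of the diffusion framework described, with prior information implemented as a starting-point bias (parameters are illustrative; the study fit a hierarchical Bayesian model to data rather than simulating trials this way):

```python
import math
import random

def diffusion_trial(drift, boundary, start_bias=0.5, noise=1.0, dt=0.001,
                    rng=random):
    """One diffusion-model trial: evidence starts at start_bias * boundary
    (0.5 = unbiased; a valid directional cue would shift it toward the
    correct bound) and accumulates with Gaussian noise until it hits 0 or
    boundary. Returns (choice, response_time_s)."""
    x = start_bias * boundary
    t = 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t

# Noise-free sanity check: drift 1.0 crosses the remaining 0.5 of evidence
# in about 0.5 s from an unbiased start.
choice, rt = diffusion_trial(drift=1.0, boundary=1.0, noise=0.0)
print(choice)  # 1
```

In this framework, wider boundary separation produces slower, more cautious responses, and a starting point nearer one bound produces faster, bias-consistent choices; the study found neither parameter tracked AQ scores.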
11. Predictive Visual Motion Extrapolation Emerges Spontaneously and without Supervision at Each Layer of a Hierarchical Neural Network with Spike-Timing-Dependent Plasticity. J Neurosci 2021; 41:4428-4438. PMID: 33888603; PMCID: PMC8152614; DOI: 10.1523/jneurosci.2017-20.2021.
Abstract
The fact that the transmission and processing of visual information in the brain takes time presents a problem for the accurate real-time localization of a moving object. One way this problem might be solved is extrapolation: using an object's past trajectory to predict its location in the present moment. Here, we investigate how an in silico layered neural network might implement such extrapolation mechanisms, and how the necessary neural circuits might develop. We allowed an unsupervised hierarchical network of velocity-tuned neurons to learn its connectivity through spike-timing-dependent plasticity (STDP). We show that the temporal contingencies between the different neural populations that are activated by an object as it moves cause the receptive fields of higher-level neurons to shift in the direction opposite to their preferred direction of motion. The result is that neural populations spontaneously start to represent moving objects as being further along their trajectory than where they were physically detected. Because of the inherent delays of neural transmission, this effectively compensates for (part of) those delays by bringing the represented position of a moving object closer to its instantaneous position in the world. Finally, we show that this model accurately predicts the pattern of perceptual mislocalization that arises when human observers are required to localize a moving object relative to a flashed static object (the flash-lag effect; FLE).

SIGNIFICANCE STATEMENT: Our ability to track and respond to rapidly changing visual stimuli, such as a fast-moving tennis ball, indicates that the brain is capable of extrapolating the trajectory of a moving object to predict its current position, despite the delays that result from neural transmission. Here, we show how the neural circuits underlying this ability can be learned through spike-timing-dependent synaptic plasticity and that these circuits emerge spontaneously and without supervision. This demonstrates how neural transmission delays can, in part, be compensated to implement the extrapolation mechanisms required to predict where a moving object is at the present moment.
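The pairwise learning rule underlying the network is classic STDP; a sketch of the exponential STDP window (constants are illustrative; the paper's network applies such a rule across velocity-tuned spiking populations):

```python
import math

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.05, tau_ms=20.0):
    """Exponential STDP window, with dt_ms = t_post - t_pre:
    pre-before-post (dt > 0) potentiates the synapse,
    post-before-pre (dt <= 0) depresses it."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # True True
```

Because a moving object activates upstream populations slightly before downstream ones, this timing asymmetry is what drives the receptive-field shifts opposite to the preferred motion direction described above.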
12. Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. PMID: 33514674; DOI: 10.1523/jneurosci.1552-20.2021.
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.

SIGNIFICANCE STATEMENT: Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. In order to create a unified representation of movement, the brain must therefore integrate motion information from separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
13. Assessing visual search performance using a novel dynamic naturalistic scene. J Vis 2021; 21:5. PMID: 33427871; PMCID: PMC7804579; DOI: 10.1167/jov.21.1.5.
Abstract
Daily activities require the constant searching and tracking of visual targets in dynamic and complex scenes. Classic work assessing visual search performance has been dominated by the use of simple geometric shapes, patterns, and static backgrounds. Recently, there has been a shift toward investigating visual search in more naturalistic dynamic scenes using virtual reality (VR)-based paradigms. In this direction, we have developed a first-person perspective VR environment combined with eye tracking for the capture of a variety of objective measures. Participants were instructed to search for a preselected human target walking in a crowded hallway setting. Performance was quantified based on saccade and smooth pursuit ocular motor behavior. To assess the effect of task difficulty, we manipulated factors of the visual scene, including crowd density (i.e., number of surrounding distractors) and the presence of environmental clutter. In general, results showed a pattern of worsening performance with increasing crowd density. In contrast, the presence of visual clutter had no effect. These results demonstrate how visual search performance can be investigated using VR-based naturalistic dynamic scenes and with high behavioral relevance. This engaging platform may also have utility in assessing visual search in a variety of clinical populations of interest.
14. An Optical Illusion Pinpoints an Essential Circuit Node for Global Motion Processing. Neuron 2020; 108:722-734.e5. PMID: 32966764; DOI: 10.1016/j.neuron.2020.08.027.
Abstract
Direction-selective (DS) neurons compute the direction of motion in a visual scene. Brain-wide imaging in larval zebrafish has revealed hundreds of DS neurons scattered throughout the brain. However, the exact population that causally drives motion-dependent behaviors (e.g., compensatory eye and body movements) remains largely unknown. To identify the behaviorally relevant population of DS neurons, here we employ the motion aftereffect (MAE), which causes the well-known "waterfall illusion." Together with region-specific optogenetic manipulations and cellular-resolution functional imaging, we found that MAE-responsive neurons represent merely a fraction of the entire population of DS cells in larval zebrafish. They are spatially clustered in a nucleus in the ventral lateral pretectal area and are necessary and sufficient to steer the entire cycle of optokinetic eye movements. Thus, our illusion-based behavioral paradigm, combined with optical imaging and optogenetics, identified key circuit elements of global motion processing in the vertebrate brain.
15. Late Development of Navigationally Relevant Motion Processing in the Occipital Place Area. Curr Biol 2020; 30:544-550.e3. PMID: 31956027; PMCID: PMC7730705; DOI: 10.1016/j.cub.2019.12.008.
Abstract
Human adults flawlessly and effortlessly navigate boundaries and obstacles in the immediately visible environment, a process we refer to as "visually guided navigation." Neuroimaging work in adults suggests this ability involves the occipital place area (OPA) [1, 2], a scene-selective region in the dorsal stream that selectively represents information necessary for visually guided navigation [3-9]. Despite progress in understanding the neural basis of visually guided navigation, however, little is known about how this system develops. Is navigationally relevant information processing present in the first few years of life? Or does this information processing only develop after many years of experience? Although a handful of studies have found selective responses to scenes (relative to objects) in OPA in childhood [10-13], no study has explored how more specific navigationally relevant information processing emerges in this region. Here, we do just that by measuring OPA responses to first-person perspective motion information (a proxy for the visual experience of actually navigating the immediate environment) using fMRI in 5- and 8-year-old children. We found that, although OPA already responded more to scenes than objects by age 5, responses to first-person perspective motion were not yet detectable at this same age and rather only emerged by age 8. This protracted development was specific to first-person perspective motion through scenes, not motion on faces or objects, and was not found in other scene-selective regions (the parahippocampal place area or retrosplenial complex) or a motion-selective region (MT). These findings therefore suggest that navigationally relevant information processing in OPA undergoes prolonged development across childhood.
|
16
|
Spatiotemporally Asymmetric Excitation Supports Mammalian Retinal Motion Sensitivity. Curr Biol 2019; 29:3277-3288.e5. [PMID: 31564498 PMCID: PMC6865067 DOI: 10.1016/j.cub.2019.08.048] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Received: 06/22/2019] [Revised: 08/15/2019] [Accepted: 08/20/2019] [Indexed: 11/20/2022]
Abstract
The detection of visual motion is a fundamental function of the visual system. How motion speed and direction are computed together at the cellular level, however, remains largely unknown. Here, we suggest a circuit mechanism by which excitatory inputs to direction-selective ganglion cells in the mouse retina become sensitive to the speed and direction of image motion. Electrophysiological, imaging, and connectomic analyses provide evidence that the dendrites of ON direction-selective cells receive spatially offset and asymmetrically filtered glutamatergic inputs along the motion-preference axis from asymmetrically wired bipolar and amacrine cell types with distinct release dynamics. A computational model shows that, with this spatiotemporal structure, the input amplitude becomes sensitive to speed and direction by a preferred-direction enhancement mechanism. Our results highlight the role of an excitatory mechanism in retinal motion computation by which feature selectivity emerges from non-selective inputs.
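The preferred-direction enhancement mechanism described in this abstract can be illustrated with a toy simulation: two spatially offset inputs, one filtered slowly and one quickly, sum to a larger peak when motion crosses the slow input first, because its delayed response then coincides with the prompt response of the fast input. All parameters below are illustrative stand-ins, not the measured retinal filters.

```python
import numpy as np

def lowpass(signal, tau, dt=1e-3):
    """First-order low-pass filter (crude stand-in for slow vs. fast release dynamics)."""
    out = np.zeros_like(signal)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + (dt / tau) * (signal[t] - out[t - 1])
    return out

def summed_excitation(direction, delay_steps=150, dt=1e-3):
    """Peak of the linear sum of two spatially offset inputs.

    A moving edge reaches the two receptive-field positions `delay_steps`
    apart; the input crossed first in the preferred direction is filtered
    slowly (tau = 120 ms), the other quickly (tau = 15 ms).
    """
    n = 1000
    stim_a = np.zeros(n)
    stim_b = np.zeros(n)
    if direction == "preferred":          # edge hits the slow input first
        stim_a[200] = 1.0
        stim_b[200 + delay_steps] = 1.0
    else:                                 # null direction: fast input first
        stim_b[200] = 1.0
        stim_a[200 + delay_steps] = 1.0
    slow = lowpass(stim_a, tau=0.120, dt=dt)
    fast = lowpass(stim_b, tau=0.015, dt=dt)
    return (slow + fast).max()

pref = summed_excitation("preferred")
null = summed_excitation("null")
assert pref > null   # timing alone makes the summed input direction selective
```

The lingering tail of the slow input adds to the fast input's peak only in the preferred direction; in the null direction the fast response has decayed away before the slow one arrives.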
|
17
|
A Review of Motion and Orientation Processing in Migraine. Vision (Basel) 2019; 3:E12. [PMID: 31735813 PMCID: PMC6802770 DOI: 10.3390/vision3020012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Received: 01/29/2019] [Accepted: 03/07/2019] [Indexed: 11/24/2022]
Abstract
Visual tests can be used as noninvasive tools to test models of the pathophysiology underlying neurological conditions, such as migraine. They may also be used to track changes in performance that vary with the migraine cycle or can track the efficacy of prophylactic treatments. This article reviews the literature on performance differences on two visual tasks, global motion discrimination and orientation, which, of the many visual tasks that have been used to compare differences between migraine and control groups, have yielded the most consistent patterns of group differences. The implications for understanding the underlying pathophysiology in migraine are discussed, but the main focus is on bringing together disparate areas of research and suggesting those that can reveal practical uses of visual tests to treat and manage migraine.
|
18
|
The direction after-effect is a global motion phenomenon. R Soc Open Sci 2019; 6:190114. [PMID: 31032060 PMCID: PMC6458423 DOI: 10.1098/rsos.190114] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 01/17/2019] [Accepted: 02/20/2019] [Indexed: 06/09/2023]
Abstract
Prior experience influences visual perception. For example, extended viewing of a moving stimulus results in the misperception of a subsequent stimulus's motion direction-the direction after-effect (DAE). There has been an ongoing debate regarding the locus of the neural mechanisms underlying the DAE. We know the mechanisms are cortical, but there is uncertainty about where in the visual cortex they are located-at relatively early local motion processing stages, or at later global motion stages. We used a unikinetic plaid as an adapting stimulus, then measured the DAE experienced with a drifting random dot test stimulus. A unikinetic plaid comprises a static grating superimposed on a drifting grating of a different orientation. Observers cannot see the true motion direction of the moving component; instead they see pattern motion running parallel to the static component. The pattern motion of unikinetic plaids is encoded at the global processing level-specifically, in cortical areas MT and MST-and the local motion component is encoded earlier. We measured the direction after-effect as a function of the plaid's local and pattern motion directions. The DAE was induced by the plaid's pattern motion, but not by its component motion. This points to the neural mechanisms underlying the DAE being located at the global motion processing level, and no earlier than area MT.
|
19
|
Auditory modulation of spiking activity and local field potentials in area MT does not appear to underlie an audiovisual temporal illusion. J Neurophysiol 2018; 120:1340-1355. [PMID: 29924710 DOI: 10.1152/jn.00835.2017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Indexed: 11/22/2022]
Abstract
The timing of brief stationary sounds has been shown to alter the perceived speed of visual apparent motion (AM), presumably by altering the perceived timing of the individual frames of the AM stimuli and/or the duration of the interstimulus intervals (ISIs) between those frames. To investigate the neural correlates of this "temporal ventriloquism" illusion, we recorded spiking and local field potential (LFP) activity from the middle temporal area (area MT) in awake, fixating macaques. We found that the spiking activity of most MT neurons (but not the LFP) was tuned for the ISI/speed (these parameters covaried) of our AM stimuli but that auditory timing had no effect on that tuning. We next asked whether the predicted changes in perceived timing were reflected in the timing of neuronal responses to the individual frames of the AM stimuli. Although spiking dynamics were significantly, if weakly, affected by auditory timing in a minority of neurons, the timing of spike responses did not systematically mirror the predicted perception of stimuli. Conversely, the duration of LFP responses in β- and γ-frequency bands was qualitatively consistent with human perceptual reports. We discovered, however, that LFP responses to auditory stimuli presented alone were robust and that responses to audiovisual stimuli were predicted by the linear sum of responses to auditory and visual stimuli presented individually. In conclusion, we find evidence of auditory input into area MT but not of the nonlinear audiovisual interactions we had hypothesized to underlie the illusion. NEW & NOTEWORTHY We utilized a set of audiovisual stimuli that elicit an illusion demonstrating "temporal ventriloquism" in visual motion and that have spatiotemporal intervals for which neurons within the middle temporal area are selective. We found evidence of auditory input into the middle temporal area but not of the nonlinear audiovisual interactions underlying this illusion. 
Our findings suggest that either the illusion was absent in our nonhuman primate subjects or the neuronal correlates of this illusion lie within other areas.
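The linear-summation analysis this abstract describes, in which the audiovisual response is predicted by the sum of the unimodal responses, can be sketched with synthetic traces. The Gaussian "LFP" shapes, noise level, and correlation criterion below are all invented for illustration; the point is only the form of the additivity test.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 500)

# Synthetic unimodal responses (arbitrary shapes, not recorded data)
resp_v = np.exp(-((t - 0.10) / 0.03) ** 2)           # visual-only trace
resp_a = 0.4 * np.exp(-((t - 0.12) / 0.02) ** 2)     # auditory-only trace

# Audiovisual trace built to be additive, plus measurement noise
resp_av = resp_v + resp_a + 0.01 * rng.standard_normal(t.size)

# Linear-sum prediction and its agreement with the observed AV response
pred = resp_v + resp_a
r = np.corrcoef(pred, resp_av)[0, 1]
```

A correlation `r` near 1 is what additivity predicts; a systematic shortfall would instead point to the nonlinear audiovisual interaction the study tested for and did not find.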
|
20
|
Linear Summation Underlies Direction Selectivity in Drosophila. Neuron 2018; 99:680-688.e4. [PMID: 30057202 DOI: 10.1016/j.neuron.2018.07.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Received: 12/05/2017] [Revised: 05/24/2018] [Accepted: 07/02/2018] [Indexed: 11/28/2022]
Abstract
While linear mechanisms lay the foundations of feature selectivity in many brain areas, direction selectivity in the elementary motion detector (EMD) of the fly has become a paradigm of nonlinear neuronal computation. We have bridged this divide by demonstrating that linear spatial summation can generate direction selectivity in the fruit fly Drosophila. Using linear systems analysis and two-photon imaging of a genetically encoded voltage indicator, we measure the emergence of direction-selective (DS) voltage signals in the Drosophila OFF pathway. Our study is a direct, quantitative investigation of the algorithm underlying directional signals, with the striking finding that linear spatial summation is sufficient for the emergence of direction selectivity. A linear stage of the fly EMD strongly resembles similar computations in vertebrate visual cortex, demands a reappraisal of the role of upstream nonlinearities, and implicates the voltage-to-calcium transformation in the refinement of feature selectivity in this system.
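How purely linear summation can yield a direction-selective amplitude, and how an expansive output nonlinearity can then sharpen it, can be sketched as follows. Two inputs with a spatial offset and different temporal phase lags sum in phase for one drift direction and out of phase for the other; the phases and the cubic "voltage-to-calcium" transform are toy assumptions, not the paper's measured filters.

```python
import numpy as np

def membrane_response(direction, n_cycles=5, fs=1000.0, f=1.0):
    """Linear sum of two spatially offset, differently filtered inputs
    driven by a drifting grating (toy parameters)."""
    t = np.arange(0, n_cycles / f, 1.0 / fs)
    spatial_phase = np.pi / 4    # phase from the offset between the two inputs
    temporal_phase = np.pi / 4   # phase lag of the more slowly filtered input
    s = 1.0 if direction == "preferred" else -1.0
    in_fast = np.cos(2 * np.pi * f * t)
    # In the preferred direction the spatial offset cancels the temporal lag,
    # so the two inputs sum in phase; in the null direction they do not.
    in_slow = np.cos(2 * np.pi * f * t + s * spatial_phase - temporal_phase)
    return in_fast + in_slow     # purely linear summation

def dsi(transform=lambda v: v):
    """Direction-selectivity index after an optional output nonlinearity."""
    resp = {d: transform(np.clip(membrane_response(d), 0, None)).mean()
            for d in ("preferred", "null")}
    return (resp["preferred"] - resp["null"]) / (resp["preferred"] + resp["null"])

lin = dsi()                   # selectivity already present in the summed voltage
cal = dsi(lambda v: v ** 3)   # an expansive transform amplifies that selectivity
assert cal > lin > 0
```

The selectivity exists before the nonlinearity, which is the abstract's central claim; the expansive transform only refines it.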
|
21
|
Amplitude-Based Filtering for Video Magnification in Presence of Large Motion. Sensors (Basel) 2018; 18:s18072312. [PMID: 30018210 PMCID: PMC6068733 DOI: 10.3390/s18072312] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Received: 06/07/2018] [Revised: 07/05/2018] [Accepted: 07/11/2018] [Indexed: 11/16/2022]
Abstract
Video magnification reveals important and informative subtle variations in the world. These signals are often combined with large motions, which result in significant blurring artifacts and haloes when conventional video magnification approaches are used. To counter these issues, this paper presents an amplitude-based filtering algorithm that can magnify small changes in video in the presence of large motions. We seek to understand the amplitude characteristics of small changes and large motions with the goal of extracting accurate signals for visualization. Based on spectrum amplitude filtering, the large motions can be removed while small changes can still be magnified by the Eulerian approach. An advantage of this algorithm is that it can handle large motions, whether they are linear or nonlinear. Our experimental results show that the proposed method can amplify subtle variations in the presence of large motion, as well as significantly reduce artifacts. We demonstrate the presented algorithm by comparing it to the state of the art and provide subjective and objective evidence for the proposed method.
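The core idea, separating subtle variation from large motion by spectral amplitude and boosting only the small components, can be sketched on a single pixel's temporal trace. This is a 1-D illustration under assumed parameters (`amp_thresh`, `alpha`, the two sinusoids), not the paper's full video pipeline.

```python
import numpy as np

def amplitude_filter_magnify(signal, amp_thresh, alpha=10.0):
    """Toy 1-D amplitude-based magnification: frequency components whose
    amplitude exceeds `amp_thresh` are treated as large motion and left
    untouched, while small-amplitude components are boosted by `alpha`."""
    spec = np.fft.rfft(signal)
    small = np.abs(spec) < amp_thresh * len(signal)
    spec[small] *= alpha             # magnify only the subtle variation
    return np.fft.irfft(spec, n=len(signal))

# One pixel over time: large slow motion plus a subtle fast variation
t = np.linspace(0, 1, 256, endpoint=False)
trace = 1.0 * np.sin(2 * np.pi * 2 * t) + 0.01 * np.sin(2 * np.pi * 30 * t)

# The 30 Hz component is amplified tenfold; the 2 Hz large motion is preserved
out = amplitude_filter_magnify(trace, amp_thresh=0.1, alpha=10.0)
```

Because the filter keys on amplitude rather than on a motion model, it is indifferent to whether the large motion is linear or nonlinear, which matches the advantage the abstract claims.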
|
22
|
An fMRI-Neuronavigated Chronometric TMS Investigation of V5 and Intraparietal Cortex in Motion Driven Attention. Front Hum Neurosci 2018; 11:638. [PMID: 29354043 PMCID: PMC5758491 DOI: 10.3389/fnhum.2017.00638] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Received: 05/30/2017] [Accepted: 12/15/2017] [Indexed: 11/13/2022]
Abstract
The timing of networked brain activity subserving motion driven attention in humans is currently unclear. Functional MRI (fMRI)-neuronavigated chronometric transcranial magnetic stimulation (TMS) was used to investigate critical times of parietal cortex involvement in motion driven attention. In particular, we were interested in the relative critical times for two intraparietal sulcus (IPS) sites in comparison to that previously identified for motion processing in area V5, and to explore potential earlier times of involvement. fMRI was used to individually localize V5 and middle and posterior intraparietal sulcus (mIPS; pIPS) areas active for a motion driven attention task, prior to TMS neuronavigation. Paired-pulse TMS was applied during performance of the same task at stimulus onset asynchronies (SOAs) ranging from 0 to 180 ms. There were no statistically significant decreases in performance accuracy for trials where TMS was applied to V5 at any SOA, though stimulation intensity was lower for this site than for the parietal sites. For TMS applied to mIPS, there was a trend toward a relative decrease in performance accuracy at the 150 ms SOA, as well as a relative increase at 180 ms. There was no statistically significant overall effect of TMS applied to pIPS; however, there appeared to be a trend toward a decrease in performance at the 0 ms SOA. Overall, these results provide some patterns of potential theoretical interest to follow up in future studies.
|
23
|
Visual attention to motion stimuli and its neural correlates in cannabis users. Eur J Neurosci 2017; 47:269-276. [PMID: 29266467 DOI: 10.1111/ejn.13810] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Received: 08/07/2017] [Revised: 11/21/2017] [Accepted: 12/07/2017] [Indexed: 11/28/2022]
Abstract
Attention to motion stimuli and correct motion perception are vital for road safety. Although cannabis use has been associated with increased road crash risks, there is limited research on attentional processing of moving stimuli in cannabis users. This study investigated the neural correlates of the three-stimulus oddball task in cannabis users (n = 18) and non-users (n = 23) in response to moving stimuli. Stimulus contrast was under 16% against a low-luminance background (mean luminance < 16 cd/m2). The two groups did not differ in accuracy or in N2 peak amplitude; however, N2 latency was longer for target and standard stimuli in the cannabis group than in the control group. The cannabis group also showed a significantly reduced P3b amplitude in response to target stimuli. The AUDIT score was added as a random factor to the ANOVA to rule out the effects of uneven alcohol consumption in the two groups. A significant group effect was found for N2 latency in response to target and standard stimuli, and a significant interaction between group and AUDIT score was found for the P3b peak amplitude for the distractor and standard stimuli, but not for the target stimuli. The results of this study suggest that cannabis use is related to reduced neural activity underlying attention to motion stimuli. Implications of regular early-onset cannabis use for road safety are discussed.
|
24
|
The Primary Role of Flow Processing in the Identification of Scene-Relative Object Movement. J Neurosci 2017; 38:1737-1743. [PMID: 29229707 PMCID: PMC5815455 DOI: 10.1523/jneurosci.3530-16.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Received: 11/14/2016] [Revised: 08/15/2017] [Accepted: 09/07/2017] [Indexed: 11/25/2022]
Abstract
Retinal image motion could be due to the movement of the observer through space or an object relative to the scene. Optic flow, form, and change of position cues all provide information that could be used to separate out retinal motion due to object movement from retinal motion due to observer movement. In Experiment 1, we used a minimal display to examine the contribution of optic flow and form cues. Human participants indicated the direction of movement of a probe object presented against a background of radially moving pairs of dots. By independently controlling the orientation of each dot pair, we were able to put flow cues to self-movement direction (the point from which all the motion radiated) and form cues to self-movement direction (the point toward which all the dot pairs were oriented) in conflict. We found that only flow cues influenced perceived probe movement. In Experiment 2, we switched to a rich stereo display composed of 3D objects to examine the contribution of flow and position cues. We moved the scene objects to simulate a lateral translation and counter-rotation of gaze. By changing the polarity of the scene objects (from light to dark and vice versa) between frames, we placed flow cues to self-movement direction in opposition to change of position cues. We found that again flow cues dominated the perceived probe movement relative to the scene. Together, these experiments indicate the neural network that processes optic flow has a primary role in the identification of scene-relative object movement. SIGNIFICANCE STATEMENT Motion of an object in the retinal image indicates relative movement between the observer and the object, but it does not indicate its cause: movement of an object in the scene; movement of the observer; or both. To isolate retinal motion due to movement of a scene object, the brain must parse out the retinal motion due to movement of the eye (“flow parsing”). 
Optic flow, form, and position cues all have potential roles in this process. We pitted the cues against each other and assessed their influence. We found that flow parsing relies on optic flow alone. These results indicate the primary role of the neural network that processes optic flow in the identification of scene-relative object movement.
|
25
|
Task-specific, dimension-based attentional shaping of motion processing in monkey area MT. J Neurophysiol 2017; 118:1542-1555. [PMID: 28659459 DOI: 10.1152/jn.00183.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Received: 03/14/2017] [Revised: 06/15/2017] [Accepted: 06/15/2017] [Indexed: 11/22/2022]
Abstract
Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation on a cellular level is lacking. Therefore, we investigated neuronal activity in macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged already in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain.
Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of motion-selective monkey area MT neurons irrespective of the attended motion direction but specific to the attended feature dimension.
|
26
|
Abstract
Recordings of local field potential (LFP) in the visual cortex can show rhythmic activity at gamma frequencies (30-100 Hz). While the gamma rhythms in the primary visual cortex have been well studied, the structural and functional characteristics of gamma rhythms in extrastriate visual cortex are less clear. Here, we studied the spatial distribution and functional specificity of gamma rhythms in extrastriate middle temporal (MT) area of visual cortex in marmoset monkeys. We found that moving gratings induced narrowband gamma rhythms across cortical layers that were coherent across much of area MT. Moving dot fields instead induced a broadband increase in LFP in middle and upper layers, with weaker narrowband gamma rhythms in deeper layers. The stimulus dependence of LFP response in middle and upper layers of area MT appears to reflect the presence (gratings) or absence (dot fields and other textures) of strongly oriented contours. Our results suggest that gamma rhythms in these layers are propagated from earlier visual cortex, while those in the deeper layers may emerge in area MT.
|
27
|
Perceived duration of brief visual events is mediated by timing mechanisms at the global stages of visual processing. R Soc Open Sci 2017; 4:160928. [PMID: 28405382 PMCID: PMC5383839 DOI: 10.1098/rsos.160928] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 11/17/2016] [Accepted: 02/01/2017] [Indexed: 06/07/2023]
Abstract
There is a growing body of evidence pointing to the existence of modality-specific timing mechanisms for encoding sub-second durations. For example, the duration compression effect describes how prior adaptation to a dynamic visual stimulus results in participants underestimating the duration of a sub-second test stimulus when it is presented at the adapted location. There is substantial evidence for the existence of both cortical and pre-cortical visual timing mechanisms; however, little is known about where in the processing hierarchy the cortical mechanisms are likely to be located. We carried out a series of experiments to determine whether or not timing mechanisms are to be found at the global processing level. We had participants adapt to random dot patterns that varied in their motion coherence, thus allowing us to probe the visual system at the level of motion integration. Our first experiment revealed a positive linear relationship between the motion coherence level of the adaptor stimulus and duration compression magnitude. However, increasing the motion coherence level in a stimulus also results in an increase in global speed. To test whether duration compression effects were driven by global speed or global motion, we repeated the experiment, but kept global speed fixed while varying motion coherence levels. The duration compression persisted, but the linear relationship with motion coherence was absent, suggesting that the effect was driven by adapting global speed mechanisms. Our results support previous claims that visual timing mechanisms persist at the level of global processing.
|
28
|
Stimulus-specific adaptation to visual but not auditory motion direction in the barn owl's optic tectum. Eur J Neurosci 2016; 45:610-621. [PMID: 27987375 DOI: 10.1111/ejn.13505] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Received: 07/26/2016] [Revised: 12/11/2016] [Accepted: 12/12/2016] [Indexed: 12/01/2022]
Abstract
Whether the auditory and visual systems use a similar coding strategy to represent motion direction is an open question. We investigated this question in the barn owl's optic tectum (OT) by testing stimulus-specific adaptation (SSA) to the direction of motion. SSA, the reduction of the response to a repetitive stimulus that does not generalize to other stimuli, has been well established in OT neurons. SSA suggests a separate representation of the adapted stimulus in upstream pathways. So far, only SSA to static stimuli has been studied in the OT. Here, we examined adaptation to moving auditory and visual stimuli. SSA to motion direction was examined using repeated presentations of moving stimuli, occasionally switching motion to the opposite direction. Acoustic motion was either mimicked by varying binaural spatial cues or implemented in free field using a speaker array. While OT neurons displayed SSA to motion direction in visual space, neither stimulation paradigm elicited significant SSA to auditory motion direction. These findings show a qualitative difference in how auditory and visual motion is processed in the OT and support the existence of dedicated circuitry for representing motion direction in the early stages of the visual but not the auditory system.
|
29
|
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review. Front Integr Neurosci 2015; 9:62. [PMID: 26733827 PMCID: PMC4686600 DOI: 10.3389/fnint.2015.00062] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Received: 09/30/2015] [Accepted: 12/03/2015] [Indexed: 11/13/2022]
Abstract
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing.
|
30
|
Editorial: What can simple brains teach us about how vision works. Front Neural Circuits 2015; 9:51. [PMID: 26483639 PMCID: PMC4586271 DOI: 10.3389/fncir.2015.00051] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Received: 07/30/2015] [Accepted: 09/14/2015] [Indexed: 11/30/2022]
|
31
|
Neural correlates of visual motion processing without awareness in patients with striate cortex and pulvinar lesions. Hum Brain Mapp 2014; 36:1585-1594. [PMID: 25529748 DOI: 10.1002/hbm.22725] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Received: 04/10/2014] [Revised: 11/21/2014] [Accepted: 12/08/2014] [Indexed: 11/11/2022]
Abstract
Patients with striate cortex lesions experience visual perception loss in the contralateral visual field. In a few patients, however, stimuli within the blind field can lead to unconscious (blindsight) or even conscious perception when the stimuli are moving (Riddoch syndrome). Using functional magnetic resonance imaging (fMRI), we investigated the neural responses elicited by motion stimulation in the sighted and blind visual fields of eight patients with lesions of the striate cortex. Importantly, repeated testing ensured that none of the patients exhibited blindsight or a Riddoch syndrome. Three patients had additional lesions in the ipsilesional pulvinar. For blind visual field stimulation, great care was taken to ensure that the moving stimulus was presented precisely within the borders of the scotoma. In six of eight patients, stimulation within the scotoma elicited hemodynamic activity in the human middle temporal area (hMT), while no activity was observed within the ipsilateral lesioned area of the striate cortex. One of the two patients in whom no ipsilesional activity was observed had an extensive lesion including massive subcortical damage. The other patient had an additional focal lesion within the lateral inferior pulvinar. Fiber tracking based on anatomical and functional markers (hMT and pulvinar) on individual diffusion tensor imaging (DTI) data from each patient revealed the structural integrity of subcortical pathways in all but the patient with the extensive subcortical lesion. These results provide clear evidence for the robustness of direct subcortical pathways from the pulvinar to area hMT in patients with striate cortex lesions and demonstrate that ipsilesional activity in area hMT is completely independent of conscious perception.
|
32
|
Feature integration and object representations along the dorsal stream visual hierarchy. Front Comput Neurosci 2014; 8:84. [PMID: 25140147 PMCID: PMC4122209 DOI: 10.3389/fncom.2014.00084] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Received: 04/18/2014] [Accepted: 07/16/2014] [Indexed: 11/13/2022]
Abstract
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features.
|
33
|
Similar adaptation effects in primary visual cortex and area MT of the macaque monkey under matched stimulus conditions. J Neurophysiol 2013; 111:1203-1213. [PMID: 24371295 DOI: 10.1152/jn.00030.2013] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Indexed: 11/22/2022]
Abstract
Recent stimulus history, or adaptation, can alter neuronal response properties. Adaptation effects have been characterized in a number of visually responsive structures, from the retina to higher visual cortex. However, it remains unclear whether adaptation effects across stages of the visual system take a similar form in response to a particular sensory event. This is because studies typically probe a single structure or cortical area, using a stimulus ensemble chosen to provide potent drive to the cells of interest. Here we adopt an alternative approach and compare adaptation effects in primary visual cortex (V1) and area MT using identical stimulus ensembles. Previous work has suggested these areas adjust to recent stimulus drive in distinct ways. We show that this is not the case: adaptation effects in V1 and MT can involve weak or strong loss of responsivity and shifts in neuronal preference toward or away from the adapter, depending on stimulus size and adaptation duration. For a particular stimulus size and adaptation duration, however, effects are similar in nature and magnitude in V1 and MT. We also show that adaptation effects in MT of awake animals depend strongly on stimulus size. Our results suggest that the strategies for adjusting to recent stimulus history depend more strongly on adaptation duration and stimulus size than on the cortical area. Moreover, they indicate that different levels of the visual system adapt similarly to recent sensory experience.
|
34
|
Multiple routes to mental animation: language and functional relations drive motion processing for static images. Psychol Sci 2013; 24:1379-88. [PMID: 23774464] [DOI: 10.1177/0956797612469209]
Abstract
When looking at static visual images, people often exhibit mental animation, anticipating visual events that have not yet happened. But what determines when mental animation occurs? Measuring mental animation using localized brain function (visual motion processing in the middle temporal and middle superior temporal areas, MT+), we demonstrated that animating static pictures of objects is dependent both on the functionally relevant spatial arrangement that objects have with one another (e.g., a bottle above a glass vs. a glass above a bottle) and on the linguistic judgment to be made about those objects (e.g., "Is the bottle above the glass?" vs. "Is the bottle bigger than the glass?"). Furthermore, we showed that mental animation is driven by functional relations and language separately in the right hemisphere of the brain but conjointly in the left hemisphere. Mental animation is not a unitary construct; the predictions humans make about the visual world are driven flexibly, with hemispheric asymmetry in the routes to MT+ activation.
|
35
|
Recurrent competition explains temporal effects of attention in MSTd. Front Comput Neurosci 2012; 6:80. [PMID: 23060788] [PMCID: PMC3464456] [DOI: 10.3389/fncom.2012.00080]
Abstract
Navigation in a static environment along straight paths without eye movements produces radial optic flow fields. A singularity called the focus of expansion (FoE) specifies the direction of travel (heading) of the observer. Cells in primate dorsal medial superior temporal area (MSTd) respond to radial fields and are therefore thought to be heading-sensitive. Humans frequently shift their focus of attention while navigating, for example, depending on the favorable or threatening context of approaching independently moving objects. Recent neurophysiological studies show that the spatial tuning curves of primate MSTd neurons change based on the difference in visual angle between an attentional prime and the FoE. Moreover, the peak mean population activity in MSTd retreats linearly in time as the distance between the attentional prime and FoE increases. We present a dynamical neural circuit model that demonstrates the same linear temporal peak shift observed electrophysiologically. The model qualitatively matches the neuron tuning curves and population activation profiles. After model MT dynamically pools short-range motion, model MSTd incorporates recurrent competition between units tuned to different radial optic flow templates, and integrates attentional signals from model area frontal eye fields (FEF). In the model, population activity peaks occur when the recurrent competition is most active and uncertainty is greatest about the relative position of the FoE. The nature of attention, multiplicative or non-multiplicative, is largely irrelevant, so long as attention has a Gaussian-like profile. Using an appropriately tuned sigmoidal signal function to modulate recurrent feedback affords qualitative fits of deflections in the population activity that otherwise appear to be low-frequency noise. We predict that these deflections mark changes in the balance of attention between the priming and FoE locations.
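The core of the circuit described above, template units driven by pooled motion input, modulated by a Gaussian attentional gain, and competing through recurrent on-center/off-surround interactions, can be sketched in a few lines. This is an illustrative shunting network, not the published model: the unit count, rate constants, stimulus values, and the squaring signal function (in place of the tuned sigmoid the authors use) are all simplifying assumptions.

```python
import numpy as np

def recurrent_competition(drive, attention, steps=500, dt=0.02):
    """Shunting on-center/off-surround competition among template units.

    drive: feedforward input to each radial-flow template (from model MT).
    attention: multiplicative gain with a Gaussian profile (from model FEF).
    """
    x = 0.1 * drive * attention                 # activity seeded by the input
    for _ in range(steps):
        f = x ** 2                              # faster-than-linear signal function
        inh = f.sum() - f                       # pooled inhibition from the other units
        dx = -x + (1.0 - x) * (drive * attention + f) - x * inh
        x = np.clip(x + dt * dx, 0.0, 1.0)      # activities bounded in [0, 1]
    return x

# Units tuned to candidate FoE positions (degrees); the FoE sits at +10 deg
# and a weaker attentional prime sits at +20 deg (hypothetical values).
foe_prefs = np.linspace(-40.0, 40.0, 9)
drive = np.exp(-(foe_prefs - 10.0) ** 2 / (2 * 15.0 ** 2))
attention = 1.0 + 0.3 * np.exp(-(foe_prefs - 20.0) ** 2 / (2 * 10.0 ** 2))
activity = recurrent_competition(drive, attention)
winner = foe_prefs[np.argmax(activity)]         # competition selects the FoE unit
```

Because the coupling is symmetric, the unit receiving the strongest attended drive wins the competition; shifting the prime away from the FoE flattens the early activity profile, which is the regime in which the model's population peak retreats in time.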
|
36
|
Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes. Front Syst Neurosci 2012; 6:27. [PMID: 22582038] [PMCID: PMC3348722] [DOI: 10.3389/fnsys.2012.00027]
Abstract
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
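One way to compute a measure like the spectral structure variation described above is to take the spectral entropy of each short windowed frame of the signal and then summarize how much that entropy changes over time. The frame length, hop size, and the use of the standard deviation as the variability summary are illustrative assumptions here; the paper's exact formula may differ.

```python
import numpy as np

def spectral_entropy_series(signal, frame_len=1024, hop=512):
    """Spectral entropy (bits) of each windowed frame of a mono signal."""
    window = np.hanning(frame_len)
    ents = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        power = np.abs(np.fft.rfft(signal[start:start + frame_len] * window)) ** 2
        p = power / (power.sum() + 1e-20)        # normalize to a distribution over bins
        ents.append(-(p * np.log2(p + 1e-20)).sum())
    return np.array(ents)

def spectral_structure_variation(signal):
    """SSV proxy: variability of spectral entropy across frames (assumed form)."""
    return spectral_entropy_series(signal).std()

# A steady pure tone concentrates power in a few bins (low entropy), while
# white noise spreads power across all bins (high entropy).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
noise = np.random.default_rng(0).standard_normal(sr)
```

Under this definition, a clattering mechanical sound whose spectrum keeps reorganizing yields a high SSV, while a stationary background hum yields a low one, which matches the intuition that SSV tracks temporal structure rather than overall loudness.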
|
37
|
Sensorimotor transformation deficits for smooth pursuit in first-episode affective psychoses and schizophrenia. Biol Psychiatry 2010; 67:217-23. [PMID: 19782964] [PMCID: PMC2879155] [DOI: 10.1016/j.biopsych.2009.08.005]
Abstract
BACKGROUND Smooth pursuit deficits are an intermediate phenotype for schizophrenia that may result from disturbances in visual motion perception, sensorimotor transformation, predictive mechanisms, or alterations in basic oculomotor control. Which of these components are the primary causes of smooth pursuit impairments, and whether they are impaired similarly across psychotic disorders, remain to be established. METHODS First-episode psychotic patients with bipolar disorder (n = 34), unipolar depression (n = 24), or schizophrenia (n = 77) and matched healthy participants (n = 130) performed three smooth pursuit tasks designed to evaluate different components of pursuit tracking. RESULTS On ramp tasks, maintenance pursuit velocity was reduced in all three patient groups, with psychotic bipolar patients exhibiting the most severe impairments. Open-loop pursuit velocity was reduced in psychotic bipolar and schizophrenia patients. Motion perception during pursuit initiation, as indicated by the accuracy of saccades to moving targets, was not impaired in any patient group. Analyses in 138 participants followed for 6 weeks, during which patients were treated and psychotic symptom severity decreased, revealed no significant change in performance in any group. CONCLUSIONS Sensorimotor transformation deficits in all patient groups suggest a common alteration in frontostriatal networks that dynamically regulate gain control of pursuit responses using sensory input and feedback about performance. Predictive mechanisms appear to be sufficiently intact to compensate for this deficit across psychotic disorders. The absence of significant change after acute treatment and symptom reduction suggests that these deficits are stable over time.
|
38
|
Abstract
Eye tracking dysfunction (ETD) is one of the most widely replicated behavioral deficits in schizophrenia and is over-represented in clinically unaffected first-degree relatives of schizophrenia patients. Here, we provide an overview of research relevant to the characterization and pathophysiology of this impairment. Deficits are most robust in the maintenance phase of pursuit, particularly during the tracking of predictable target movement. Impairments are also found in pursuit initiation and correlate with performance on tests of motion processing, implicating early sensory processing of motion signals. Taken together, the evidence suggests that ETD involves higher-order structures, including the frontal eye fields, which adjust the gain of the pursuit response to visual and anticipated target movement, as well as early parts of the pursuit pathway, including motion areas (the middle temporal area and the adjacent medial superior temporal area). Broader application of localizing behavioral paradigms in patient and family studies would be advantageous for refining the eye tracking phenotype for genetic studies.
|
39
|
Abstract
When a flash of light is presented in physical alignment with a moving object, the flash is perceived to lag behind the position of the object. This phenomenon, known as the flash-lag effect, has been of particular interest to vision scientists because of the challenge it presents to understanding how the visual system generates perceptions of objects in motion. Although various explanations have been offered, the significance of this effect remains a matter of debate. Here, we show that: (i) contrary to previous reports based on limited data, the flash-lag effect is an increasing nonlinear function of image speed; and (ii) this function is accurately predicted by the frequency of occurrence of image speeds generated by the perspective transformation of moving objects. These results support the conclusion that perceptions of the relative position of a moving object are determined by accumulated experience with image speeds, in this way allowing for visual behavior in response to real-world sources whose speeds and positions cannot be perceived directly.
|
40
|
Segmentation by color influences responses of motion-sensitive neurons in the cortical middle temporal visual area. J Neurosci 1999; 19:3935-51. [PMID: 10234024] [PMCID: PMC6782728]
Abstract
We previously showed that human subjects are better able to discriminate the direction of a motion signal in dynamic noise when the signal is distinguished (segmented) from the noise by color. This finding suggested a hitherto unexplored avenue of interaction between motion and color pathways in the primate visual system. To examine whether chromatic segmentation exerts a similar influence on cortical neurons that contribute to motion direction discrimination, we have now compared the discriminative capacity of single MT neurons and psychophysical observers viewing motion signals with and without chromatic segmentation. All data were obtained from rhesus monkeys trained to discriminate motion direction in dynamic stimuli containing varying proportions of coherently moving (signal) and randomly moving (noise) dots. We obtained psychophysical and neurophysiological data in the same animals, on the same trials, and using the same visual display. Chromatic segmentation of the signal from the noise enhanced both neuronal and psychophysical sensitivity to the motion signal but had a smaller influence on neuronal than on psychophysical sensitivity. Hence the ratio of neuronal to psychophysical thresholds, one measure of the relation between neuronal and psychophysical performance, depended on chromatic segmentation. Increased neuronal sensitivity to chromatically segmented displays stemmed from larger and less noisy responses to motion in the preferred directions of the neurons, suggesting that specialized mechanisms influence responses in the motion pathway when color segments motion signal in visual scenes. These findings lead us to reevaluate potential mechanisms for pooling of MT responses and the role of MT in motion perception.
|
41
|
A model for encoding multiple object motions and self-motion in area MST of primate visual cortex. J Neurosci 1998; 18:531-47. [PMID: 9412529] [PMCID: PMC6793419]
Abstract
Many cells in the dorsal part of the medial superior temporal (MST) region of visual cortex respond selectively to specific combinations of expansion/contraction, translation, and rotation motions. Previous investigators have suggested that these cells may respond selectively to the flow fields generated by self-motion of an observer. These patterns can also be generated by the relative motion between an observer and a particular object. We explored a neurally constrained model based on the hypothesis that neurons in MST partially segment the motion fields generated by several independently moving objects. Inputs to the model were generated from sequences of ray-traced images that simulated realistic motion situations, combining observer motion, eye movements, and independent object motions. The input representation was based on the response properties of neurons in the middle temporal area (MT), which provides the primary input to area MST. After applying an unsupervised optimization technique, the units became tuned to patterns signaling coherent motion, matching many of the known properties of MST cells. The results of this model are consistent with recent studies indicating that MST cells primarily encode information concerning the relative three-dimensional motion between objects and the observer.
|
42
|
How is a sensory map read out? Effects of microstimulation in visual area MT on saccades and smooth pursuit eye movements. J Neurosci 1997; 17:4312-30. [PMID: 9151748] [PMCID: PMC6573560]
Abstract
To generate behavioral responses based on sensory input, motor areas of the brain must interpret, or "read out," signals from sensory maps. Our experiments tested several algorithms for how the motor systems for smooth pursuit and saccadic eye movements might extract a usable signal of target velocity from the distributed representation of velocity in the middle temporal visual area (MT or V5). Using microstimulation, we attempted to manipulate the velocity information within MT while monkeys tracked a moving visual stimulus. We examined the effects of the microstimulation on smooth pursuit and on the compensation for target velocity shown by saccadic eye movements. Microstimulation could alter both the speed and direction of the motion estimates of both types of eye movements and could also cause monkeys to generate pursuit even when the visual target was actually stationary. The pattern of alterations suggests that microstimulation can introduce an additional velocity signal into MT and that the pursuit and saccadic systems usually compute a vector average of the visually evoked and microstimulation-induced velocity signals (pursuit, 55 of 122 experiments; saccades, 70 of 122). Microstimulation effects in a few experiments were consistent with vector summation of these two signals (pursuit, 6 of 122; saccades, 2 of 122). In the remainder of the experiments, microstimulation caused either an apparent impairment in motion processing (pursuit, 47 of 122; saccades, 41 of 122) or had no effect (pursuit, 14 of 122; saccades, 9 of 122). Within individual experiments, the effects on pursuit and saccades were usually similar, but the occasional striking exception suggests that the two eye movement systems may perform motion computations somewhat independently.
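The two readout rules the experiments distinguish are easy to state concretely. Given a visually evoked velocity signal and a microstimulation-induced one (the vectors below are hypothetical values, not data from the study), vector averaging predicts an intermediate eye velocity, including pursuit of a stationary target, since the average of zero and the induced signal is nonzero, whereas vector summation predicts a larger combined response:

```python
import numpy as np

def vector_average(v_vis, v_stim):
    """Readout that averages the two velocity signals represented in MT."""
    return (np.asarray(v_vis) + np.asarray(v_stim)) / 2.0

def vector_sum(v_vis, v_stim):
    """Readout that sums the two velocity signals instead."""
    return np.asarray(v_vis) + np.asarray(v_stim)

v_vis = np.array([10.0, 0.0])    # target motion: 10 deg/s rightward (hypothetical)
v_stim = np.array([0.0, 10.0])   # stimulation-induced signal: 10 deg/s upward

avg = vector_average(v_vis, v_stim)    # oblique and slower than summation
total = vector_sum(v_vis, v_stim)      # oblique and faster

# Averaging also predicts pursuit of a stationary target under stimulation:
stationary = vector_average([0.0, 0.0], v_stim)
```

The diagnostic difference is the magnitude of the combined response: averaging caps it between the two inputs, while summation exceeds both, which is how the majority of experiments could be classified as vector averaging.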
|
43
|
Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys. J Neurosci 1996; 16:7733-41. [PMID: 8922429] [PMCID: PMC6579106]
Abstract
We have previously shown that some neurons in extrastriate area MT are capable of signaling the global motion of complex patterns; neurons randomly sampled from V1, on the other hand, respond only to the motion of individual oriented components. Because only a small fraction of V1 neurons projects to MT, we wished to establish the processing hierarchy more precisely by studying the properties of those neurons projecting to MT, identified by antidromic responses to electrical stimulation of MT. The neurons that project from V1 to MT were directionally selective and, like other V1 neurons, responded only to the motion of the components of complex patterns. The projection neurons were predominantly "special complex," responsive to a broad range of spatial and temporal frequencies, and sensitive to very low stimulus contrasts. The projection neurons thus comprise a homogeneous and highly specialized subset of V1 neurons, consistent with the notion that V1 acts as a clearing house for basic visual measurements, distributing information appropriately to higher cortical areas for specialized analysis.
|
44
|
Effects of early-onset artificial strabismus on pursuit eye movements and on neuronal responses in area MT of macaque monkeys. J Neurosci 1996; 16:6537-53. [PMID: 8815931] [PMCID: PMC6578926]
Abstract
In humans, esotropia of early onset is associated with a profound asymmetry in smooth pursuit eye movements. When viewing is monocular, targets are tracked well only when they are moving nasally with respect to the viewing eye. To determine whether this pursuit abnormality reflects an anomaly in cortical visual motion processing, we recorded eye movements and cortical neural responses in nonamblyopic monkeys made strabismic by surgery at the age of 10-60 d. Eye movement recordings revealed the same asymmetry in the monkeys' pursuit eye movements as in humans with early-onset esotropia. With monocular viewing, pursuit was much stronger for nasalward motion than for temporalward motion, especially for targets presented in the nasal visual field. However, for targets presented during ongoing pursuit, temporalward and nasalward image motion was equally effective in modulating eye movement. Single-unit recordings made from the same monkeys, under anesthesia, revealed that MT neurons were rarely driven binocularly, but otherwise had normal response properties. Most were directionally selective, and their direction preferences were uniformly distributed. Our neurophysiological and oculomotor measurements both suggest that the pursuit defect in these monkeys is not due to altered cortical visual motion processing. Rather, the asymmetry in pursuit may be a consequence of imbalances in the two eyes' inputs to the "downstream" areas responsible for the initiation of pursuit.
|
45
|
Visual motion-detection circuits in flies: parallel direction- and non-direction-sensitive pathways between the medulla and lobula plate. J Neurosci 1996; 16:4551-62. [PMID: 8764644] [PMCID: PMC6579027]
Abstract
The neural circuitry of motion processing in insects, as in primates, involves the segregation of different types of visual information into parallel retinotopic pathways that subsequently are reunited at higher levels. In insects, achromatic, motion-sensitive pathways to the lobula plate are separated from color-processing pathways to the lobula. Further parallel subdivisions of the retinotopic pathways to the lobula plate have been suggested from anatomical observations. Here, we provide direct physiological evidence that the two most prominent of these latter pathways are, indeed, functionally distinct: recordings from the retinotopic pathway defined by small-field bushy T-cells (T4) demonstrate only weak directional selectivity to motion, in striking contrast with previously demonstrated strong directional selectivity in the second, T5-cell, pathway. Additional intracellular recordings and anatomical descriptions have been obtained from other identified neurons that may be crucial in early motion detection and processing: a deep medulla amacrine cell that seems well suited to provide the lateral interactions among retinotopic elements required for motion detection; a unique class of Y-cells that provide small-field, directionally selective feedback from the lobula plate to the medulla; and a new heterolateral lobula plate tangential cell that collates directional, motion-sensitive inputs. These results add important new elements to the set of identified neurons that process motion information. The results suggest specific hypotheses regarding the neuronal substrates for motion-processing circuitry and corroborate behavioral studies in bees that predict distinct pathways for directional and nondirectional motion.
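The lateral interaction among retinotopic elements that elementary motion detection requires is classically captured by the Hassenstein-Reichardt correlator. The sketch below is the textbook model for illustration, not a circuit reconstructed from the recordings in this study; the delay, the sinusoidal stimulus, and the parameter values are arbitrary choices.

```python
import numpy as np

def reichardt_output(left, right, delay=3):
    """Correlate each input channel with a delayed copy of its neighbor.

    Subtracting the two mirror-symmetric half-detectors yields a response
    whose sign encodes the direction of motion across the two inputs.
    """
    half_lr = np.roll(left, delay) * right   # half-detector tuned to left-to-right motion
    half_rl = np.roll(right, delay) * left   # half-detector tuned to right-to-left motion
    return float(np.mean(half_lr - half_rl))

# A pattern drifting left-to-right reaches the left input first; the right
# input sees the same signal a few samples later.
t = np.arange(1000)
left = np.sin(0.1 * t)
right = np.sin(0.1 * (t - 3))
```

With this stimulus the time-averaged output is positive; swapping the two inputs (motion in the opposite direction) flips its sign, which is the directional selectivity the T5 pathway exhibits and the T4 recordings here show only weakly.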