1. Neural sensitivity to translational self- and object-motion velocities. Hum Brain Mapp 2024; 45:e26571. PMID: 38224544; PMCID: PMC10785198; DOI: 10.1002/hbm.26571.
Abstract
The ability to detect and assess world-relative object-motion is a critical computation performed by the visual system. This computation, however, is greatly complicated by the observer's movements, which generate a global pattern of motion on the observer's retina. How the visual system implements this computation is poorly understood. Since we are potentially able to detect a moving object if its motion differs in velocity (or direction) from the expected optic flow generated by our own motion, here we manipulated the relative motion velocity between the observer and the object within a stationary scene as a strategy to test how the brain accomplishes object-motion detection. Specifically, we tested the neural sensitivity of brain regions that are known to respond to egomotion-compatible visual motion (i.e., egomotion areas: cingulate sulcus visual area, posterior cingulate sulcus area, posterior insular cortex [PIC], V6+, V3A, IPSmot/VIP, and MT+) to a combination of different velocities of visually induced translational self- and object-motion within a virtual scene while participants were instructed to detect object-motion. To this aim, we combined individual surface-based brain mapping, task-evoked activity by functional magnetic resonance imaging, and parametric and representational similarity analyses. We found that all the egomotion regions (except area PIC) responded to all the possible combinations of self- and object-motion and were modulated by the self-motion velocity. Interestingly, we found that, among all the egomotion areas, only MT+, V6+, and V3A were further modulated by object-motion velocities, hence reflecting their possible role in discriminating between distinct velocities of self- and object-motion. We suggest that these egomotion regions may be involved in the complex computation required for detecting scene-relative object-motion during self-motion.
2. Decoding self-motion from visual image sequence predicts distinctive features of reflexive motor responses to visual motion. Neural Netw 2023; 162:516-530. PMID: 36990001; DOI: 10.1016/j.neunet.2023.03.020.
Abstract
Visual motion analysis is crucial for humans to detect external moving objects and self-motion which are informative for planning and executing actions for various interactions with environments. Here we show that the image motion analysis trained to decode the self-motion during human natural movements by a convolutional neural network exhibits similar specificities with the reflexive ocular and manual responses induced by a large-field visual motion, in terms of stimulus spatiotemporal frequency tuning. The spatiotemporal frequency tuning of the decoder peaked at high-temporal and low-spatial frequencies, as observed in the reflexive ocular and manual responses, but differed significantly from the frequency power of the visual image itself and the density distribution of self-motion. Further, artificial manipulations of the learning data sets predicted great changes in the specificity of the spatiotemporal tuning. Interestingly, despite similar spatiotemporal frequency tunings in the vertical-axis rotational direction and in the transversal direction to full-field visual stimuli, the tunings for center-masked stimuli were different between those directions, and the specificity difference is qualitatively similar to the discrepancy between ocular and manual responses, respectively. In addition, the representational analysis demonstrated that head-axis rotation was decoded by relatively simple spatial accumulation over the visual field, while the transversal motion was decoded by more complex spatial interaction of visual information. These synthetic model examinations support the idea that visual motion analyses eliciting the reflexive motor responses, which are critical in interacting with the external world, are acquired for decoding self-motion.
4. Neural bases of self- and object-motion in a naturalistic vision. Hum Brain Mapp 2019; 41:1084-1111. PMID: 31713304; PMCID: PMC7267932; DOI: 10.1002/hbm.24862.
Abstract
To plan movements toward objects our brain must recognize whether retinal displacement is due to self-motion and/or to object-motion. Here, we aimed to test whether motion areas are able to segregate these types of motion. We combined an event-related functional magnetic resonance imaging experiment, brain mapping techniques, and wide-field stimulation to study the responsivity of motion-sensitive areas to pure and combined self- and object-motion conditions during virtual movies of a train running within a realistic landscape. We observed a selective response in MT to the pure object-motion condition, and in medial (PEc, pCi, CSv, and CMA) and lateral (PIC and LOR) areas to the pure self-motion condition. Some other regions (like V6) responded more to complex visual stimulation where both object- and self-motion were present. Among all, we found that some motion regions (V3A, LOR, MT, V6, and IPSmot) could extract object-motion information from the overall motion, recognizing the real movement of the train even when the images remain still (on the screen), or moved, because of self-movements. We propose that these motion areas might be good candidates for the "flow parsing mechanism," that is the capability to extract object-motion information from retinal motion signals by subtracting out the optic flow components.
5.
Abstract
A single experiment required 26 younger and older adults to discriminate global shape as defined only by differences in the speed of stimulus element rotation. Detection of the target shape required successful perceptual grouping by common fate. A considerable adverse effect of age was found: In order to perceive the target and discriminate its shape with a d’ value of 1.5, the older observers needed target element rotational speeds that were 23.4% faster than those required for younger adults. In addition, as the difference between the rotation speeds of the background and target stimulus elements increased, the performance of the older observers improved at a rate that was only about half of that exhibited by the younger observers. The results indicate that while older adults can perceive global shape defined by similarity (and differences) in rotational speed, their abilities are nevertheless significantly compromised.
7. Neuronal control of fixation and fixational eye movements. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160205. PMID: 28242738; PMCID: PMC5332863; DOI: 10.1098/rstb.2016.0205.
Abstract
Ocular fixation is a dynamic process that is actively controlled by many of the same brain structures involved in the control of eye movements, including the superior colliculus, cerebellum and reticular formation. In this article, we review several aspects of this active control. First, the decision to move the eyes not only depends on target-related signals from the peripheral visual field, but also on signals from the currently fixated target at the fovea, and involves mechanisms that are shared between saccades and smooth pursuit. Second, eye position during fixation is actively controlled and depends on bilateral activity in the superior colliculi and medio-posterior cerebellum; disruption of activity in these circuits causes systematic deviations in eye position during both fixation and smooth pursuit eye movements. Third, the eyes are not completely still during fixation but make continuous miniature movements, including ocular drift and microsaccades, which are controlled by the same neuronal mechanisms that generate larger saccades. Finally, fixational eye movements have large effects on visual perception. Ocular drift transforms the visual input in ways that increase spatial acuity; microsaccades not only improve vision by relocating the fovea but also cause momentary changes in vision analogous to those caused by larger saccades. This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’.
8. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing. IEEE Trans Neural Netw Learn Syst 2017; 28:2803-2821. PMID: 27831890; DOI: 10.1109/tnnls.2016.2592969.
Abstract
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
9. The Roles of Non-retinotopic Motions in Visual Search. Front Psychol 2016; 7:840. PMID: 27313560; PMCID: PMC4887493; DOI: 10.3389/fpsyg.2016.00840.
Abstract
In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems—retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracters elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search.
10. Depth perception from moving cast shadow in macaque monkey. Behav Brain Res 2015; 288:63-70. PMID: 25882723; DOI: 10.1016/j.bbr.2015.04.005.
Abstract
In the present study, we investigate whether the macaque monkey can perceive motion in depth using a moving cast shadow. To accomplish this, we conducted two experiments. In the first experiment, an adult Japanese monkey was trained in a motion discrimination task in depth by binocular disparity. A square was presented on the display so that it appeared with a binocular disparity of 0.12 degrees (initial position), and moved toward (approaching) or away from (receding) the monkey for 1s. The monkey was trained to discriminate the approaching and receding motion of the square by GO/delayed GO-type responses. The monkey showed a significantly high accuracy rate in the task, and the performance was maintained when the position, color, and shape of the moving object were changed. In the next experiment, the change in the disparity was gradually decreased in the motion discrimination task. The results showed that the performance of the monkey declined as the distance of the approaching and receding motion of the square decreased from the initial position. However, when a moving cast shadow was added to the stimulus, the monkey responded to the motion in depth induced by the cast shadow in the same way as by binocular disparity; the reward was delivered randomly or given in all trials to prevent the learning of the 2D motion of the shadow in the frontal plane. These results suggest that the macaque monkey can perceive motion in depth using a moving cast shadow as well as using binocular disparity.
11. Effects of crowding and attention on high-levels of motion processing and motion adaptation. PLoS One 2015; 10:e0117233. PMID: 25615577; PMCID: PMC4304809; DOI: 10.1371/journal.pone.0117233.
Abstract
The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e., phantom MAE). In the present study we used global rotating patterns to measure the strength of the conventional and phantom MAEs in crowded and non-crowded conditions, and when attention was directed to the adapting stimulus and when it was diverted away from the adapting stimulus. The results show that: (i) the phantom MAE is weaker than the conventional MAE, for both non-crowded and crowded conditions, and when attention was focused on the adapting stimulus and when it was diverted from it, (ii) conventional and phantom MAEs in the crowded condition are weaker than in the non-crowded condition. Analysis conducted to assess the effect of crowding on high-level of motion adaptation suggests that crowding is likely to affect the awareness of the adapting stimulus rather than degrading its sensory representation, (iii) for high-level of motion processing the attentional manipulation does not affect the strength of either conventional or phantom MAEs, neither in the non-crowded nor in the crowded conditions. These results suggest that high-level MAEs do not depend on attention and that at high-level of motion adaptation the effects of crowding are not modulated by attention.
12. Cortical-hippocampal interactions and cognitive mapping: A hypothesis based on reintegration of the parietal and inferotemporal pathways for visual processing. 2013. DOI: 10.1007/bf03337774.
13.
Abstract
The ability to perceive structure using motion information was examined using a reaction time task with two primate species. Homo sapiens and Macaca mulatta subjects were quantitatively tested under identical conditions to detect the change from a control unstructured to a test structured motion stimulus. The structures underlying the test were rotations of a plane, expansion of a plane, and a rotation of a three-dimensional cylinder. On many of the stimulus conditions, the two species performed similarly, although there were some species differences. These differences may be due to the extensive training of the monkeys or the use of different cognitive strategies by the human subjects. These data provide support for the existence of a neural mechanism that uses flow fields to construct two- or three-dimensional surface representations.
14. The representation of egocentric space in the posterior parietal cortex. Behav Brain Sci 2013; 15 Spec No 4:691-700. PMID: 23842408; DOI: 10.1017/s0140525x00072605.
Abstract
The posterior parietal cortex (PPC) is the most likely site where egocentric spatial relationships are represented in the brain. PPC cells receive visual, auditory, somaesthetic, and vestibular sensory inputs; oculomotor, head, limb, and body motor signals; and strong motivational projections from the limbic system. Their discharge increases not only when an animal moves towards a sensory target, but also when it directs its attention to it. PPC lesions have the opposite effect: sensory inattention and neglect. The PPC does not seem to contain a "map" of the location of objects in space but a distributed neural network for transforming one set of sensory vectors into other sensory reference frames or into various motor coordinate systems. Which set of transformation rules is used probably depends on attention, which selectively enhances the synapses needed for making a particular sensory comparison or aiming a particular movement.
15. Motion-form interactions beyond the motion integration level: evidence for interactions between orientation and optic flow signals. J Vis 2013; 13:16. PMID: 23729767; PMCID: PMC3670578; DOI: 10.1167/13.6.16.
Abstract
Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted.
16. Interactions between motion and form processing in the human visual system. Front Comput Neurosci 2013; 7:65. PMID: 23730286; PMCID: PMC3657629; DOI: 10.3389/fncom.2013.00065.
Abstract
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
17.
Abstract
Motion processing regions apart from V5+/MT+ are still relatively poorly understood. Here, we used functional magnetic resonance imaging to perform a detailed functional analysis of the recently described cingulate sulcus visual area (CSv) in the dorsal posterior cingulate cortex. We used distinct types of visual motion stimuli to compare CSv with V5/MT and MST, including a visual pursuit paradigm. Both V5/MT and MST preferred 3D flow over 2D planar motion, responded less yet substantially to random motion, had a strong preference for contralateral versus ipsilateral stimulation, and responded nearly equally to contralateral and to full-field stimuli. In contrast, CSv had a pronounced preference to 2D planar motion over 3D flow, did not respond to random motion, had a weak and nonsignificant lateralization that was significantly smaller than that of MST, and strongly preferred full-field over contralateral stimuli. In addition, CSv had a better capability to integrate eye movements with retinal motion compared with V5/MT and MST. CSv thus differs from V5+/MT+ by its unique preference to full-field, coherent, and planar motion cues. These results place CSv in a good position to process visual cues related to self-induced motion, in particular those associated to eye or lateral head movements.
19.
Abstract
This target article draws together two groups of experimental studies on the control of human movement through peripheral feedback and centrally generated signals of motor commands. First, during natural movement, feedback from muscle, joint, and cutaneous afferents changes; in human subjects these changes have reflex and kinesthetic consequences. Recent psychophysical and microneurographic evidence suggests that joint and even cutaneous afferents may have a proprioceptive role. Second, the role of centrally generated motor commands in the control of normal movements and movements following acute and chronic deafferentation is reviewed. There is increasing evidence that subjects can perceive their motor commands under various conditions, but that this is inadequate for normal movement; deficits in motor performance arise when the reliance on proprioceptive feedback is abolished either experimentally or because of pathology. During natural movement, the CNS appears to have access to functionally useful input from a range of peripheral receptors as well as from internally generated command signals. The unanswered questions that remain suggest a number of avenues for further research.
20. Different regions of space or different spaces altogether: What are the dorsal/ventral systems processing? Behav Brain Sci 2011. DOI: 10.1017/s0140525x00080183.
21. Functional specialization in the lower and upper visual fields in humans: Its ecological origins and neurophysiological implications. Behav Brain Sci 2011. DOI: 10.1017/s0140525x00080018.
Abstract
Functional specialization in the lower and upper visual fields in humans is analyzed in relation to the origins of the primate visual system. Processing differences between the vertical hemifields are related to the distinction between near (peripersonal) and far (extrapersonal) space, which are biased toward the lower and upper visual fields, respectively. Nonlinear/global processing is required in the lower visual field in order to perceive the optically degraded and diplopic images in near vision, whereas objects in far vision are searched for and recognized primarily using linear/local perceptual mechanisms. The functional differences between near and far visual space are correlated with their disproportionate representations in the dorsal and ventral divisions of visual association cortex, respectively, and in the magnocellular and parvocellular pathways that project to them. Advances in far visual capabilities and forelimb manipulatory skills may have led to a significant enhancement of these functional specializations.
24. Equilibrium-point hypothesis, minimum effort control strategy and the triphasic muscle activation pattern. Behav Brain Sci 2011. DOI: 10.1017/s0140525x00073209.
27. Successive approximation in targeted movement: An alternative hypothesis. Behav Brain Sci 2011. DOI: 10.1017/s0140525x00072848.
28.
Abstract
Engineers use neural networks to control systems too complex for conventional engineering solutions. To examine the behavior of individual hidden units would defeat the purpose of this approach because it would be largely uninterpretable. Yet neurophysiologists spend their careers doing just that! Hidden units contain bits and scraps of signals that yield only arcane hints about network function and no information about how its individual units process signals. Most literature on single-unit recordings attests to this grim fact. On the other hand, knowing a system's function and describing it with elegant mathematics tell one very little about what to expect of interneuronal behavior. Examples of simple networks based on neurophysiology are taken from the oculomotor literature to suggest how single-unit interpretability might decrease with increasing task complexity. It is argued that trying to explain how any real neural network works on a cell-by-cell, reductionist basis is futile and we may have to be content with trying to understand the brain at higher levels of organization.
29. Does the nervous system use equilibrium-point control to guide single and multiple joint movements? Behav Brain Sci 2011; 15:603-13. PMID: 23302290; DOI: 10.1017/s0140525x00072538.
33. Direction and speed tuning to visual motion in cortical areas MT and MSTd during smooth pursuit eye movements. J Neurophysiol 2011; 105:1531-45. PMID: 21273314; DOI: 10.1152/jn.00511.2010.
Abstract
When tracking a moving target in the natural world with pursuit eye movements, our visual system must compensate for the self-induced retinal slip of the visual features in the background to enable us to perceive their actual motion. We previously reported that the speed of the background stimulus in space is represented by dorsal medial superior temporal (MSTd) neurons in the monkey cortex, which compensate for retinal image motion resulting from eye movements when the directions of the pursuit and background motion are parallel to the preferred direction of each neuron. To further characterize the compensation observed in the MSTd responses to background motion, we recorded single-unit activity in cortical areas middle temporal (MT) and MSTd, and we selected neurons responsive to a large-field visual stimulus. We studied their responses to the large-field stimulus in the background while monkeys pursued a moving target and while they fixated a stationary target. We investigated whether compensation for retinal image motion of the background depended on the speed of pursuit. We also asked whether the directional selectivity of each neuron in relation to the external world remained the same during pursuit and whether compensation for retinal image motion occurred irrespective of the direction of the pursuit. We found that the majority of the MSTd neurons responded to the visual motion in space by compensating for the image motion on the retina resulting from the pursuit, regardless of pursuit speed and direction, whereas most of the MT neurons responded in relation to the genuine retinal image motion.
34
35
Fiber pathways and cortical connections of preoccipital areas in rhesus monkeys. J Comp Neurol 2010; 518:3725-51. [PMID: 20653031] [DOI: 10.1002/cne.22420]
Abstract
An understanding of visual function at the cerebral cortical level requires detailed knowledge of anatomical connectivity. Cortical association pathways and terminations of preoccipital visual areas were investigated in rhesus monkeys by using the autoradiographic tracing technique. Medial and adjacent dorsomedial preoccipital regions project via the occipitofrontal fascicle to the frontal lobe (dorsal area 6, and areas 8Ad, 8B, and 46); via the dorsal portion of the superior longitudinal fascicle (SLF) to dorsal area 6, area 9, and the supplementary motor area; and via the cingulate fascicle to area 24. In addition, medial and dorsomedial preoccipital areas send projections to parietal (areas PGm, PEa, PG-Opt, and POa) and superior temporal (areas MST and MT) regions. In contrast, connections from the dorsolateral, annectant, and ventral preoccipital regions are conveyed via the inferior longitudinal fascicle (ILF) to the parietal lobe (areas POa and IPd), superior temporal sulcus (areas MT, MST, FST, V4t, and IPa), inferotemporal region (areas TEO and TE1-TE3), and parahippocampal gyrus (areas TF, TH, and TL). The central-lateral preoccipital region projects via an ILF-SLF pathway to frontal area 8Av. The preoccipital areas also have caudal connections to occipital areas V1, V2, and V3. Finally, preoccipital regions are interconnected via different intrinsic pathways. These findings provide further insight into the nature of preoccipital fiber pathways and the connectional organization of the visual system.
36
Responses of MSTd and MT neurons during smooth pursuit exhibit similar temporal frequency dependence on retinal image motion. Cereb Cortex 2009; 20:1708-18. [PMID: 19892788] [DOI: 10.1093/cercor/bhp235]
Abstract
Even when our eyes are in constant motion, the world around us remains perceptually stable, although eye movements produce slips of the visual scene on our retinae. In our previous study, we suggested that visual motion in space is served by neurons in the dorsal part of the medial superior temporal (MSTd) area, which compensate for retinal-image motion due to pursuit eye movements. In contrast, neurons in the middle temporal (MT) area respond to retinal-image motion. In the present study, to further elucidate the visual properties of MSTd/MT neurons, we investigated the neuronal response to the motion of checkerboard patterns (CBPs) in addition to the random-dot pattern used in the previous study. We found that neuronal responses in both areas decreased, regardless of fixation or pursuit, when the temporal frequency of the CBPs exceeded 20 Hz on the retina. Our results support the idea that the pursuit-speed compensation observed in area MSTd might be formed by the reception of retina-based visual information from MT neurons, because both areas MT and MSTd were dependent on retina-based information during pursuit eye movements.
37
Reaching in depth: hand position dominates over binocular eye position in the rostral superior parietal lobule. J Neurosci 2009; 29:11461-70. [PMID: 19759295] [DOI: 10.1523/jneurosci.1305-09.2009]
Abstract
Neural activity was recorded in area PE (dorsorostral part of Brodmann's area 5) of the posterior parietal cortex while monkeys performed arm reaching toward memorized targets located at different distances from the body. For any given distance, arm movements were performed while the animal kept binocular eye fixation constant. Under these conditions, the activity of a large proportion (36%) of neurons was modulated by reach distance during the memory period. By varying binocular eye position (vergence angle) and initial hand position, we found that the reaching-related activity of most neurons (61%) was influenced by changing the starting position of the hand, whereas that of a smaller, although substantial, population (13%) was influenced by changes of binocular eye position (i.e., by the angle of vergence). Furthermore, the modulation of the neural activity was better explained expressing the reach movement end-point, corresponding to the memorized target location, in terms of distance from the initial hand position, rather than from the body. These results suggest that the activity of neurons in area PE combines information about eye and hand position to encode target distance for reaching in depth predominantly in hand coordinates. This encoding mechanism is consistent with the position of PE in the functional gradient that characterizes the parieto-frontal network underlying reaching.
38
39
Abstract
When a person tracks a small moving object, the visual images in the background of the visual scene move across his/her retina. It is, however, possible to estimate the actual motion of the images despite the eye-movement-induced motion. To understand the neural mechanism that reconstructs a stable visual world independent of eye movements, we explored areas MT (middle temporal) and MST (medial superior temporal) in the monkey cortex, both of which are known to be essential for visual motion analysis. We recorded the responses of neurons to a moving textured image that appeared briefly on the screen while the monkeys were performing smooth pursuit or stationary fixation tasks. Although neurons in both areas exhibited significant responses to the motion of the textured image with directional selectivity, the responses of MST neurons were mostly correlated with the motion of the image on the screen, independent of pursuit eye movement, whereas the responses of MT neurons were mostly correlated with the motion of the image on the retina. Thus, these MST neurons were more likely than MT neurons to distinguish between external and self-induced motion. The results are consistent with the idea that MST neurons code for visual motion in the external world while compensating for the counter-rotation of retinal images due to pursuit eye movements.
40
Cerebrocerebellar circuits for the perceptual cancellation of eye-movement-induced retinal image motion. J Cogn Neurosci 2006; 18:1899-912. [PMID: 17069480] [DOI: 10.1162/jocn.2006.18.11.1899]
Abstract
Despite smooth pursuit eye movements, we are unaware of resultant retinal image motion. This example of perceptual invariance is achieved by comparing retinal image slip with an internal reference signal predicting the sensory consequences of the eye movement. This prediction can be manipulated experimentally, allowing one to vary the amount of self-induced image motion for which the reference signal compensates and, accordingly, the resulting percept of motion. Here we were able to map regions in CRUS I within the lateral cerebellar hemispheres that exhibited a significant correlation between functional magnetic resonance imaging signal amplitudes and the amount of motion predicted by the reference signal. The fact that these cerebellar regions were found to be functionally coupled with the left parieto-insular cortex and the supplementary eye fields points to these cortical areas as the sites of interaction between predicted and experienced sensory events, ultimately giving rise to the perception of a stable world despite self-induced retinal motion.
41
Diversity of laminar connections linking periarcuate and lateral intraparietal areas depends on cortical structure. Eur J Neurosci 2006; 23:161-79. [PMID: 16420426] [DOI: 10.1111/j.1460-9568.2005.04522.x]
Abstract
Lateral prefrontal and intraparietal cortices have strong connectional and functional associations but it is unclear how their common visuomotor, perceptual and working memory functions arise. The hierarchical scheme of cortical processing assumes that prefrontal cortex issues 'feedback' projections to parietal cortex. However, the architectonic heterogeneity of these cortices raises the question of whether distinct areas have laminar-specific interconnections underlying their complex functional relationship. Using quantitative procedures, we showed that laminar-specific connections between distinct prefrontal (areas 46 and 8) and lateral intraparietal (LIPv, LIPd and 7a) areas in Macaca mulatta, studied with neural tracers, varied systematically according to rules determined by the laminar architecture of the linked areas. We found that axons from areas 46 and rostral 8 terminated heavily in layers I-III of all intraparietal areas, as did caudal area 8 to area LIPv, suggesting 'feedback' communication. However, contrary to previous assumptions, axons from caudal area 8 terminated mostly in layers IV-V of LIPd and 7a, suggesting 'feedforward' communication. These laminar patterns of connections were highly correlated with consistent differences in neuronal density between linked areas. When neuronal density in a prefrontal origin was lower than in the intraparietal destination, most terminations were found in layer I with a concomitant decrease in layer IV. The opposite occurred when the prefrontal origin had a higher neuronal density than the target. These findings indicate that the neuronal density of linked areas can reliably predict their laminar connections and may form the basis of understanding the functional complexity of prefrontal-intraparietal interactions in cognition.
42
Contextual modulation in the V1 real motion cells. Neuroreport 2004; 15:2219-22. [PMID: 15371737] [DOI: 10.1097/00001756-200410050-00015]
Abstract
Retinal images move when the eyes move across a stationary object or, alternatively, when the object moves while the eyes are stationary. Orientation-selective cells in V1 showed preferences between these two types of retinal image slip. Furthermore, if an orientation-selective cell preferred moving objects, its response to an element of a complex image was modulated by background stimuli placed outside the cell's receptive field. However, the response of cells that showed no preference for a moving object was hardly affected by the background. These results indicate that figure and ground are already segregated at this very early stage of visual processing.
43
Abstract
We conducted a PET study to directly compare the differential effects of visual motion stimulation that induced either rollvection about the line of sight or forward linearvection along this axis in the same subjects. The main question was whether the areas that respond to vection are identical or separate and distinct for rollvection and linearvection. Eleven healthy volunteers were exposed to large-field (100° × 60°) visual motion stimulation consisting of (1) dots accelerating from a focus of expansion to the edge of the screen (forward linearvection) and (2) dots rotating counterclockwise in the frontal plane (clockwise rollvection). These two stimuli, which induced apparent self-motion in all subjects, were compared to each other and to a stationary visual pattern. Linearvection and rollvection led to bilateral activations of visual areas including medial parieto-occipital (PO), occipito-temporal (MT/V5), and ventral occipital (fusiform gyri) cortical areas, as well as superior parietal sites. Activations in the polar visual cortex around the calcarine sulcus (BA 17, BA 18) were larger and more significant during linearvection. Temporo-parietal sites displayed higher activity levels during rollvection. Differential activation of PO or MT/V5 was not found. Both stimuli led to simultaneous deactivations of retroinsular regions (more pronounced during linearvection); this is compatible with an inhibitory interaction between the visual and the vestibular systems for motion perception.
44
Integration of retinal disparity and fixation-distance related signals toward an egocentric coding of distance in the posterior parietal cortex of primates. J Neurophysiol 2004; 91:2670-84. [PMID: 14960558] [DOI: 10.1152/jn.00712.2003]
Abstract
For those movements that are directed toward objects located in extrapersonal space, it is necessary that visual inputs are first remapped from a retinal coordinate system to a body-centered one. The posterior parietal cortex (PPC) most likely integrates retinal and extraretinal information to determine the egocentric distance of an object located in three-dimensional (3-D) space. This determination requires both a retinal disparity signal and a parallel estimate of the fixation distance. We recorded from the lateral intraparietal area (LIP) to see whether single neurons respond to both vergence angle and retinal disparity and whether these two signals are integrated to encode egocentric distance. Monkeys were trained to make saccades to real targets in 3-D space. When both fixation distance and disparity of visual stimuli were varied, the disparity tuning of individual neurons displayed a fixation-distance modulation. We propose that the observed modulation contributes to a spatial coding domain intermediate between retinal and egocentric, because the disparity tuning shifts in a systematic way with changes in fixation distance.
45
Abstract
The visual system cannot rely only upon information from the retina to perceive object motion, because identical retinal stimulations can be evoked by the movement of objects in the field of view as well as by the movements of retinal images self-evoked by eye movements. We clearly distinguish the two situations, perceiving object motion in the first case and stationarity in the second. The present work deals with the neuronal mechanisms that are likely involved in the detection of real motion. In monkeys, cells that are able to distinguish real from self-induced motion (real-motion cells) are distributed in several cortical areas of the dorsal visual stream. We suggest that the activity of these cells is responsible for motion perception, and hypothesize that these cells are the elements of a cortical network representing an internal map of a stable visual world. Supporting this view are the facts that: (i) the same cortical regions in humans are activated in brain imaging studies during perception of object motion; and (ii) lesions of these same regions produce selective impairments in motion detection, so that patients interpret any retinal image motion as object motion, even when it results from their own eye movements. Among the areas of the dorsal visual stream rich in real-motion cells, V3A and V6, likely involved in the fast form and motion analyses needed for visual guidance of action, could use real-motion signals to orient the animal's attention towards moving objects and/or to help grasp them. Areas MT/V5, MST and 7a, known to be involved in the control of pursuit eye movements and in the analysis of visual signals evoked by slow ocular movements, could use real-motion signals to give a proper evaluation of motion during pursuit.
46
Abstract
Musically naive participants were scanned before and after a period of 15 weeks during which they were taught to read music and play the keyboard. When participants played melodies from musical notation after training, activation was seen in a cluster of voxels within the bilateral superior parietal cortex. A subset of these voxels were activated in a second experiment in which musical notation was present, but irrelevant for task performance. These activations suggest that music reading involves the automatic sensorimotor translation of a spatial code (written music) into a series of motor responses (keypresses).
47
Abstract
We examined visual response properties of single neurons in the parahippocampal (PH) cortex of alert monkeys using various visual stimuli (bars, geometrical shapes such as a circle, and images such as a human face) while the monkey fixated a spot for a juice reward. Of the 359 PH neurons investigated, 104 (29%) were found to be visually responsive. The investigation was focused on spatial and object aspects of visual processing. With respect to spatial processing, we investigated visual receptive field (RF) properties and direction selectivity for a moving bar. For half of these PH neurons (53%), the optimal stimulus position, where a visual stimulus elicited the maximal response, was located peripherally, that is, at an eccentricity of more than 10 deg. More than 20% of these PH neurons had an RF that did not include the center of gaze. There were neurons in the PH cortex that appeared to convey motion signals. In addition, some PH neurons showed eye-position-dependent activity. With respect to object processing, we investigated selectivities for images, geometrical shapes, orientations of a bar, and colors. For comparison purposes, we also examined responses of perirhinal (PR) neurons. PH neurons showed selective responses to these stimuli, but PR neurons were found to be more selective for images than PH neurons. These results suggest that the PH cortex is involved in both spatial and object processing, but less involved than the PR cortex in the processing of complex images.
48
Auditory deprivation affects processing of motion, but not color. Brain Res Cogn Brain Res 2002; 14:422-34. [PMID: 12421665] [DOI: 10.1016/s0926-6410(02)00211-2]
Abstract
Event-related potentials (ERPs) were recorded in response to color changes of isoluminant, high spatial frequency gratings and to motion of grayscale, low spatial frequency gratings in 11 normally hearing and 11 congenitally deaf adults. The stimuli were designed to activate preferentially the ventral and dorsal streams of visual processing, respectively. Color changes evoked prominent P1 and N1 components in the ERP; motion evoked an early, focal positivity (the P-INZ), a minimal P1, and a prominent N1. Color changes elicited similar ERP components in hearing and deaf participants. In contrast, motion elicited larger amplitude and more anteriorly distributed N1 components in deaf than hearing participants. These results suggest that early auditory deprivation may have more pronounced effects on the functions of the dorsal visual pathway than on functions of the ventral pathway.
49
Abstract
When we move forward, the visual images on our retinas expand. Humans rely on the focus, or center, of this expansion to estimate their direction of self-motion or heading and, as long as the eyes are still, the retinal focus corresponds to the heading. However, smooth pursuit eye movements add visual motion to the expanding retinal image and displace the focus of expansion. In spite of this, humans accurately judge their heading during pursuit eye movements even though the retinal focus no longer corresponds to the heading. Recent studies in macaque suggest that correction for pursuit may occur in the dorsal aspect of the medial superior temporal area (MSTd); neurons in this area are tuned to the retinal position of the focus and they modify their tuning to partially compensate for the focus shift caused by pursuit. However, the question remains whether these neurons shift focus tuning more at faster pursuit speeds, to compensate for the larger focus shifts created by faster pursuit. To investigate this question, we recorded from 40 MSTd neurons while monkeys made pursuit eye movements at a range of speeds across simulated self- or object-motion displays. We found that most MSTd neurons modify their focus tuning more at faster pursuit speeds, consistent with the idea that they encode heading and other motion parameters regardless of pursuit speed. Across the population, the median rate of compensation increase with pursuit speed was 51% as great as required for perfect compensation. We recorded from the same neurons in a simulated pursuit condition, in which gaze was fixed but the entire display counter-rotated to produce the same retinal image as during real pursuit. This condition yielded the result that retinal cues contribute to pursuit compensation; here the rate of compensation increase was 30% of that required for accurate encoding of heading. The difference between these two conditions was significant (P < 0.05), indicating that extraretinal cues also contribute significantly. We found a systematic antialignment between preferred pursuit and preferred visual motion directions. Neurons may use this antialignment to combine retinal and extraretinal compensatory cues. These results indicate that many MSTd neurons compensate for pursuit velocity, both pursuit direction as previously reported and pursuit speed, and further implicate MSTd as a critical stage in the computation of egomotion.
50
Abstract
Detection of the causal relationships between events is fundamental for understanding the world around us. We report an event-related fMRI study designed to investigate how the human brain processes the perception of mechanical causality. Subjects were presented with mechanically causal events (in which a ball collides with and causes movement of another ball) and non-causal events (in which no contact is made between the balls). There was a significantly higher level of activation of V5/MT/MST bilaterally, the superior temporal sulcus bilaterally and the left intraparietal sulcus to causal relative to non-causal events. Directing attention to the causal nature of the stimuli had no significant effect on the neural processing of the causal events. These results support theories of causality suggesting that the perception of elementary mechanical causality events is automatically processed by the visual system.