1
Processing 3D form and 3D motion: respective contributions of attention-based and stimulus-driven activity. Neuroimage 2008; 43:736-47. PMID: 18805496. DOI: 10.1016/j.neuroimage.2008.08.027.
Abstract
This study aims to segregate the neural substrates of the 3D-form and 3D-motion attributes in structure-from-motion perception, and to disentangle the stimulus-driven and endogenous-attention-driven processing of these attributes. Attention and stimulus were manipulated independently: participants had to detect the transitions of one attribute (form, 3D motion or colour) while the visual stimulus underwent successive transitions of all attributes. We compared the BOLD activity related to form and 3D motion in three conditions: stimulus-driven processing (unattended transitions), endogenous attentional selection (task), or both stimulus-driven processing and attentional selection (attended transitions). In all conditions, the form versus 3D-motion contrasts revealed a clear dorsal/ventral segregation. However, while the form-related activity is consistent with previously described shape-selective areas, the activity related to 3D motion does not encompass the usual "visual motion" areas, but rather corresponds to a high-level motion system including IPL and STS areas. Second, we found a dissociation between the neural processing of unattended attributes and that involved in endogenous attentional selection. Areas selective for 3D motion and form showed either increased activity at transitions of their preferred attribute or decreased activity when subjects' attention was directed to a competing attribute. We propose that both facilitatory and suppressive mechanisms of attribute selection are involved, depending on the conditions driving this selection. Therefore, attentional selection is not limited to an increased activity in areas processing stimulus properties, and may reveal a functional localization different from that revealed by stimulus modulation.
2
The visual perception of plane tilt from motion in small field and large field: psychophysics and theory. Vision Res 2006; 46:3494-513. PMID: 16769100. DOI: 10.1016/j.visres.2006.04.003.
Abstract
Subjects indicated the tilt of dotted planes rotating in depth, under monocular viewing and perspective projection. The responses depended on the field of view (FOV) and on the angle W between the tilt and the frontal translation (orthogonal to the rotation axis). Response accuracy increased with the FOV and decreased with W. Our results support the processing of the second-order optic flow in all cases, but indicate that this flow is quantitatively small in small-field conditions, leading to tilt ambiguities. We examine computational models based on the affine components of the optic flow to interpret our results.
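The affine components invoked here can be illustrated with a small numerical sketch (my construction, not the authors' model): fit an affine field v = A p + b to sampled flow vectors by least squares, then read off first-order quantities such as divergence and curl from the gradient matrix A.

```python
import numpy as np

def fit_affine_flow(points, velocities):
    """Least-squares fit of v = A @ p + b to sampled optic-flow vectors.
    Returns the 2x2 velocity-gradient matrix A and the translation b."""
    n = len(points)
    # Design matrix for the unknowns [a11, a12, a21, a22, b1, b2]
    X = np.zeros((2 * n, 6))
    y = np.asarray(velocities, dtype=float).reshape(-1)
    for i, (px, py) in enumerate(points):
        X[2 * i]     = [px, py, 0, 0, 1, 0]   # vx equation
        X[2 * i + 1] = [0, 0, px, py, 0, 1]   # vy equation
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[:4].reshape(2, 2), coef[4:]

# Toy field: pure expansion v = 0.5 * p (e.g. approach toward a plane)
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
vel = 0.5 * pts
A, b = fit_affine_flow(pts, vel)
div = A[0, 0] + A[1, 1]    # divergence ~ 1.0 for this expanding field
curl = A[1, 0] - A[0, 1]   # curl ~ 0.0 (no rotation component)
print(div, curl)
```

For a slanted plane in frontal translation, the tilt direction is carried by how these first-order components vary across the image; the fit above only recovers the local affine approximation.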
3
Measurement of the visual contribution to postural steadiness from the COP movement: methodology and reliability. Gait Posture 2005; 22:96-106. PMID: 16139744. DOI: 10.1016/j.gaitpost.2004.07.009.
Abstract
We studied the reliability of different measures of the visual contribution to postural steadiness by recording postural sway during standing with eyes open (EO) or eyes closed (EC). The COP trajectory was recorded in 21 subjects aged 42-61 standing on a firm or foam support. The improvement of postural steadiness due to vision was measured with higher reliability (i.e. lower intra- and inter-subject variability) with the sway velocity V than with the position RMS. Because the variability of V and RMS increases with their own mean values, we quantified the visual contribution to posture by the stabilization ratio (SR), based on a logarithmic transform of V or RMS. Compared with the Romberg quotient (EC/EO), SR improved the reliability of the measurement of the visual contribution to posture within individuals, across subjects, and even across different studies in the literature. Our method reduced the inter-subject coefficient of variation of this measurement to about 25% using a foam support. It achieves similar accuracy in binocular and monocular vision, and it also applies to the quantification of other, non-visual sensory contributions to posture.
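The abstract does not give the exact SR formula, only that it is a logarithmic transform of V (or RMS). A minimal sketch under that assumption, taking SR = ln(V_EC / V_EO), i.e. the log of the Romberg quotient:

```python
import math

def romberg_quotient(v_ec, v_eo):
    """Classical Romberg quotient: sway velocity eyes-closed / eyes-open."""
    return v_ec / v_eo

def stabilization_ratio(v_ec, v_eo):
    """Log-transformed visual contribution to posture.
    NOTE: the paper's exact SR definition is not given in this abstract;
    ln(V_EC / V_EO) is one plausible log transform of V, assumed here."""
    return math.log(v_ec / v_eo)

# Hypothetical sway velocities (mm/s) on a foam support
print(romberg_quotient(25.0, 10.0))     # 2.5
print(stabilization_ratio(25.0, 10.0))  # ln(2.5), about 0.916
```

A log transform of this kind makes the measure additive rather than multiplicative, which is one way its variability can be decoupled from the mean sway level the abstract mentions.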
4
Abstract
BACKGROUND: This study evaluated the visual contribution to postural steadiness in primary open angle glaucoma (POAG), in correlation with the mean deviation (MD) measured through conventional perimetry, and with the Advanced Glaucoma Intervention Study (AGIS) score, which quantifies the extent of losses in the visual field.
METHODS: In 35 POAG patients and 21 age-matched normal subjects, the sway of the centre of pressure of the feet was recorded on a firm or foam support. The subjects stood on a force-plate with eyes closed, or with one or two eyes open.
RESULTS: For all subjects, the sway velocity was lower with vision than without, indicating a visual contribution to posture at all stages of glaucoma. This contribution was significantly lower for POAG patients than for normal subjects in monocular and binocular vision, and decreased with the MD, or as the AGIS score increased. Among the maximum, minimum and average values of the two monocular MDs, the MD of the worse eye presented the most significant negative correlation with the visual contribution to posture. The somatosensory contribution to postural steadiness was larger in POAG patients than in normal subjects, in monocular or binocular vision.
CONCLUSION: Primary open angle glaucoma induces a deficit in the visual contribution to postural steadiness, which should be taken into account for the prevention of falls.
5
Contribution of extraretinal signals to the scaling of object distance during self-motion. Percept Psychophys 2002; 64:717-31. PMID: 12201331. DOI: 10.3758/bf03194739.
Abstract
We investigated the role of extraretinal information in the perception of absolute distance. In a computer-simulated environment, monocular observers judged the distance of objects positioned at different locations in depth while performing frontoparallel movements of the head. The objects were spheres covered with random dots, subtending three different visual angles. Observers viewed the objects at eye level, either in isolation or superimposed on a ground floor. The distance and size of the spheres were covaried to suppress relative-size information. Hence, the main cues to distance were motion parallax and extraretinal signals. In three experiments, we found evidence that (1) perceived distance is correlated with simulated distance in terms of precision and accuracy, (2) the accuracy of the distance estimate is slightly improved by the presence of a ground-floor surface, (3) perceived distance is not altered significantly when the visual field size increases, and (4) absolute distance is estimated correctly during self-motion. Conversely, stationary subjects failed to report absolute distance when they passively observed a moving object producing the same retinal stimulation, unless they could rely on knowledge of the three-dimensional movements.
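The geometry behind this scaling can be sketched in a few lines (a textbook small-angle construction, not the authors' analysis): for a laterally translating eye, a stationary point at distance D drifts across the retina at roughly head_speed / D, so knowing the head speed from extraretinal signals lets parallax be converted into absolute distance.

```python
def distance_from_parallax(head_speed, retinal_angular_speed):
    """Absolute distance of a stationary point from lateral self-motion.
    For a laterally translating eye (fixation far away), a point at
    distance D slides across the retina at about head_speed / D rad/s,
    so D ~ head_speed / retinal_angular_speed.
    Small-angle approximation; a geometric sketch only."""
    return head_speed / retinal_angular_speed

# Head translating at 0.2 m/s; target drifting at 0.1 rad/s on the retina
print(distance_from_parallax(0.2, 0.1))  # 2.0 metres
```

Without the extraretinal estimate of head_speed, the same retinal drift is compatible with any distance, which is consistent with the failure of passive observers reported above.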
6
Absolute distance perception during in-depth head movement: calibrating optic flow with extra-retinal information. Vision Res 2002; 42:1991-2003. PMID: 12160571. DOI: 10.1016/s0042-6989(02)00120-7.
Abstract
We investigated the ability of monocular human observers to scale absolute distance during sagittal head motion in the presence of pure optic flow information. Subjects were presented, at eye level, with computer-generated spheres (covered with randomly distributed dots) placed at several distances. We compared the condition of self-motion (SM) versus object-motion (OM) using equivalent optic flow fields. When the amplitude of head movement was relatively constant, subjects estimated absolute distance rather accurately in both the SM and OM conditions. However, when the amplitude changed on a trial-to-trial basis, subjects' performance deteriorated only in the OM condition. We found that distance judgments in the OM condition correlated strongly with optic flow divergence, and that non-visual cues served as important factors for scaling distances in the SM condition. Absolute distance also seemed to be better scaled with sagittal head movement than with lateral head translation.
7
Abstract
We measured the ability to report the tilt (direction of maximal slope) of a plane under monocular viewing conditions, from static depth cues (square grid patterns) and from motion parallax (small rotations of the plane about a frontoparallel axis). These two cues were presented separately or simultaneously. In the latter case they specified tilts that were either collinear (coherent case) or orthogonal (conflict case). The field of view was small (8 degrees) or large (60 degrees). In small field, for motion parallax, the reported tilt depends strongly on the orientation of the plane relative to the rotation axis, being totally ambiguous when the tilt is collinear with the rotation axis. In contrast, in large field, the reported tilt depends little on this variable and is accurately specified by motion cues. In both cases static cues strongly dominated the tilt reports. Hence static grid patterns constitute robust tilt cues, which can dominate contradictory tilt indications from motion parallax, and should be considered essential for visual orientation during locomotion or immersion in virtual reality environments.
8
Lines and dots: characteristics of the motion integration process. Vision Res 2001; 41:2207-19. PMID: 11448713. DOI: 10.1016/s0042-6989(01)00022-0.
Abstract
Local motion detectors can only provide the velocity component perpendicular to a moving line that crosses their receptive field, leading to an ambiguity known as the "aperture problem". This problem is solved exactly for rigid objects translating in the screen plane via the intersection of constraints (IOC). In natural scenes, however, object motions are not restricted to fronto-parallel translations, and several objects with distinct motions may be present in the visual space. Under these conditions the usual IOC construction is no longer valid, which raises questions as to its use as a basis for the spatial integration and selection of motion signals in uniform and non-uniform velocity fields. The influence of the motion of random dots on the perceived direction of a horizontal line grating was measured when dots and lines were seen through different apertures. The random dots were mapped on a plane translating in a fronto-parallel plane (uniform 2D translation) or in depth (3D, corresponding to a non-uniform projected velocity field, either expanding or contracting). The grating was either moving rigidly with the dots or in the opposite direction. Subjects' responses show that the direction of the line grating's movement was reliably influenced only in conditions consistent with rigid motion; where there was a reliable influence, the perceived direction was consistent with the dot motion pattern. This finding points to the existence of a motion-based selection mechanism that operates prior to the disambiguation of the line movement direction. Disambiguation could occur for both uniform and non-uniform velocity fields, even though in the latter case none of the individual dots indicated the proper direction in 2D velocity space. Finally, the capture by non-uniform motion patterns was less robust than that by uniform 2D translations, and could be disrupted by manipulations of the shape and size of the apertures.
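The IOC construction mentioned above has a direct algebraic form: each contour only constrains the component of the true velocity v along its normal n_i (v . n_i = s_i), and the intersection of two or more such constraint lines recovers v. A minimal sketch (standard textbook formulation, not the authors' code):

```python
import numpy as np

def intersection_of_constraints(normals, normal_speeds):
    """Recover the 2D object velocity from >= 2 aperture measurements.
    Each moving contour constrains only v . n_i = s_i (aperture problem);
    the IOC solution is the least-squares intersection of these
    constraint lines in velocity space."""
    N = np.asarray(normals, dtype=float)        # (k, 2) unit normals
    s = np.asarray(normal_speeds, dtype=float)  # (k,) normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# True velocity (3, 4); two contours with orthogonal normals
v_true = np.array([3.0, 4.0])
n1 = np.array([1.0, 0.0])
n2 = np.array([0.0, 1.0])
v = intersection_of_constraints([n1, n2], [n1 @ v_true, n2 @ v_true])
print(v)  # [3. 4.]
```

This exact recovery holds only for a rigid fronto-parallel translation; for the expanding or contracting fields used in the experiment, no single v satisfies all constraints, which is precisely why the abstract questions the IOC as an integration rule there.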
9
Abstract
Functional magnetic resonance imaging was used to study the cortical bases of 3-D structure perception from visual motion in humans. Nine subjects underwent three experiments designed to locate the areas involved in (i) motion processing (random motion versus static dots), (ii) coherent motion processing (expansion/contraction versus random motion) and (iii) reconstruction of 3-D shape from motion (a 3-D surface oscillating in depth versus random motion). Two control experiments tested the specific influence of speed distribution and surface curvature on the activation results. All stimuli consisted of random dots, so that motion parallax was the only cue available for 3-D shape perception. As expected, random motion compared with static dots induced strong activity in areas V1/V2, V5+ and the superior occipital gyrus (SOG; presumptive V3/V3A). V1/V2 and V5+ showed no activity increase when coherent motion (expansion or a 3-D surface) was compared with random motion. Conversely, V3/V3A and the dorsal parieto-occipital junction were highlighted in both comparisons and showed gradually increased activity for random motion, coherent motion and a curved surface rotating in depth, which suggests their involvement in the coding of 3-D shape from motion. Also, the ventral aspect of the left occipito-temporal junction was found to be equally responsive to random and coherent motion stimuli, but showed a specific sensitivity to curved 3-D surfaces compared with plane surfaces. As this region is already known to be involved in the coding of static object shape, our results suggest that it might integrate various cues for the perception of 3-D shape.
10
A 6-dof device to measure head movements in active vision experiments: geometric modeling and metric accuracy. J Neurosci Methods 1999; 90:97-106. PMID: 10513593. DOI: 10.1016/s0165-0270(99)00054-0.
Abstract
This work describes a technique for measuring human head movements in 3D space. Rotations and translations of the head are tracked using a light helmet fastened to a multi-joint mechanical structure. This apparatus has been designed for a series of psychophysiological experiments in the field of active vision, where the position and orientation of the head need to be measured in real time with high accuracy, high reliability and minimal interference with the subject's movements. A geometric model is developed to recover the position information, and its parameters are identified through a calibration procedure. The expected accuracy, derived from the pure geometric model and the sensor resolution, is compared with the real accuracy, obtained by performing repeated measurements on a calibration fixture. The outcome of this comparison confirms the validity of the proposed solution, which proves effective in measuring head position with an overall accuracy of 0.6 mm and a sampling frequency above 1 kHz.
11
A Computational Model of the Perceived Velocity of Moving Plaids. Perception 1996. DOI: 10.1068/v96l0712.
Abstract
Local motion detection mechanisms generally lead to one component of the optic flow becoming indeterminate. One way to solve this 'aperture problem' is to compute the optic flow which minimises some smoothing constraint. With iterative schemes, the computed velocity array is suboptimal relative to the constraint until the process has converged. Under the original assumption that the iteration rate is sufficiently low to allow the perception of suboptimal flows at short stimulus durations, iterative gradient models give an accurate description of biases in the perception of tilted-line velocity. We examine whether this approach can be applied to moving sinusoidal plaids. Our simulations are in agreement with a number of psychophysical results on both speed and direction perception. In particular, we show that the effect of stimulus duration on the perceived direction of type II plaids [Yo and Wilson, 1992, Vision Research 32(1)] can be accounted for without recourse to second-order mechanisms. The effects of contrast and component directions on the evolution rate of this bias are well reproduced. The model also successfully describes the effect of spatial frequency, and data obtained with gratings. These results suggest that iterative gradient schemes can model the dynamics of interactions between local velocity detectors, as revealed by psychophysical experiments with lines and plaids.
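The duration effect can be mimicked with a toy gradient-descent analogue of such an iterative scheme (my simplification, not the published model): start from the average of the component normal velocities, then relax toward the aperture constraints. Few iterations (short durations) leave a direction bias toward the normals; many iterations converge to the IOC velocity.

```python
import numpy as np

def iterative_velocity(normals, speeds, n_iter, lr=0.2):
    """Gradient-descent relaxation of the aperture constraints v . n_i = s_i.
    Early iterations yield a biased (suboptimal) estimate, mimicking
    short-duration percepts; convergence gives the IOC velocity.
    A toy illustration, not the paper's iterative gradient scheme."""
    N = np.asarray(normals, dtype=float)
    s = np.asarray(speeds, dtype=float)
    v = (s[:, None] * N).mean(axis=0)   # start at the normal-velocity average
    for _ in range(n_iter):
        v = v - lr * ((N @ v - s)[:, None] * N).sum(axis=0)
    return v

# Type-II-like geometry: both component normals on the same side of true motion
n = [np.array([np.cos(a), np.sin(a)]) for a in (np.radians(60), np.radians(80))]
v_true = np.array([1.0, 0.0])
s = [ni @ v_true for ni in n]
early = iterative_velocity(n, s, n_iter=2)
late = iterative_velocity(n, s, n_iter=500)
print(np.degrees(np.arctan2(early[1], early[0])))  # biased toward the normals
print(np.degrees(np.arctan2(late[1], late[0])))    # near 0: close to the IOC
```

The qualitative behaviour (direction error shrinking with iteration count) is the point here; reproducing the quantitative contrast and spatial-frequency effects would require the paper's actual dynamics.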
12
The Dominance of Static Depth Cues over Motion Parallax in the Perception of Surface Orientation. Perception 1996. DOI: 10.1068/v96l0904.
Abstract
Under polar projection (the natural projection for visual scenes), motion parallax is a powerful cue specifying relative depth. For small-field stimuli, it is ambiguous in the sense that a concave surface can be perceived as convex and deforming. By contrast, the concavity/convexity of wide-field surfaces is unambiguously perceived. This led us to hypothesise a critical role of the 3-D rigidity constraint for large visual scenes in motion (Dijkstra et al, 1995, Vision Research 35, 453-462). To examine this hypothesis, we exposed subjects to planes inclined in space and asked them to report the tilt (direction of inclination). Depth was specified either by motion parallax (MP; the surface oscillated around a frontoparallel axis) or by static perspective cues (SP; orthogonal square grids drawn on the plane). At ECVP95, we reported a predominance of SP over MP when the tilts specified by these two cues (tMP and tSP, respectively) differed (1995, Perception 24, Supplement, 137). Since those results were obtained for fast movements (oscillation frequency for MP: 3.6 Hz), we extended our investigation to a slower frequency (0.5 Hz), which is more likely to be involved during natural head movements. We found that: (i) errors in tilt reports were larger for MP than for SP, and decreased with increasing field size; (ii) in the case of conflict (tMP = tSP ± 90°), the reported tilt was either tMP or tSP, rather than an average of these two values; (iii) in this case, tilt was most often reported according to SP rather than MP cues; this effect occurred even when the accuracies for the two individual cues were similar. Therefore, in a conflict situation between MP and SP, surface orientation is reported according to a winner-take-all rule which largely favours static grid cues. Hence, even for wide-field movements, the image contrast distribution can lead the visual system to prefer a non-rigid, rather than rigid, solution to the 3-D shape-from-motion problem.
13
Abstract
Moving and acting in a 3D environment requires the perception of its 3D structure. Vision is known to play a crucial role in the control of self-motion, particularly through the changes in the retinal image that follow movements of the observer. Reciprocally, signals related to self-motion can also influence our visual perception of 3D space. These interactions between 3D visual perception and self-motion, as demonstrated behaviourally, are now better understood thanks to the development of computational models for processing moving images. They are also of particular interest in the context of the recent intensive exploration of the inferior parietal lobe (IPL) by neurophysiologists. The IPL is now firmly established as one site of interaction between 3D visual perception and motor control. The parallel between behaviour and neurophysiology leads to a set of crucial, yet unanswered, questions.
14
Perception of three-dimensional shape from ego- and object-motion: comparison between small- and large-field stimuli. Vision Res 1995; 35:453-62. PMID: 7900286. DOI: 10.1016/0042-6989(94)00147-e.
Abstract
We compare the performance in the detection of the shape of concave, planar and convex surfaces for small-field (8 deg) and large-field (90 deg) stimuli. Shape is perceived from head translations, object translations and object rotations. We find large differences between small-field and large-field stimulation. For small-field stimulation performance is best for object rotation, intermediate for self-motion and worst for object translation. For large-field stimulation performance is similar across conditions. Few errors on the sign of the curvature are found for self-motion for both field sizes, indicating that self-motion information disambiguates the curvature sign. For object rotation with small-field stimulation, the concave-convex ambiguity is strong with many apparent deformations. In contrast, large-field curvature signs are always accurately reported, suggesting that the weight of the rigidity hypothesis depends on field size.
15
Abstract
To evaluate the influence of egomotion on the three-dimensional visual processing of structure-from-motion (SFM), we compared visual discrimination between planar and spherical surfaces during subject translation, object translation, or rotation of the object in depth. Performance was best for object rotation, intermediate for subject translation, and poorest for object translation; it thus increased with the quality of retinal image stabilization achieved in the different conditions. This suggests that the major role of self-motion information was to stabilize retinal images. In view of previous results, we propose that the interactions between self-motion information and SFM are reduced to functional complementarity, in the sense that self-motion can lift visual ambiguities but does not improve the sensitivity of SFM processes.
16
Stereo-motion cooperation and the use of motion disparity in the visual perception of 3-D structure. Percept Psychophys 1993; 54:223-39. PMID: 8361838. DOI: 10.3758/bf03211759.
Abstract
When an observer views a moving scene binocularly, both motion parallax and binocular disparity provide depth information. In Experiments 1A-1C, we measured sensitivity to surface curvature when these depth cues were available either individually or simultaneously. When the depth cues yielded comparable sensitivity to surface curvature, we found that curvature detection was easier with the cues present simultaneously rather than individually. For 2 of the 6 subjects, this effect was stronger when the component of frontal translation of the surface was vertical rather than horizontal. No such anisotropy was found for the 4 other subjects. If a moving object is observed binocularly, the patterns of optic flow are different on the left and right retinae. We have suggested elsewhere (Cornilleau-Pérès & Droulez, in press) that this motion disparity might be used as a visual cue for the perception of 3-D structure. Our model consisted of deriving binocular disparity from the left and right distributions of vertical velocities, rather than from luminous intensities, as has been done in classical studies of stereoscopic vision. The model led to predictions concerning the detection of surface curvature from motion disparity in the presence or absence of intensity-based disparity (classically termed binocular disparity). In a second set of experiments, we attempted to test these predictions, and failed to validate our theoretical scheme from a physiological point of view.
18
Visual perception of surface curvature: psychophysics of curvature detection induced by motion parallax. Percept Psychophys 1989; 46:351-64. PMID: 2798029. DOI: 10.3758/bf03204989.
Abstract
The continuous approach to optic-flow processing shows that the curvature of a moving surface is related to a second spatial derivative of the velocity field, the spin variation (Droulez & Cornilleau-Pérès, 1989). With this approach as a theoretical framework, visual sensitivity to the curvature of a cylinder in motion was measured using a task of discrimination between cylindrical and planar patches. The results confirm the predictions suggested by the theory: (1) Sensitivity to curvature was always greater when the cylinder axis and the frontal translation were parallel than when they were orthogonal. The ratio of curvature detection thresholds in the two cases was between 1.3 and 2.5; the value predicted from the spin variation theory is about 2. (2) Sensitivity to curvature increased strongly with the velocity of the motion but was only weakly affected by its amplitude and the duration of viewing for the range of values used in our experiments.
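The link between curvature and a second spatial derivative of the velocity field can be illustrated numerically (a simplified construction, not the spin-variation derivation itself): under a crude perspective model, frontal translation produces an image velocity v(x) proportional to 1/Z(x), whose second spatial derivative vanishes for a frontoparallel plane but not for a curved surface.

```python
import numpy as np

def parallax_field(depth, t_frontal=1.0):
    """Image velocity for pure frontal translation, v(x) ~ T / Z(x).
    Simplified perspective model; a sketch, not the paper's derivation."""
    return t_frontal / depth

x = np.linspace(-0.5, 0.5, 201)
dx = x[1] - x[0]
z_plane = np.full_like(x, 2.0)           # frontoparallel plane at Z = 2
z_cyl = 3.0 - np.sqrt(1.0 - x**2)        # cylinder, nearest to the eye at x = 0

for name, z in [("plane", z_plane), ("cylinder", z_cyl)]:
    v = parallax_field(z)
    d2v = np.gradient(np.gradient(v, dx), dx)   # second spatial derivative
    # zero (up to numerics) for the plane, clearly non-zero for the cylinder
    print(name, np.abs(d2v[20:-20]).max())
```

This is only the one-dimensional intuition; the spin variation of Droulez & Cornilleau-Pérès (1989) is the full flow-field quantity, and the anisotropy between parallel and orthogonal cylinder axes tested above follows from its detailed form.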