1. Brenner E, de la Malla C, Smeets JBJ. Tapping on a target: dealing with uncertainty about its position and motion. Exp Brain Res 2023; 241:81-104. PMID: 36371477; PMCID: PMC9870842; DOI: 10.1007/s00221-022-06503-7.
Abstract
Reaching movements are guided by estimates of the target object's location. Since the precision of instantaneous estimates is limited, one might accumulate visual information over time. However, if the object is not stationary, accumulating information can bias the estimate. How do people deal with this trade-off between improving precision and reducing the bias? To find out, we asked participants to tap on targets. The targets were stationary or moving, with jitter added to their positions. By analysing the response to the jitter, we show that people continuously use the latest available information about the target's position. When the target is moving, they combine this instantaneous target position with an extrapolation based on the target's average velocity during the last several hundred milliseconds. This strategy leads to a bias if the target's velocity changes systematically. Having people tap on accelerating targets showed that the bias that results from ignoring systematic changes in velocity is removed by compensating for endpoint errors if such errors are consistent across trials. We conclude that combining simple continuous updating of visual information with the low-pass filter characteristics of muscles, and adjusting movements to compensate for errors made in previous trials, leads to precise and accurate human goal-directed movements.
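
The strategy described here is easy to state computationally. The sketch below is an illustration of the abstract's account, not the authors' code: the predicted tap point is the latest sampled target position plus an extrapolation that uses the velocity averaged over roughly the last few hundred milliseconds. The window length, sampling rate and jitter level are assumed values.

```python
import numpy as np

def predict_tap_position(positions, dt, time_to_contact, window=0.3):
    """Latest target position plus an extrapolation using the velocity
    averaged over the last `window` seconds. Position information is
    used instantaneously; only velocity is smoothed."""
    n = max(2, int(round(window / dt)))   # samples in the averaging window
    recent = positions[-n:]
    avg_velocity = (recent[-1] - recent[0]) / ((len(recent) - 1) * dt)
    return positions[-1] + avg_velocity * time_to_contact

# A target moving at 0.2 m/s with positional jitter, sampled every 10 ms.
rng = np.random.default_rng(1)
t = np.arange(0.0, 0.5, 0.01)
pos = 0.2 * t + rng.normal(0.0, 0.002, t.size)
print(predict_tap_position(pos, dt=0.01, time_to_contact=0.1))
```

Because the velocity term is an average over the window, an accelerating target is extrapolated with an outdated velocity, reproducing the bias the abstract describes.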
Affiliations
- Eli Brenner, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
- Cristina de la Malla, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands; Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Jeroen B. J. Smeets, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands

2. Plaisier MA, Kuling IA, Brenner E, Smeets JBJ. When Does One Decide How Heavy an Object Feels While Picking It Up? Psychol Sci 2019; 30:822-829. PMID: 30917092; PMCID: PMC6560521; DOI: 10.1177/0956797619837981.
Abstract
When we lift an object, it takes time to decide how heavy it is. How does this weight judgment develop? To answer this question, we examined when visual size information has to be present to induce a size-weight illusion. We found that a short glimpse (200 ms) of size information is sufficient to induce a size-weight illusion. The illusion occurred not only when the glimpse was before the onset of lifting but also when the object's weight could already be felt. Only glimpses presented more than 300 ms after the onset of lifting did not influence the judged weight. This suggests that it takes about 300 ms to reach a perceptual decision about the weight.
Affiliations
- Irene A Kuling, Department of Human Movement Sciences, Vrije Universiteit Amsterdam
- Eli Brenner, Department of Human Movement Sciences, Vrije Universiteit Amsterdam

3. Caziot B, Backus BT, Lin E.
Abstract
Surface orientation is an important visual primitive that can be estimated from monocular or binocular (stereoscopic) signals. Changes in motor planning occur within about 200 ms after either type of signal is perturbed, but the time it takes for apparent (perceived) slant to develop from stereoscopic cues is not known. Apparent slant sometimes develops very slowly (Gillam, Chambers, & Russo, 1988; van Ee & Erkelens, 1996). However, these long durations could reflect the time it takes for the visual system to resolve conflicts between slant cues that inevitably specify different slants in laboratory displays (Allison & Howard, 2000). We used a speed–accuracy tradeoff analysis to measure the time it takes to discriminate slant, allowing us to report psychometric functions as a function of response time. Observers reported which side of a slanted surface was farther, with a temporal deadline for responding that varied block-to-block. Stereoscopic slant discrimination rose above chance starting at 200 ms after stimulus onset. Unexpectedly, observers discriminated slant from binocular disparity faster than from texture, and slant from stereoscopic whole-field stimuli faster than from stereoscopic slant-contrast stimuli. However, performance after the initial deviation from chance increased more rapidly for slant-contrast stimuli than for whole-field stimuli. Discrimination latencies were similar for slants about the horizontal and vertical axes, but performance increased faster for slants about the vertical axis. Finally, slant from vertical disparity was somewhat slower than slant from horizontal disparity, which may reflect cue conflict. These results demonstrate, in contradiction with the previous literature, that the perception of slant from disparity happens very quickly, in fact more quickly than the perception of slant from texture, and in comparable time to the simple perception of brightness from luminance.
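
Speed-accuracy tradeoff data of this kind are often summarized with a shifted-exponential function that stays at chance until a discrimination latency and then rises toward an asymptote. The sketch below is a generic illustration of that analysis, not the authors' fitting code; all parameter values are hypothetical.

```python
import numpy as np

def sat_curve(t, t0, rate, asymptote, chance=0.5):
    """Proportion correct as a function of response time: chance until
    latency t0, then an exponential rise toward the asymptote."""
    t = np.asarray(t, dtype=float)
    rise = 1.0 - np.exp(-rate * np.clip(t - t0, 0.0, None))
    return chance + (asymptote - chance) * rise

times = np.linspace(0.0, 1.0, 11)  # response times in seconds
# Hypothetical disparity curve departing from chance at ~200 ms, as in
# the abstract, and a slower, later texture curve for comparison.
print(sat_curve(times, t0=0.20, rate=6.0, asymptote=0.95))
print(sat_curve(times, t0=0.30, rate=4.0, asymptote=0.90))
```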
Affiliations
- Baptiste Caziot, Graduate Center for Vision Research, SUNY College of Optometry, New York, NY, USA; SUNY Eye Institute, New York, NY, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Benjamin T Backus, Graduate Center for Vision Research, SUNY College of Optometry, New York, NY, USA; SUNY Eye Institute, New York, NY, USA
- Esther Lin, Southern California College of Optometry, Ketchum University, Fullerton, CA, USA

4. Viewing geometry determines the contribution of binocular vision to the online control of grasping. Exp Brain Res 2017; 235:3631-3643. PMID: 28900689; PMCID: PMC5671520; DOI: 10.1007/s00221-017-5087-0.
Abstract
Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the ‘viewing geometry’ typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being ‘hard-wired’. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, ‘architectural’ property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.
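
The sensory-integration account invoked here has a standard formal core: each feedback signal is weighted by its reliability (inverse variance), so the binocular weight should fall as viewing geometry makes binocular information noisier. As a reminder of that textbook formulation (not a derivation specific to this paper):

```latex
\hat{d} = w_b\,\hat{d}_b + w_m\,\hat{d}_m, \qquad
w_b = \frac{1/\sigma_b^{2}}{1/\sigma_b^{2} + 1/\sigma_m^{2}}, \qquad
w_m = 1 - w_b
```

Here $\hat{d}_b$ and $\hat{d}_m$ are the binocular and monocular estimates of the digit-object separation and $\sigma_b^2$, $\sigma_m^2$ their variances; separation along the line of sight lowers $\sigma_b$ and thereby raises $w_b$.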

5. Brenner E, Smeets JB. Accumulating visual information for action. Prog Brain Res 2017; 236:75-95. DOI: 10.1016/bs.pbr.2017.07.007.

6. Chen Z, Saunders JA. Automatic adjustments toward unseen visual targets during grasping movements. Exp Brain Res 2016; 234:2091-2103. PMID: 26979436; DOI: 10.1007/s00221-016-4613-9.
Abstract
We investigated whether control of hand movements can be driven by visual information that is not consciously perceived. Subjects performed reach-to-grasp movements toward 2D virtual objects that were projected onto a rigid surface. On perturbed trials, the target object was briefly presented at a different orientation (±20° rotation) or different size (±20% scaling) during movement. The perturbed objects were presented for 33 ms, followed by a 200-ms mask and reappearance of the original target object. Subjects perceived only the mask and were not aware of the preceding perturbed stimuli. Unperturbed trials were identical except that there was no change in the target object before the mask. Despite being unaware of the brief perturbed stimuli, subjects showed corrective adjustments to their movements: rotation of the grip axis in response to orientation perturbations, and scaling of grip aperture in response to size perturbations. Responses were detectable 250-300 ms after the perturbation onset and began to reduce 250-300 ms after the reappearance of the original target. Our results demonstrate that the visuomotor system can utilize visual information for control of grasping even when this information is not available for conscious perception. We suggest that this dissociation is due to different temporal resolution of visual processing mechanisms underlying conscious perception and control of actions.
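
Response latencies like the 250-300 ms figures reported here are commonly estimated by finding where mean perturbed and unperturbed trajectories begin to diverge. The sketch below shows a simple thresholded version of that analysis with fabricated data; the threshold and noise levels are assumptions, and published analyses often use running statistical tests instead.

```python
import numpy as np

def response_latency(perturbed, unperturbed, dt, threshold):
    """First time at which the mean perturbed and unperturbed traces
    (trials x samples, perturbation-locked) differ by more than the
    threshold; None if they never do."""
    diff = np.abs(perturbed.mean(axis=0) - unperturbed.mean(axis=0))
    above = np.nonzero(diff > threshold)[0]
    return above[0] * dt if above.size else None

# Fabricated grip-aperture data diverging ~270 ms after a perturbation.
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.6, 0.01)
base = 1.0 + 0.1 * t
unperturbed = base + rng.normal(0.0, 0.002, (20, t.size))
perturbed = (base + 0.05 * np.clip(t - 0.27, 0.0, None)
             + rng.normal(0.0, 0.002, (20, t.size)))
print(response_latency(perturbed, unperturbed, dt=0.01, threshold=0.003))
```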
Affiliations
- Zhongting Chen, Department of Psychology, University of Hong Kong, Hong Kong SAR
- Jeffrey A Saunders, Department of Psychology, University of Hong Kong, Hong Kong SAR

7. The Role of Temporal Information in Perisaccadic Mislocalization. PLoS One 2015; 10:e0134081. PMID: 26352603; PMCID: PMC4564215; DOI: 10.1371/journal.pone.0134081.
Abstract
In dynamic environments, it is crucial to accurately consider the timing of information. For instance, during saccades the eyes rotate so fast that even small temporal errors in relating retinal stimulation by flashed stimuli to extra-retinal information about the eyes' orientations will give rise to substantial errors in where the stimuli are judged to be. If spatial localization involves judging the eyes' orientations at the estimated time of the flash, we should be able to manipulate the pattern of mislocalization by altering the estimated time of the flash. We reasoned that if we presented a relevant flash within a short rapid sequence of irrelevant flashes, participants' estimates of when the relevant flash was presented might be shifted towards the centre of the sequence. In a first experiment, we presented five bars at different positions around the time of a saccade. Four of the bars were black. Either the second or the fourth bar in the sequence was red. The task was to localize the red bar. We found that when the red bar was presented second in the sequence, it was judged to be further in the direction of the saccade than when it was presented fourth in the sequence. Could this be because the red bar was processed faster when more black bars preceded it? In a second experiment, a red bar was either presented alone or followed by two black bars. When two black bars followed it, it was judged to be further in the direction of the saccade. We conclude that the spatial localization of flashed stimuli involves judging the eye orientation at the estimated time of the flash.
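
The account in this abstract can be written down directly: the judged location is the flash's retinal position plus the eye's orientation at the estimated, rather than the actual, flash time, so any temporal estimation error during a saccade becomes a spatial error. A toy version with a simplified linear saccade profile and hypothetical numbers:

```python
import numpy as np

def eye_position(t, onset=0.0, duration=0.05, amplitude=10.0):
    """Toy saccade: the eye rotates `amplitude` degrees between onset
    and onset + duration (a linear ramp, for simplicity)."""
    return amplitude * np.clip((t - onset) / duration, 0.0, 1.0)

def localization_error(retinal_pos, t_flash, t_estimated):
    """Judged minus true location when the eye orientation is read out
    at the estimated instead of the actual flash time."""
    true_location = retinal_pos + eye_position(t_flash)
    judged_location = retinal_pos + eye_position(t_estimated)
    return judged_location - true_location

# Estimating the flash 20 ms too late (e.g., shifted toward the centre
# of a flash sequence) mislocalizes it 4 deg in the saccade direction.
print(localization_error(retinal_pos=0.0, t_flash=0.0, t_estimated=0.02))
```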

8. Cluff T, Crevecoeur F, Scott SH. A perspective on multisensory integration and rapid perturbation responses. Vision Res 2015; 110:215-222. DOI: 10.1016/j.visres.2014.06.011.

9. Brenner E, Smeets JBJ. How moving backgrounds influence interception. PLoS One 2015; 10:e0119903. PMID: 25767873; PMCID: PMC4358934; DOI: 10.1371/journal.pone.0119903.
Abstract
Reaching movements towards an object are continuously guided by visual information about the target and the arm. Such guidance increases precision and allows one to adjust the movement if the target unexpectedly moves. Ongoing arm movements are also influenced by motion in the surroundings. Fast responses to such motion could help cope with moving obstacles and with the consequences of changes in one's eye orientation and vantage point. To further evaluate how motion in the surroundings influences interceptive movements, we asked subjects to tap a moving target when it reached a second, static target. We varied the direction and location of motion in the surroundings, as well as details of the stimuli that are known to influence eye movements. Subjects were most sensitive to motion in the background when such motion was near the targets. Whether or not the eyes were moving, and the direction of the background motion in relation to the direction in which the eyes were moving, had very little influence on the response to the background motion. We conclude that the responses to background motion are driven by motion near the target rather than by a global analysis of the optic flow and its relation to other information about self-motion.
Affiliations
- Eli Brenner, Faculty of Human Movement Sciences, MOVE Research Institute, VU University, Amsterdam, The Netherlands
- Jeroen B. J. Smeets, Faculty of Human Movement Sciences, MOVE Research Institute, VU University, Amsterdam, The Netherlands

10. Automatic correction of hand pointing in stereoscopic depth. Sci Rep 2014; 4:7444. PMID: 25501878; PMCID: PMC5377023; DOI: 10.1038/srep07444.
Abstract
To examine whether stereoscopic depth information can drive fast automatic correction of hand pointing, an experiment was designed in a 3D visual environment in which participants were asked to point to a target at different stereoscopic depths as quickly and accurately as possible within a limited time window (≤300 ms). The experiment consisted of two tasks: "depthGO", in which participants were asked to point to the new target position if the target jumped, and "depthSTOP", in which participants were instructed to abort their ongoing movements after the target jumped. The depth jump was designed to occur in 20% of the trials in both tasks. Results showed that stereoscopic depth can drive fast automatic corrections of hand movements as early as 190 ms after the target jump.

11. Voudouris D, Smeets JBJ, Brenner E. Ultra-fast selection of grasping points. J Neurophysiol 2013; 110:1484-1489. DOI: 10.1152/jn.00066.2013.
Abstract
To grasp an object one needs to determine suitable positions on its surface for placing the digits and to move the digits to those positions. If the object is displaced during a reach-to-grasp movement, the digit movements are quickly adjusted. Do these fast adjustments only guide the digits to previously chosen positions on the surface of the object, or is the choice of contact points also constantly reconsidered? Subjects grasped a ball or a cube that sometimes rotated briefly when the digits started moving. The digits followed the rotation within 115 ms. When the object was a ball, subjects quickly counteracted the initial following response by reconsidering their choice of grasping points so that the digits ended at different positions on the rotated surface of the ball, and the ball was grasped with the preferred orientation of the hand. When the object was a cube, subjects sometimes counteracted the initial following response to grasp the cube by a different pair of sides. This altered choice of grasping points was evident within ∼160 ms of rotation onset, which is shorter than regular reaction times.
Affiliations
- D. Voudouris, Research Institute MOVE, Faculty of Human Movement Sciences, VU University, Amsterdam, The Netherlands
- J. B. J. Smeets, Research Institute MOVE, Faculty of Human Movement Sciences, VU University, Amsterdam, The Netherlands
- E. Brenner, Research Institute MOVE, Faculty of Human Movement Sciences, VU University, Amsterdam, The Netherlands

12. Hu B, Knill DC. Binocular and monocular depth cues in online feedback control of 3D pointing movement. J Vis 2011; 11(7):23. PMID: 21724567; DOI: 10.1167/11.7.23.
Abstract
Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.
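
The optimal-feedback-controller interpretation can be illustrated with the scalar Kalman gain: the weight given to visual feedback shrinks as that feedback gets noisier, making corrections smaller and effectively slower. A toy calculation with assumed variances (not the authors' fitted values):

```python
def feedback_gain(prior_var, obs_var):
    """Scalar Kalman gain: how strongly a noisy observation corrects
    the current state estimate. Noisier feedback gives a smaller gain."""
    return prior_var / (prior_var + obs_var)

# Purely illustrative noise levels for finger-position feedback:
print(feedback_gain(prior_var=1.0, obs_var=0.5))  # image plane -> 0.67
print(feedback_gain(prior_var=1.0, obs_var=4.0))  # depth, with low
                                                  # peripheral stereoacuity -> 0.2
```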
Affiliations
- Bo Hu, Center for Visual Science, University of Rochester, Rochester, NY 14627, USA

13. Keefe BD, Hibbard PB, Watt SJ. Depth-cue integration in grasp programming: no evidence for a binocular specialism. Neuropsychologia 2011; 49:1246-1257. PMID: 21371484; DOI: 10.1016/j.neuropsychologia.2011.02.047.
Abstract
When we grasp with one eye covered, the finger and thumb are typically opened wider than for binocularly guided grasps, as if to build a margin-for-error into the movement. Also, patients with visual form agnosia can have profound deficits in their (otherwise relatively normal) grasping when binocular information is removed. One interpretation of these findings is that there is a functional specialism for binocular vision in the control of grasping. Alternatively, cue-integration theory suggests that binocular and monocular depth cues are combined in the control of grasping, and so impaired performance reflects not the loss of 'critical' binocular cues, but increased uncertainty per se. Unfortunately, removing binocular information confounds removing particular (binocular) depth cues with an overall reduction in the available information, and so such experiments cannot distinguish between these alternatives. We measured the effects on visually open-loop grasping of selectively removing monocular (texture) or binocular depth cues. To allow meaningful comparisons, we made psychophysical measurements of the uncertainty in size estimates in each case, so that the informativeness of binocular and monocular cues was known in each condition. Consistent with cue-integration theory, removing either binocular or monocular cues resulted in similar increases in grip apertures. In a separate experiment, we also confirmed that changes in uncertainty per se (keeping the same depth cues available) resulted in larger grip apertures. Overall, changes in the margin-for-error in grasping movements were determined by the uncertainty in size estimates and not by the presence or absence of particular depth cues. Our data therefore argue against a binocular specialism for grasp programming. Instead, grip apertures were smaller when binocular and monocular cues were available than with either cue alone, providing strong evidence that the visuo-motor system exploits the redundancy available in multiple sources of information, and integrates binocular and monocular cues to improve grasping performance.
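
The margin-for-error result suggests a simple rule: grip aperture tracks the estimated size plus a safety margin proportional to the uncertainty of that estimate, regardless of which depth cues produced it. A hedged sketch of that rule; the multiplier and the standard deviations are hypothetical:

```python
def grip_aperture(size_est, size_sd, k=2.0):
    """Hypothetical margin-for-error rule: open the hand k standard
    deviations wider than the estimated object size, so the margin is
    set by uncertainty rather than by which cues are present."""
    return size_est + k * size_sd

# Assumed standard deviations (mm) of the size estimate:
print(grip_aperture(40.0, 1.5))  # binocular + monocular cues -> 43.0
print(grip_aperture(40.0, 2.0))  # one cue removed -> 44.0
print(grip_aperture(40.0, 2.5))  # a noisier single cue -> 45.0
```

Combining cues lowers the variance of the size estimate, so the aperture is smallest when both cue types are available, as the authors report.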
Affiliations
- Bruce D Keefe, School of Psychology, Bangor University, Wales, United Kingdom
- Paul B Hibbard, School of Psychology, University of St. Andrews, Scotland, United Kingdom
- Simon J Watt, School of Psychology, Bangor University, Wales, United Kingdom

14. Schenk T. Visuomotor robustness is based on integration not segregation. Vision Res 2010; 50:2627-2632. DOI: 10.1016/j.visres.2010.08.013.

15. Brenner E, Smeets JBJ. How well can people judge when something happened? Vision Res 2010; 50:1101-1108. PMID: 20214919; DOI: 10.1016/j.visres.2010.03.004.
Abstract
One way to estimate the temporal precision of vision is with judgments of synchrony or temporal order of visual events. We show that irrelevant motion disrupts the high temporal precision that can be found in such tasks when the two events occur close together, suggesting that the high precision is based on detecting illusory motion rather than on detecting time differences. We also show that temporal precision is not necessarily better when one can accurately anticipate the moments of the events. Finally, we illustrate that a limited resolution of determining the duration of an event imposes a fundamental problem in determining when the event happened. Our experimental estimates of how well people can explicitly judge when something happened are far too poor to account for human performance in various tasks that require temporal precision, such as interception, judging motion or aligning moving targets spatially.
Affiliations
- Eli Brenner, Faculty of Human Movement Sciences, VU University, Van der Boechorststraat 9, NL-1081 BT Amsterdam, The Netherlands