1
Mecheri S, Mars F, Lobjois R. Influence of continuous edge-line delineation on drivers' lateral positioning in curves: a gaze-steering approach. Ergonomics 2024; 67:422-432. [PMID: 37323071] [DOI: 10.1080/00140139.2023.2226844]
Abstract
Recent research indicates that installing shoulders on rural roads for safety purposes causes drivers to steer further inside on right bends and thus exceed lane boundaries. The present simulator study examined whether continuous rather than broken edge-line delineation would help drivers to keep their vehicles within the lane. The results indicated that continuous delineation significantly impacts the drivers' gaze and steering trajectories. Drivers looked more towards the lane centre and shifted their steering trajectories accordingly. This was accompanied by a significant decrease in lane-departure frequency when driving on a 3.50-m lane but not on a 2.75-m lane. Overall, the findings provide evidence that continuous delineation influences steering control by altering the visual processes underlying trajectory planning. It is concluded that continuous edge-line delineation between lanes and shoulders may induce safer driver behaviour on right bends, which has potential implications for preventing run-off-road crashes and improving cyclist safety.
Practitioner summary: This study examined how continuous and broken edge lines influence driving behaviour around bends with shoulders. With continuous delineation, drivers gazed and steered in the bend further from the edge line and thus had fewer lane departures. Continuous marking can therefore help prevent run-off-road crashes and improve cyclists' safety.
Affiliation(s)
- Sami Mecheri
- Département Neurosciences et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny-sur-Orge, France
- Franck Mars
- Centrale Nantes, CNRS, LS2N UMR CNRS 6004, Nantes, France
- Régis Lobjois
- COSYS-PICS-L, Université Gustave Eiffel, Marne-la-Vallée, France
2
Gonçalves RC, Louw TL, Madigan R, Quaresma M, Romano R, Merat N. The effect of information from dash-based human-machine interfaces on drivers' gaze patterns and lane-change manoeuvres after conditionally automated driving. Accident Analysis and Prevention 2022; 174:106726. [PMID: 35716544] [DOI: 10.1016/j.aap.2022.106726]
Abstract
The goal of this paper was to measure the effect of Human-Machine Interface (HMI) information and guidance on drivers' gaze and takeover behaviour during transitions of control from automation. The motivation for this study came from a gap in the literature, where previous research reports improved performance of drivers' takeover based on HMI information, without considering its effect on drivers' visual attention distribution, and how drivers also use the information available in the environment to guide their response. This driving simulator study investigated drivers' lane-changing behaviour after resumption of control from automation. Different levels of information were provided on a dash-based HMI, prior to each lane change, to investigate how drivers distribute their attention between the surrounding environment and the HMI. The difficulty of the lane change was also manipulated by controlling the position of approaching vehicles in drivers' offside lane. Results indicated that drivers' decision-making time was sensitive to the presence of nearby vehicles in the offside lane, but not directly influenced by the information on the HMI. In terms of gaze behaviour, the closer the position of vehicles in the offside lane, the longer drivers looked in that direction. Drivers looked more at the HMI, and less towards the road centre, when the HMI presented information about automation status, and included an advisory message indicating it was safe to change lane. Machine learning techniques showed a strong relationship between drivers' gaze to the information presented on the HMI, and decision-making time (DMT). These results contribute to our understanding of HMI design for automated vehicles, by demonstrating the attentional costs of an overly-informative HMI, and that drivers still rely on environmental information to perform a lane-change, even when the same information can be acquired by the HMI of the vehicle.
Affiliation(s)
- Tyron L Louw
- University of Leeds, Institute for Transport Studies, United Kingdom
- Ruth Madigan
- University of Leeds, Institute for Transport Studies, United Kingdom
- Manuela Quaresma
- LEUI, Pontifical Catholic University of Rio de Janeiro, Brazil
- Richard Romano
- University of Leeds, Institute for Transport Studies, United Kingdom
- Natasha Merat
- University of Leeds, Institute for Transport Studies, United Kingdom
3
Mecheri S, Mars F, Lobjois R. Gaze and steering strategies while driving around bends with shoulders. Applied Ergonomics 2022; 103:103798. [PMID: 35588556] [DOI: 10.1016/j.apergo.2022.103798]
Abstract
The installation of shoulders on rural roads to create more forgiving roads encourages drivers to cut corners on right-hand bends, but the underlying mechanisms are poorly understood. Since eye movements and steering control are closely coupled, this study investigated how the presence of a shoulder influences drivers' gaze strategies. To this end, eighteen drivers negotiated right-hand bends with and without a shoulder on a simulated rural road. In the presence of a shoulder, participants modified their visual sampling of the road by directing their gaze further inside the bend. At the same time, their lane position deviated inward throughout the bend and the vehicle spent more time out of the lane. These results suggest that the shoulder influences the visual processes involved in trajectory planning. Recommendations are made to encourage drivers to keep their eyes and vehicle in the driving lane when a shoulder is present.
Affiliation(s)
- Sami Mecheri
- Département Neurosciences et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny-sur-Orge, France.
- Franck Mars
- Centrale Nantes, CNRS, LS2N UMR CNRS 6004, Nantes, France.
- Régis Lobjois
- COSYS-PICS-L, Univ Gustave Eiffel, IFSTTAR, F-77454, Marne-la-Vallée, France.
4
Bhojwani TM, Lynch SD, Bühler MA, Lamontagne A. Impact of dual tasking on gaze behaviour and locomotor strategies adopted while circumventing virtual pedestrians during a collision avoidance task. Exp Brain Res 2022; 240:2633-2645. [PMID: 35980438] [DOI: 10.1007/s00221-022-06427-2]
Abstract
We investigated gaze behaviour and collision avoidance strategies in 16 healthy young individuals walking towards a goal while exposed to virtual pedestrians (VRPs) approaching from different directions (left, middle, right). This locomotor task and an auditory-based cognitive task were performed under single and dual-task conditions. Longer gaze fixation durations were observed on the approaching vs. other VRPs, with longer fixations devoted to the upper trunk and head compared to other body segments. Compared to other pedestrian approaches, the middle pedestrian received longer fixations and elicited faster walking speeds, larger onset distances of trajectory deviation and smaller obstacle clearances. Gaze and locomotor behaviours were similar between single and dual-task conditions but dual-task costs were observed for the cognitive task. The longer gaze fixations on approaching vs. other pedestrians suggest that enhanced visual attention is devoted to pedestrians posing a greater risk of collision. Likewise, longer gaze fixations for the middle pedestrians may be due to the greater collision risk entailed by this condition, and/or to the fact that this pedestrian was positioned in front of the end goal. Longer fixations on approaching VRPs' trunk and head may serve the purpose of anticipating their walking trajectory. Finally, the dual-task effects that were limited to the cognitive task suggest that healthy young adults prioritize the locomotor task and associated acquisition of visual information. The healthy patterns of visuomotor behaviour characterized in this study will serve as a basis for comparison to further understand defective collision avoidance strategies in patient populations.
Affiliation(s)
- Trineta M Bhojwani
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- CRIR-Feil and Oberfeld Research Center, Jewish Rehabilitation Hospital, CISSS-Laval, 3205 Place Alton-Goldbloom, Laval, QC, H7V 1R2, Canada
- Sean D Lynch
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- CRIR-Feil and Oberfeld Research Center, Jewish Rehabilitation Hospital, CISSS-Laval, 3205 Place Alton-Goldbloom, Laval, QC, H7V 1R2, Canada
- Marco A Bühler
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- CRIR-Feil and Oberfeld Research Center, Jewish Rehabilitation Hospital, CISSS-Laval, 3205 Place Alton-Goldbloom, Laval, QC, H7V 1R2, Canada
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada.
- CRIR-Feil and Oberfeld Research Center, Jewish Rehabilitation Hospital, CISSS-Laval, 3205 Place Alton-Goldbloom, Laval, QC, H7V 1R2, Canada.
5
Rasulo S, Vilhelmsen K, van der Weel FRR, van der Meer ALH. Development of motion speed perception from infancy to early adulthood: a high-density EEG study of simulated forward motion through optic flow. Exp Brain Res 2021; 239:3143-3154. [PMID: 34420060] [PMCID: PMC8536648] [DOI: 10.1007/s00221-021-06195-5]
Abstract
This study investigated evoked and oscillatory brain activity in response to forward visual motion at three different ecologically valid speeds, simulated through an optic flow pattern consisting of a virtual road with moving poles at either side of it. Participants were prelocomotor infants at 4–5 months, crawling infants at 9–11 months, primary school children at 6 years, adolescents at 12 years, and young adults. N2 latencies for motion decreased significantly with age from around 400 ms in prelocomotor infants to 325 ms in crawling infants, and from 300 and 275 ms in 6- and 12-year-olds, respectively, to 250 ms in adults. Infants at 4–5 months displayed the longest latencies and appeared unable to differentiate between motion speeds. In contrast, crawling infants at 9–11 months and 6-year-old children differentiated between low, medium and high speeds, with shortest latency for low speed. Adolescents and adults displayed similar short latencies for the three motion speeds, indicating that they perceived them as equally easy to detect. Time–frequency analyses indicated that with increasing age, participants showed a progression from low- to high-frequency desynchronized oscillatory brain activity in response to visual motion. The developmental differences in motion speed perception are interpreted in terms of a combination of neurobiological development and increased experience with self-produced locomotion. Our findings suggest that motion speed perception is not fully developed until adolescence, which has implications for children’s road traffic safety.
Affiliation(s)
- Stefania Rasulo
- Developmental Neuroscience Laboratory, Department of Psychology, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Kenneth Vilhelmsen
- Developmental Neuroscience Laboratory, Department of Psychology, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- F R Ruud van der Weel
- Developmental Neuroscience Laboratory, Department of Psychology, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Audrey L H van der Meer
- Developmental Neuroscience Laboratory, Department of Psychology, Norwegian University of Science and Technology (NTNU), Trondheim, Norway.
6
Tuhkanen S, Pekkanen J, Wilkie RM, Lappi O. Visual anticipation of the future path: Predictive gaze and steering. J Vis 2021; 21:25. [PMID: 34436510] [PMCID: PMC8399320] [DOI: 10.1167/jov.21.8.25]
Abstract
Skillful behavior requires the anticipation of future action requirements. This is particularly true during high-speed locomotor steering where solely detecting and correcting current error is insufficient to produce smooth and accurate trajectories. Anticipating future steering requirements could be supported using "model-free" prospective signals from the scene ahead or might rely instead on model-based predictive control solutions. The present study generated conditions whereby the future steering trajectory was specified using a breadcrumb trail of waypoints, placed at regular intervals on the ground to create a predictable course (a repeated series of identical "S-bends"). The steering trajectories and gaze behavior relative to each waypoint were recorded for each participant (N = 16). To investigate the extent to which drivers predicted the location of future waypoints, "gaps" were included (20% of waypoints) whereby the next waypoint in the sequence did not appear. Gap location was varied relative to the S-bend inflection point to manipulate the chances that the next waypoint indicated a change in direction of the bend. Gaze patterns did indeed change according to gap location, suggesting that participants were sensitive to the underlying structure of the course and were predicting the future waypoint locations. The results demonstrate that gaze and steering both rely upon anticipation of the future path consistent with some form of internal model.
Affiliation(s)
- Samuel Tuhkanen
- Cognitive Science, Traffic Research Unit, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, University of Helsinki, Helsinki, Finland
- Otto Lappi
- Cognitive Science, Traffic Research Unit, University of Helsinki, Helsinki, Finland
7
Keshner EA, Lamontagne A. The Untapped Potential of Virtual Reality in Rehabilitation of Balance and Gait in Neurological Disorders. Frontiers in Virtual Reality 2021; 2:641650. [PMID: 33860281] [PMCID: PMC8046008] [DOI: 10.3389/frvir.2021.641650]
Abstract
Dynamic systems theory transformed our understanding of motor control by recognizing the continual interaction between the organism and the environment. Movement could no longer be visualized simply as a response to a pattern of stimuli or as a demonstration of prior intent; movement is context dependent and is continuously reshaped by the ongoing dynamics of the world around us. Virtual reality is one methodological variable that allows us to control and manipulate that environmental context. A large body of literature exists to support the impact of visual flow, visual conditions, and visual perception on the planning and execution of movement. In rehabilitative practice, however, this technology has been employed mostly as a tool for motivation and enjoyment of physical exercise. The opportunity to modulate motor behavior through the parameters of the virtual world is often ignored in practice. In this article we present the results of experiments from our laboratories and from others demonstrating that presenting particular characteristics of the virtual world through different sensory modalities will modify balance and locomotor behavior. We will discuss how movement in the virtual world opens a window into the motor planning processes and informs us about the relative weighting of visual and somatosensory signals. Finally, we discuss how these findings should influence future treatment design.
Affiliation(s)
- Emily A. Keshner
- Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, PA, United States
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada
- Virtual Reality and Mobility Laboratory, CISSS Laval—Jewish Rehabilitation Hospital Site of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Laval, QC, Canada
8
Theoretical interpretation of drivers' gaze strategy influenced by optical flow. Sci Rep 2021; 11:2389. [PMID: 33504938] [PMCID: PMC7840940] [DOI: 10.1038/s41598-021-82062-1]
Abstract
Driver gaze analysis, particularly revealing where drivers look, is a key factor in understanding drivers' perception. Several studies have examined drivers' gaze behavior, and the two main hypotheses that have been developed are the Tangent Point (TP) and the Future Path Point (FP). The TP is a point on the inner side of the lane where the driver's gaze direction becomes tangential with the lane edge. The FP is a single point on the ideal future path of an individual driver on the road; the location of this point depends on the individual driver. While these gaze points have been verified and discussed in various psychological experiments, it is unclear why drivers gaze at them. Therefore, in this study, we used optical flow theory, a method for quantifying the extent to which drivers can perceive the future path of the vehicle, to understand drivers' gaze strategy. The results of numerical simulations demonstrated that optical flow theory can potentially estimate drivers' gaze behavior. We also conducted an experiment in which the observed driver gaze behavior was compared to the gaze strategy calculated from optical flow theory. The experimental results demonstrate that drivers' gaze can be estimated with an accuracy of 70.8% and 65.1% on circular and straight paths, respectively. Thus, these results suggest that optical flow theory can be a determining factor in drivers' gaze strategy.
9
Drivers use active gaze to monitor waypoints during automated driving. Sci Rep 2021; 11:263. [PMID: 33420150] [PMCID: PMC7794576] [DOI: 10.1038/s41598-020-80126-2]
Abstract
Automated vehicles (AVs) will change the role of the driver, from actively controlling the vehicle to primarily monitoring it. Removing the driver from the control loop could fundamentally change the way that drivers sample visual information from the scene, and in particular, alter the gaze patterns generated when under AV control. To better understand how automation affects gaze patterns this experiment used tightly controlled experimental conditions with a series of transitions from 'Manual' control to 'Automated' vehicle control. Automated trials were produced using either a 'Replay' of the driver's own steering trajectories or standard 'Stock' trials that were identical for all participants. Gaze patterns produced during Manual and Automated conditions were recorded and compared. Overall, the gaze patterns across conditions were very similar, but detailed analysis shows that drivers looked slightly further ahead (increased gaze time headway) during Automation with only small differences between Stock and Replay trials. A novel mixture modelling method decomposed gaze patterns into two distinct categories and revealed that the gaze time headway increased during Automation. Further analyses revealed that while there was a general shift to look further ahead (and fixate the bend entry earlier) when under automated vehicle control, similar waypoint-tracking gaze patterns were produced during Manual driving and Automation. The consistency of gaze patterns across driving modes suggests that active-gaze models (developed for manual driving) might be useful for monitoring driver engagement during Automated driving, with deviations in gaze behaviour from what would be expected during manual control potentially indicating that a driver is not closely monitoring the automated system.
10
Goncalves RC, Louw TL, Quaresma M, Madigan R, Merat N. The effect of motor control requirements on drivers' eye-gaze pattern during automated driving. Accident Analysis and Prevention 2020; 148:105788. [PMID: 33039820] [DOI: 10.1016/j.aap.2020.105788]
Abstract
This driving simulator study compared drivers' eye movements during a series of lane changes which required different levels of motor control for their execution. Participants completed 12 lane-changing manoeuvres across three drives, categorised by degree of manual engagement with the driving task: Manual drive, Partial automation, and Full automation. For Partial automation, drivers resumed control from the automated system and changed lane manually. For Full automation, the automated system managed the lane change, but participants initiated the manoeuvre by pulling the indicator lever. Results were compared to the Manual drive condition, where drivers controlled the vehicle at all times. For each driving condition, lane changing was initiated by drivers, at their discretion, in response to a slow-moving lead vehicle, which entered their lane. Failure to change lane did not result in a collision. To understand how different motor control requirements affected driver visual attention, eye movements to the road centre, and drivers' vertical and horizontal gaze dispersion were compared during different stages of the lane change manoeuvre, for the three drives. Results showed that drivers' attention to the road centre was generally lower for drives with less motor control requirements, especially when they were not engaged in the lane change process. However, as drivers moved closer to the lead vehicle, and prepared to change lane, the pattern of eye movements to the road centre converged, regardless of whether drivers were responsible for the manual control of the lane change. While there were no significant differences in horizontal gaze dispersion between the three drives, vertical dispersion for the two levels of automation was quite different, with higher dispersion during Partial automation, which was due to a higher reliance on the HMI placed in the centre console.
Affiliation(s)
- Rafael C Goncalves
- University of Leeds, Institute for Transport Studies, United Kingdom; LEUI, Pontifical Catholic University of Rio de Janeiro, Brazil.
- Tyron L Louw
- University of Leeds, Institute for Transport Studies, United Kingdom
- Manuela Quaresma
- LEUI, Pontifical Catholic University of Rio de Janeiro, Brazil
- Ruth Madigan
- University of Leeds, Institute for Transport Studies, United Kingdom
- Natasha Merat
- University of Leeds, Institute for Transport Studies, United Kingdom
11
Billington J, Webster RJ, Sherratt TN, Wilkie RM, Hassall C. The (Under)Use of Eye-Tracking in Evolutionary Ecology. Trends Ecol Evol 2020; 35:495-502. [PMID: 32396816] [DOI: 10.1016/j.tree.2020.01.003]
Abstract
To survive and pass on their genes, animals must perform many tasks that affect their fitness, such as mate-choice, foraging, and predator avoidance. The ability to make rapid decisions is dependent on the information that needs to be sampled from the environment and how it is processed. We highlight the need to consider visual attention within sensory ecology and advocate the use of eye-tracking methods to better understand how animals prioritise the sampling of information from their environments prior to making a goal-directed decision. We consider ways in which eye-tracking can be used to determine how animals work within attentional constraints and how environmental pressures may exploit these limitations.
Affiliation(s)
- J Billington
- School of Psychology, University of Leeds, Leeds, UK.
- R J Webster
- Department of Biology, Carleton University, Ottawa, Ontario, Canada
- T N Sherratt
- Department of Biology, Carleton University, Ottawa, Ontario, Canada
- R M Wilkie
- School of Psychology, University of Leeds, Leeds, UK
- C Hassall
- School of Biology, Faculty of Biological Sciences, University of Leeds, Leeds, UK
12
Mole CD, Lappi O, Giles O, Markkula G, Mars F, Wilkie RM. Getting Back Into the Loop: The Perceptual-Motor Determinants of Successful Transitions out of Automated Driving. Human Factors 2019; 61:1037-1065. [PMID: 30840514] [DOI: 10.1177/0018720819829594]
Abstract
OBJECTIVE: To present a structured, narrative review highlighting research into human perceptual-motor coordination that can be applied to automated vehicle (AV)-human transitions.
BACKGROUND: Manual control of vehicles is made possible by the coordination of perceptual-motor behaviors (gaze and steering actions), where active feedback loops enable drivers to respond rapidly to ever-changing environments. AVs will change the nature of driving to periods of monitoring followed by the human driver taking over manual control. The impact of this change is currently poorly understood.
METHOD: We outline an explanatory framework for understanding control transitions based on models of human steering control. This framework can be summarized as a perceptual-motor loop that requires (a) calibration and (b) gaze and steering coordination. A review of the current experimental literature on transitions is presented in the light of this framework.
RESULTS: The success of transitions is often measured using reaction times; however, the perceptual-motor mechanisms underpinning steering quality remain relatively unexplored.
CONCLUSION: Modeling the coordination of gaze and steering and the calibration of perceptual-motor control will be crucial to ensure safe and successful transitions out of automated driving.
APPLICATION: This conclusion poses a challenge for future research on AV-human transitions. Future studies need to provide an understanding of human behavior that will be sufficient to capture the essential characteristics of drivers reengaging control of their vehicle. The proposed framework can provide a guide for investigating specific components of human control of steering and potential routes to improving manual control recovery.
Affiliation(s)
- Otto Lappi
- Cognitive Science, University of Helsinki, Finland
13
Affiliation(s)
- Katja Fiehler
- Department of Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
14
Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans Use Predictive Gaze Strategies to Target Waypoints for Steering. Sci Rep 2019; 9:8344. [PMID: 31171850] [PMCID: PMC6554351] [DOI: 10.1038/s41598-019-44723-0]
Abstract
A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models have a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1 participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically every 0.75 s and predictably 2 s ahead, except that in 25% of the cases the waypoint at the expected location was not displayed. In these cases, there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This would not be expected based upon existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path following behaviours.
Affiliation(s)
- Samuel Tuhkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Callum Mole
- School of Psychology, University of Leeds, Leeds, UK
- Otto Lappi
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
| |
Collapse
|
15
|
Voudouris D, Smeets JBJ, Fiehler K, Brenner E. Gaze when reaching to grasp a glass. J Vis 2018; 18:16. [PMID: 30167674 DOI: 10.1167/18.8.16] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip, that is, when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' gaze was biased to some extent toward the position of the next action, but gaze was not influenced consistently across participants. Gaze was also not influenced consistently across the experiments for individual participants, even for those who participated in both experiments. We conclude that gaze is not simply determined by the identity of the digit or by details of the contact points, such as their visibility, but that gaze is just as sensitive to other factors, such as where one will manipulate the object after grasping.
Affiliation(s)
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
- Katja Fiehler
- Experimental Psychology, Justus-Liebig University, Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
16
Mecheri S, Lobjois R. Steering Control in a Low-Cost Driving Simulator: A Case for the Role of Virtual Vehicle Cab. HUMAN FACTORS 2018; 60:719-734. [PMID: 29664680 DOI: 10.1177/0018720818769253] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
OBJECTIVE The aim of this study was to investigate steering control in a low-cost driving simulator with and without a virtual vehicle cab. BACKGROUND In low-cost simulators, the lack of a vehicle cab denies driver access to vehicle width, which could affect steering control, insofar as locomotor adjustments are known to be based on action-scaled visual judgments of the environment. METHOD Two experiments were conducted in which steering control with and without a virtual vehicle cab was investigated in a within-subject design, using cornering and straight-lane-keeping tasks. RESULTS Driving around curves without vehicle cab information made drivers deviate more from the lane center toward the inner edge in right (virtual cab = 4 ± 19 cm; no cab = 42 ± 28 cm; at the apex of the curve, p < .001) but not in left curves. More lateral deviation from the lane center toward the edge line was also found in driving without the virtual cab on straight roads (virtual cab = 21 ± 28 cm; no cab = 36 ± 27 cm; p < .001), whereas driving stability and presence ratings were not affected. In both experiments, the greater lateral deviation in the no-cab condition led to significantly more time driving off the lane. CONCLUSION The findings strongly suggest that without cab information, participants underestimate the distance to the right edge of the car (in contrast to the left edge) and thus vehicle width. This produces considerable differences in the steering trajectory. APPLICATION Providing a virtual vehicle cab must be encouraged for more effectively capturing drivers' steering control in low-cost simulators.
17
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253 PMCID: PMC6002115 DOI: 10.1371/journal.pone.0199097] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 05/31/2018] [Indexed: 11/21/2022] Open
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments only varied the visual stimulus reliability, and it is unclear what occurs when inertial reliability is varied. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weight calculated from the threshold. In 2 subjects empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted greater than optimal predictions. On average, the inertial stimulus was weighted greater than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
18
Hanna M, Fung J, Lamontagne A. Multisensory control of a straight locomotor trajectory. J Vestib Res 2018; 27:17-25. [PMID: 28387689 DOI: 10.3233/ves-170603] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Locomotor steering is contingent upon orienting oneself spatially in the environment. When the head is turned while walking, the optic flow projected onto the retina is a complex pattern comprising a translational and a rotational component. We have created a unique paradigm to simulate different optic flows in a virtual environment. We hypothesized that non-visual (vestibular and somatosensory) cues are required for proper control of a straight trajectory while walking. This research study included 9 healthy young subjects walking in a large physical space (40 × 25 m²) while viewing the virtual environment in a helmet-mounted display. They were instructed to walk straight in the physical world while being exposed to three conditions: (1) self-initiated active head turns (AHT: 40° right, left, or none); (2) visually simulated head turns (SHT); and (3) visually simulated head turns with no target element (SHT_NT). Conditions 1 and 2 involved an eye-level target which subjects were instructed to fixate, whereas condition 3 was similar to condition 2 but with no target. Identical retinal flow patterns were present in the AHT and SHT conditions, whereas non-visual cues differed in that a head rotation was sensed only in AHT but not in SHT. Body motions were captured by a 12-camera Vicon system. Horizontal orientations of the head and body segments, as well as the trajectory of the body's centre of mass, were analyzed. SHT and SHT_NT yielded similar results. Heading and body segment orientations changed in the direction opposite to the head turns in the SHT conditions. Heading remained unchanged across head turn directions in AHT. Results suggest that non-visual information is used in the control of heading while being exposed to changing rotational optic flows. The small magnitude of the changes in the SHT conditions suggests that the CNS can re-weight relevant sources of information to minimize heading errors in the presence of sensory conflicts.
Affiliation(s)
- Maxim Hanna
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
- Joyce Fung
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, QC, Canada; Feil and Oberfeld/CRIR Research Centre, Jewish Rehabilitation Hospital, CISSS-Laval, QC, Canada
19
van Leeuwen PM, de Groot S, Happee R, de Winter JCF. Differences between racing and non-racing drivers: A simulator study using eye-tracking. PLoS One 2017; 12:e0186871. [PMID: 29121090 PMCID: PMC5679571 DOI: 10.1371/journal.pone.0186871] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 09/21/2017] [Indexed: 12/15/2022] Open
Abstract
Motorsport has developed into a professional international competition. However, limited research is available on the perceptual and cognitive skills of racing drivers. By means of a racing simulator, we compared the driving performance of seven racing drivers with ten non-racing drivers. Participants were tasked to drive the fastest possible lap time. Additionally, both groups completed a choice reaction time task and a tracking task. Results from the simulator showed faster lap times, higher steering activity, and a more optimal racing line for the racing drivers than for the non-racing drivers. The non-racing drivers’ gaze behavior corresponded to the tangent point model, whereas racing drivers showed a more variable gaze behavior combined with larger head rotations while cornering. Results from the choice reaction time task and tracking task showed no statistically significant difference between the two groups. Our results are consistent with the current consensus in sports sciences in that task-specific differences exist between experts and novices while there are no major differences in general cognitive and motor abilities.
Affiliation(s)
- Peter M. van Leeuwen
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Stefan de Groot
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Riender Happee
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
- Joost C. F. de Winter
- Delft University of Technology, Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Mekelweg 2, CD Delft, The Netherlands
20
Mole CD, Wilkie RM. Looking forward to safer HGVs: The impact of mirrors on driver reaction times. ACCIDENT; ANALYSIS AND PREVENTION 2017; 107:173-185. [PMID: 28865992 DOI: 10.1016/j.aap.2017.07.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 05/31/2017] [Accepted: 07/28/2017] [Indexed: 06/07/2023]
Abstract
Heavy Goods Vehicle (HGV) collisions are responsible for a disproportionate number of urban vulnerable road user casualties (VRU - cyclists and pedestrians). Blind-spots to the front and side of HGVs can make it difficult (sometimes impossible) to detect close proximity VRUs and may be the cause of some collisions. The current solution to this problem is to provide additional mirrors that can allow the driver to see into the blind-spots. However, keeping track of many mirrors requires frequent off-road glances which can be difficult to execute during demanding driving situations. One suggestion is that driving safety could be improved by redesigning cabs in order to reduce/remove blind-spot regions, with the aim of reducing the need for mirrors, and increasing detection rates (and thereby reducing collisions). To examine whether mirrors delay driver responses we created a series of simulated driving tasks and tested regular car drivers and expert HGV drivers. First we measured baseline reaction times to objects appearing when not driving ('Parked'). Participants then repeated the task whilst driving through a simulated town (primary driving tasks were steering, braking, and following directional signs): driving slowed reaction times to objects visible in mirrors but not to objects visible through the front windscreen. In a second experiment cognitive load was increased, this slowed RTs overall but did not alter the pattern of responses across windows and mirrors. Crucially, we demonstrate that the distribution of mirror RTs can be captured simply by the mirror's spatial position (eccentricity). These findings provide robust evidence that drivers are slower reacting to objects only visible in eccentric mirrors compared to direct viewing through the front windscreen.
21
Crane BT. Effect of eye position during human visual-vestibular integration of heading perception. J Neurophysiol 2017; 118:1609-1621. [PMID: 28615328 DOI: 10.1152/jn.00037.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2017] [Revised: 06/13/2017] [Accepted: 06/13/2017] [Indexed: 11/22/2022] Open
Abstract
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems.NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions, making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate up to the perceptual level and that during the multisensory task perception depends on relative stimulus reliability.
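The reliability-weighted integration examined here follows the standard Bayesian (maximum-likelihood) cue-combination rule: each cue is weighted by its inverse variance, so the less noisy cue dominates the combined percept. A minimal sketch, assuming Gaussian cue noise; the function name and variable names are ours, and the numbers are loosely based on the biases and thresholds reported above, purely for illustration:

```python
def combine_headings(h_visual, sigma_visual, h_inertial, sigma_inertial):
    """Maximum-likelihood combination of a visual and an inertial heading cue.

    Each cue is weighted by its reliability (inverse variance); the combined
    estimate is never less reliable than either cue alone.
    """
    r_visual = 1.0 / sigma_visual**2      # reliability of the visual cue
    r_inertial = 1.0 / sigma_inertial**2  # reliability of the inertial cue
    w_visual = r_visual / (r_visual + r_inertial)
    w_inertial = 1.0 - w_visual
    combined = w_visual * h_visual + w_inertial * h_inertial
    # Predicted variance of the combined estimate
    var_combined = 1.0 / (r_visual + r_inertial)
    return combined, w_inertial, var_combined

# Illustration: a visual cue twice as noisy as the inertial cue receives
# an optimal weight of only 0.2 (inertial weight 0.8).
heading, w_i, var_c = combine_headings(h_visual=-9.6, sigma_visual=9.6,
                                       h_inertial=3.6, sigma_inertial=4.8)
```

Whether observers actually apply these optimal weights is what the study tests: the reported over-weighting of the inertial cue corresponds to an empirical inertial weight exceeding the value this sketch predicts from the thresholds.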
Affiliation(s)
- Benjamin T Crane
- Department of Otolaryngology, University of Rochester, Rochester, New York
22
Salvucci DD, Gray R. A two-point visual control model of steering. Perception 2004; 33:1233-1248.
Abstract
When steering down a winding road, drivers have been shown to use both near and far regions of the road for guidance during steering. We propose a model of steering that explicitly embodies this idea, using both a ‘near point’ to maintain a central lane position and a ‘far point’ to account for the upcoming roadway. Unlike control models that integrate near and far information to compute curvature or more complex features, our model relies solely on one perceptually plausible feature of the near and far points, namely the visual direction to each point. The resulting parsimonious model can be run in simulation within a realistic highway environment to facilitate direct comparison between model and human behavior. Using such simulations, we demonstrate that the proposed two-point model is able to account for four interesting aspects of steering behavior: curve negotiation with occluded visual regions, corrective steering after a lateral drift, lane changing, and individual differences.
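The control law described in the abstract can be sketched in a few lines. The structure below follows the two-point idea (steering corrections driven by the rates of change of the visual directions to the near and far points, plus a near-point term that nulls lane-position error), but the function name and gain values are our own illustrative choices, not the fitted parameters of the published model:

```python
def two_point_steering_update(steer, theta_near, theta_far,
                              theta_near_prev, theta_far_prev, dt,
                              k_far=16.0, k_near=3.0, k_i=1.0):
    """One discrete update of a two-point steering controller.

    theta_near / theta_far are the visual directions (rad) to the near
    and far points; the previous values allow finite-difference rates.
    """
    d_far = (theta_far - theta_far_prev) / dt     # far point: anticipation
    d_near = (theta_near - theta_near_prev) / dt  # near point: stabilisation
    # The near-point direction itself pulls the vehicle back to lane centre.
    d_steer = (k_far * d_far + k_near * d_near + k_i * theta_near) * dt
    return steer + d_steer

# Steady state: both points stationary, near point centred -> no change.
s0 = two_point_steering_update(0.1, 0.0, 0.2, 0.0, 0.2, 0.05)
# Far point drifting (e.g. curve onset) -> steering turns toward it.
s1 = two_point_steering_update(0.0, 0.0, 0.2, 0.0, 0.1, 0.05)
```

In simulation these angles would be recomputed from the vehicle's position each frame; the point of the model is that both inputs are directly available visual directions, with no explicit curvature computation.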
Affiliation(s)
- Dario D Salvucci
- Department of Computer Science, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA
23
Longitudinal study of preterm and full-term infants: High-density EEG analyses of cortical activity in response to visual motion. Neuropsychologia 2016; 84:89-104. [DOI: 10.1016/j.neuropsychologia.2016.02.001] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2015] [Revised: 01/14/2016] [Accepted: 02/03/2016] [Indexed: 11/21/2022]
24
Crane BT. Coordinates of Human Visual and Inertial Heading Perception. PLoS One 2015; 10:e0135539. [PMID: 26267865 PMCID: PMC4534459 DOI: 10.1371/journal.pone.0135539] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 07/22/2015] [Indexed: 11/22/2022] Open
Abstract
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
Affiliation(s)
- Benjamin Thomas Crane
- Department of Otolaryngology, University of Rochester, Rochester, NY, United States of America
- Department of Bioengineering, University of Rochester, Rochester, NY, United States of America
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
25
Authié CN, Hilt PM, N'Guyen S, Berthoz A, Bennequin D. Differences in gaze anticipation for locomotion with and without vision. Front Hum Neurosci 2015; 9:312. [PMID: 26106313 PMCID: PMC4458691 DOI: 10.3389/fnhum.2015.00312] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2014] [Accepted: 05/16/2015] [Indexed: 12/02/2022] Open
Abstract
Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition.
Affiliation(s)
- Colas N Authié
- Laboratoire de Physiologie de la Perception et de l'Action, UMR 7152, Collège de France, Centre National de la Recherche Scientifique, Paris, France
- Pauline M Hilt
- Laboratoire de Physiologie de la Perception et de l'Action, UMR 7152, Collège de France, Centre National de la Recherche Scientifique, Paris, France
- Steve N'Guyen
- Laboratoire de Physiologie de la Perception et de l'Action, UMR 7152, Collège de France, Centre National de la Recherche Scientifique, Paris, France
- Alain Berthoz
- Laboratoire de Physiologie de la Perception et de l'Action, UMR 7152, Collège de France, Centre National de la Recherche Scientifique, Paris, France
- Daniel Bennequin
- UFR de Mathématiques, Équipe Géométrie et Dynamique, Institut de Mathématiques de Jussieu, Université Paris Diderot-Paris 7, UMR 7586, Paris, France
26
Okafuji Y, Fukao T, Inou H. Development of Automatic Steering System by Modeling Human Behavior Based on Optical Flow. JOURNAL OF ROBOTICS AND MECHATRONICS 2015. [DOI: 10.20965/jrm.2015.p0136] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
[Figure: manipulated optical flow field]
Recently, various driving support systems have been developed to improve safety. However, because drivers occasionally feel that something is wrong, systems need to be designed based on information that drivers perceive. We therefore focused on optical flow, one of the visual information sources humans use, to improve driving feel. Humans are thought to perceive the direction of self-motion from optical flow and to utilize it during driving. By applying an optical flow model to automatic steering systems, a human-oriented system might be developed. In this paper, we derive the focus of expansion (FOE) in the camera frame, which is the direction of self-motion in optical flow, and propose a nonlinear control method based on the FOE. The effectiveness of the proposed method was verified through a vehicle simulation, and the results showed that the proposed method simulates human behavior. Based on these results, this approach may serve as a foundation for human-oriented system designs.
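As a rough illustration of the geometry behind an FOE-based controller: under pure camera translation every optical-flow vector points radially away from the FOE, so the FOE can be recovered as the least-squares intersection of the lines through each sampled image point along its flow vector. The sketch below is our own construction (not the paper's derivation) and solves the resulting 2x2 normal equations directly:

```python
def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from sampled optical flow.

    Each sample (p, v) constrains the FOE (x, y) to lie on the line
    through p along v:  v_y*x - v_x*y = v_y*p_x - v_x*p_y.
    Accumulating these constraints yields 2x2 normal equations.
    """
    s11 = s12 = s22 = t1 = t2 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        b = vy * px - vx * py
        s11 += vy * vy
        s12 += -vy * vx
        s22 += vx * vx
        t1 += vy * b
        t2 += -vx * b
    det = s11 * s22 - s12 * s12  # singular if all flow vectors are parallel
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

# Synthetic radial flow expanding from (2, 1) is recovered exactly.
pts = [(4.0, 1.0), (2.0, 5.0), (0.0, 1.0), (2.0, -3.0), (5.0, 4.0)]
flows = [(px - 2.0, py - 1.0) for (px, py) in pts]
foe_x, foe_y = estimate_foe(pts, flows)
```

Real flow fields also contain a rotational component, which would have to be removed (or modelled) before this purely translational construction applies.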
27
Palmisano S, Allison RS, Schira MM, Barry RJ. Future challenges for vection research: definitions, functional significance, measures, and neural bases. Front Psychol 2015; 6:193. [PMID: 25774143 PMCID: PMC4342884 DOI: 10.3389/fpsyg.2015.00193] [Citation(s) in RCA: 75] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2014] [Accepted: 02/07/2015] [Indexed: 11/25/2022] Open
Abstract
This paper discusses four major challenges facing modern vection research. Challenge 1 (Defining Vection) outlines the different ways that vection has been defined in the literature and discusses their theoretical and experimental ramifications. The term vection is most often used to refer to visual illusions of self-motion induced in stationary observers (by moving, or simulating the motion of, the surrounding environment). However, vection is increasingly being used to also refer to non-visual illusions of self-motion, visually mediated self-motion perceptions, and even general subjective experiences (i.e., “feelings”) of self-motion. The common thread in all of these definitions is the conscious subjective experience of self-motion. Thus, Challenge 2 (Significance of Vection) tackles the crucial issue of whether such conscious experiences actually serve functional roles during self-motion (e.g., in terms of controlling or guiding the self-motion). After more than 100 years of vection research there has been surprisingly little investigation into its functional significance. Challenge 3 (Vection Measures) discusses the difficulties with existing subjective self-report measures of vection (particularly in the context of contemporary research), and proposes several more objective measures of vection based on recent empirical findings. Finally, Challenge 4 (Neural Basis) reviews the recent neuroimaging literature examining the neural basis of vection and discusses the hurdles still facing these investigations.
Affiliation(s)
- Stephen Palmisano
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Robert S Allison
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Mark M Schira
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Robert J Barry
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
28
Charette C, Routhier F, McFadyen BJ. Visuo-locomotor coordination for direction changes in a manual wheelchair as compared to biped locomotion in healthy subjects. Neurosci Lett 2015; 588:83-7. [PMID: 25562632 DOI: 10.1016/j.neulet.2015.01.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2014] [Revised: 12/19/2014] [Accepted: 01/02/2015] [Indexed: 10/24/2022]
Abstract
The visual system during walking provides travel path and environmental information. Although the manual wheelchair (MWC) is also a frequent mode of locomotion, its underlying visuo-locomotor control is not well understood. This study begins to understand the visuo-locomotor coordination for MWC navigation in relation to biped gait during direction changes in healthy subjects. Eight healthy male subjects (26.9±6.4 years) were asked to walk as well as to propel a MWC straight ahead and while changing direction by 45° to the right guided by a vertical pole. Body and MWC movement (speed, minimal clearance, point of deviation, temporal body coordination, relative timing of body rotations) and gaze behavior were analysed. There was a main speed effect for direction and a direction by mode interaction with slower speeds for MWC direction change. Point of deviation was later for MWC direction change and always involved a counter movement (seen for vehicular control) with greater minimal distance from the vertical pole as compared to biped gait. In straight ahead locomotion, subjects predominantly fixed their gaze on the end target for both locomotor modes while there was a clear trend for subjects to fixate on the vertical pole more for MWC direction change. When changing direction, head movement always preceded gaze changes, which was followed by trunk movement for both modes. Yet while subjects turned the trunk at the same time during approach regardless of locomotor mode, head movement was earlier for MWC locomotion. These results suggest that MWC navigation combines both biped locomotor and vehicular-based movement control. Head movement to anticipate path deviations and lead steering for locomotion appears to be stereotypic across locomotor modes, while specific gaze behavior predominantly depends on the environmental demands.
Affiliation(s)
- Caroline Charette
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (CIRRIS), Quebec City Rehabilitation Institute, Quebec, Canada; Faculty of Medicine, Department of Rehabilitation, Laval University, Quebec, Canada
- François Routhier
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (CIRRIS), Quebec City Rehabilitation Institute, Quebec, Canada; Faculty of Medicine, Department of Rehabilitation, Laval University, Quebec, Canada
- Bradford J McFadyen
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (CIRRIS), Quebec City Rehabilitation Institute, Quebec, Canada; Faculty of Medicine, Department of Rehabilitation, Laval University, Quebec, Canada
29
van Leeuwen PM, Gómez i Subils C, Jimenez AR, Happee R, de Winter JCF. Effects of visual fidelity on curve negotiation, gaze behaviour and simulator discomfort. ERGONOMICS 2015; 58:1347-1364. [PMID: 25693035 DOI: 10.1080/00140139.2015.1005172] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Technological developments have led to increased visual fidelity of driving simulators. However, simplified visuals have potential advantages, such as improved experimental control, reduced simulator discomfort and increased generalisability of results. In this driving simulator study, we evaluated the effects of visual fidelity on driving performance, gaze behaviour and subjective discomfort ratings. Twenty-four participants drove a track with 90° corners in (1) a high-fidelity, textured environment, (2) a medium-fidelity, non-textured environment without scenery objects and (3) a low-fidelity monochrome environment that only showed lane markers. The high fidelity level resulted in higher steering activity on straight road segments, higher driving speeds and higher gaze variance than the lower fidelity levels. No differences were found between the two lower fidelity levels. In conclusion, textures and objects were found to affect steering activity and driving performance; however, gaze behaviour during curve negotiation and self-reported simulator discomfort were unaffected. Practitioner summary: In a driving simulator study, three levels of visual fidelity were evaluated. The results indicate that the highest fidelity level, characterised by a textured environment, resulted in higher steering activity, higher driving speeds and higher variance of horizontal gaze than the two lower fidelity levels without textures.
Affiliation(s)
- Peter M van Leeuwen
- Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD, Delft, The Netherlands
30
Tan HS, Huang J. Design of a high-performance automatic steering controller for bus revenue service based on how drivers steer. IEEE Transactions on Robotics 2014. [DOI: 10.1109/tro.2014.2331092]
31
Rivers TJ, Sirota MG, Guttentag AI, Ogorodnikov DA, Shah NA, Beloozerova IN. Gaze shifts and fixations dominate gaze behavior of walking cats. Neuroscience 2014; 275:477-99. [PMID: 24973656] [PMCID: PMC4169884] [DOI: 10.1016/j.neuroscience.2014.06.034]
Abstract
Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required for successful walking, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5-m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body's speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats' gaze behavior during all locomotor tasks, jointly occupying 62-84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to themselves, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior "gaze stepping". Each gaze shift took gaze to a site approximately 75-80 cm in front of the cat, which the cat reached in 0.7-1.2 s and 1.1-1.6 strides. Constant gaze occupied only 5-21% of the time cats spent looking at the walking surface.
Affiliation(s)
- T J Rivers
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA
- M G Sirota
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA
- A I Guttentag
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA; Department of Chemistry and Biochemistry, University of California Los Angeles, Los Angeles, CA 90024, USA
- D A Ogorodnikov
- Department of Neurology, Mount Sinai School of Medicine, New York, NY 10029, USA
- N A Shah
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA
- I N Beloozerova
- Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ 85013, USA
32
Vansteenkiste P, Van Hamme D, Veelaert P, Philippaerts R, Cardon G, Lenoir M. Cycling around a curve: the effect of cycling speed on steering and gaze behavior. PLoS One 2014; 9:e102792. [PMID: 25068380] [PMCID: PMC4113223] [DOI: 10.1371/journal.pone.0102792]
Abstract
Although it is generally accepted that visual information guides steering, it is still unclear whether a curvature matching strategy or a ‘look where you are going’ strategy is used while steering through a curved road. The current experiment investigated to what extent the existing models for curve driving also apply to cycling around a curve, and tested the influence of cycling speed on steering and gaze behavior. Twenty-five participants were asked to cycle through a semicircular lane three consecutive times at three different speeds while staying in the center of the lane. The observed steering behavior suggests that an anticipatory steering strategy was used at curve entrance and a compensatory strategy was used to steer through the actual bend of the curve. A shift of gaze from the center to the inside edge of the lane indicates that at low cycling speed, the ‘look where you are going’ strategy was preferred, while at higher cycling speeds participants seemed to prefer the curvature matching strategy. The authors suggest that visual information from both steering strategies contributes to the steering system and can be used in a flexible way. Based on a familiarization effect, it can be assumed that steering is not only guided by vision but that a short-term learning component should also be taken into account.
Affiliation(s)
- Pieter Vansteenkiste
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- David Van Hamme
- Department of Telecommunications and Information Processing, Ghent University, Ghent, Belgium
- Peter Veelaert
- Department of Telecommunications and Information Processing, Ghent University, Ghent, Belgium
- Renaat Philippaerts
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- Greet Cardon
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- Matthieu Lenoir
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
33
Higuchi T. Visuomotor control of human adaptive locomotion: understanding the anticipatory nature. Front Psychol 2013; 4:277. [PMID: 23720647] [PMCID: PMC3655271] [DOI: 10.3389/fpsyg.2013.00277]
Abstract
To maintain balance during locomotion, the central nervous system (CNS) accommodates changes in the constraints of the spatial environment (e.g., the existence of an obstacle or changes in surface properties). Locomotion while modifying the basic movement patterns in response to such constraints is referred to as adaptive locomotion. The most powerful means of ensuring balance during adaptive locomotion is to visually perceive the environmental properties at a distance and modify the movement patterns in an anticipatory manner to avoid perturbation altogether. For this reason, visuomotor control of adaptive locomotion is characterized, at least in part, by its anticipatory nature. The purpose of the present article is to review the relevant studies which revealed the anticipatory nature of the visuomotor control of adaptive locomotion. The anticipatory locomotor adjustments for stationary and changeable environments, as well as the spatio-temporal patterns of gaze behavior that support the anticipatory locomotor adjustments, are described. This description clearly shows that anticipatory locomotor adjustments are initiated when an object of interest (e.g., a goal or obstacle) still lies in far space. This review also shows that, as a prerequisite of anticipatory locomotor adjustments, environmental properties are accurately perceived from a distance in relation to the individual's action capabilities.
Affiliation(s)
- Takahiro Higuchi
- Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan
34
Cirio G, Olivier AH, Marchal M, Pettré J. Kinematic evaluation of virtual walking trajectories. IEEE Transactions on Visualization and Computer Graphics 2013; 19:671-680. [PMID: 23428452] [DOI: 10.1109/tvcg.2013.34]
Abstract
Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only for geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework in a user study which considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
35
Vansteenkiste P, Cardon G, D'Hondt E, Philippaerts R, Lenoir M. The visual control of bicycle steering: the effects of speed and path width. Accident Analysis and Prevention 2013; 51:222-227. [PMID: 23274280] [DOI: 10.1016/j.aap.2012.11.025]
Abstract
Although cycling is a widespread form of transportation, little is known about the visual behaviour of bicycle users. This study examined whether the visual behaviour of cyclists can be explained by the two-level model of steering described for car driving, and how it is influenced by cycling speed and lane width. In addition, this study investigated whether travel fixations, described during walking, can also be found during a cycling task. Twelve adult participants were asked to cycle along three 15-m-long cycling lanes, 10, 25 and 40 cm wide, at three different self-selected speeds (i.e., slow, preferred and fast). Participants' gaze behaviour was recorded at 50 Hz using a head-mounted eye tracker and the resulting scene video with overlay gaze cursor was analysed frame by frame. Four types of fixations were distinguished: (1) travel fixations, (2) fixations inside the cycling lane (path), (3) fixations to the final metre of the lane (goal), and (4) fixations outside of the cycling lane (external). Participants were found to mainly watch the path (41%) and goal (40%) regions, while very few travel fixations were made (<5%). Instead of travel fixations, an optokinetic nystagmus was revealed when looking at the near path. Large variability between subjects in fixation location suggests that different strategies were used. Wider lanes resulted in a shift of gaze towards the end of the lane and to external regions, whereas higher cycling speeds resulted in a more distant gaze behaviour and more travel fixations. To conclude, the two-level model of steering as described for car driving is not fully in line with our findings during cycling, but the assumption that both the near and the far regions are necessary for efficient steering seems valid. A new model for visual behaviour during goal-directed locomotion is presented.
36
Mars F, Navarro J. Where we look when we drive with or without active steering wheel control. PLoS One 2012; 7:e43858. [PMID: 22928043] [PMCID: PMC3425540] [DOI: 10.1371/journal.pone.0043858]
Abstract
Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of the visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination control between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving.
Affiliation(s)
- Franck Mars
- IRCCyN (Institut de Recherche en Communication et en Cybernétique de Nantes), LUNAM Université and CNRS, Nantes, France
37
Durant S, Zanker JM. Variation in the local motion statistics of real-life optic flow scenes. Neural Comput 2012; 24:1781-805. [PMID: 22428592] [DOI: 10.1162/neco_a_00294]
Abstract
Optic flow motion patterns can be a rich source of information about our own movement and about the structure of the environment we are moving in. We investigate the information available to the brain under real operating conditions by analyzing video sequences generated by physically moving a camera through various typical human environments. We consider to what extent the motion signal maps generated by a biologically plausible, two-dimensional array of correlation-based motion detectors (2DMD) not only depend on egomotion, but also reflect the spatial setup of such environments. We analyzed the local motion outputs by extracting the relative amounts of detected directions and comparing the spatial distribution of the motion signals to that of idealized optic flow. Using a simple template matching estimation technique, we are able to extract the focus of expansion and find relatively small errors that are distributed in characteristic patterns in different scenes. This shows that all types of scenes provide suitable motion information for extracting egomotion, despite the substantial levels of noise affecting the motion signal distributions, attributed to the sparse nature of optic flow and the presence of camera jitter. However, there are large differences in the shape of the direction distributions between different types of scenes; in particular, man-made office scenes are heavily dominated by directions in the cardinal axes, which is much less apparent in outdoor forest scenes. Further examination of motion magnitudes at different scales and the location of motion information in a scene revealed different patterns across different scene categories. This suggests that self-motion patterns are not only relevant for deducing heading direction and speed but also provide a rich information source for scene structure and could be important for the rapid formation of the gist of a scene under normal human locomotion.
Affiliation(s)
- Szonya Durant
- Department of Psychology, Royal Holloway University of London, Egham, Surrey SW116HJ, UK
38
Wiener JM, Hölscher C, Büchner S, Konieczny L. Gaze behaviour during space perception and spatial decision making. Psychological Research 2011; 76:713-29. [PMID: 22139023] [DOI: 10.1007/s00426-011-0397-5]
Abstract
A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features and that gaze bias effects are a general phenomenon of visual decision making.
Affiliation(s)
- Jan M Wiener
- Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, UK
39
Authié CN, Mestre DR. Optokinetic nystagmus is elicited by curvilinear optic flow during high speed curve driving. Vision Res 2011; 51:1791-800. [PMID: 21704061] [DOI: 10.1016/j.visres.2011.06.010]
Abstract
When analyzing gaze behavior during curve driving, it is commonly accepted that gaze is mostly located in the vicinity of the tangent point, that is, the point where the gaze direction is tangent to the inside edge of the curve. This approach neglects the fact that the tangent point is actually motionless only in the limit case when the trajectory precisely follows the curve's geometry. In this study, we measured gaze behavior during curve driving, with the general hypothesis that gaze is not static when exposed to a global optical flow due to self-motion. In order to study spatio-temporal aspects of gaze during curve driving, we used a driving simulator coupled to a gaze recording system. Ten participants drove seven runs on a track composed of eight curves of various radii (50, 100, 200 and 500 m), with each radius appearing in both right and left directions. Results showed that average gaze position was, as previously described, located in the vicinity of the tangent point. However, analysis also revealed the presence of a systematic optokinetic nystagmus (OKN) around the tangent point position. The OKN slow phase direction does not match the local optic flow direction, while slow phase speed is about half of the local speed. Higher directional gains are observed when averaging the entire optical flow projected on the simulation display, whereas the best speed gain is obtained for a 2° optic flow area centered on the instantaneous gaze location. The present study confirms that the tangent point is a privileged feature in the dynamic visual scene during curve driving, and underlines a contribution of the global optical flow to gaze behavior during active self-motion.
Affiliation(s)
- Colas N Authié
- UMR 6233, Institut des Sciences du Mouvement Etienne-Jules Marey, CNRS & Université de la Méditerranée, France
40
Abstract
Mark Changizi et al. (2008) claim that it is possible to systematically organize more than 50 kinds of illusions in a 7 × 4 matrix of 28 classes. This systematization, they further maintain, can be explained by the operation of a single visual processing latency correction mechanism that they call "perceiving the present" (PTP). This brief report raises some concerns about the way a number of illusions are classified by the proposed systematization. It also poses two general problems, one empirical and one conceptual, for the PTP approach.
41
Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls. Cogn Process 2011; 12:267-76. [PMID: 21384271] [DOI: 10.1007/s10339-011-0395-y]
Abstract
We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0 and 2.5 cm wide). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, which they exited more slowly than the controls; the controls stepped immediately but then slowed their gait to navigate the line, which they exited faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than the controls' (1,327.64 ms). The control group favoured a proximal gaze strategy, allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy, allocating 55.49% of their QE fixations to the exit/VSA and 44.51% to the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when tasks become more constrained.
42
Visuomotor control of steering: the artefact of the matter. Exp Brain Res 2011; 208:475-89. [DOI: 10.1007/s00221-010-2530-x]
43
Egger SW, Engelhardt HR, Britten KH. Monkey steering responses reveal rapid visual-motor feedback. PLoS One 2010; 5:e11975. [PMID: 20694144] [PMCID: PMC2915918] [DOI: 10.1371/journal.pone.0011975]
Abstract
The neural mechanisms underlying primate locomotion are largely unknown. While behavioral and theoretical work has provided a number of ideas of how navigation is controlled, progress will require direct physiological tests of the underlying mechanisms. In turn, this will require development of appropriate animal models. We trained three monkeys to track a moving visual target in a simple virtual environment, using a joystick to control their direction. The monkeys learned to quickly and accurately turn to the target, and their steering behavior was quite stereotyped and reliable. Monkeys typically responded to abrupt steps of target direction with a biphasic steering movement, exhibiting modest but transient overshoot. Response latencies averaged approximately 300 ms, and monkeys were typically back on target after about 1 s. We also exploited the variability of responses about the mean to explore the time-course of correlation between target direction and steering response. This analysis revealed a broad peak of correlation spanning approximately 400 ms in the recent past, during which steering errors provoke a compensatory response. This suggests that a continuous visual-motor loop controls steering behavior, even during the epoch surrounding transient inputs. Many results from the human literature also suggest that steering is controlled by such a closed loop. The similarity of our results to those in humans suggests the monkey is a very good animal model for human visually guided steering.
Affiliation(s)
- Seth W. Egger
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Heidi R. Engelhardt
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California Davis, Davis, California, United States of America
44
Wilkie RM, Kountouriotis GK, Merat N, Wann JP. Using vision to control locomotion: looking where you want to go. Exp Brain Res 2010; 204:539-47. [PMID: 20556368] [DOI: 10.1007/s00221-010-2321-4]
Abstract
Looking at the inside edge of the road when steering around a bend seems to be a well-established strategy linked to using a feature called the tangent point. An alternative proposal suggests that the gaze patterns observed when steering result from looking at the points in the world through which one wishes to pass. In this explanation, fixation on or near the tangent point results from trying to take a trajectory that cuts the corner. To test these accounts, we recorded gaze and steering when taking different paths along curved roadways. Participants could gauge and maintain their lateral distance, but crucially, gaze was predominantly directed to the region proximal to the desired path rather than toward the tangent point per se. These results show that successful control of high-speed locomotion requires fixations in the direction you want to steer rather than using a single road feature like the tangent point.
Affiliation(s)
- R M Wilkie
- Institute of Psychological Sciences, University of Leeds, Leeds LS2 9JT, UK
45
Royden CS, Connors EM. The detection of moving objects by moving observers. Vision Res 2010; 50:1014-24. [DOI: 10.1016/j.visres.2010.03.008]
46
Changizi MA, Hsieh A, Nijhawan R, Kanai R, Shimojo S. Perceiving the present and a systematization of illusions. Cogn Sci 2010; 32:459-503. [DOI: 10.1080/03640210802035191]
47
Gaze behavior during locomotion through apertures: the effect of locomotion forms. Hum Mov Sci 2009; 28:760-71. [PMID: 19783059] [DOI: 10.1016/j.humov.2009.07.012]
Abstract
The present study investigated spatio-temporal patterns of gaze fixations for passing safely through apertures. We focused on whether fixation patterns changed in response to changes in locomotion forms. Eight participants approached and passed through a narrow doorway using the following locomotion forms: normal walking, walking while holding a 63-cm horizontal bar with or without shoulder rotations permitted, and wheelchair use (63 cm wide). All participants were naïve to wheelchair use. The results showed that the fixation patterns were dependent on whether the locomotion form was walking or wheelchair use. In the three walking conditions, fixations were almost evenly directed toward the aperture and door edges at first; however, in the final phase, fixations were exclusively directed toward the center of the aperture. In contrast, in the wheelchair condition, fixations were directed more frequently toward the door edges throughout locomotion. These findings demonstrate that spatio-temporal patterns of fixation remain unchanged during walking through apertures, irrespective of the constraints on movement. The observed fixation patterns indicate that individuals appear to rely on optic flow to guide locomotion. However, the patterns of fixation are altered when the task involves a completely novel form of locomotion, such as using a wheelchair for the first time.
|
48
|
The effects of constraining eye movements on visually evoked steering responses during walking in a virtual environment. Exp Brain Res 2009; 197:357-67. [PMID: 19582438 DOI: 10.1007/s00221-009-1923-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2009] [Accepted: 06/20/2009] [Indexed: 10/20/2022]
Abstract
We have previously shown that participants who step in place while viewing a moving scene that simulates walking towards and turning a corner demonstrate anticipatory sequential reorientation of axial body segments, with timing characteristics similar to those seen during real turning. We propose that the coordination of axial body segments during steering represents a robust pre-programmed postural synergy triggered by gaze realignment in the desired direction of travel. The primary aim of the current study was to test this hypothesis by studying the effects of constraining eye movement on visually evoked steering responses exhibited by participants stepping in place in a virtual environment. We predicted that preventing participants from generating anticipatory gaze shifts would significantly attenuate or eliminate visually evoked postural responses. A secondary aim was to investigate the nature of the visual cues that trigger the coordinated eye and whole-body response by testing whether spatial (distance from the corner) or temporal (time to contact with the corner) parameters modulated with the speed of the visual scene (normal, half speed and double speed). Six university graduate students (27.8 +/- 5.0 years) were asked to step in place at a self-selected comfortable pace while immersed in a virtual environment that simulated walking down a hallway and turning a corner. In half of the trials, participants were required to maintain gaze direction on a static target placed in the middle of the viewing screen. Whole-body kinematics and gaze behaviour were recorded. In support of our hypothesis, gaze fixation on a stationary target resulted in the suppression of anticipatory steering responses. Although postural adjustments were still observed during constrained-gaze trials, they were reactive rather than anticipatory in nature and were significantly smaller than in trials in which gaze was unconstrained. Our results further suggest that the timing of eye and body reorientation depends on temporal rather than spatial visual cues, i.e. the visually specified time to contact with the virtual corner. These results indicate that gaze redirection is a prerequisite for the initiation of a pre-programmed steering synergy and suggest that these robust postural responses are intimately linked to oculomotor control processes within the central nervous system.
|
49
|
Browning NA, Grossberg S, Mingolla E. Cortical dynamics of navigation and steering in natural scenes: Motion-based object segmentation, heading, and obstacle avoidance. Neural Netw 2009; 22:1383-98. [PMID: 19502003 DOI: 10.1016/j.neunet.2009.05.007] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2008] [Revised: 05/07/2009] [Accepted: 05/18/2009] [Indexed: 10/20/2022]
Abstract
Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT(-)/MSTv and MT(+)/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model's retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT(+) interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT(-) interacts with MSTv via an attentive feedback loop to compute accurate estimates of speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA
|
50
|
Marple-Horvat DE, Cooper HL, Gilbey SL, Watson JC, Mehta N, Kaur-Mann D, Wilson M, Keil D. Alcohol badly affects eye movements linked to steering, providing for automatic in-car detection of drink driving. Neuropsychopharmacology 2008; 33:849-58. [PMID: 17507909 DOI: 10.1038/sj.npp.1301458] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Driving is a classic example of visually guided behavior in which the eyes move before some other action. When approaching a bend in the road, a driver looks across to the inside of the curve before turning the steering wheel. Eye and steering movements are tightly linked, with the eyes leading, which allows the parts of the brain that move the eyes to assist the parts of the brain that control the hands on the wheel. We show here that this optimal relationship deteriorates at breath-alcohol levels well within the current UK legal limit for driving. The eyes move later, and eye-steering coordination is reduced. These changes lead to impaired performance and can be detected by an automated in-car system that warns when the driver is no longer fit to drive.
Affiliation(s)
- Dilwyn E Marple-Horvat
- Institute for Biophysical and Clinical Research into Human Movement (IRM), Manchester Metropolitan University, Cheshire, UK.
|