1
Kim S, Lazaro MJ, Kang Y. Galvanic vestibular stimulation to counteract leans illusion: comparing step and ramped waveforms. Ergonomics 2023; 66:432-442. PMID: 35730683. DOI: 10.1080/00140139.2022.2093403.
Abstract
Leans is a common type of Spatial Disorientation (SD) illusion that causes pilots to be confused about the position of the aircraft during flight. This illusion could lead to serious adverse effects and even flight mishaps. Therefore, an effective means of dealing with leans is crucial for flight safety. This study investigates the effects of Galvanic Vestibular Stimulation (GVS) technology with different waveforms as a tool to mitigate the negative effects of leans. Twenty Air Force pilots participated in a leans-inducing flight simulation experiment with three GVS conditions (without-GVS, step-GVS, ramped-GVS). Bank angle error, subjective SD, perceived strength, and annoyance were measured as the dependent variables. Analysis revealed that step-GVS and ramped-GVS yielded lower bank angle errors and less subjective SD than without-GVS. In addition, annoyance ratings were lower for ramped-GVS than for step-GVS. This study suggests that GVS has the potential to be utilised as a counteracting tool to cope with leans.
Practitioner summary: Galvanic Vestibular Stimulation (GVS) can be utilised as a tool to counteract the detrimental effects of the leans illusion, specifically ramped-style GVS, considering that it is less annoying and distracting for pilots. In general, GVS induces a roll sensation that can offset the false sensation caused by the leans, which can potentially help maintain flight safety and avoid spatial disorientation-related accidents.
Abbreviations: SD: spatial disorientation; GVS: galvanic vestibular stimulation; MSSQ: motion sickness susceptibility questionnaire; SSQ: simulator sickness questionnaire; BLE: bluetooth low energy; PCB: printed circuit board; RPM: revolutions per minute.
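The step and ramped GVS conditions differ only in how quickly the stimulation current reaches its plateau. As a rough sketch (with entirely hypothetical amplitude, duration, ramp time, and sample rate; the paper's actual stimulation parameters are not reproduced here), the two waveform shapes might be generated like this:

```python
import numpy as np

def gvs_waveform(shape, amplitude_ma, duration_s, ramp_s=0.5, fs=1000):
    """Illustrative step vs. ramped GVS current profiles.

    shape: "step" (full amplitude immediately) or "ramped"
    (linear rise over ramp_s seconds, then hold at amplitude).
    All parameter values are hypothetical, not the study's.
    """
    t = np.arange(0.0, duration_s, 1.0 / fs)
    if shape == "step":
        current = np.full_like(t, amplitude_ma)
    elif shape == "ramped":
        current = np.minimum(t / ramp_s, 1.0) * amplitude_ma
    else:
        raise ValueError(f"unknown shape: {shape}")
    return t, current

t, step = gvs_waveform("step", 1.0, 2.0)
_, ramp = gvs_waveform("ramped", 1.0, 2.0)
```

The ramped profile trades onset speed for comfort: the current rises gradually, which plausibly explains the lower annoyance ratings reported for ramped-GVS.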
Affiliation(s)
- Sungho Kim
- Department of Systems Engineering, Republic of Korea Air Force Academy, Cheongju, South Korea
- May Jorella Lazaro
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, South Korea
- Yohan Kang
- Department of Industrial Engineering, Seoul National University, Seoul, South Korea
2
“Attention! A Door Could Open.”—Introducing Awareness Messages for Cyclists to Safely Evade Potential Hazards. Multimodal Technologies and Interaction 2021. DOI: 10.3390/mti6010003.
Abstract
Numerous statistics show that cyclists are often involved in road traffic accidents, often with serious outcomes. One potential hazard of cycling, especially in cities, is “dooring”—passing parked vehicles that still have occupants inside. These occupants could open the vehicle door unexpectedly in the cyclist’s path—requiring a quick evasive response by the cyclist to avoid a collision. Dooring can be very poorly anticipated; as a possible solution, we propose in this work a system that notifies the cyclist of opening doors based on a networked intelligent transportation infrastructure. In a user study with a bicycle simulator (N = 24), we examined the effects of three user interface designs compared to a baseline (no notifications) on cycling behavior (speed and lateral position), perceived safety, and ease of use. Awareness messages (either visual message, visual message + auditory icon, or visual + voice message) were displayed on a smart bicycle helmet at different times before passing a parked, still-occupied vehicle. Our participants found the notifications of potential hazards very easy to understand and appealing and felt that the alerts could help them navigate traffic more safely. Those concepts that (additionally) used auditory icons or voice messages were preferred. In addition, the lateral distance increased significantly when a potentially opening door was indicated. In these situations, cyclists were able to safely pass the parked vehicle without braking. In summary, we are convinced that notification systems, such as the one presented here, are an important component for increasing road safety, especially for vulnerable road users.
3
Houtenbos M, de Winter JCF, Hale AR, Wieringa PA, Hagenzieker MP. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators. Applied Ergonomics 2017; 60:30-42. PMID: 28166889. DOI: 10.1016/j.apergo.2016.10.010.
Abstract
A large portion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with it off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible.
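The display logic described above, lateralised cues whose repetition rate tracks the approaching car's speed, can be sketched as follows. The linear mapping and its constants (minimum/maximum rates, speed range) are illustrative assumptions, not the study's actual display parameters:

```python
def cue_rate_hz(speed_kmh, min_rate=1.0, max_rate=8.0, max_speed=100.0):
    """Map the approaching car's speed to a blink/beep repetition rate (Hz).

    A hypothetical linear mapping: a faster approach produces a faster
    blink and beep rate, clamped to [min_rate, max_rate].
    """
    frac = max(0.0, min(speed_kmh / max_speed, 1.0))
    return min_rate + frac * (max_rate - min_rate)

def cue_side(approach_direction):
    """Lateralise the cue: blink the left/right light on the speedometer
    and play the beep in the left/right headphone ear."""
    assert approach_direction in ("left", "right")
    return approach_direction

# A car approaching fast from the left yields a rapid cue on the left side.
rate = cue_rate_hz(80.0)   # faster approach -> higher repetition rate
side = cue_side("left")
```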
Affiliation(s)
- M Houtenbos
- SWOV Institute for Road Safety Research, PO Box 93113, 2509 AC, The Hague, The Netherlands; Delft University of Technology, Safety Science Group, Jaffalaan 5, 2628 BX, Delft, The Netherlands
- J C F de Winter
- Delft University of Technology, Department of Biomechanical Engineering, Mekelweg 2, 2628 CD, Delft, The Netherlands
- A R Hale
- Delft University of Technology, Safety Science Group, Jaffalaan 5, 2628 BX, Delft, The Netherlands
- P A Wieringa
- Delft University of Technology, Department of Biomechanical Engineering, Mekelweg 2, 2628 CD, Delft, The Netherlands
- M P Hagenzieker
- SWOV Institute for Road Safety Research, PO Box 93113, 2509 AC, The Hague, The Netherlands; Delft University of Technology, Department of Transport & Planning, Stevinweg 1, 2628 CN, Delft, The Netherlands
4
Tannen RS, Nelson WT, Bolia RS, Haas MW, Hettinger LJ, Warm JS, Dember WN, Stoffregen TA. Adaptive Integration of Head-Coupled Multi-Sensory Displays for Target Localization. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2000. DOI: 10.1177/154193120004401320.
Abstract
The purpose of this study was to determine the efficacy of providing target location information via head-coupled visual and spatial audio displays presented in adaptive and non-adaptive configurations. Twelve USAF pilots performed a simulated flight task in which they were instructed to maintain flight parameters while searching for ground and air targets. The integration of visual displays with spatial audio cueing enhanced performance efficiency, especially when targets were most difficult to detect. Several of the interface conditions were also associated with lower ratings of perceived mental workload. The benefits associated with multi-sensory cueing were equivalent in both adaptive and non-adaptive configurations.
Affiliation(s)
- W. Todd Nelson
- Air Force Research Laboratory, Wright-Patterson Air Force Base, OH
- Robert S. Bolia
- Air Force Research Laboratory, Wright-Patterson Air Force Base, OH
- Michael W. Haas
- Air Force Research Laboratory, Wright-Patterson Air Force Base, OH
5
Moroney BW, Nelson WT, Hettinger LJ, Warm JS, Dember WN, Stoffregen TA, Haas MW. An Evaluation of Unisensory and Multisensory Adaptive Flight Path Navigation Displays: An Initial Investigation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 1999. DOI: 10.1177/154193129904300115.
Abstract
Unisensory and multisensory adaptive interfaces for precision aircraft navigation were tested under varying concurrent task demands. Participants - 12 USAF pilots - were required to perform a simulated terrain-following, terrain-avoidance navigation task, including evasive maneuvers, while also performing: (1) no additional task; (2) a visual search task; or (3) an auditory monitoring task. Real-time performance efficiency, as measured by lateral deviation from the flight course, was used to activate the adaptive navigation displays, which consisted of a visual azimuth steering line on the head-up display, a spatial auditory beacon, or a combination of the two. A completely factorial, within-subjects design was used to assess the effects of secondary task load and adaptive interface configuration on flight performance. The results indicated that the efficacy of multisensory, adaptive navigation displays depends not only on the supplementary task confronting the pilots, but also on the type of flight task performed and the strategies pilots adopted to acquire and use the information offered. Implications for the use of adaptive multisensory displays in tactical aircraft are discussed.
Affiliation(s)
- W. Todd Nelson
- Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio
- Michael W. Haas
- Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio
6
Frissen I, Guastavino C. Do whole-body vibrations affect spatial hearing? Ergonomics 2014; 57:1090-1101. PMID: 24783989. DOI: 10.1080/00140139.2014.910611.
Abstract
To assist the human operator, modern auditory interfaces increasingly rely on sound spatialisation to display auditory information and warning signals. However, we often operate in environments that apply vibrations to the whole body, e.g. when driving a vehicle. Here, we report three experiments investigating the effect of sinusoidal vibrations along the vertical axis on spatial hearing. The first was a free-field, narrow-band noise localisation experiment with 5 Hz vibration at 0.88 m/s². The other experiments used headphone-based sound lateralisation tasks. Experiment 2 investigated the effect of vibration frequency (4 vs. 8 Hz) at two different magnitudes (0.83 vs. 1.65 m/s²) on a left-right discrimination one-interval forced-choice task. Experiment 3 assessed the effect on a two-interval forced-choice location discrimination task with respect to a central and two peripheral reference locations. In spite of the broad range of methods, none of the experiments showed a reliable effect of whole-body vibrations on localisation performance.
Practitioner summary: We report three experiments that used both free-field localisation and headphone lateralisation tasks to assess their sensitivity to whole-body vibrations at low frequencies. None of the experiments showed a reliable effect of either the frequency or the magnitude of whole-body vibrations on localisation performance.
Affiliation(s)
- Ilja Frissen
- School of Information Studies, Centre for Interdisciplinary Research on Music Media and Technology (CIRMMT), McGill University, 3661 Peel Street, Montréal, Québec, Canada, H3A 1X1
7
Rauter G, Sigrist R, Koch C, Crivelli F, van Raai M, Riener R, Wolf P. Transfer of complex skill learning from virtual to real rowing. PLoS One 2013; 8:e82145. PMID: 24376518. PMCID: PMC3869668. DOI: 10.1371/journal.pone.0082145.
Abstract
Simulators are commonly used to train complex tasks. In particular, simulators are applied to train dangerous tasks, to save costs, and to investigate the impact of different factors on task performance. However, in most cases, the transfer of simulator training to the real task has not been investigated. Without proof of successful skill transfer, simulators might not be helpful at all, or might even be counter-productive, for learning the real task. In this paper, the transfer of complex technical skills trained on a scull rowing simulator to sculling on water was investigated. We assumed that if a simulator provides high-fidelity rendering of the interactions with the environment, then even without augmented feedback, training on such a realistic simulator would allow skill gains similar to those from training in the real environment, and that these learned skills would transfer to the real environment. Two groups of four recreational rowers participated: one group trained on water, the other on a simulator. Within two weeks, both groups performed four training sessions with the same licensed rowing trainer. The development in performance was assessed by quantitative biomechanical performance measures and by a qualitative video evaluation by an independent, blinded trainer. In general, both groups improved their performance on water. The biomechanical measures used seem to allow only limited insight into the rowers' development, while the independent trainer could also rate the rowers' overall impression. The simulator's quality and naturalism were confirmed by the participants in a questionnaire. In conclusion, realistic simulator training fostered skill gains to a similar extent as training in the real environment and enabled skill transfer to the real environment. In combination with augmented feedback, simulator training could be further exploited to foster motor learning to an even greater extent, which is the subject of future work.
Affiliation(s)
- Georg Rauter
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Roland Sigrist
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Claudio Koch
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Francesco Crivelli
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Mark van Raai
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Robert Riener
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Peter Wolf
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
8
Lu SA, Wickens CD, Prinet JC, Hutchins SD, Sarter N, Sebok A. Supporting interruption management and multimodal interface design: three meta-analyses of task performance as a function of interrupting task modality. Human Factors 2013; 55:697-724. PMID: 23964412. DOI: 10.1177/0018720813476298.
Abstract
OBJECTIVE The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. BACKGROUND Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. METHOD Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. RESULTS The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in the case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. CONCLUSION The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interrupting task modality on ongoing and interrupting task performance. APPLICATIONS The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
Affiliation(s)
- Sara A Lu
- Department of Industrial and Operations Engineering, Center for Ergonomics, University of Michigan, Ann Arbor, USA
9
Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychon Bull Rev 2013; 20:21-53. PMID: 23132605. DOI: 10.3758/s13423-012-0333-8.
Abstract
It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.
10
Liu YC, Jhuang JW. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance. Applied Ergonomics 2012; 43:679-686. PMID: 22103964. DOI: 10.1016/j.apergo.2011.10.005.
Abstract
A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. The displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid (combined visual and auditory) displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response (S-R), divided attention, and stress rating. Results show that among the single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display, with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and in making accurate S-R task decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond fastest and most accurately in both the S-R and divided attention tasks.
Affiliation(s)
- Yung-Ching Liu
- National Yunlin University of Science and Technology, Department of Industrial Engineering and Management, 123 University Road, Section 3, Douliu, Yunlin 640, Taiwan, ROC
11
Huang WS, Liu CC, Hsu CC, Lai CH. Effect of Visual-Verbal Load and Spatial Compatibility on Stimulus Response. Psychol Rep 2011; 108:487-502. DOI: 10.2466/22.pr0.108.2.487-502.
Abstract
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: “No Load” meant no other stimuli were presented concurrently; “Free Load” meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and “Force Load” was the same as “Free Load,” but the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: “irrelevant,” “compatible,” and “incompatible” spatial conditions. These required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.
Affiliation(s)
- Chun-Chia Hsu
- Department of Multimedia and Game Science, Lunghwa University of Science and Technology
- Ching-Huei Lai
- Safety Division, Institute of Transportation, Ministry of Transportation and Communications
12
Herring S, Hallbeck M. Conceptual design of a wearable radiation detector alarm system: a review of the literature. Theoretical Issues in Ergonomics Science 2010. DOI: 10.1080/14639220902853088.
13
McIntire JP, Havig PR, Watamaniuk SNJ, Gilkey RH. Visual search performance with 3-D auditory cues: effects of motion, target location, and practice. Human Factors 2010; 52:41-53. PMID: 20653224. DOI: 10.1177/0018720810368806.
Abstract
OBJECTIVES We evaluate visual search performance in both static (nonmoving) and dynamic (moving) search environments with and without spatial (3-D) auditory cues to target location. Additionally, the effects of target trajectory, target location, and practice are assessed. BACKGROUND Previous research on aurally aided visual search has shown a significant reduction in response times when 3-D auditory cues are displayed, relative to unaided search. However, the vast majority of this research has examined only searches for static targets in static visual environments. The present experiment was conducted to examine the effect of dynamic stimuli upon aurally aided visual search performance. METHOD The 8 participants conducted repeated searches for a single visual target hidden among 15 distracting stimuli. The four main conditions of the experiment consisted of the four possible combinations of 3-D auditory cues (present or absent) and search environment (static or dynamic). RESULTS The auditory cues were comparably effective at reducing search times in dynamic environments (-25%) as in static environments (-22%). Audio cues helped all participants. The cues were most beneficial when the target appeared at large eccentricities and on the horizontal plane. After a brief initial exposure to 3-D audio, no training or practice effects with 3-D audio were found. CONCLUSION We conclude that 3-D audio is as beneficial in environments comprising moving stimuli as in those comprising static stimuli. APPLICATION Operators in dynamic environments, such as aircraft cockpits, ground vehicles, and command-and-control centers, could benefit greatly from 3-D auditory technology when searching their environments for visual targets or other time-critical information.
Affiliation(s)
- John P McIntire
- Air Force Research Laboratory, Wright-Patterson AFB, OH 45433, USA
14
Haas MW, Nelson WT, Repperger D, Bolia R, Zacharias G. Applying Adaptive Control and Display Characteristics to Future Air Force Crew Stations. International Journal of Aviation Psychology 2001; 11(2). DOI: 10.1207/s15327108ijap1102_06.
15
Pavlovic NJ, Keillor J, Hollands JG, Chignell MH. Reference frame congruency in search-and-rescue tasks. Human Factors 2009; 51:240-250. PMID: 19653486. DOI: 10.1177/0018720809334917.
Abstract
OBJECTIVE Our aim was to investigate how the congruency between visual displays and auditory cues affects performance on various spatial tasks. BACKGROUND Previous studies have demonstrated that spatial auditory cues, when combined with visual displays, can enhance performance and decrease workload. However, this facilitation was achieved only when auditory cues shared a common reference frame (RF) with the visual display. In complex and dynamic environments, such as airborne search and rescue (SAR), it is often difficult to ensure such congruency. METHOD In a simulated SAR operation, participants performed three spatial tasks: target search, target localization, and target recall. The interface consisted of the camera view of the terrain from the aircraft-mounted sensor, a map of the area flown over, a joystick that controlled the sensor, and a mouse. Auditory cues were used to indicate target location. While flying in the scenario, participants searched for targets, identified their locations in one of two coordinate systems, and memorized their location relative to the terrain layout. RESULTS Congruent cues produced the fastest and most accurate performance. Performance advantages were observed even with incongruent cues relative to neutral cues, and egocentric cues were more effective than exocentric cues. CONCLUSION Although the congruent cues are most effective, in cases in which the same cue is used across spatial tasks, egocentric cues are a better choice than exocentric cues. APPLICATION Egocentric auditory cues should be used in display design for tasks that involve RF transformations, such as SAR, air traffic control, and unmanned aerial vehicle operations.
Affiliation(s)
- Nada J Pavlovic
- Adversarial Intent Section, Defence Research and Development Canada, 1133 Sheppard Ave. W., P.O. Box 2000, Toronto, ON, Canada M3M 3B9
16
Spence C, Ho C. Multisensory warning signals for event perception and safe driving. Theoretical Issues in Ergonomics Science 2008. DOI: 10.1080/14639220701816765.
17
Chun H, Clymer B, Sammet S, Koch RM, Stevens R, Knopp MV. Improvement in the reproducibility of region of interest using an auditory feedback loop: a pilot assessment using dynamic contrast-enhanced (DCE) breast MR images. J Magn Reson Imaging 2008; 27:27-33. PMID: 18058928. DOI: 10.1002/jmri.21229.
Abstract
PURPOSE To augment traditional visual perception of complex multiparametric imaging data sets by adding auditory feedback, in order to improve the delineation of regions of interest (ROIs) in tumor assessment in dynamic contrast-enhanced (DCE) MRI. MATERIALS AND METHODS In addition to conventional display methodologies, we created an application window that interfaces with audio output using dynamically loadable sound modules, providing goodness-of-fit (GF) information through auditory feedback. We assessed the effectiveness of conveying sound information with three independent readers on eight DCE-MR breast image data sets. The assessment was based on either a conventional visual-only mode or a combined visual plus auditory mode. For statistical comparison between the two sensory approaches, interobserver repeatability was measured with three different criteria. RESULTS Adding auditory feedback improved repeatability significantly (P < 0.01), and the enhanced sensory approach had higher repeatability than the visual-only mode in visually complex breast tumor cases. However, in easy and moderate cases, the visual-only mode was more reproducible than the combined mode with very high significance (P < 0.001). CONCLUSION Adding auditory information to visually based image analysis for identifying tumor ROIs provides higher interobserver repeatability when analyzing complex multidimensional/multiparametric medical image data sets with lesions that are visually difficult to delineate.
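One simple way to realise this kind of auditory feedback is to map the goodness-of-fit (GF) value under the cursor to tone pitch. The following sketch is a hypothetical illustration, not the authors' actual sound module; the logarithmic mapping and the frequency range are assumptions:

```python
def gf_to_tone(gf, f_low=220.0, f_high=880.0):
    """Map a goodness-of-fit value in [0, 1] to a tone frequency in Hz.

    Interpolates on a logarithmic (musical) scale so that equal GF steps
    are heard as equal pitch intervals; values outside [0, 1] are clamped.
    """
    gf = max(0.0, min(gf, 1.0))
    return f_low * (f_high / f_low) ** gf

# As the reader traces a candidate ROI boundary, higher GF (better model
# fit, e.g. inside the enhancing lesion) yields a higher-pitched tone.
tone = gf_to_tone(0.5)
```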
Affiliation: Hee Chun, Center for Remote Sensing of Ice Sheets, The University of Kansas, Lawrence, Kansas, USA.
18.
Koppen C, Spence C. Audiovisual asynchrony modulates the Colavita visual dominance effect. Brain Res 2007; 1186:224-32. [DOI: 10.1016/j.brainres.2007.09.076] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2007] [Revised: 09/14/2007] [Accepted: 09/25/2007] [Indexed: 10/22/2022]
19.
Chan AHS, Chan KWL, Yu RF. Auditory stimulus-response compatibility and control-display design. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2007. [DOI: 10.1080/14639220500330455] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
20.
Chan AHS, Chan KWL. Synchronous and asynchronous presentations of auditory and visual signals: Implications for control console design. APPLIED ERGONOMICS 2006; 37:131-40. [PMID: 16102721 DOI: 10.1016/j.apergo.2005.06.006] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/24/2004] [Revised: 05/15/2005] [Accepted: 06/03/2005] [Indexed: 05/04/2023]
Abstract
In this study, the effects of synchronous and asynchronous auditory and visual signal presentation on reaction times and response errors were examined to provide data for developing ergonomics recommendations for control console design. The results showed that synchronous presentation of combined visual and auditory stimulation facilitated responses, yielding shorter reaction times and higher response accuracy. When visual and auditory stimuli were presented synchronously but in opposing (left and right) positions, a visual dominance phenomenon was found: subjects responded to the visual signal more often (81%) and more quickly. This visual dominance effect occurred even in the asynchronous condition when the auditory stimulus was presented 200 ms earlier than the visual one. Response speed and accuracy also improved with increasing length of the warning time interval and when an uncrossed hand posture was used for making responses. These results were translated into practical ergonomic recommendations for response key layout, warning time interval, and ways of presenting visual and auditory signals to improve control console design.
Affiliation: Alan H. S. Chan, Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong, China.
21.
Pierno AC, Caria A, Castiello U. Crossmodal binding in localizing objects outside the field of view. VISUAL COGNITION 2006. [DOI: 10.1080/13506280544000273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
22.
Arrabito GR. Three-dimensional auditory display for enhancing detection of passive sonar signals. HUMAN FACTORS 2006; 48:465-73. [PMID: 17063962 DOI: 10.1518/001872006778606769] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
OBJECTIVE The viability of a three-dimensional (3-D) auditory display for improving detection of passive sonar signals was investigated. BACKGROUND Sonar operators often have difficulty detecting targets because the sound received by the hydrophone has a low signal-to-noise ratio and the operator's headset does not isolate well against ambient noise. METHODS Release from masking was assessed by pairing a recording of a torpedo with diotic broadband pink noise that served as a masker, and a 400 Hz tone with the same masker. Masked thresholds were measured for seven signal durations, with each signal presented diotically and in 3-D auditory space at three positions on the horizontal plane. RESULTS The spatial separation of signal and masker yielded a significant improvement in detection. CONCLUSION A 3-D auditory display is a viable technology that could lead to a significant improvement in release from masking. The magnitude of the masking level difference will vary with the characteristics of the hydrophone signal and masker and the synthesis capability of the 3-D auditory display. APPLICATION Potential applications of this research include enhanced auditory displays for processing passive sonar signals, leading to earlier detection of enemy targets.
Affiliation: G. Robert Arrabito, Communications Group, Defence R&D Canada, Toronto, 1133 Sheppard Ave. West, P.O. Box 2000, Toronto, ON M3M 3B9, Canada.
23.
Pierno AC, Caria A, Glover S, Castiello U. Effects of increasing visual load on aurally and visually guided target acquisition in a virtual environment. APPLIED ERGONOMICS 2005; 36:335-343. [PMID: 15854577 DOI: 10.1016/j.apergo.2004.11.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2003] [Accepted: 11/30/2004] [Indexed: 05/24/2023]
Abstract
The aim of the present study was to investigate interactions between vision and audition during a target acquisition task performed in a virtual environment. We measured the time taken to locate a visual target (acquisition time) signalled by auditory and/or visual cues under conditions of variable visual load. Visual load was increased by introducing a secondary visual task. The auditory cue was constructed using virtual three-dimensional (3D) sound techniques; the visual cue took the form of a continuously updated 3D arrow. The results suggested that both auditory and visual cues reduced acquisition time compared with an uncued condition. Whereas the visual cue elicited faster acquisition times than the auditory cue, the combination of the two cues produced the fastest acquisition times. The introduction of the secondary visual task affected acquisition time differentially depending on cue modality. Under high visual load, acquiring a target signalled by the auditory cue led to slower and more error-prone performance than acquiring a target signalled by either the visual cue alone or by both cues together.
Affiliation: Andrea C. Pierno, Department of Psychology, Royal Holloway, University of London, Egham, Surrey, UK.
24.
Gunn DV, Warm JS, Nelson WT, Bolia RS, Schumsky DA, Corcoran KJ. Target acquisition with UAVs: vigilance displays and advanced cuing interfaces. HUMAN FACTORS 2005; 47:488-97. [PMID: 16435691 DOI: 10.1518/001872005774859971] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Vigilance and threat detection are critical human factors considerations in the control of unmanned aerial vehicles (UAVs). Utilizing a vigilance task in which threat detections (critical signals) led observers to perform a subsequent manual target acquisition task, this study provides information that might have important implications for both of these considerations in the design of future UAV systems. A sensory display format resulted in more threat detections, fewer false alarms, and faster target acquisition times and imposed a lighter workload than did a cognitive display format. Additionally, advanced visual, spatial-audio, and haptic cuing interfaces enhanced acquisition performance over no cuing in the target acquisition phase of the task, and they did so to a similar degree. Thus, in terms of potential applications, this research suggests that a sensory format may be the best display format for threat detection by future UAV operators, that advanced cuing interfaces may prove useful in future UAV systems, and that these interfaces are functionally interchangeable.
25.
Tannen RS, Nelson WT, Bolia RS, Warm JS, Dember WN. Evaluating Adaptive Multisensory Displays for Target Localization in a Flight Task. INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY 2004. [DOI: 10.1207/s15327108ijap1403_5] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
26.

27.
Veltman JA, Oving AB, Bronkhorst AW. 3-D Audio in the Fighter Cockpit Improves Task Performance. INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY 2004. [DOI: 10.1207/s15327108ijap1403_2] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
28.
Pierno AC, Caria A, Castiello U. Comparing effects of 2-D and 3-D visual cues during aurally aided target acquisition. HUMAN FACTORS 2004; 46:728-737. [PMID: 15709333 DOI: 10.1518/hfes.46.4.728.56815] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The aim of the present study was to investigate interactions between vision and audition during a visual target acquisition task performed in a virtual environment. In two experiments, participants were required to perform an acquisition task guided by auditory and/or visual cues. In both experiments the auditory cues were constructed using virtual 3-D sound techniques based on nonindividualized head-related transfer functions. In Experiment 1 the visual cue was constructed in the form of a continuously updated 2-D arrow. In Experiment 2 the visual cue was a nonstereoscopic, perspective-based 3-D arrow. The results suggested that virtual spatial auditory cues reduced acquisition time but were not as effective as the virtual visual cues. Experiencing the 3-D perspective-based arrow rather than the 2-D arrow produced a faster acquisition time not only in the visually aided conditions but also when the auditory cues were presented in isolation. Suggested novel applications include providing 3-D nonstereoscopic, perspective-based visual information on radar displays, which may lead to a better integration with spatial virtual auditory information.
Affiliation: Andrea C. Pierno, Royal Holloway, University of London, Egham, United Kingdom.
29.
Bliss JP, Acton SA. Alarm mistrust in automobiles: how collision alarm reliability affects driving. APPLIED ERGONOMICS 2003; 34:499-509. [PMID: 14559409 DOI: 10.1016/j.apergo.2003.07.003] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
As roadways become more congested, there is greater potential for automobile accidents and incidents. To improve roadway safety, automobile manufacturers are now designing and incorporating collision avoidance warning systems; yet there has been little investigation of how the reliability of alarm signals might affect driver performance. We measured driving and alarm reaction performance following alarms of various reliability levels. In Experiment One, 70 participants operated a driving simulator while being presented console-emitted collision alarms that were 50%, 75%, or 100% reliable. In Experiment Two, the same participants were presented spatially generated collision alarms at the same reliability levels. The results were similar in both experiments: alarm and automobile swerving reactions were significantly better when alarms were more reliable; however, drivers still failed to avoid collisions following reliable alarms. These results emphasize that alarm designers should maximize alarm reliability while minimizing alarm invasiveness.
Affiliation: James P. Bliss, Psychology Department (MGB 244B), Old Dominion University, Norfolk, VA 23529, USA.
30.
Neuhoff JG, Kramer G, Wayand J. Pitch and loudness interact in auditory displays: can the data get lost in the map? J Exp Psychol Appl 2002; 8:17-25. [PMID: 12009173 DOI: 10.1037/1076-898x.8.1.17] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Many auditory displays use acoustic attributes such as frequency, intensity, and spectral content to represent different characteristics of multidimensional data. This study demonstrated a perceptual interaction between dynamic changes in pitch and loudness, as well as perceived asymmetries in directional acoustic change, that distorted the data relations represented in an auditory display. Three experiments showed that changes in loudness can influence pitch change, that changes in pitch can influence loudness change, and that increases in acoustic intensity are judged to change more than equivalent decreases. Within a sonification of stock market data, these characteristics created perceptual distortions in the data set. The results suggest that great care should be exercised when using lower level acoustic dimensions to represent multidimensional data.
Affiliation: John G. Neuhoff, Department of Psychology, The College of Wooster, Ohio 44691, USA.
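The distortion Neuhoff et al. describe arises from the basic sonification step of mapping data values onto acoustic dimensions such as frequency and intensity. A minimal sketch of that mapping is below; the function names, frequency range, and amplitude range are illustrative assumptions, not details from the study:

```python
# Minimal sonification sketch: map a data series onto (frequency, amplitude)
# pairs by linear interpolation over the data range. When pitch and loudness
# are driven jointly like this, their perceptual interaction can distort the
# data relations the display is meant to convey.
def sonify(values, f_min=220.0, f_max=880.0, a_min=0.2, a_max=1.0):
    """Return one (frequency_hz, amplitude) pair per data value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    out = []
    for v in values:
        t = (v - lo) / span  # normalize value into [0, 1]
        out.append((f_min + t * (f_max - f_min),
                    a_min + t * (a_max - a_min)))
    return out

# Hypothetical stock-price series, in the spirit of the study's data set.
prices = [10.0, 12.5, 11.0, 15.0]
for f, a in sonify(prices):
    print(f"{f:6.1f} Hz  amp {a:.2f}")
```

Decoupling the dimensions (e.g., mapping price to frequency only, at constant amplitude) is one way a designer might avoid the pitch-loudness interaction the study documents.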
31.
Langendijk EH, Kistler DJ, Wightman FL. Sound localization in the presence of one or two distracters. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2001; 109:2123-2134. [PMID: 11386564 DOI: 10.1121/1.1356025] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were instructed to point to the apparent position of a train of 30-ms noise bursts, presented at 1 of 85 positions in a virtual free field. A harmonic complex and a frequency-swept complex tone served as distracters. The two distracters were turned on 40 and 80 ms after the target onset, had temporal envelopes similar to that of the target, and did not overlap temporally with the target. Virtual sounds were synthesized with individual head-related transfer functions (HRTFs). Localization performance degraded as the number of distracters increased from 0 to 2. When the horizontal distance between the target and a single distracter was small (i.e., the interaural differences were almost the same), the influence on the apparent position was greater than when they were far apart. In the vertical dimension, there was no systematic effect of distracter position on target localizability. However, there was a substantial increase in localization error for targets at high elevations (above 30 degrees) when distracters were present.
Affiliation: E. H. Langendijk, TNO Human Factors Research Institute, Soesterberg, The Netherlands.
32.
Lee MD. Multichannel auditory search: toward understanding control processes in polychotic auditory listening. HUMAN FACTORS 2001; 43:328-342. [PMID: 11592672 DOI: 10.1518/001872001775900959] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.
Affiliation: M. D. Lee, Georgia Institute of Technology, Atlanta, USA.
33.
Nelson WT, Bolia RS, Tripp LD. Auditory localization under sustained +Gz acceleration. HUMAN FACTORS 2001; 43:299-309. [PMID: 11592670 DOI: 10.1518/001872001775900896] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The ability to localize a virtual sound source in the horizontal plane was evaluated under varying levels of sustained (+Gz) acceleration. Participants were required to judge the locations of spatialized noise bursts in the horizontal plane (elevation 0 degrees) during exposure to 1.0, 1.5, 2.5, 4.0, 5.5, and 7.0 +Gz. The experiment was conducted at the U.S. Air Force Research Laboratory's Dynamic Environment Simulator, a three-axis centrifuge. No significant increases in localization error were found between 1.0 and 5.5 +Gz; however, a significant increase did occur at the 7.0 +Gz level. In addition, the percentage of front/back confusions did not vary as a function of +Gz level. Collectively, these results indicate that the ability to localize virtual sound sources is well maintained at various levels of sustained acceleration. Actual or potential applications include the incorporation of spatial audio displays into the human-computer interface for vehicles that are operated in acceleration environments.
Affiliation: W. T. Nelson, Divine, Inc., Cincinnati, Ohio 45242, USA.
34.
Langendijk EH, Bronkhorst AW. Fidelity of three-dimensional-sound reproduction using a virtual auditory display. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2000; 107:528-537. [PMID: 10641661 DOI: 10.1121/1.428321] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The fidelity of reproducing free-field sounds using a virtual auditory display was investigated in two experiments. In the first experiment, listeners directly compared stimuli from an actual loudspeaker in the free field with those from small headphones placed in front of the ears. Headphone stimuli were filtered using head-related transfer functions (HRTFs), recorded while listeners were wearing the headphones, in order to reproduce the pressure signatures of the free-field sounds at the eardrum. Discriminability was investigated for six sound-source positions using broadband noise as a stimulus. The results show that the acoustic percepts of real and virtual sounds were identical. In the second experiment, discrimination between virtual sounds generated with measured and interpolated HRTFs was investigated. Interpolation was performed using HRTFs measured for loudspeaker positions with different spatial resolutions. Broadband noise bursts with flat and scrambled spectra were used as stimuli. The results indicate that, for a spatial resolution of about 6 degrees, the interpolation does not introduce audible cues. For resolutions of 20 degrees or more, the interpolation introduces audible cues related to timbre and position. For intermediate resolutions (10 degrees - 15 degrees) the data suggest that only timbre cues were used.
Affiliation: E. H. Langendijk, TNO Human Factors Research Institute, Soesterberg, The Netherlands.
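The core operation behind the virtual auditory display in this abstract is filtering a source signal with the listener's measured HRTFs, i.e., convolving it with a left- and a right-ear head-related impulse response (HRIR) for the desired direction. A minimal sketch follows; the 3-tap impulse responses are toy placeholders (real HRIRs are per-listener measurements with many more taps), and all names are illustrative assumptions:

```python
# Sketch of binaural rendering with measured head-related impulse
# responses (HRIRs): convolve a mono signal with the left- and
# right-ear impulse responses for one source direction.
def convolve(signal, ir):
    """Direct-form FIR convolution; output length len(signal)+len(ir)-1."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Return (left_ear, right_ear) signals for one source direction."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy example: a click rendered with a crude interaural delay/level cue
# (right ear later and quieter, as if the source were on the left).
click = [1.0, 0.0, 0.0]
left, right = render_binaural(click, [0.9, 0.1, 0.0], [0.0, 0.6, 0.1])
```

The study's interpolation result maps onto this sketch as a choice of HRIR: below about 6 degrees of spatial resolution, an HRIR interpolated between neighboring measured directions was indistinguishable from a measured one.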
35.
Bolia RS, D'Angelo WR, McKinley RL. Aurally aided visual search in three-dimensional space. HUMAN FACTORS 1999; 41:664-669. [PMID: 10774135 DOI: 10.1518/001872099779656789] [Citation(s) in RCA: 21] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We conducted an experiment to evaluate the effectiveness of spatial audio displays on target acquisition performance. Participants performed a visual search task with and without the aid of a spatial audio display. Potential target locations ranged between plus and minus 180 degrees in azimuth and from -70 degrees to +90 degrees in elevation. Independent variables included the number of visual distractors present (1, 5, 10, 25, 50) and the spatial audio condition (no spatial audio, free-field spatial audio, virtual spatial audio). Results indicated that both free-field and virtual audio cues engendered a significant decrease in search times. Potential applications of this research include the design of spatial audio displays for aircraft cockpits and ground combat vehicles.
Affiliation: R. S. Bolia, AFRL/HECP, Wright-Patterson AFB, Ohio 45433, USA.
36.
Nelson WT, Hettinger LJ, Cunningham JA, Brickman BJ, Haas MW, McKinley RL. Effects of localized auditory information on visual target detection performance using a helmet-mounted display. HUMAN FACTORS 1998; 40:452-460. [PMID: 9849103 DOI: 10.1518/001872098779591304] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
An experiment was conducted to evaluate the effects of localized auditory information on visual target detection performance. Visual targets were presented on either a wide field-of-view dome display or a helmet-mounted display and were accompanied by either localized, nonlocalized, or no auditory information. The addition of localized auditory information resulted in significant increases in target detection performance and significant reductions in workload ratings as compared with conditions in which auditory information was either nonlocalized or absent. Qualitative and quantitative analyses of participants' head motions revealed that the addition of localized auditory information resulted in extremely efficient and consistent search strategies. Implications for the development and design of multisensory virtual environments are discussed. Actual or potential applications of this research include the use of spatial auditory displays to augment visual information presented in helmet-mounted displays, thereby leading to increases in performance efficiency, reductions in physical and mental workload, and enhanced spatial awareness of objects in the environment.
Affiliation: W. T. Nelson, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, AFRL/HECP, OH 45433-7022, USA.
37.
Flanagan P, McAnally KI, Martin RL, Meehan JW, Oldfield SR. Aurally and visually guided visual search in a virtual environment. HUMAN FACTORS 1998; 40:461-468. [PMID: 9849104 DOI: 10.1518/001872098779591331] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information.
Affiliation: P. Flanagan, School of Psychology, Deakin University, Geelong, Victoria, Australia.
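The "continually updated" audio cue that Flanagan et al. found most effective amounts to re-rendering the cue at the target's azimuth relative to the listener's current head orientation, so the virtual sound stays anchored in the world as the head turns. A minimal sketch of that update step is below; the angle conventions (degrees, wrapped to (-180, 180], 0 = straight ahead) and function names are assumptions for illustration:

```python
# Sketch of head-tracked cue updating: each frame, recompute the azimuth
# at which to render the auditory cue from the world-frame target azimuth
# and the listener's current head yaw.
def wrap_deg(angle):
    """Wrap an angle in degrees to the range (-180, 180]."""
    a = angle % 360.0
    return a - 360.0 if a > 180.0 else a

def head_relative_azimuth(target_az_world, head_yaw):
    """Direction to render the cue, relative to the listener's nose."""
    return wrap_deg(target_az_world - head_yaw)

# Target fixed at 90 deg (listener's right): as the head turns toward it,
# the rendered cue direction sweeps to straight ahead (0 deg).
for yaw in (0.0, 45.0, 90.0):
    print(head_relative_azimuth(90.0, yaw))
```

A transient cue, by contrast, fixes this direction once at trial onset, which explains why it was less effective once the head started moving.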