1
Bowden VK, Griffiths N, Strickland L, Loft S. Detecting a Single Automation Failure: The Impact of Expected (But Not Experienced) Automation Reliability. Hum Factors 2023;65:533-545. [PMID: 34375538] [DOI: 10.1177/00187208211037188]
Abstract
OBJECTIVE: Examine the impact of expected automation reliability on trust, workload, task disengagement, nonautomated task performance, and the detection of a single automation failure in simulated air traffic control.
BACKGROUND: Prior research has focused on the impact of experienced automation reliability. However, many operational settings feature automation that is reliable to the extent that operators will seldom experience automation failures. Despite this, operators must remain aware of when automation is at greater risk of failing.
METHOD: Participants performed the task with or without conflict detection/resolution automation. Automation failed to detect/resolve one conflict (i.e., an automation miss). Expected reliability was manipulated via instructions such that (a) the expected level of reliability was constant or variable, and (b) the single automation failure occurred when expected reliability was high or low.
RESULTS: Trust in automation increased with time on task prior to the automation failure. Trust was higher when participants expected high rather than low reliability. Detection of the automation failure improved when it occurred under low rather than high expected reliability. Subjective workload decreased with automation, but nonautomated task performance did not improve. Automation increased perceived task disengagement.
CONCLUSIONS: Both reliability expectations and task experience played a role in determining trust. Failure detection improved when the failure occurred at a time it was expected to be more likely. Participants did not effectively allocate any spared capacity to nonautomated tasks.
APPLICATIONS: These outcomes are relevant to field settings, where operators likely form contextual expectations regarding the reliability of automation.
Affiliation(s)
- Shayne Loft
- The University of Western Australia, Crawley
2
Xie Y, Zhou R, Qu J. Fitts' law on the flight deck: evaluating touchscreens for aircraft tasks in actual flight scenarios. Ergonomics 2023;66:506-523. [PMID: 35786415] [DOI: 10.1080/00140139.2022.2097318]
Abstract
This research investigated the effects of an abnormal flight environment on the use of touch-based navigation displays (TNDs). Fitts' law was used to compare the performance of TNDs with control display units (CDUs) and mode control panels (MCPs) under three flight scenarios (normal, turbulence, and startle). A within-subjects design involving 15 male participants was used. Data were collected on accuracy, movement time, subjective feelings, choices, and comments. The results showed that under abnormal conditions, TNDs exhibited worse operating performance and stability than CDUs and MCPs; however, TNDs were easy to learn and provided a good user experience. Moreover, this research demonstrated that Fitts' law can describe pilot behaviour with interactive flight devices, particularly for tasks involving real flight operations. TND designs for aviation could build on these findings to improve flight crew performance with new technology.
Practitioner summary: This research built a Fitts' law model to evaluate the performance of aircraft cockpit touchscreens under normal, turbulence, and startle scenarios. We compared touchscreens (TNDs) with traditional interactive devices such as CDUs and MCPs. The results have implications for the design of aircraft cockpit touchscreens and for defining task scenarios, and they contribute to applying Fitts' law in new settings.
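The Fitts' law modelling described here amounts to a least-squares fit of movement time against the index of difficulty. A minimal sketch, assuming the Shannon formulation of the index of difficulty; the target geometries and timings below are illustrative values, not the paper's data:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of the index of difficulty (bits)."""
    return math.log2(distance / width + 1)

def fit_fitts(targets, times):
    """Ordinary least-squares fit of MT = a + b * ID over (distance, width) targets."""
    ids = [fitts_id(d, w) for d, w in targets]
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(times) / n
    b = (sum((x - mean_id) * (t - mean_mt) for x, t in zip(ids, times))
         / sum((x - mean_id) ** 2 for x in ids))
    a = mean_mt - b * mean_id
    return a, b

# Illustrative target geometries (distance, width) in mm and movement times in s
targets = [(80, 20), (160, 20), (160, 10), (320, 10)]
times = [0.45, 0.58, 0.71, 0.85]
a, b = fit_fitts(targets, times)
throughput = 1 / b  # bits per second; higher means faster pointing per bit
```

The fitted slope b (seconds per bit) is what degrades under turbulence-like conditions, so comparing b across devices and scenarios is one way to quantify the performance differences the study reports.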
Affiliation(s)
- Yubin Xie
- School of Economics and Management, Beihang University, Beijing, China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, China
- Key Laboratory of Complex System Analysis, Management and Decision (Beihang University), Ministry of Education of the People's Republic of China, Beijing, China
- Jianhong Qu
- School of Economics and Management, Beihang University, Beijing, China
3
MacLean CL, Dror IE. Measuring base-rate bias error in workplace safety investigators. J Safety Res 2023;84:108-116. [PMID: 36868639] [DOI: 10.1016/j.jsr.2022.10.012]
Abstract
INTRODUCTION: This study explored the magnitude of professional industrial investigators' bias to attribute cause to a person more readily than to situational factors (i.e., human error bias). Such biased opinions may relieve companies from responsibilities and liability, as well as compromise the efficacy of suggested preventative measures.
METHOD: Professional investigators and undergraduate participants were given a summary of a workplace event and asked to allocate cause to the factors they found causal for the event. The summary was crafted to be objectively balanced in its implication of cause equally between two factors: a worker and a tire. Participants then rated their confidence and the objectivity of their judgment. We then conducted an effect size analysis, which supplemented the findings from our experiment with two previously published research studies that used the same event summary.
RESULTS: Professionals exhibited a human error bias, but nevertheless believed that they were objective and confident in their conclusions. The lay control group also showed this human error bias. These data, along with previous research data, revealed that, given equivalent investigative circumstances, this bias was significantly larger for the professional investigators (effect size d_unb = 0.97) than for the control group (d_unb = 0.32).
CONCLUSIONS: The direction and strength of the human error bias can be quantified, and it is shown to be larger in professional investigators than in lay people.
PRACTICAL APPLICATIONS: Understanding the strength and direction of a bias is a crucial step in mitigating its effects. The results demonstrate that mitigation strategies such as proper investigator training, a strong investigation culture, and standardized techniques are potentially promising interventions against human error bias.
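The unbiased effect size d_unb reported here is conventionally Cohen's d with a small-sample (Hedges) correction. The abstract does not state the authors' exact formulation, so the sketch below assumes the standard pooled-SD Cohen's d and the usual 1 - 3/(4·df - 1) correction factor:

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def unbiased_d(d, n1, n2):
    """Hedges' small-sample correction; df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

# Example: group means 1 pooled SD apart, 20 participants per group
d = cohens_d(1.0, 0.0, 1.0, 1.0, 20, 20)  # 1.0
d_unb = unbiased_d(d, 20, 20)             # slightly shrunk toward zero
```

The correction matters mainly for small samples; as df grows, d_unb converges to plain Cohen's d.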
Affiliation(s)
- Carla L MacLean
- Kwantlen Polytechnic University, Department of Psychology, 12666 72 Avenue, Surrey, BC, Canada.
- Itiel E Dror
- University College London, London, United Kingdom.
4
MacLean CL. Cognitive bias in workplace investigation: Problems, perspectives and proposed solutions. Appl Ergon 2022;105:103860. [PMID: 35963213] [DOI: 10.1016/j.apergo.2022.103860]
Abstract
Psychological research demonstrates how our perceptions and cognitions are affected by context, motivation, expectation, and experience. A mounting body of research has revealed the many sources of bias that affect the judgments of experts as they execute their work. Professionals in fields such as forensic science, intelligence analysis, criminal investigation, and medical and judicial decision-making find themselves at an inflection point where past professional practices are being questioned and new approaches developed. Workplace investigation is a professional domain that is in many ways analogous to these decision-making environments. Yet workplace investigation is also unique, as the sources, magnitude, and direction of bias are specific to workplace environments. The workplace investigation literature does not comprehensively address the many ways that the workings of honest investigators' minds may be biased when collecting evidence and/or rendering judgments, nor does it offer a set of strategies to address such biases. The current paper is the first to offer a comprehensive overview of cognitive bias in workplace investigation. In it I discuss the abilities and limitations of human cognition, provide a framework of sources of bias, and offer suggestions for bias mitigation in the investigation process.
Affiliation(s)
- Carla L MacLean
- Kwantlen Polytechnic University, Department of Psychology, 12666 72 Avenue, Surrey, BC, Canada.
5
Segall N, Joines JA, Baldwin RD, Bresch D, Coggins LG, Janzen S, Engel JR, Wright MC. Effect of Remote Cardiac Monitoring System Design on Response Time to Critical Arrhythmias. Simul Healthc 2022;17:112-119. [PMID: 34506366] [PMCID: PMC8904642] [DOI: 10.1097/sih.0000000000000610]
Abstract
INTRODUCTION: In many hospitals across the country, electrocardiograms of multiple at-risk patients are monitored remotely by telemetry monitor watchers in a central location. However, there is limited evidence regarding best practices for designing these cardiac monitoring systems to ensure prompt detection of and response to life-threatening events. To identify factors that may affect monitoring efficiency, we simulated critical arrhythmias in inpatient units with different monitoring systems and compared their efficiency in communicating the arrhythmias to a first responder.
METHODS: This was a multicenter cross-sectional in situ simulation study. Simulation participants were monitor watchers and first responders (usually nurses) in 2 inpatient units in each of 3 hospitals. Manipulated variables included: (1) the number of communication nodes between monitor watchers and first responders; (2) the central monitoring station location (on or off the patient care unit); (3) monitor watchers' workload; (4) nurses' workload; and (5) participants' experience.
RESULTS: We performed 62 arrhythmia simulations to measure response times of monitor watchers and 128 arrhythmia simulations to measure response times in patient care units. Systems in which an intermediary between monitor watchers and nurses communicated critical events had faster response times to simulated arrhythmias than systems in which monitor watchers communicated directly with nurses. Responses were also faster in units colocated with central monitoring stations than in those located remotely. As the perceived workload of nurses increased, response latency also increased. Experience did not affect response times.
CONCLUSIONS: Although limited in our ability to isolate the effects of these factors from extraneous influences on central monitoring system efficiency, our study provides a roadmap for using in situ arrhythmia simulations to assess and improve monitoring performance.
Affiliation(s)
- Noa Segall
- Department of Anesthesiology (N.S.), Duke University School of Medicine, Durham; Textile Engineering, Chemistry, and Science (J.A.J.), North Carolina State University, Raleigh; Duke University Health System (R.D.B., J.R.E.); Duke Office of Clinical Research (D.B.), Duke University School of Medicine, Durham, NC; Rush University College of Nursing (L.G.C.), Chicago, IL; Saint Alphonsus Regional Medical Center (S.J.), Boise; and College of Pharmacy (M.C.W.), Idaho State University, Pocatello, ID
6
Cañas JJ. AI and Ethics When Human Beings Collaborate With AI Agents. Front Psychol 2022;13:836650. [PMID: 35310226] [PMCID: PMC8931455] [DOI: 10.3389/fpsyg.2022.836650]
Abstract
The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent's responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent's actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. 
The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity.
7
Zhu R, Wang Z, Ma X, You X. High expectancy influences the role of cognitive load in inattentional deafness during landing decision-making. Appl Ergon 2022;99:103629. [PMID: 34717070] [DOI: 10.1016/j.apergo.2021.103629]
Abstract
Neglecting a critical auditory alarm is a major obstacle to maintaining a safe environment, especially in aviation. Earlier studies have indicated that tasks with a higher perceptual or cognitive load in the visual modality influence the processing of auditory stimuli. It is unclear, however, whether other factors, such as memory failure, active neglect, or expectancy influence the effect of cognitive load on auditory alarm detection sensitivity during aeronautical decision-making. In this study, we investigated this issue in three laboratory experiments using the technique of signal detection analysis, in which participants were asked to make a landing decision based on indicators of the instrument landing system while also trying to detect an audible alarm. We found that the sensitivity of auditory alarm detection was reduced under conditions of high cognitive load and that this effect persisted even when the auditory detection response occurred first (before the landing decision response) and when the probability of an auditory alarm was 40%. However, the sensitivity of auditory detection was not influenced by cognitive load under high expectancy conditions (60% probability of alarm presentation). Furthermore, the value of the response bias was reduced under high cognitive load conditions when the probability of an auditory alarm was low (20%). With an increase in the level of expectancy (40% and 60% probability of alarm presentation), it was found that cognitive load did not influence the response bias. These findings indicate that visual cognitive load affects the sensitivity to an auditory alarm only at a low expectancy level (20% and 40% probability of alarm presentation). The effect of cognitive load on the sensitivity to an auditory alarm was not due to memory failure or active neglect and the response bias was more sensitive to the expectancy factor.
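The signal detection analysis used in this study separates detection sensitivity (d') from response bias (c). A minimal sketch, assuming the standard equal-variance Gaussian model and a log-linear correction for extreme hit/false-alarm rates (the paper's exact correction, if any, is not stated in the abstract):

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response bias c from raw trial counts.
    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when a rate would be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)      # larger = better discrimination
    c = -(z(hit_rate) + z(fa_rate)) / 2     # positive = conservative responding
    return d_prime, c

# Example: 45/50 alarms detected, 5/50 false alarms -> high d', near-neutral c
d_prime, c = dprime_and_bias(45, 5, 5, 45)
```

In the study's terms, a drop in d' under high cognitive load reflects genuinely reduced alarm sensitivity, while a shift in c reflects a changed willingness to report an alarm, which is what the expectancy manipulation moved.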
Affiliation(s)
- Rongjuan Zhu
- Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, 710062, China
- Ziyu Wang
- Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, 710062, China
- Xiaoliang Ma
- Geovis Spatial Technology Co., Ltd, Xi'an, 710100, China
- Xuqun You
- Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, 710062, China.
8
Yang XJ, Schemanske C, Searle C. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation. Hum Factors 2021:187208211034716. [PMID: 34459266] [PMCID: PMC10374998] [DOI: 10.1177/00187208211034716]
Abstract
OBJECTIVE: We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation.
BACKGROUND: Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time.
METHOD: Seventy-five participants took part in an aided memory recognition task. Participants viewed a series of images and later performed 40 trials of the recognition task, identifying a target image presented alongside a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and then performed the final recognition. After each trial, participants reported their trust on a visual analog scale.
RESULTS: Outcome bias and the contrast effect significantly influence operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the operator succeeds at the task unaided. An automation success engenders a greater trust increment if the operator fails the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes.
CONCLUSION: Human operators adjust their trust in automation as a result of their moment-to-moment interaction with it, and their adjustments are significantly influenced by decision-making heuristics and biases.
APPLICATION: Understanding the trust adjustment process enables accurate prediction of operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
9
Guo Y, Yang XJ. Modeling and Predicting Trust Dynamics in Human–Robot Teaming: A Bayesian Inference Approach. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00703-3]
Abstract
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past three decades. The majority of prior literature adopted a “snapshot” view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This “snapshot” view, however, does not acknowledge that trust is a dynamic variable that can strengthen or decay over time. To fill this research gap, the present study models trust dynamics as a human interacts with a robotic agent over time. The underlying premise is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his or her trust in the agent accordingly. Based on this premise, we develop a personalized trust prediction model and learn its parameters using Bayesian inference. Our proposed model adheres to three properties that characterize how human agents actually develop trust, which guarantees high model explicability and generalizability. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinct types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
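The premise that a rational observer updates trust from observed successes and failures can be illustrated with a far simpler model than the paper's personalized one: a Beta-Bernoulli posterior over the agent's reliability, with trust taken as the posterior mean. The prior and the outcome sequence below are illustrative assumptions, not the paper's model or data:

```python
def update_trust(alpha, beta, success):
    """Beta-Bernoulli update: each observed success/failure shifts the
    posterior over the agent's reliability by one pseudo-count."""
    if success:
        alpha += 1
    else:
        beta += 1
    return alpha, beta

def trust(alpha, beta):
    """Trust estimate = posterior mean of the reliability."""
    return alpha / (alpha + beta)

# Illustrative interaction history (True = automation success)
alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior
for outcome in [True, True, False, True, True]:
    alpha, beta = update_trust(alpha, beta, outcome)
# after 4 successes and 1 failure, the posterior is Beta(5, 2)
```

This toy model already reproduces one qualitative property of trust dynamics: early outcomes move trust a lot, while the same outcome late in a long history moves it only slightly.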
10
Meyer J, Dembinsky O, Raviv T. Alerting about possible risks vs. blocking risky choices: A quantitative model and its empirical evaluation. Comput Secur 2020. [DOI: 10.1016/j.cose.2020.101944]
11
Abstract
Automated medical technology is becoming an integral part of routine anesthetic practice. Automated technologies can improve patient safety, but may create new workflows with potentially surprising adverse consequences and cognitive errors that must be addressed before these technologies are adopted into clinical practice. Industries such as aviation and nuclear power have developed techniques to mitigate the unintended consequences of automation, including automation bias, skill loss, and system failures. In order to maximize the benefits of automated technology, clinicians should receive training in human–system interaction including topics such as vigilance, management of system failures, and maintaining manual skills. Medical device manufacturers now evaluate usability of equipment using the principles of human performance and should be encouraged to develop comprehensive training materials that describe possible system failures. Additional research in human–system interaction can improve the ways in which automated medical devices communicate with clinicians. These steps will ensure that medical practitioners can effectively use these new devices while being ready to assume manual control when necessary and prepare us for a future that includes automated health care.
12
Wu M, Zhang L, Li WC, Wan L, Lu N, Zhang J. Human Performance Analysis of Processes for Retrieving Beidou Satellite Navigation System During Breakdown. Front Psychol 2020;11:292. [PMID: 32153481] [PMCID: PMC7047823] [DOI: 10.3389/fpsyg.2020.00292]
Abstract
Satellite navigation systems provide continuous, timely, and accurate signals of location, speed, and time to users all over the world. Although the running of these systems has become highly automated, the human operator is still vital to their continued operation, especially when certain equipment failures occur. In this paper, we examined 180 incidents of one particular type of equipment failure, and the whole recovery process, as recorded in the log files of a ground control center of the Beidou satellite navigation system. We extracted information including the technical description of the failure, the time when the fault occurred, the full recovery time, and demographic information about the team members on the shift responsible for responding to the failure. We then transformed this information into the cognitive complexity of the task, the time of day, the shift handover period, and the team's skill composition. Multiple regression analysis showed that task complexity and shift handover were key predictors of recovery time. Time of day also influenced recovery time: between midnight and 4 a.m., operators responded more slowly. We also found that fault handling improved when the team's most adept member was more skillful in that role than the corresponding members of other teams. We discuss the theoretical and practical implications of this study.
Affiliation(s)
- Mo Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Beijing Satellite Navigation Center, Beijing, China
- Liang Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Wen-Chin Li
- Safety and Accident Investigation Centre, Cranfield University, Cranfield, United Kingdom
- Lingyun Wan
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Ning Lu
- School of Psychology and Cognitive Sciences, Peking University, Beijing, China
- Jingyu Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
13
Snyder ME, Jaynes H, Gernant SA, DiIulio J, Militello LG, Doucette WR, Adeoye OA, Russ AL. Alerts for community pharmacist-provided medication therapy management: recommendations from a heuristic evaluation. BMC Med Inform Decis Mak 2019;19:135. [PMID: 31311532] [PMCID: PMC6636156] [DOI: 10.1186/s12911-019-0866-0]
Abstract
BACKGROUND: Medication therapy management (MTM) is a service, most commonly provided by pharmacists, intended to identify and resolve medication therapy problems (MTPs) to enhance patient care. MTM is typically documented by the community pharmacist in an MTM vendor's web-based platform. These platforms often include integrated alerts to assist the pharmacist with assessing MTPs. To maximize the usability and usefulness of alerts for end users (e.g., community pharmacists), MTM alert design should follow principles from human factors science. Therefore, the objectives of this study were to (1) evaluate the extent to which alerts for community pharmacist-delivered MTM align with established human factors principles, and (2) identify areas of opportunity and recommendations to improve MTM alert design.
METHODS: Five categories of MTM alerts submitted by community pharmacists were evaluated: (1) indication; (2) effectiveness; (3) safety; (4) adherence; and (5) cost-containment. The heuristic evaluation was guided by the Instrument for Evaluating Human-Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA), which we adapted to contain 32 heuristics. For each MTM alert, four analysts' individual ratings were summed and a mean score on the modified I-MeDeSA computed. For each heuristic, we also computed the percent of analyst ratings indicating alignment with the heuristic, both overall across all alerts and separately for each alert category. Our results focus on heuristics where ≤50% of analysts' ratings indicated alignment.
RESULTS: I-MeDeSA scores across the five alert categories were similar. Heuristics pertaining to visibility and color were generally met. Opportunities for improvement across all MTM alert categories pertained to the principles of alert prioritization, text-based information, alarm philosophy, and corrective actions.
CONCLUSIONS: MTM alerts have several opportunities for improvement related to human factors principles, resulting in MTM alert design recommendations. Enhancements to MTM alert design may increase the effectiveness of MTM delivery by community pharmacists and result in improved patient outcomes.
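The scoring described in the methods (summing four analysts' ratings per alert and computing percent alignment per heuristic) can be sketched as below. The binary 0/1 rating scheme and the toy ratings matrix are assumptions for illustration; the real instrument contained 32 heuristics:

```python
def alert_score(ratings):
    """Sum each analyst's ratings over all heuristics for one alert,
    then average the totals across analysts (the per-alert mean score)."""
    return sum(sum(analyst) for analyst in ratings) / len(ratings)

def heuristic_alignment(ratings, h):
    """Percent of analyst ratings marking heuristic h as met."""
    met = sum(analyst[h] for analyst in ratings)
    return 100 * met / len(ratings)

# Illustrative: 4 analysts x 3 heuristics, 1 = heuristic met, 0 = not met
ratings = [
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
]
# Heuristics with <= 50% alignment are the ones flagged for improvement
flagged = [h for h in range(3) if heuristic_alignment(ratings, h) <= 50]
```

Here heuristic 0 is met by all four analysts, while heuristics 1 and 2 fall at 25% alignment and would be flagged, mirroring how the study surfaced its improvement opportunities.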
Affiliation(s)
- Margie E. Snyder
- Department of Pharmacy Practice, Purdue University College of Pharmacy, 640 Eskenazi Ave, Indianapolis, IN 46220 USA
- Heather Jaynes
- Department of Pharmacy Practice, Purdue University College of Pharmacy, 640 Eskenazi Ave, Indianapolis, IN 46220 USA
- Stephanie A. Gernant
- Department of Pharmacy Practice, University of Connecticut School of Pharmacy, Storrs, CT USA
- William R. Doucette
- Division of Health Services Research, Department of Pharmacy Practice and Science, University of Iowa College of Pharmacy, Iowa City, IA USA
- Omolola A. Adeoye
- Department of Pharmacy Practice, Purdue University College of Pharmacy, 640 Eskenazi Ave, Indianapolis, IN 46220 USA
- Alissa L. Russ
- Department of Pharmacy Practice, Purdue University College of Pharmacy, 640 Eskenazi Ave, Indianapolis, IN 46220 USA
- Regenstrief Institute, Inc, Indianapolis, IN USA
14
Canfield CI, Fischhoff B. Setting Priorities in Behavioral Interventions: An Application to Reducing Phishing Risk. Risk Anal 2018;38:826-838. [PMID: 29023908] [DOI: 10.1111/risa.12917]
Abstract
Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit.
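The simulation approach can be sketched as follows: each user is a (d', c) pair in signal detection terms, the probability of missing a phishing email follows from the equal-variance Gaussian model, and Monte Carlo sampling estimates each user's expected loss. All parameter values below are illustrative, not the estimates used in the paper:

```python
import random
from statistics import NormalDist

def miss_rate(d_prime, c):
    """P(a phishing email is judged legitimate) under equal-variance
    Gaussian SDT: signal ~ N(d', 1), noise ~ N(0, 1), criterion at d'/2 + c."""
    return NormalDist().cdf(d_prime / 2 + c - d_prime)

def simulate_losses(users, n_emails=1000, p_phish=0.1, cost_per_miss=500.0, seed=7):
    """Monte Carlo estimate of each user's phishing loss over n_emails."""
    rng = random.Random(seed)
    losses = []
    for d_prime, c in users:
        p_miss = miss_rate(d_prime, c)
        loss = 0.0
        for _ in range(n_emails):
            # an email is phishing with probability p_phish; a miss incurs a cost
            if rng.random() < p_phish and rng.random() < p_miss:
                loss += cost_per_miss
        losses.append(loss)
    return losses

# Illustrative users as (sensitivity d', criterion c); the last profile is the
# "most vulnerable" kind described above: low sensitivity plus a criterion
# shifted toward treating email as safe
users = [(2.5, -0.2), (1.0, 0.5), (0.3, 1.0)]
losses = simulate_losses(users)
```

Comparing the simulated losses across profiles reproduces the paper's central point: the low-d', safe-leaning user accounts for a disproportionate share of total risk, so targeting interventions there yields the largest expected return.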
Affiliation(s)
- Casey Inez Canfield
- Engineering & Public Policy, Carnegie Mellon University, Pittsburgh, PA, USA
- Baruch Fischhoff
- Engineering & Public Policy, Carnegie Mellon University, Pittsburgh, PA, USA
- Institute for Politics and Strategy, Carnegie Mellon University, Pittsburgh, PA, USA
15
Hodgetts HM, Vachon F, Chamberland C, Tremblay S. See No Evil: Cognitive Challenges of Security Surveillance and Monitoring. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2017. [DOI: 10.1016/j.jarmac.2017.05.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
16
Sebok A, Wickens CD. Implementing Lumberjacks and Black Swans Into Model-Based Tools to Support Human-Automation Interaction. HUMAN FACTORS 2017; 59:189-203. [PMID: 27591210 DOI: 10.1177/0018720816665201] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
OBJECTIVE The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. BACKGROUND Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. METHOD The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. RESULTS Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. CONCLUSION The three model-based tools offer useful ways to predict operator performance in complex systems. APPLICATION The three tools offer ways to predict the effects of different automation designs on operator performance.
17
Vallières BR, Hodgetts HM, Vachon F, Tremblay S. Supporting dynamic change detection: using the right tool for the task. Cogn Res Princ Implic 2016; 1:32. [PMID: 28180182 PMCID: PMC5256471 DOI: 10.1186/s41235-016-0033-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2016] [Accepted: 11/17/2016] [Indexed: 11/10/2022] Open
Abstract
Detecting task-relevant changes in a visual scene is necessary for successfully monitoring and managing dynamic command and control situations. Change blindness, the failure to notice visual changes, is an important source of human error. Change History EXplicit (CHEX) is a tool developed to aid change detection and maintain situation awareness, and in the current study we test the generality of its ability to facilitate the detection of changes when this subtask is embedded within a broader dynamic decision-making task. A multitasking air-warfare simulation required participants to perform radar-based subtasks, for which change detection was a necessary aspect of the higher-order goal of protecting one's own ship. In this task, however, CHEX rendered the operator even more vulnerable to attentional failures in change detection and increased perceived workload. Such support was only effective when participants performed a change detection task without concurrent subtasks. Results are interpreted in terms of the NSEEV model of attention behavior (Steelman, McCarley, & Wickens, Hum. Factors 53:142-153, 2011; J. Exp. Psychol. Appl. 19:403-419, 2013), and suggest that decision aids for use in multitasking contexts must be designed to fit within the available workload capacity of the user so that they may truly augment cognition.
Affiliation(s)
- Benoît R. Vallières
- École de Psychologie, Université Laval, Pavillon Félix-Antoine-Savard, 2325, rue des Bibliothèques, Québec, QC G1V 0A6 Canada
- Helen M. Hodgetts
- École de Psychologie, Université Laval, Pavillon Félix-Antoine-Savard, 2325, rue des Bibliothèques, Québec, QC G1V 0A6 Canada
- François Vachon
- École de Psychologie, Université Laval, Pavillon Félix-Antoine-Savard, 2325, rue des Bibliothèques, Québec, QC G1V 0A6 Canada
- Sébastien Tremblay
- École de Psychologie, Université Laval, Pavillon Félix-Antoine-Savard, 2325, rue des Bibliothèques, Québec, QC G1V 0A6 Canada
18
Goodyear K, Parasuraman R, Chernyak S, Madhavan P, Deshpande G, Krueger F. Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study. Front Hum Neurosci 2016; 10:542. [PMID: 27867351 PMCID: PMC5095979 DOI: 10.3389/fnhum.2016.00542] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2016] [Accepted: 10/12/2016] [Indexed: 11/23/2022] Open
Abstract
With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization can be perceivably due to reevaluation of expectations arising from association of dispositional credibility for each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network that were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society because of progressions in technology and increased interactions with machines.
Affiliation(s)
- Kimberly Goodyear
- Center for Alcohol and Addiction Studies, Department of Behavioral and Social Sciences, Brown University, Providence, RI, USA; Section on Clinical Psychoneuroendocrinology and Neuropsychopharmacology, National Institute on Alcohol Abuse and Alcoholism and National Institute on Drug Abuse, Bethesda, MD, USA
- Raja Parasuraman
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Sergey Chernyak
- Molecular Neuroscience Department, George Mason University, Fairfax, VA, USA
- Gopikrishna Deshpande
- Auburn University MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA; Department of Psychology, Auburn University, Auburn, AL, USA; Alabama Advanced Imaging Consortium, Auburn University and University of Alabama, Birmingham, AL, USA
- Frank Krueger
- Department of Psychology, George Mason University, Fairfax, VA, USA; Molecular Neuroscience Department, George Mason University, Fairfax, VA, USA
19
Trapsilawati F, Wickens CD, Qu X, Chen CH. Benefits of Imperfect Conflict Resolution Advisory Aids for Future Air Traffic Control. HUMAN FACTORS 2016; 58:1007-1019. [PMID: 27422153 DOI: 10.1177/0018720816655941] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2015] [Accepted: 05/24/2016] [Indexed: 06/06/2023]
Abstract
OBJECTIVE The aim of this study was to examine the human-automation interaction issues and the interacting factors in the context of conflict detection and resolution advisory (CRA) systems. BACKGROUND The issues of imperfect automation in air traffic control (ATC) have been well documented in previous studies, particularly in conflict-alerting systems. The extent to which the prior findings can be applied to an integrated conflict detection and resolution system in future ATC remains unknown. METHOD Twenty-four participants were evenly divided into two groups corresponding to a medium- and a high-traffic density condition, respectively. In each traffic density condition, participants were instructed to perform simulated ATC tasks under four automation conditions, including reliable, unreliable with short time allowance to secondary conflict (TAS), unreliable with long TAS, and manual conditions. Dependent variables accounted for conflict resolution performance, workload, situation awareness, and trust in and dependence on the CRA aid, respectively. RESULTS Imposing the CRA automation did increase performance and reduce workload as compared with manual performance. The CRA aid did not decrease situation awareness. The benefits of the CRA aid were manifest even when it was imperfectly reliable and were apparent across traffic loads. In the unreliable blocks, trust in the CRA aid was degraded but dependence was not influenced, yet the performance was not adversely affected. CONCLUSION The use of CRA aid would benefit ATC operations across traffic densities. APPLICATION CRA aid offers benefits across traffic densities, regardless of its imperfection, as long as its reliability level is set above the threshold of assistance, suggesting its application for future ATC.
20
Herdener N, Wickens CD, Clegg BA, Smith CAP. Overconfidence in Projecting Uncertain Spatial Trajectories. HUMAN FACTORS 2016; 58:899-914. [PMID: 27125532 DOI: 10.1177/0018720816645259] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2015] [Accepted: 03/11/2016] [Indexed: 06/05/2023]
Abstract
OBJECTIVE The aim of this study was to understand factors that influence the prediction of uncertain spatial trajectories (e.g., the future path of a hurricane or ship) and the role of human overconfidence in such prediction. BACKGROUND Research has indicated that human prediction of uncertain trajectories is difficult and may well be subject to overconfidence in the accuracy of forecasts as is found in event prediction, a finding that indicates that humans insufficiently appreciate the contributions of variance in nature to their predictions. METHOD In two experiments, our paradigm required participants to observe a starting point, a position at time T, and then make a prediction of the location of the trajectory at time NT. They experienced several trajectories from the same underlying model but perturbed by random variance in heading and speed. RESULTS In Experiment 1A, people predicted linear paths well and were better in heading predictions than in speed predictions. However, participants greatly underestimated the variance in predicted location, indicating overconfidence. In Experiment 1B, the effect was replicated with frequencies rather than probabilities used in variance estimates. In Experiment 2, people predicted nonlinear trajectories poorly, and overconfidence was again observed. Overconfidence was reduced on the more difficult predictions. In both main experiments, those better at predicting the mean were not better at predicting the variance. CONCLUSIONS Predicting the level of uncertainty in spatial trajectories is not well done and may involve qualitatively different abilities than prediction of the mean. APPLICATION Improving real-world performance at prediction demands developing better understanding of variability, not just the average case. Biases in prediction of uncertainty may be addressed through debiasing training and/or visualization tools that could assist in more calibrated action planning.
21
Schaefer KE, Chen JYC, Szalma JL, Hancock PA. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems. HUMAN FACTORS 2016; 58:377-400. [PMID: 27005902 DOI: 10.1177/0018720816634228] [Citation(s) in RCA: 144] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 01/13/2016] [Indexed: 06/05/2023]
Abstract
OBJECTIVE We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. BACKGROUND Trust is increasingly important in the growing need for synergistic human-machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human-robot interaction to include all of automation interaction. METHOD We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. RESULTS The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represented medium effects. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. CONCLUSION Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. APPLICATION This work has important implications to the enhancement of current and future human-automation interaction, especially in high-risk or extreme performance environments.
Affiliation(s)
- James L Szalma
- U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland; U.S. Army Research Laboratory, Orlando, Florida; University of Central Florida, Orlando
22
Jin Y, Fraustino JD, Liu BF. The Scared, the Outraged, and the Anxious: How Crisis Emotions, Involvement, and Demographics Predict Publics’ Conative Coping. INTERNATIONAL JOURNAL OF STRATEGIC COMMUNICATION 2016. [DOI: 10.1080/1553118x.2016.1160401] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
23
Rayo MF, Moffatt-Bruce SD. Alarm system management: evidence-based guidance encouraging direct measurement of informativeness to improve alarm response. BMJ Qual Saf 2015; 24:282-6. [PMID: 25734193 DOI: 10.1136/bmjqs-2014-003373] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Although there are powerful incentives for creating alarm management programmes to reduce 'alarm fatigue', they do not provide guidance on how to reduce the likelihood that clinicians will disregard critical alarms. The literature cites numerous phenomena that contribute to alarm fatigue, although many of these, including total rate of alarms, are not supported in the literature as factors that directly impact alarm response. The contributor that is most frequently associated with alarm response is informativeness, which is defined as the proportion of total alarms that successfully conveys a specific event, and the extent to which it is a hazard. Informativeness is low across all healthcare applications, consistently ranging from 1% to 20%. Because of its likelihood and strong evidential support, informativeness should be evaluated before other contributors are considered. Methods for measuring informativeness and alarm response are discussed. Design directions for potential interventions, as well as design alternatives to traditional alarms, are also discussed. With the increased attention and investment in alarm system management that alarm interventions are currently receiving, initiatives that focus on informativeness and the other evidence-based measures identified will allow us to more effectively, efficiently and reliably redirect clinician attention, ultimately improving alarm response.
Affiliation(s)
- Michael F Rayo
- Department of Quality and Patient Safety, The Ohio State University, Columbus, Ohio, USA
- Susan D Moffatt-Bruce
- Department of Thoracic Surgery, College of Medicine, The Ohio State University, Columbus, Ohio, USA
24
Merritt SM, Lee D, Unnerstall JL, Huber K. Are well-calibrated users effective users? Associations between calibration of trust and performance on an automation-aided task. HUMAN FACTORS 2015; 57:34-47. [PMID: 25790569 DOI: 10.1177/0018720814561675] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
OBJECTIVE We present alternative operationalizations of trust calibration and examine their associations with predictors and outcomes. BACKGROUND It is thought that trust calibration (correspondence between aid reliability and user trust in the aid) is a key to effective human-automation performance. We propose that calibration can be operationalized in three ways. Perceptual accuracy is the extent to which the user perceives the aid's reliability accurately at one point in time. Perceptual sensitivity and trust sensitivity reflect user adjustment of perceived reliability and trust as the aid's actual reliability changes over time. METHOD One hundred fifty-five students completed an X-ray screening task with an automated screener. Awareness of the aid's accuracy trajectory and error type was examined as predictors, and task performance and aid failure detection were examined as outcomes. RESULTS Awareness of accuracy trajectory was significantly associated with all three operationalizations of calibration, but awareness of error type was not when considered in conjunction with accuracy trajectory. Contrary to expectations, only perceptual accuracy was significantly associated with task performance and failure detection, and combined, the three operationalizations accounted for only 9% and 4% of the variance in these outcomes, respectively. CONCLUSION Our results suggest that the potential importance of trust calibration warrants further examination. Moderators may exist. APPLICATION Users who were better able to perform the task unaided were better able to identify and correct aid failure, suggesting that user task training and expertise may benefit human-automation performance.
25
Manzey D, Gérard N, Wiczorek R. Decision-making and response strategies in interaction with alarms: the impact of alarm reliability, availability of alarm validity information and workload. ERGONOMICS 2014; 57:1833-1855. [PMID: 25224606 DOI: 10.1080/00140139.2014.957732] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Responding to alarm systems, which usually commit a number of false alarms and/or misses, involves decision-making under uncertainty. Four laboratory experiments including a total of 256 participants were conducted to gain comprehensive insight into how humans deal with this uncertainty. Specifically, it was investigated how responses to alarms/non-alarms are affected by the predictive validities of these events, and to what extent response strategies depend on whether or not the validity of alarms/non-alarms can be cross-checked against other data. Among other findings, the results suggest that, without a cross-check possibility (experiment 1), low levels of predictive validity of alarms (≤ 0.5) led most participants to use one of two different strategies, both of which involved not responding to a significant number of alarms (cry-wolf effect). Yet, providing access to alarm validity information reduced this effect dramatically (experiment 2). This latter result emerged independent of the effort needed for cross-checking of alarms (experiment 3), but was affected by the workload imposed by concurrent tasks (experiment 4). Theoretical and practical consequences of these results for decision-making and response selection in interaction with alarm systems, as well as for the design of effective alarm systems, are discussed.
Affiliation(s)
- Dietrich Manzey
- Institute of Psychology and Ergonomics, Technische Universitaet Berlin, Berlin, Germany
26
Imbert JP, Hodgetts HM, Parise R, Vachon F, Dehais F, Tremblay S. Attentional costs and failures in air traffic control notifications. ERGONOMICS 2014; 57:1817-1832. [PMID: 25202855 DOI: 10.1080/00140139.2014.952680] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Large display screens are common in supervisory tasks, meaning that alerts are often perceived in peripheral vision. Five air traffic control notification designs were evaluated in their ability to capture attention during an ongoing supervisory task, as well as their impact on the primary task. A range of performance measures, eye-tracking and subjective reports showed that colour, even animated, was less effective than movement, and notifications sometimes went unnoticed. Designs that drew attention to the notified aircraft by a pulsating box, concentric circles or the opacity of the background resulted in faster perception and no missed notifications. However, the latter two designs were intrusive and impaired primary task performance, while the simpler animated box captured attention without an overhead cognitive cost. These results highlight the need for a holistic approach to evaluation, achieving a balance between the benefits for one aspect of performance against the potential costs for another. Practitioner summary: We performed a holistic examination of air traffic control notification designs regarding their ability to capture attention during an ongoing supervisory task. The combination of performance, eye-tracking and subjective measurements demonstrated that the best design achieved a balance between attentional power and the overhead cognitive cost to primary task performance.
Affiliation(s)
- Jean-Paul Imbert
- Laboratoire d'informatique interactive, ENAC, Toulouse, France
27
Dehais F, Causse M, Vachon F, Régis N, Menant E, Tremblay S. Failure to detect critical auditory alerts in the cockpit: evidence for inattentional deafness. HUMAN FACTORS 2014; 56:631-644. [PMID: 25029890 DOI: 10.1177/0018720813510735] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
OBJECTIVE The aim of this study was to test whether inattentional deafness to critical alarms would be observed in a simulated cockpit. BACKGROUND The inability of pilots to detect unexpected changes in their auditory environment (e.g., alarms) is a major safety problem in aeronautics. In aviation, the lack of response to alarms is usually not attributed to attentional limitations, but rather to pilots choosing to ignore such warnings due to decision biases, hearing issues, or conscious risk taking. METHOD Twenty-eight general aviation pilots performed two landings in a flight simulator. In one scenario an auditory alert was triggered alone, whereas in the other the auditory alert occurred while the pilots dealt with a critical windshear. RESULTS In the windshear scenario, 11 pilots (39.3%) did not report or react appropriately to the alarm whereas all the pilots perceived the auditory warning in the no-windshear scenario. Also, of those pilots who were first exposed to the no-windshear scenario and detected the alarm, only three suffered from inattentional deafness in the subsequent windshear scenario. CONCLUSION These findings establish inattentional deafness as a cognitive phenomenon that is critical for air safety. Pre-exposure to a critical event triggering an auditory alarm can enhance alarm detection when a similar event is encountered subsequently. APPLICATION Case-based learning is a solution to mitigate auditory alarm misperception.
28
Sanchez J, Rogers WA, Fisk AD, Rovira E. Understanding reliance on automation: effects of error type, error distribution, age and experience. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2014; 15:134-160. [PMID: 25642142 PMCID: PMC4307024 DOI: 10.1080/1463922x.2011.611269] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.
Affiliation(s)
- Julian Sanchez
- Medtronic, 976 Hardwood Avenue, Shoreview, MN 55126, USA
29
Culley KE, Madhavan P. Trust in automation and automation designers: Implications for HCI and HMI. COMPUTERS IN HUMAN BEHAVIOR 2013. [DOI: 10.1016/j.chb.2013.04.032] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
30
Scannella S, Causse M, Chauveau N, Pastor J, Dehais F. Effects of the audiovisual conflict on auditory early processes. Int J Psychophysiol 2013; 89:115-22. [PMID: 23774001 DOI: 10.1016/j.ijpsycho.2013.06.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2012] [Revised: 06/05/2013] [Accepted: 06/08/2013] [Indexed: 11/18/2022]
Abstract
Auditory alarm misperception is one of the critical events that lead aircraft pilots to an erroneous flying decision. The rarity of these alarms associated with their possible unreliability may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e., an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e., less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.
Affiliation(s)
- Sébastien Scannella
- INSERM, UMRS 825, Université de Toulouse, UPS, CHU Purpan, Pavillon Baudot, 31024 Toulouse cedex 3, France.
31
Ngo MK, Pierce RS, Spence C. Using multisensory cues to facilitate air traffic management. HUMAN FACTORS 2012; 54:1093-1103. [PMID: 23397817 DOI: 10.1177/0018720812446623] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
OBJECTIVE In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. BACKGROUND Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. METHOD A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. RESULTS Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. CONCLUSION Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. APPLICATION These results have important implications for the design and use of multisensory cues in air traffic management.
Affiliation(s)
- Mary K Ngo
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Rd., Oxford, OX1 3UD, United Kingdom.
32
Dehais F, Causse M, Régis N, Menant E, Labedan P, Vachon F, Tremblay S. Missing Critical Auditory Alarms in Aeronautics: Evidence for Inattentional Deafness? Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2012. [DOI: 10.1177/1071181312561328]
Abstract
The inability of pilots to detect unexpected changes in the environment (e.g., auditory alarms) is a critical problem in aeronautics. The lack of response to alarms is generally attributed not to a perceptual/attentional issue but to pilots choosing to ignore such warnings because of cognitive biases. In the current paper we consider an alternative explanation by extending the phenomenon of inattentional deafness to aeronautics. Fourteen pilots equipped with an eye tracker and an electrocardiogram performed landings in a flight simulator. During the critical landing, an auditory landing gear alarm was triggered while the volunteers also faced a windshear. Eight out of 14 pilots did not report the occurrence of the critical alarm during the debriefing. Interestingly, all but one of these 'deaf' pilots failed to perform the adequate go-around behavior. These findings establish inattentional deafness as a cognitive phenomenon that is critical for air safety.
33
Rovira E, Parasuraman R. Transitioning to future air traffic management: effects of imperfect automation on controller attention and performance. HUMAN FACTORS 2010; 52:411-425. [PMID: 21077563] [DOI: 10.1177/0018720810375692]
Abstract
OBJECTIVE This study examined whether benefits of conflict probe automation would occur in a future air traffic scenario in which air traffic service providers (ATSPs) are not directly responsible for freely maneuvering aircraft but are controlling other nonequipped aircraft (mixed-equipage environment). The objective was to examine how the type of automation imperfection (miss vs. false alarm) affects ATSP performance and attention allocation. BACKGROUND Research has shown that the type of automation imperfection leads to differential human performance costs. METHOD Twelve full-performance-level ATSPs participated in four 30-min scenarios. Dependent variables included conflict detection and resolution performance, eye movements, and subjective ratings of trust and self-confidence. RESULTS ATSPs detected conflicts faster and more accurately with reliable automation, as compared with manual performance. When the conflict probe automation was unreliable, conflict detection performance declined with both miss (25% conflicts detected) and false alarm automation (50% conflicts detected). CONCLUSION When the primary task of conflict detection was automated, even highly reliable yet imperfect automation (miss or false alarm) resulted in serious negative effects on operator performance. APPLICATION The further in advance that conflict probe automation predicts a conflict, the greater the uncertainty of prediction; thus, designers should provide users with feedback on the state of the automation or other tools that allow for inspection and analysis of the data underlying the conflict probe algorithm.
Affiliation(s)
- Ericka Rovira
- George Mason University, Fairfax, Virginia 22030, USA.
34
Smith K, Källhammer JE. Driver acceptance of false alarms to simulated encroachment. HUMAN FACTORS 2010; 52:466-476. [PMID: 21077567] [DOI: 10.1177/0018720810372218]
Abstract
OBJECTIVE We investigated driver acceptance of alerts to left-turn encroachment incidents that do not produce a crash. If an event that produces a crash is the criterion for a "true" alert, all the alerts we studied are technically false alarms. Our aim was to inform the design of intersection-assist active safety systems. BACKGROUND The premise of this study is that it may be possible to overcome driver resistance to alerts that are false alarms by designing systems to issue alerts when and only when drivers would expect and accept them. METHOD Participants were passengers in a driving simulator that presented left-turn encroachment incidents. Participant point of view, the direction of encroachment, and postencroachment time (PET) were manipulated to produce 36 near-crash incidents. After viewing each incident, the participant rated the relative acceptability of a hypothetical alert to it. RESULTS Repeated-measures ANOVA and logistic regression indicate that acceptability varies inversely with PET. At PET intervals less than 2.2 s, driver point of view and encroachment direction interact. At PET intervals more than 2.2 s, alerts to lateral encroachments are more acceptable than alerts to oncoming encroachments. CONCLUSION Driver acceptance of alerts by active safety systems will be sensitive to context. APPLICATION This study demonstrates the utility of eliciting subjective criteria to inform system design to match driver (user) expectations. Intersection-assist active safety systems will need to be designed to adapt to the interaction of driver point of view, the direction of encroachment, and PET.
Affiliation(s)
- Kip Smith
- Linköping University, Linköping, Sweden.